DataLift 360 serves up optimized ads with the help of IBM Cloud

The ultimate goal of advertising is to spark interest and elicit a meaningful reaction from the audience. In the offline world, there is an abundance of ads because they’re not targeted. Think about a newspaper: half of the pages are printed ads that most people don’t look at.
Digital marketing has the potential to be more targeted than print advertising, but it’s often just annoying. Web and mobile app users frequently can’t wait to close an ad and get back to their content as quickly as possible.
The better targeted ads become, the fewer ads must be served for advertisers to meet their goals. Users are no longer bombarded with ads. They simply see a few, very relevant and very targeted ads, which provide a good ad experience and create an environment where users find value because ads fit their needs.
But how do advertisers serve the right ads to the right people at the right time?
Maximum return on ad spending
AppLift has a data-driven solution called DataLift 360, which empowers advertisers to acquire and retain users. AppLift clients are primarily large app developers and game publishers who seek to grow their user bases.
They run ads through the DataLift 360 platform to inform users about new content in the apps or related products or offers. The solution is deeply integrated with advertisers’ customer data. It can detect what users do in the apps and why they use the apps. This data informs the ad buying decisions and helps advertisers maximize the return on their ad spending. DataLift 360 helps AppLift customers get hundreds of thousands of app installs across iOS and Android each day.
Powerful solution
DataLift 360 is provisioned in an IBM hybrid cloud infrastructure that includes both IBM Bluemix bare metal servers and virtual resources located in data centers around the world.
AppLift chose IBM Cloud because of its global presence and high performance, as well as the flexibility to turn servers on and off quickly to meet demand due to seasonal traffic peaks or increases in partner business.
While AppLift’s customers are fairly equally split between the Americas, EMEA and the Asia-Pacific region, the global infrastructure is key to supporting the users to whom the ads are served, who could be anywhere in the world. When there’s an ad request, the user must be served immediately. AppLift must have service in physical proximity to where the users are to provide the low-latency, high-frequency environment advertisers demand. It’s a failure if a user opens an app and is supposed to see an ad but instead sees a black screen or a loading screen. That’s a bad user experience.
Massive traffic
AppLift’s customers include a global network of more than 500 premium app advertisers. The company is provisioned to store and process more than 2 billion user profiles, which means it can cover every smartphone owner worldwide. The company stores, processes and uses that data to inform its ad-serving decisions.
DataLift 360 computes tens of billions of opportunities to serve ads every day, meaning it makes decisions regarding whether to serve an ad to a specific user, which ad to serve, and for which price it is willing to serve the ad. With the IBM Cloud technology supporting it, the solution can horizontally scale to more than one million queries per second if necessary.
Read the case study for more about AppLift and IBM.
Learn more about IBM Bluemix.
The post DataLift 360 serves up optimized ads with the help of IBM Cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

OpenStack Upgrade from Mitaka to Ocata (across 2 releases!) with Mirantis Cloud Platform

The post OpenStack Upgrade from Mitaka to Ocata (across 2 releases!) with Mirantis Cloud Platform appeared first on Mirantis | Pure Play Open Cloud.
Earlier this year, we here at Mirantis found ourselves in a situation where we needed to upgrade across two OpenStack releases. It’s not an uncommon problem in enterprises; IT departments often bristle at the idea of doing major upgrades every six months — which is how fast new OpenStack releases arrive.  
For an OpenStack distribution vendor to have this problem, however, is less common. But that’s what happened.
That’s not to say it took us by surprise. When we released Mirantis Cloud Platform 1.0 in March of this year, we wanted to provide the easiest transition possible — as well as feature parity — from Mirantis OpenStack 9.2. MOS 9.2 is based on OpenStack Mitaka, which has the added bonus of enabling us to provide customers the ability to move between Ubuntu Trusty and Ubuntu Xenial.
That’s the upside.  The downside is that it meant we were dealing with an OpenStack that was almost a year old, and that’s certainly not what we want.  Especially with a third release just three months away, we had to figure out how to get back on track, bypassing Newton and going straight to Ocata so our customers would be ready when Pike arrived.
This blog describes OpenStack upgrades in general, and how we performed a double upgrade. We’ll also share a demo video of a real upgrade on a hardware cluster, taking Mitaka-based OpenStack running on an Ubuntu Trusty control plane to Ocata running on Xenial.
What is an OpenStack Upgrade?
There’s a good reason that upgrades have been one of the hottest topics in OpenStack for the last few years. Like any system upgrade, an OpenStack upgrade is something scary to operations people, and it has been very painful in the community since the beginning.

I’ve been around long enough to remember that upgrades from releases such as Grizzly with Neutron were almost impossible. Historically, the OpenStack community has supported only upgrades of the actual components, and even then, only from one version to the next. However, considering the OpenStack project’s complexity and adding in the inevitable vendor plugins necessary for a truly enterprise-grade deployment, the reality is that guaranteeing that your setup was even upgradable was impossible, at least without further analysis.
Ultimately, in most cases “upgrade” really meant redeployment of the whole cloud and then migrating workloads to the new infrastructure. Projects such as CloudFerry were developed to ease the burden of resource migration between these clouds, but clearly that wasn’t the answer.
Technically speaking, an OpenStack upgrade consists of 3 steps:

Upgrade packages – upgrade to the latest packages, with binaries and python code.
Update config files – update configuration files with the latest parameters.
Sync databases – run a database sync to update schemas to the new structure.
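The three steps above can be sketched for a single service like this. This is a minimal illustration only; the service and the actions named here are hypothetical stand-ins, and each stub simply echoes what the real step would do so the ordering can be read (and exercised) in isolation:

```shell
#!/bin/sh
# Sketch of the three upgrade steps for one hypothetical service (nova).
# Each stub echoes the action instead of performing it.
upgrade_packages() { echo "1: install latest nova packages"; }
update_config()    { echo "2: merge new options into nova.conf"; }
sync_database()    { echo "3: nova-manage db sync"; }

upgrade_packages && update_config && sync_database
```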

Looking at those 3 steps, it might sound like an easy task. In fact, a significant portion of the community got to thinking that if we could just containerize OpenStack and run it on Kubernetes, an upgrade would become a 5 minute exercise. The truth is that while containers provide a simpler and faster way of performing the binary upgrade portion of the process, the rest of the process remains the same. Containerization of OpenStack can’t solve problems such as:

Verifying release notes to ensure that current features are available in the new release, especially considering architecture changes over time.
Going through the list of your neutron, glance, and cinder backends and verifying compatibility for the target release. Neutron plugins are usually especially challenging, because the network is such a core part of the cloud. The last thing you want is for your workload to be disconnected during the upgrade.
Adopting structure and configuration options from the service configuration files and merging them with existing configuration files. You’ll need to go through the OpenStack Configuration Reference, as it will contain new, updated, and deprecated options for most services.
Considering the approach to upgrading the environment. Some users will live migrate instances or relocate workloads. You must have a plan about what phases touch what part of the infrastructure in order to ensure that you do not cause downtime.
Considering the impact of an upgrade on operations. The upgrade process interrupts management of the environment, including the dashboard, but it should not significantly affect existing, running instances. Instances might experience intermittent network interruptions, but it’s important to keep these minimal.
Developing an upgrade procedure and assessing it thoroughly by using a test environment similar to your production environment.
Preparing for failure. As with all major system upgrades, your upgrade could fail for one or more reasons. You must have a procedure ready for this situation by having the ability to roll back the environment to the previous release — including databases, configuration files, and packages, and not just containerized binaries.

Sounds simple, doesn’t it? So now that we’ve established why people are afraid to upgrade OpenStack, we’ll take you through the upgrade procedure in MCP and describe our experiences with various production setups.
MCP OpenStack Upgrade from Mitaka to Ocata
Before we look at how this was done, let’s start with defining what it was we actually did.
MCP has a specific reference architecture, which has the advantage of limiting the number of different component versions we need to worry about. So let’s start by defining the required component matrix, and what is and is not part of the upgrade. The core requirement was to do it as automatically as possible, with a minimum of manual steps. Here’s where we started, and where we ended up:

Component        MCP Mitaka                            MCP Ocata
Control plane    VCP – Split into VMs                  VCP – Split into VMs
OS version       Ubuntu 14.04 (VCP), 16.04 (computes)  Ubuntu 16.04
OpenContrail     3.1.1                                 3.1.1
OpenVSwitch      2.6.1                                 2.6.1
QEMU/Libvirt     2.5/1.3.1                             2.5/1.3.1*
Oslo messaging   4.6.1                                 5.17.1
Keystone         9.3.0                                 11.0.2
Glance           12.0.0                                14.0.0
Cinder           8.1.1                                 10.0.3
Nova             13.1.4                                15.0.6
Neutron          8.4.0                                 10.0.2
Heat             6.1.2                                 8.0.1
Horizon          9.1.2                                 11.0.2

* Upgrading to 2.6/1.3.3 will provide you the ability to use Ocata’s new live-migration functionality in Nova, and utilizes the same procedure.
Some things to keep in mind:
As I already mentioned, Neutron and the network in general are the most difficult parts to get right, and perhaps the most delicate. MCP supports 2 network backends:

OpenVSwitch ML2 – If you’re running OVS, you’ll need to keep a particularly careful eye on this process; you will be upgrading neutron-openvswitch-agent as well as OVS itself, which can cause downtime on instance network connections if not done properly. On the other hand, MCP supports both standard DVR and non-DVR architectures, and thankfully there are no differences between Mitaka and Ocata from this point of view, so it is mostly a binary upgrade.
OpenContrail – OpenContrail is quite independent of the OpenStack release itself, which is a huge benefit when it comes to upgrades. OpenContrail 3.1.1 can run with Kilo, Liberty, Mitaka, Newton and Ocata, so this upgrade does not really touch the data plane, which means zero outage for workloads.

There are also some additional differences in architecture between the Mitaka and Ocata releases, so we had to add a few things:

Glance glare – The Glare Artifact Repository was added in the Newton release cycle, and Glance requires an extra service and Keystone endpoints to be configured for it.
Nova Placement API – The Ocata release includes the new placement API as a mandatory service, so that must be added.
Nova nova_cell0 – CellsV2 deployments are mandatory in Ocata, which requires an extra database in Galera and use of the online migration command.
Cinder v3 endpoint – Ocata requires that the Cinder v3 endpoint be configured.
Keystone v2 client – Support for the v2 client was removed in Ocata, so only the openstack client can be used for endpoint enforcement.

Nova db syncs
Doing an upgrade across two releases requires calling nova database synchronization in a specific order. Fortunately, DriveTrain is managed via Infrastructure as Code. That means that by editing Salt formulas, we can reliably control the way the cloud is built and the way any changes are made.
To make sure the changes are done properly, we first extended salt-formula-nova with modules that run database syncs based on version. Using this procedure, we are able to skip the Newton release and jump directly to Ocata. These formulas make sure that we:

Check the version of api_db and sync it to the latest version used in Newton (version 20):

nova-manage api_db version 2>/dev/null
nova-manage api_db sync --version 20

Then run db sync to the latest version used in the Newton release (version 334):

nova-manage db version 2>/dev/null
nova-manage db sync --version 334

Online data migrations to the latest versions of the Newton DB:

nova-manage api_db version 2>/dev/null
nova-manage db version 2>/dev/null
nova-manage db online_data_migrations

Create a cell mapping to the database connection for the cell0 database:

nova-manage cell_v2 map_cell0

Create the default cell:

nova-manage cell_v2 create_cell --name=cell1

Map hosts to the cell:

nova-manage cell_v2 discover_hosts

Map instances to the cell:

nova-manage cell_v2 list_cells 2>&- | grep cell1 | tr -d "\n" | awk '{print $4}'
nova-manage cell_v2 map_instances --cell_uuid <cell1_cell_uuid>

Api_db sync to the ocata version:

nova-manage api_db sync

Db sync to the ocata version:

nova-manage db sync

Online data migrations to ocata:

nova-manage db online_data_migrations
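Putting the whole sequence together, the ordering can be captured in one script. This is a sketch, not the actual MCP Salt formula: it defaults to a dry run that only prints each command (set DRY_RUN=0 on a real controller), the cell1 UUID placeholder must be filled in from list_cells, and versions 20 and 334 are the last Newton api_db/db schema versions as above:

```shell
#!/bin/sh
# Consolidated sketch of the Mitaka->Ocata nova db sync sequence.
# Defaults to printing commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run nova-manage api_db sync --version 20        # api_db up to Newton
run nova-manage db sync --version 334           # main db up to Newton
run nova-manage db online_data_migrations       # finish Newton data moves
run nova-manage cell_v2 map_cell0               # CellsV2: cell0 mapping
run nova-manage cell_v2 create_cell --name=cell1
run nova-manage cell_v2 discover_hosts
# map_instances needs the real cell1 uuid from list_cells:
run nova-manage cell_v2 map_instances --cell_uuid '<cell1_cell_uuid>'
run nova-manage api_db sync                     # then straight to Ocata
run nova-manage db sync
run nova-manage db online_data_migrations
```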

Fortunately, this kind of intricate planning and repetition is just the kind of thing that Infrastructure as Code is great for; once we’ve adjusted our Salt formulas, we are ready to plug them into our pipelines.
In general, we’ll split the upgrade into 2 parts: the control plane and the data plane.
Ocata OVS DVR default route missing
One issue we hit was that the DVR FIP namespace was missing the default route after the upgrade to Ocata. This caused the disconnection of all instances using the SNAT router. To solve this problem, our Neutron team sent the following patch to upstream OpenStack: https://review.openstack.org/#/c/475108/.
OK, now that we’ve got that out of the way, we can go ahead and start the actual upgrade process.
Upgrade the OpenStack control plane (controllers)
The control plane upgrade is mostly independent of the data plane upgrade, and it does not have to be done all at once. Moreover, as you will see in the demo video, we can upgrade just the control plane to Ocata and still boot instances on Mitaka-based compute nodes.
The entire process is automated via the Jenkins pipeline, which has the following stages, but that doesn’t mean that the process isn’t human-controlled. Between each stage there is human judgment; approval is required to continue on to the next stage.

Let’s look at what happens during each step.
1. Prerequisite – Reclass model preparation
The first step is to make sure that the cluster reclass model is changed to include the right configuration for Ocata. Customers will ultimately have access to detailed information about those steps in the MCP documentation, but basically it requires the following steps:

Include the definition for the upgrade node, a VM called upg01 – this node is required for the Test Upgrade phase.
Include classes to backup and restore all databases
Include classes specific to Ocata, such as:

Glance glare definition for haproxy
Nova placement haproxy
Keystone endpoints
Galera nova cell database definition

All of those classes are publicly available shared models, based on Newton and Ocata pull requests, so anybody can update this model and include their own classes.
2. Test the Upgrade – Single VM verification
This is the first stage of the actual pipeline, the goal of which is to verify cloud database consistency during the upgrade of the database schema. It automatically spins up a new single VM with an OpenStack controller and verifies API functionality against the dumped databases. DriveTrain will:

Create a new VM called upg01 for a single node of OpenStack on one of the KVM foundation nodes.
Automatically back up all databases via xtradb and rsync them to the backup node, which by default is the Salt Master.
Create new databases in the existing Galera cluster with an “_upgrade” suffix (as in nova_upgrade) and restore them from the previous backup. This clone of the production databases will be used for schema API verification using the single VM node.
Install a single OpenStack controller inside the upg01 VM.
Verify that the APIs are working correctly by calling the following commands:

openstack service list
openstack image list
openstack flavor list
openstack compute service list
openstack server list
openstack network list
openstack volume list
openstack orchestration service list
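The same checks can be looped over so verification fails fast on the first broken endpoint. In this sketch the openstack client is replaced by a stub so the loop can run without a cloud; on a real proxy node you would drop the stub and source admin credentials first:

```shell
#!/bin/sh
# Smoke-check loop over the API verification commands above.
# The stub stands in for the real openstack client.
openstack() { echo "ok: openstack $*"; }

run_check() {
  openstack "$@" || { echo "FAILED: openstack $*"; exit 1; }
}
run_check service list
run_check image list
run_check flavor list
run_check compute service list
run_check server list
run_check network list
run_check volume list
run_check orchestration service list
```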
After this verification phase, we can be sure that the upgrade will not fail due to some specific database content or API configuration. It also verifies that the metadata model fits the Ocata deployment with the existing configuration.
3. Real Upgrade – control plane rebuild
Once a human has verified that the test upgrade succeeded, he or she can approve moving on to the next stage, which does the actual upgrade of the OpenStack control plane. During this stage, nobody can provision a new workload, and the cloud dashboard is unavailable, but there will be zero downtime for existing workloads. During this phase, DriveTrain will:

Back up the control and proxy node VM disks. It then stops the VMs of the control plane and copies their qcow2 disk files in case a rollback is needed.
Create new VMs for control and proxy nodes based on Ubuntu 16.04. It automatically provisions these nodes and registers them back to the salt master.
Execute the standard deployment phase for OpenStack control (ctl) and proxy (prx) nodes, performing redeployment from scratch on clean VMs with clean operating system installs.
Verify that services are working correctly by testing all API commands and service lists.

This stage takes approximately 30 minutes, depending on package download speed. If local mirrors are available, it can take as little as 15 minutes. A future containerized control plane could potentially cut this time in half.
4. (Optional) Rollback the Upgrade
The last stage is an optional rollback in the case of any issues or failures. Automated pipelines will stop, waiting on rollback confirmation. During this time you should verify that your cloud is functioning properly. Should you need to roll the changes back, the procedure is:

Change the source db data in the reclass model. (You will have configured this information when creating the backup/restore specification; now point to the original data.)
Stop the VMs running your control plane nodes and restore the ctl and prx qcow2 disks.
Restore the previous functional DB schemas via the xtradb backup.
Start the controller VMs.
Verify your services are working correctly.
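As a rough sketch, the rollback steps map to commands along these lines. The VM name, backup paths, and the xtradb restore invocation here are hypothetical placeholders, and by default the script only prints the actions (set DRY_RUN=0 to execute them):

```shell
#!/bin/sh
# Hedged sketch of the rollback procedure above; prints actions by default.
DRY_RUN=${DRY_RUN:-1}
act() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

act virsh shutdown ctl01                             # stop a control plane VM
act cp /backup/ctl01.qcow2 /var/lib/libvirt/images/  # restore its qcow2 disk
act innobackupex --copy-back /backup/xtradb          # restore the DB schemas
act virsh start ctl01                                # start the controller VM
act openstack service list                           # verify services
```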

Rolling back does not take more than 5-10 minutes. Now let’s see it in action.
Demo Upgrade Video
All of that sounds like a lot of work, but in actuality there’s really not much for you to do to achieve this upgrade.  To see it in action, check out this video:

As you can see, we performed a real control plane upgrade from OpenStack Mitaka to OpenStack Ocata on a 6 node bare metal hardware cluster with Open vSwitch. We also had the new Ocata controllers boot instances on the existing Mitaka computes.
The process looked like this:

Check the MCP Mitaka-based cloud with existing instances running
Launch the Upgrade control plane Jenkins pipeline

We then built the pipeline and kicked it off.

When the real upgrade finished, we verified the control plane upgrade by booting an instance on the Mitaka-based compute nodes.

We then confirmed the rollback action and reloaded the previous Mitaka control plane.
Finally, we booted an instance to verify that the cluster works correctly.

Now let’s get to the data plane upgrade.
Data plane upgrade (computes)
As you saw in the previous section, the compute node upgrade is almost completely independent of the control plane upgrade, which actually gives us a great deal of flexibility in scheduling upgrades across zones.
As for upgrading the computes, the process will differ depending on the networking in use by the OpenStack cluster:

OpenContrail – With OpenContrail used for networking, you only have to upgrade nova, qemu and libvirt, because OpenContrail is independent of the OpenStack release. However, since the versions of qemu and libvirt are the same, the only thing you actually have to upgrade is nova’s python code. Therefore OpenStack instances are not touched at all, and there is zero downtime.
OpenVSwitch – If you’re using OpenVSwitch, the process is more complicated, because it involves upgrading the neutron-openvswitch-agents and OVS itself. However, MCP again ships the same version of OVS, so again all you’re really upgrading is python code. When you upgrade OVS-based OpenStack compute nodes, you may see a brief disconnection of instances during the OVS reload and neutron synchronization, but effectively zero downtime, as we’re just changing python code.

So to do the actual upgrade, we have created an automated Jenkins pipeline, which actually does the appropriate steps in the right order.

What’s more, the pipeline enables the operator to see the outputs on test, sample, and so on. Let’s go through the steps:

The operator defines a subset of test nodes showing the difference between the two sets of packages.

To do that, you first specify the salt master credentials and URL so Jenkins knows where to find it. You can then specify the particular servers that you want to work with.  For example, you can specify cmp* for all compute nodes that start with cmp, or I@nova:compute for all compute nodes.
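For illustration, the two targeting styles would look like this on the Salt CLI. The minion ids are hypothetical, and a stub stands in for the real salt binary so the lines can be exercised without a Salt Master; test.ping is used only to show the selection:

```shell
#!/bin/sh
# Targeting sketch; the stub just echoes what would be targeted.
salt() { echo "would run on: $*"; }

salt 'cmp*' test.ping               # glob: every minion id starting with cmp
salt -C 'I@nova:compute' test.ping  # compound matcher: nodes with the nova:compute pillar
```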

The first stage updates the repository with the latest packages and shows packages waiting for upgrade on the test node.

After approval, it runs the upgrade on the sample subset of nodes, upgrading packages and updating the appropriate configuration files. Then the operator can verify that the upgrade was successful, or wait for some time period before initiating the upgrade of the rest of the infrastructure.
As the next step, the pipeline sets the default expiration for approving the upgrade of the rest of the infrastructure to two hours. If it’s not confirmed, the upgrade is aborted. After confirmation, all nodes run the upgrade.

As you can see, this automated procedure gives you a very powerful way to manage the upgrade of all your hypervisors without accessing them manually, and with complete audit capabilities.
Conclusion
In the future, MCP will likely utilize a containerized control plane, which can speed up package installation by downloading Docker images. This process can save as much as 30 minutes, cutting the control plane upgrade time to a total of only 15 minutes. However, even with these changes, we still need to perform all of the other steps, such as database backup, architecture changes, configuration changes, and so on. On the other hand, a containerized control plane adds extra layers such as Docker and Kubernetes, which can potentially increase operational complexity. We will have to monitor it and come up with additional tooling to provide seamless upgrades of that layer as well.
In this blog, we demonstrated how easily we can take our customers from OpenStack Mitaka to Ocata clouds with backward compatibility and without significant outages. With the right tooling, upgrading OpenStack is no problem.
Quelle: Mirantis

Upcoming events

There’s a number of upcoming events where RDO enthusiasts will be present. Mark your calendar!

Join us for Test Day!

Milestone 3 of the Pike cycle was released last week, and so it’s time to test the RDO packages. Join us on Thursday and Friday of this week for the Pike M3 test day. We’ll be on the #RDO channel on the Freenode IRC network to test, help, answer questions, and propose solutions.

Meetups

We encourage you to attend your local OpenStack meetup. There are hundreds of them, all over the world, every day. We post a list of upcoming meetups to the RDO mailing list each Monday, so that you can mark your calendar. Or you can search meetup.com for events near you.

If you’re going to be speaking at an OpenStack meetup group, and you’d like to have some RDO swag to take along with you, please contact me – rbowen@redhat.com – or Rain – rain@redhat.com – with your request, at least two weeks ahead of time.

And if you’re a meetup organizer who is looking for speakers, we can sometimes help you with that, too.

OpenStack Days

The following OpenStack days are coming up, and each of them will have RDO enthusiasts in attendance:

Sep 14, 2017 – OpenStack Days Benelux, Bussum, NL
Sep 26, 2017 – OpenStack Days UK, London
Sep 28 – 29, 2017 – OpenStack Days Italy, Milan
Oct 17, 2017 – OpenStack Days Turkey, Istanbul
Oct 18 – 19, 2017 – OpenStack Days Nordic, Copenhagen
Oct 19, 2017 OpenStack Days Canada, Ottawa

If you’re speaking at any of these events, please get in touch with me, so that I can help promote your presence there.

Other Events

The week of September 11 – 15th, the OpenStack PTG will be held in Denver, Colorado. At this event, project teams will meet to determine what features will be worked on for the upcoming Queens release. Rich Bowen will be there conducting video interviews, like last time, where OpenStack developers will be talking about what they worked on in Pike, and what to expect for Queens. If you’ll be there, watch the announcements from the OpenStack Foundation for how to sign up for an interview slot.

On October 20th, we’ll be joining up with the CentOS community at the CentOS Dojo at CERN. Details of that event may be found on the CERN website. CERN is located just north of Geneva. The event is expected to have a number of RDO developers in attendance, as CERN has one of the largest OpenStack deployments in the world, running on RDO.

Other RDO events, including the many OpenStack meetups around the world, are always listed on the RDO events page. If you have an RDO-related event, please feel free to add it by submitting a pull request on GitHub.
Quelle: RDO

How to lead digital transformation with IBM ODM for z/OS

This post was co-authored by Laurent Tarin.
90 percent of the world’s data has been created in the last two years. Some expect that there will be over 30 billion connected devices in the world by 2020. Mobile, social, analytic, cloud and cognitive technologies are transforming the way we do business. Customers expect immediate and customized interactions with the companies with which they do business.
Marketing departments often put pressure on IT to adapt applications that can enable the organization to be competitive. So how can a company leverage decades of investments made in IBM Z infrastructure to make the most of the fast-changing IT environment? Is it better to start from scratch or to transform existing Z assets?
80 percent of the world’s corporate data sits on or originates from mainframes. In total, mainframes process around 30 billion transactions a day and enable $6 trillion in card payments each year. This existing infrastructure moves a considerable amount of value—and this is just one example. It’s clear that any digital transformation efforts must consider how to extract the maximum value out of existing mainframe and IBM Z investments.
Three key elements for a successful digital transformation are:

Smooth integration of existing assets with new systems in the business
Development and deployment of business-critical applications that can keep pace with changing market requirements
Increased participation by the business in application development and updates

Technologies such as IBM z/OS allow organizations to adapt to new and emerging IT trends more easily. It is important to ensure that other technological processes and functions are equally agile to help businesses keep pace with the needs of the market. In mission-critical applications, the decision policy logic or the business rules are often embedded within the existing code. This makes it difficult to isolate and maintain them when business changes are required.
IBM Operational Decision Manager for z/OS (ODM for z/OS) helps by externalizing decision logic from existing COBOL code. This makes it possible for business users to more easily and consistently collaborate with IT to adapt the applications that are essential to business operations.
By enabling rules-based decision logic to be defined and maintained by nontechnical subject-matter experts, ODM for z/OS offers advantages that include:

Single “source of truth” to access, share and update business decisions
Better alignment between business and IT teams
Improved quality of decisions and speed with which they can be changed
Traceability of how decisions are made within the applications
The robustness of IBM z14 to host and deploy critical applications

IBM ODM for z/OS can help simplify an organization’s digital transformation journey by externalizing critical business decisions from COBOL code. Leveraging existing z/OS assets can facilitate a smooth transformation and avoids the need to start from scratch, which could be both costly and time consuming.
With the combination of ODM for z/OS and the new IBM z14 it has never been easier to create dynamic, business-relevant applications that could serve as the building blocks for your digital transformation.
To learn more about ODM for z/OS, tune in to the webinar on August 21st, 2017 or visit our webpage here.
The post How to lead digital transformation with IBM ODM for z/OS appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Recent blog posts from the community

Here’s some of the great blogs from the RDO community which you may have missed in recent weeks:

Using NFS for OpenStack (glance,nova) with selinux by Fabian Arrotin

As announced already, I was (among other things) playing with OpenStack/RDO and had deployed some small openstack setup in the CentOS Infra. Then I had to look at our existing DevCloud setup. This setup was based on OpenNebula running on CentOS 6, and also using Gluster as backend for the VM store. That’s when I found out that Gluster isn’t a valid option anymore: Gluster was deprecated and has now even been removed from Cinder. Sad, as one advantage of gluster is that you could (you had to!) use libgfapi so that the qemu-kvm process could talk directly to gluster through libgfapi and not access VM images over locally mounted gluster volumes (please, don’t even try to do that through fuse).

Read more at https://arrfab.net/posts/2017/Jul/28/using-nfs-for-openstack-glancenova-with-selinux/

Nested quota models by Tim Bell

At the Boston Forum, there were many interesting discussions on models which could be used for nested quota management (https://etherpad.openstack.org/p/BOS-forum-quotas).Some of the background for the use has been explained previously in the blog (http://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html), but the subsequent discussions have also led to further review.

Read more at http://openstack-in-production.blogspot.com/2017/07/nested-quota-models.html

Understanding ceph-ansible in TripleO by Giulio Fidente

One of the goals for the TripleO Pike release was to introduce ceph-ansible as an alternative to puppet-ceph for the deployment of Ceph.

Read more at http://giuliofidente.com/2017/07/understanding-ceph-ansible-in-tripleo.html

Tuning for Zero Packet Loss in Red Hat OpenStack Platform – Part 3 by m4r1k

In Part 1 of this series Federico Iezzi, EMEA Cloud Architect with Red Hat covered the architecture and planning requirements to begin the journey into achieving zero packet loss in Red Hat OpenStack Platform 10 for NFV deployments. In Part 2 he went into the details around the specific tuning and parameters required. Now, in Part 3, Federico concludes the series with an example of how all this planning and tuning comes together!

Read more at http://redhatstackblog.redhat.com/2017/07/18/tuning-for-zero-packet-loss-in-red-hat-openstack-platform-part-3/
Source: RDO

Migrating to S/4HANA? Ask these questions first

This post is the first in a two-part interview series with Brian Burke, global SAP solutions executive with IBM SAP Alliance.
Thoughts on Cloud (ToC): While SAP ERP users have until 2025 to migrate to S/4HANA, many businesses are already in the process of moving to the platform. However, some customers still have not determined when, how and why they should migrate. Why do you think people are waiting?
Brian Burke, global SAP solutions executive, IBM SAP Alliance: Moving to S/4HANA is a major shift for a lot of businesses. If you’ve been an SAP customer for years, moving to S/4HANA might seem very complicated. You may have to learn new ways of managing your enterprise resource planning (ERP) applications and handling your SAP infrastructure, but once you do, I bet you will be thrilled.
SAP data and applications are mission-critical assets that reach across the entire business. SAP administrators already know this, of course, because all the line-of-business leaders call them if something goes down. Because these applications impact every aspect of your business, the considerations you have to make, especially regarding the infrastructure, are hugely important.
For example, I’m hearing stories from clients using SAP S/4HANA for quarter-close financials. Activities that used to take three weeks can now be completed in as little as three hours. Also, utilizing a quick, easy-to-use infrastructure like cloud enables SAP customers to have one less worry.
ToC: If clients are currently running SAP on premises but want to transition to cloud, are they migrating to both S/4HANA and a cloud infrastructure at the same time?
BB: Not necessarily. Many SAP clients are moving their SAP applications to a cloud infrastructure first, then migrating to S/4HANA. This two-step approach can make the migration process a little easier for businesses currently using neither S/4HANA nor cloud.
However, several companies that are being spun off into their own entities or that are going through mergers or acquisitions are jumping straight to S/4HANA on cloud. For those businesses, designing a new infrastructure from scratch is a good opportunity to migrate to the new platform and a cloud infrastructure at once.
ToC: What aspects of SAP S/4HANA make managed cloud hosting a particularly attractive option to businesses?
BB: S/4HANA is relatively new, so it might be difficult to find people with the skills to run the platform effectively, regardless of whether it’s deployed on premises or in the cloud. If you decide to go with cloud, a managed services provider can eliminate that problem by providing the skills to accelerate the migration process and optimize your ERP environment.
Remember, we’re not talking about standalone applications for a single line of business. SAP ERP applications are critical to nearly every aspect of your business, so consistent deployment is vital. If your ERP application needs to be deployed across regions, you need to ensure that networks, security, processes and procedures remain consistent. Managed services can help you deliver that consistent experience to all your users, whether they’re across the street or across the globe.
Also, don’t forget about non-SAP applications, which may have valuable data that needs to be shared with the ERP solution suite. A cloud managed services solution can help with integrating those applications into the ERP environment to drive more value.
To estimate your annual savings from implementing cloud managed services, try the Cost Benefits Estimator.
For more information on how deploying SAP applications in a managed cloud environment may reduce complexities and costs, visit the IBM Cloud Managed Services for SAP website.
 
The post Migrating to S/4HANA? Ask these questions first appeared first on Cloud computing news.
Source: Thoughts on Cloud

Watson Analytics helps Southern Connecticut State University improve student success

What are the determining factors of student success?
Southern Connecticut State University turned to IBM Watson Analytics Professional to find out what matters most in keeping students on the path to graduation.
A 10-year data analysis found that campus experiences played the biggest role in whether students remained at a university, transferred elsewhere or dropped out completely. Many previous models relied on demographic information to predict a student’s behavior.
“We were able to start making decisions that were informed by data, rather than anecdotes,” said Michael Ben-Avie, director of the Office of Assessment and Planning at SCSU. “The shift from data-laden reports to infographics with visualizations from Watson Analytics considerably helped in this regard.”
The university has used Watson Analytics to open a new Academic Success Center, which offers tutoring, academic coaching and other services designed to help students succeed. SCSU has also started a new internship program in its School of Business and computer science department in which students have access to Watson Analytics tools.
For more, read Campus Technology’s full story.
The post Watson Analytics helps Southern Connecticut State University improve student success appeared first on Cloud computing news.
Source: Thoughts on Cloud