6 do-not-miss hybrid cloud sessions at IBM InterConnect

Are you looking at expanding your private or hybrid cloud? Or maybe you want to get the most out of your current capabilities? Whatever your goals are, you can’t afford to miss these six exciting sessions at IBM InterConnect 2017.
1. Session: Strategies for successfully enabling BPM/ODM in the hybrid cloud
How do you achieve maximum business value in a cloud environment? Learn how to harness the capabilities of hybrid cloud to deploy, implement and manage enterprise workloads like IBM Business Process Manager (BPM) and IBM Operational Decision Manager (ODM). Come learn the business requirements you need to consider when working with application performance, servers, business processes, financial objectives, service management, disaster recovery, infrastructure, risk management and hybrid cloud integration.
2. Session: Bluemix Local System: Cloud adoption for enterprise IT
It can be expensive to deliver the right applications to the right users at the right time. IBM technical experts will share how IBM Bluemix Local System and IBM PureApplication can help accelerate and optimize your IT operations with a turnkey, integrated private cloud application platform. See how you can run both traditional and cloud-based applications while supporting open technology through built-in automation for simple and repeatable application environment deployments.
3. Session: Strategies for high availability and disaster recovery with private cloud
Every organization has its own high availability (HA) and disaster recovery (DR) requirements. IBM Bluemix Local System and IBM PureApplication provide many capabilities that allow you to implement HA strategies and DR scenarios for your applications and data. Join this roundtable discussion to share your feedback and learn best practices from IBM and your peers.
4. Session: Total economic value from AXA’s PureApplication implementation
Launching new insurance products often means high upfront costs and time-consuming processes for setting up infrastructure. Learn how AXA Technology Services implemented a new underwriting process for household products on a pilot using IBM PureApplication Service in the cloud. You’ll also hear how it performed a proof of concept using PureApplication System in its data center. Both projects succeeded, and AXA Belgium purchased two IBM PureApplication System units for its on-premises use. Read the case study to see the total value of what was accomplished, and attend the session to see how you might be able to do the same.
5. Session: Habib Bank Limited’s journey to platform as a service with IBM PureApplication System
Do you want to learn how your organization might streamline application delivery and reduce costs? Habib Bank will share its journey from traditional IT to a new cloud-based platform as a service (PaaS) solution with IBM PureApplication and WebSphere Application Portal. Hear how this transition helped the company deploy its applications 300 percent faster and save USD 500,000.
6. Session: Enterprise IT modernization at 1-800-FLOWERS
Do you want to learn how to modernize your business to meet new challenges? 1-800-FLOWERS has reinvented itself several times since its founding as a local florist in 1976. The company has gone from local retail to global telephone sales, and now it is on the leading edge of the DevOps revolution. Learn how 1-800-FLOWERS uses IBM PureApplication Software as part of its enterprise modernization and DevOps automation process to greatly improve provisioning times for new development, test and production environments. You’ll also discover how patterns changed the company’s vocabulary from “months” to “days.”
These are just some of the many exciting sessions at InterConnect. If you’re still not signed up, register today, and then add these six sessions to your schedule. Don’t miss the opportunity to talk directly with our executives and technical teams by stopping by the InterConnect 2017 EXPO. I look forward to seeing you there.
The post 6 do-not-miss hybrid cloud sessions at IBM InterConnect appeared first on news.
Quelle: Thoughts on Cloud

5 can’t-miss VMware sessions at IBM InterConnect 2017

VMware is looking to have a big showing at IBM InterConnect this year.
With several VMware sessions planned for the conference, attendees will learn how their businesses can benefit when optimizing, migrating or extending workloads with VMware and IBM Cloud.
Here are five sessions VMware fans won’t want to miss:
Client experiences
Three breakout sessions offer an unbiased view from clients, who will explain the successes they’ve seen with VMware on IBM Cloud.
CIBC: Breaking virtualization beyond data center boundaries
Marc Gunter from the Canadian Imperial Bank of Commerce (CIBC) will share how the bank has taken advantage of the IBM and VMware partnership to accelerate its strategy for a multi-vendor hybrid cloud. This session is aimed at IT managers, architects and decision makers. It will cover benefits, lessons learned, things to consider, pitfalls and more.
Dream Payments: Building an enterprise-class hybrid cloud with IBM Cloud for VMware solutions
Christian Ali and Chad Whittaker from Dream Payments, a FinTech company that powers payments for thousands of businesses across mobile devices and the Internet of Things (IoT), will explain how their company uses IBM technology to manage a payment volume that’s growing by more than 30 percent per month. They’ll also share how Dream Payments used VMware on IBM Cloud to migrate to cloud with ease and speed.
Multiplus: Design and deployment of VMware on IBM Cloud
Fabiano Barros will introduce attendees to Multiplus, the leader in Brazil for loyalty reward programs. He’ll explain how the company has benefited from implementing a hybrid cloud infrastructure with VMware on IBM Cloud.
VMware Expertise
The techies from VMware and IBM are top-notch. Not only are they some of the smartest people on the planet, but they’re passionate about helping businesses get the most from VMware on IBM Cloud. Attendees can take advantage of their expertise and get answers to their toughest deployment questions.
IBM and VMware: Connecting it all
VMware experts Simon Kofkin-Hansen from IBM and Anil Kapur from VMware will take attendees through the solutions available with VMware on IBM Cloud and how businesses can benefit from them: organizations can now migrate or extend existing VMware workloads to the cloud in hours rather than weeks. Kofkin-Hansen and Kapur will cover technology such as VMware Cloud Foundation, vCenter Server and NSX.
Networking magic: Intercontinental vMotion on IBM Bluemix in minutes
VMware Networking Sales Engineer Jon Schulz will demonstrate a long-distance vMotion across continents via the IBM Bluemix private network. He’ll review the topology of a truly distributed configuration on IBM Bluemix, including VMware vSphere, VMware NSX and VMware Virtual SAN. Schulz will also discuss application mobility, disaster avoidance, and automated disaster recovery between sites.
Find out more about what’s happening at InterConnect and get registration information.
The post 5 can’t-miss VMware sessions at IBM InterConnect 2017 appeared first on news.
Quelle: Thoughts on Cloud

FOSDEM Day 0 – CentOS Dojo

FOSDEM starts tomorrow in Brussels, but there’s always a number of events the day before.

This year, several speakers from the RDO community are participating in the CentOS Dojo ahead of the main event tomorrow.

Haïkel Guémar, Matthieu Huin – CI in the cloud – How RDO uses OpenStack infra tools for packaging
Spyros Trigazis – OpenStack @ CERN: Status update
Nicolas Planel, Sylvain Afchain, Sylvain Baubeau – Skydive – a real time network analyzer

Of course, since RDO works so closely with the CentOS CI infrastructure, all of the other content is relevant too. We’re looking forward to learning about the various aspects of the CentOS project, and to strengthening the bonds between our two communities, today and in the coming years.

Here’s KB addressing the opening session (Video).
Quelle: RDO

APM and DevOps: A complementary approach to agile, responsive development

Digital transformation has profound ramifications for your organization. The new landscape is disrupting business models, raising customer expectations and creating new channels to do business.
I bet you’re seeing the impact of digital transformation on your organization’s application development cycle as well. The rate of development probably isn’t decided entirely by you anymore. Instead, it’s driven by customers and the pace of the competitive marketplace—and the time between releases grows ever shorter.
Today, it’s standard for development teams to start on the next version of an application before the previous version is delivered or even completed. So how do you keep pace with these iterative, responsive and agile development cycles? An environment that incorporates both DevOps and end-to-end Application Performance Management (APM) is critical to business success.
What DevOps delivers

DevOps is a vital component of digital transformation. A recent survey by Evans Data found that 76 percent of developers polled consider DevOps to be very or somewhat important to their future.
DevOps breaks down the barrier between development and operations to deliver three key value propositions:

Accelerate the delivery of innovation with more frequent application updates
Reduce operational costs of delivering releases, eliminating expenses that have traditionally hindered agile delivery
Engage directly with the user base to focus development resources on high-value initiatives

If you’re still uncertain how to make DevOps and APM a reality, download your very own APM DevOps for Dummies ebook.
What APM delivers
Before DevOps, APM tools were focused on production operations. But as more organizations adopt DevOps models, APM tools are expanding from operations into development. Development and testing environments tie closely to production environments, which makes APM easier to expand and implement. This enables development teams to take advantage of traditionally production-oriented APM capabilities, including:

Low-overhead, low-cost monitoring
Management of complex dependencies and end-user experience
Highly scalable and flexible deployments with effective collaboration across development and operations

As one CIO of a retail organization summarized, “You’re going to increase productivity because you’re going to give the users [their] applications faster. You’re going to reduce IT resources and get more things done.”
Bring DevOps and APM together
To summarize, environments that incorporate both DevOps and complete APM enable development teams to be agile, responsive and ultimately more optimized for the dynamic, always-on hybrid cloud world. Embracing the DevOps methodology will help your organization reduce your delivery cycle times to hours instead of months, leaving more time to work on delivering a richer user experience.
Read this DevOps whitepaper to learn how development and operations can collaborate to optimize user experience every step of the way, leaving more time for your next big innovation.
Finally, check out all the DevOps expertise and best practices to be shared at IBM InterConnect 2017 in March.
The post APM and DevOps: A complementary approach to agile, responsive development appeared first on news.
Quelle: Thoughts on Cloud

Calculating the TCO of moving SAP workloads to cloud

As I was leaving our local movie theater on a recent day out with my family, my daughter noticed that I looked annoyed. She asked what was wrong. I decided not to say what I was really thinking: that’s two hours and $30 we’d never get back.
This made me think about all the other small investments we make, and then the bigger things, which ultimately reminded me of a question I get asked regularly when talking to clients: how can I assess the benefits of moving to cloud and managed services before making the investment?
Moving to the cloud can affect the bottom line. It can be hard to justify the total cost of ownership (TCO) up front when you don’t have a clear understanding of what the key tangible and intangible benefits might be, especially when moving ERP applications to the cloud. Fortunately, there are tools that can help you assess the net incremental revenue benefits of getting services to market faster.
One of those tools is the Cost Benefits Estimator, which helps organizations evaluate managed infrastructure options, including those designed specifically for SAP workloads. The results are based on third-party validated financial metrics and justification models.
How does it work?
It’s a self-driven tool that asks a few basic questions such as:

How many servers will be moved to the cloud?
How many full-time equivalent headcounts are required to support your current infrastructure and applications?
What industry are you in?
How many customers do you have?
What are the key drivers? (To increase customer reach? Improve time to market? Other factors?)

By answering these questions, the tool can help organizations estimate annual savings based on their environments. It can help them determine the numeric value of infrastructure savings. For example, it enables comparisons between SAP support labor costs in the current environment and costs under managed services on the IBM cloud infrastructure.
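As a rough illustration of the kind of comparison the estimator performs, here is a hypothetical back-of-the-envelope model. The function, field names and dollar figures are all invented for illustration; the real tool uses third-party validated metrics and justification models that are not reproduced here.

```python
# Hypothetical savings model: compare current infrastructure-plus-labor
# spend against a per-server managed-service price. Illustrative only.
def estimate_annual_savings(servers, support_ftes, avg_fte_cost,
                            infra_cost_per_server, managed_cost_per_server):
    current = servers * infra_cost_per_server + support_ftes * avg_fte_cost
    managed = servers * managed_cost_per_server
    return current - managed

# Example inputs: 50 servers, 4 support FTEs (all numbers are made up)
savings = estimate_annual_savings(servers=50, support_ftes=4,
                                  avg_fte_cost=120_000,
                                  infra_cost_per_server=8_000,
                                  managed_cost_per_server=9_500)
print(f"Estimated annual savings: ${savings:,}")  # $405,000
```

The real estimator also weighs intangible drivers (time to market, customer reach), which a simple arithmetic model like this cannot capture.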
Wherever you decide to invest your time and money, tools like these can help spur wiser investments. As Lynda Stadtmueller, vice president of services with Stratecast-Frost & Sullivan, wrote in a recent blog post, “Whether you are in the information or finance organization, the smartest technology investment is the one that delivers maximum value to the business.”
Calculate your estimated annual savings from an investment in IBM cloud managed services by trying the Cost Benefits Estimator for SAP Applications. You can also try it for non-SAP applications. It takes no more than 15 to 20 minutes to see your results.
The post Calculating the TCO of moving SAP workloads to cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Services Management using Red Hat CloudForms (Video)

In this short video, we specifically look at service management within Red Hat CloudForms. The demonstration highlights the following platform capabilities:

Self-Service portal with lifecycle, operations management and reporting
Service Catalog presented to end-user consumers
Service Definition, built as stand-alone, or from service composites
Life-cycle status monitoring and notifications
Usage consumption and chargeback reports

Additional information on Red Hat CloudForms can be found on the Red Hat website.
Quelle: CloudForms

Cloud digital revolution: disrupt or be disrupted

The future that analysts, technologists and science fiction authors predicted is here—we now live in a digital economy. As consumers, we don’t just access information, we use it to make decisions in real time. We don’t just use a service, we expect an experience that is fast, simple and intuitive. Consumers expect immediate access, speed and connectivity in technologies and show no patience for products, services and businesses that fail to meet expectations.
This fundamental shift in consumer behavior is rapidly driving enterprise products, services and business models to transform. Cloud and cognitive are at the center of this transformation. I want to share three very different examples of how companies are leveraging the latest cloud technologies to build value and wow customers.
But first, I’d like to introduce myself to Thoughts on Cloud readers. My name is Shahz Afzal. I recently joined IBM after spending 16 years at Microsoft working on strategies and programs to help customers transition to the first wave of cloud.  I could not be prouder to be part of the IBM Cloud team building the future through technologies like connected apps, cognitive computing and the Internet of Things.

Gartner predicts that by 2020, businesses without cloud capabilities will be as uncommon as businesses without Internet are today, and that 30 percent of the largest new software investments will exist entirely in the cloud. Amazing.
If you are wondering how the cloud might impact your business, here are just a few examples where clients have used cloud to redefine what’s possible for their customers.
1. Accelerating time-to-market with Walmart
What do developers want once they have an idea? Fast access to development tools. IBM helped Walmart create a cloud platform driven by APIs to provide developers access to resources such as infrastructure, storage, databases and web servers. Walmart estimates they can now deliver resources to developers 100 times faster, meet unpredictable spikes in demand and accelerate updates to market.
2. Shaping the future of sports with the KLM open
How cool is it to not just watch sports live, but also have a TV-like experience at the event? The KLM Open, one of the oldest golf tournaments on the European Tour, wanted to provide an interactive mobile application, but it didn’t have the expertise or infrastructure to make that vision a reality. IBM used WebSphere to help create a live, third-screen mobile experience for golf fans, helping drive a 25 percent increase in visitors.
3. Enabling rapid growth with Bernhardt Furniture
What business doesn’t want higher sales and profits? Bernhardt Furniture needed a faster way to improve business applications and a more flexible architecture to accommodate rapid growth. They built a hybrid cloud solution to support business applications with a microservices architecture and API infrastructure. And this meant they could adopt the DevOps approach to continuously deliver upgrades to customers. But take a look at the results: Bernhardt engaged 20 percent more customers during sales events and netted a 20 percent sales increase through improved ordering capabilities.
The bottom line: the cloud is revolutionizing business models across a diverse range of industries. In my view, it is truly a question of disrupting or being disrupted. Where would you rather be?
If you are as thrilled with the rapid innovation in cloud as I am, be sure to register for IBM InterConnect, the industry’s preeminent event for cloud solutions. My colleagues and I would love to meet you.
I look forward to working with partners and clients to help usher in the future of cloud. Together we have the power to change the world. And I want to hear from you. Continue the discussion by leaving a comment below, tweeting @IBMCloud or finding me on LinkedIn.
The post Cloud digital revolution: disrupt or be disrupted appeared first on news.
Quelle: Thoughts on Cloud

Testing RDO with Tempest: new features in Ocata

The release of Ocata, with its shorter release cycle, is close, and it
is time to start broader testing (even if one could argue that
it is always time for testing!).

One of the core pieces for testing the cloud is Tempest.

Tempest in RDO

Current status

The status up to Newton is well described in a few blog posts,
either from packages or from git.
In short, RDO used a forked repository,
which regularly received all the changes from the official Tempest.
The main reason for this was a configuration script which auto-discovers
the features of the cloud (with some hints based on the version) and creates
a valid tempest.conf.

Changes in Ocata

The auto-configuration script was decoupled from the internal Tempest and
moved to the new python-tempestconf repository,
thanks to the work of Martin Kopec, Chandan Kumar and Daniel Mellado.

This means less maintenance burden (no need to keep the fork), and it also
simplifies the steps required to initialize and run Tempest tests,
bringing them close to the process documented by Tempest upstream.

New configuration steps

Configure the RDO repositories, then install the required packages:

$ yum -y install openstack-tempest

python-tempestconf will be installed as well, as a new dependency of
openstack-tempest.

Now source the admin credentials, initialize tempest and run the discovery tool:

$ source </path/to/>keystonerc_admin
$ tempest init testingdir
$ cd testingdir
$ discover-tempest-config --debug identity.uri $OS_AUTH_URL \
    identity.admin_password $OS_PASSWORD --create

discover-tempest-config is the new name of the old config_tempest.py
script and it accepts the same parameters.
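For reference, the generated tempest.conf contains sections along the lines of the fragment below. The values shown are placeholders, and the exact set of options written depends on the cloud being probed and on the Tempest version; the two identity options correspond to the parameters passed on the command line above.

```ini
[identity]
# endpoint and admin credentials passed to discover-tempest-config
uri = http://192.0.2.10:5000/v2.0
admin_password = secret

[service_available]
# filled in by probing the cloud for available services
cinder = true
swift = false
```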

And that’s it! Now it is possible to list and run the tests as usual:

$ tempest run

For more details, see the
upstream documentation of tempest run.
As this is a wrapper around testr, ostestr (and direct calls to testr)
work as before, though using tempest run and its filtering
features is highly recommended.

Tempest plugins

Current status

Tempest is really composed of two big parts: a library (with an increasing
number of stable APIs) and a set of tests.
With the introduction of Tempest Plugins
the scope of the tests included in tempest.git was limited to the core
components (right now Keystone, Nova, Neutron, Cinder, Glance and Swift).

The tests for other projects, as well as the advanced tests for the core
components, have been moved to separate repositories.
In most of the cases they have been added to the same repository of the
main project. This introduces a complication when the tests are split
in a separate RPM subpackage, as it is the case in RDO: the entry point
for the tests for a component foo is always installed as part of the
base subpackage for foo (usually python-foo), but the corresponding
code is not (the python-foo-tests is not required). Running any
tempest command, the entry points are found but the code could not
be there, leading to errors.

The script install_test_packages.py provided in the openstack-tempest
RPM could discover the missing entry points and install the required
package, but that was clearly a workaround with a maintenance burden.

Changes in Ocata

Thanks again to the work by Chandan Kumar, the packaging was fixed
to prevent this problem by automagically tuning the entry points
in the generated packages. The interesting technical details are
described in a past blog post.

So no more obscure errors due to missing packages while using Tempest,
but you may want to check which test packages are actually installed
if you want to maximize the testing coverage against your cloud.
Quelle: RDO