Cloud-ready systems elevate hybrid cloud environments

Organizations across all industries are investing in cloud technologies to drive innovation, growth and efficiency. Many enterprises are integrating public cloud, private cloud and traditional IT platforms into a hybrid cloud environment, leveraging their existing IT investments in the process.
Power Enterprise Systems for the cloud
Clients are using IBM cloud-ready systems to simplify the movement of data, applications and services across a hybrid cloud environment; take advantage of new capabilities; optimize operations; and support changing business demands.
HealthPlan Services, a leading U.S. healthcare technology company, for example, chose a hybrid cloud solution to address a new space in the market created by the Affordable Care Act. Today, HealthPlan Services administers more than 30 percent of the Individual On/Off Exchange insurance market, as well as a significant percentage of the Group Insurance market.
“Our triple-digit growth and leadership directly correlates to our technology strategy and how our hybrid cloud architecture has fueled business innovation,” said Michael Lawley, SVP of IT at HealthPlan Services.
Open standards
By supporting open communities and standards, IBM gives clients a range of choices for creating comprehensive hybrid cloud strategies, helping them address marketplace demands more quickly.
For a deeper dive into cloud-ready systems, hybrid cloud environments and more, read the full article at HPCwire.
The post Cloud-ready systems elevate hybrid cloud environments appeared first on news.
Source: Thoughts on Cloud

Microsoft server hosting on IBM Cloud

Did you know that tens of thousands of Microsoft workloads are running on IBM Cloud? Here are some of the reasons why organizations of all sizes are choosing cloud to support their Microsoft servers.
Why choose cloud
Businesses are looking for new ways to engage customers, drive digital transformation and make operations faster and more flexible. With cloud, it’s easier to design and implement these ideas to create competitive advantage. Choose from multiple models – public, private and hybrid cloud – that deliver choice and flexibility as the competitive landscape changes and your business needs evolve.

Across public, private and hybrid cloud, IBM Cloud can provide seamless integration and support for the latest versions of applications such as Microsoft SQL Server 2016. The infrastructure is secure, scalable, and flexible, providing the solid foundation that has made IBM Cloud the hybrid cloud market leader.
Success factors

Configure the cloud your way – Can you trust the cloud with critical Microsoft workloads? One secure and widely used approach is to implement bare metal servers, creating a custom, dedicated cloud. With bare metal, the server is designed to your specifications. You select and approve what goes on it.
Stay in control – When your Microsoft workloads are running in the cloud, you want to manage them like an extension of your data center. Look for ways to use APIs and a single management system across workloads.
Go global – No matter the size of your business, you need to consider the flexibility of global data access and storage when choosing a cloud provider to support your growth plans.
Manage costs – Make sure you have clear cost visibility across software and server resources. Evaluate each element, from how Microsoft workloads are hosted to the underlying infrastructure. IBM Cloud offers clear, competitive pricing on hourly or monthly terms for cloud services and Microsoft software so you can easily meet all of your Microsoft Windows workload requirements.

Also, each time the software inside your core applications is headed for end of life, see if cloud can help you move to the newest version. For example, moving from an older version of SQL Server onto SQL Server 2016 may be faster using cloud hosting.
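The hourly-versus-monthly choice in the cost guidance above comes down to simple arithmetic: compare expected usage hours against the break-even point between the two billing terms. A minimal sketch, using invented rates rather than actual IBM Cloud prices:

```python
# Illustrative sketch only: the rates below are assumptions, not IBM Cloud prices.
HOURLY_RATE = 0.25      # assumed $/hour for a given server configuration
MONTHLY_RATE = 120.00   # assumed flat $/month for the same configuration

def cheaper_term(hours_per_month: float) -> str:
    """Return which billing term costs less for the expected usage."""
    hourly_cost = HOURLY_RATE * hours_per_month
    return "hourly" if hourly_cost < MONTHLY_RATE else "monthly"

# Break-even point: below this many hours, hourly billing wins.
break_even = MONTHLY_RATE / HOURLY_RATE   # 480 hours at these rates

print(cheaper_term(200))   # part-time workload -> hourly
print(cheaper_term(720))   # always-on workload -> monthly
```

Below the break-even point (480 hours with these assumed rates), hourly billing is cheaper; an always-on workload of roughly 720 hours a month favors the monthly term.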
Get started
Stop by the IBM Booth at Microsoft Ignite 2016 in Atlanta, Georgia, September 26-30, to speak to advisors about Microsoft on IBM Cloud.
Learn more about IBM Cloud and Microsoft workload hosting.
Source: Thoughts on Cloud

Running Tempest on RDO OpenStack Newton

Tempest is a set of integration tests to run against an OpenStack cluster.

What does RDO provide for Tempest?

RDO provides three packages for running Tempest against any OpenStack installation:

python-tempest: a Python library that can be consumed as a dependency by out-of-tree Tempest plugins, e.g. the Horizon and Designate Tempest plugins.
openstack-tempest: provides the python-tempest library and the executables required to run Tempest.
openstack-tempest-all: installs openstack-tempest along with all the Tempest plugins.

Deploy packstack using latest RDO Newton packages

Roll out a CentOS 7 VM, then follow these steps:

Install rdo-release-newton rpm

# yum -y install https://rdoproject.org/repos/openstack-newton/rdo-release-newton.rpm

Update your CentOS VM and reboot.

# yum -y update

Install openstack-packstack

# yum install -y openstack-packstack

Run packstack, enabling the RDO GA testing repo:

# packstack --enable-rdo-testing=y --allinone

Once the packstack installation is done, we are good to go.

Install tempest and required tempest plugins

Install tempest

# yum install openstack-tempest

Install Tempest plugins based on the OpenStack services installed and configured in your deployment.

By default, packstack installs Horizon, Nova, Neutron, Keystone, Cinder, Swift, Glance, Ceilometer, Aodh and Gnocchi.
To find out which OpenStack components are installed, run an RPM query:

# rpm -qa | grep openstack-*

Alternatively, use the openstack-status command.
Then grab the Tempest plugins for those services and install them:

# yum install python-glance-tests python-keystone-tests python-horizon-tests-tempest
python-neutron-tests python-cinder-tests python-nova-tests python-swift-tests
python-ceilometer-tests python-gnocchi-tests python-aodh-tests

To list the installed Tempest plugins:

# tempest list-plugins

Once done, you are ready to run tempest.

Configuring and Running tempest

Source the admin credentials and switch to a normal user:

# source /root/keystonerc_admin

# su <user>

Create a directory from which to run Tempest:

$ mkdir /home/$USER/tempest; cd /home/$USER/tempest

Configure the tempest directory

$ /usr/share/openstack-tempest-*/tools/configure-tempest-directory

Auto generate tempest configuration for your deployed openstack environment

$ python tools/config_tempest.py --debug identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD --create

This automatically creates all the required configuration in etc/tempest.conf.

To list all the tests

$ testr list-tests

OR

$ ostestr -l

To run tempest tests:

$ ostestr

To run the API and scenario tests with ostestr and print the slowest tests after the run:

$ ostestr --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))'
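As an aside, the intent of that filter (select the tempest.api and tempest.scenario tests while skipping anything tagged slow) can be checked with Python's re module; the test names below are illustrative, not real Tempest tests:

```python
import re

# The filter passed to `ostestr --regex`: a negative lookahead rejects names
# carrying a [...slow...] tag, then the anchored group selects only tests
# under tempest.api or tempest.scenario.
pattern = re.compile(r'(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))')

tests = [
    "tempest.api.compute.test_servers.ServersTest.test_list",
    "tempest.scenario.test_network_basic_ops.TestNetworkBasicOps[compute,network]",
    "tempest.scenario.test_stress.StressTest[slow]",        # tagged slow: excluded
    "tempest.thirdparty.test_foo.FooTest.test_bar",          # wrong prefix: excluded
]

selected = [t for t in tests if pattern.search(t)]
# Only the first two names survive the filter.
```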

To run specific tests:

$ python -m testtools.run tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON

OR

$ ostestr --pdb tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON

ostestr --pdb calls python -m testtools.run under the hood.

Thanks to Luigi, Steve, Daniel, Javier, Alfredo and Alan for the review.

Happy Hacking!
Source: RDO

Direct Link facilitates secure and scalable hybrid clouds

Many organizations use cloud services to reduce information technology (IT) costs and take advantage of new business opportunities.
For enterprises that have traditionally built and operated (or leased) their own IT infrastructure, the proliferation of mobile and web applications, ever-increasing sources of data, and easy access to advanced cloud analytics have created an application economy. In it, business success depends on building cost-optimized hybrid cloud architectures that combine those private IT resources with cloud services purchased from cloud service providers (CSPs).
Hybrid cloud architectures span several high-level operational capabilities including service and operations management; backup, archive and recovery; disaster recovery and business continuity; cloud services brokerage; and hybrid cloud connectivity.
Of these capabilities, hybrid cloud connectivity describes the combined simultaneous use of public and/or private clouds provided by IT resources hosted on premises, in a co-located facility, and/or off premises in a CSP’s facility. Hybrid cloud connectivity can occur at one or more layers at the same time, including the data layer, the services (or API) layer, the management layer, the network layer, or at an added “integration” layer. Examples of application environments where hybrid cloud connectivity is necessary include:

A legacy, monolithic application running in a private data center writing large amounts of data to cloud-based storage.
A cloud-native mobile application running off premises on a public cloud that analyzes data stored on premises due to residency requirements.
High-volume application programming interfaces (APIs) whose fulfillment requires rapid completion of complex orchestrations across on-premises and off-premises computing resources and data.

Optimal choice of hybrid cloud connectivity layers depends on several factors unique to a particular application environment, chief among them security, performance and scalability. For organizations with existing IT facilities that are migrating to a hybrid cloud environment and typically require enterprise-level scalability, the necessary performance and security are best achieved by forming network-layer connections between on-premises and off-premises IT resources. Since the advent of computer networking, the Internet and, recently, hybrid cloud computing, this has routinely been achieved through layer 2/3 connections between network elements such as routers, firewalls and gateways, with controlled routing and/or switching policies between them. At IBM, we offer network-layer hybrid cloud connectivity through Direct Link.
Direct Link is offered via IBM SoftLayer, the IBM infrastructure-as-a-service (IaaS) platform. It allows IBM Cloud customers to connect their wide-area network (WAN), colocation environment or cloud exchange provider directly to the cloud through IBM’s global points of presence (PoPs). Designed to create secure, worldwide extensions for private networks, this connectivity option serves as a scalable, high-performance alternative to forming hybrid clouds using site-to-site tunnels (IPSec, PPTP) or application-to-application tunnels (SSL) over the public internet.
A dedicated fiber connection (one or 10 Gigabits per second) connects the customer’s service equipment (provided by the customer or the telecom carrier) and the network equipment located in an IBM PoP. Once dedicated physical connections are established, routing and/or tunneling policies must be created to ensure secure separation of customers’ traffic on the IBM global network and to differentiate between privately and publicly accessible computing resources. I’ll discuss the connectivity options for Direct Link in more detail in my next blog post.
Building a secure, high-performance, hybrid cloud environment doesn’t have to be complicated. IBM helps make it easy with Direct Link. While there are other options for integrating private, public and hybrid clouds, Direct Link stands out as the hybrid cloud connectivity option that provides the security, price and performance characteristics required to operate hybrid clouds economically at enterprise scale.
To learn more about Direct Link and other features and technology available with IBM SoftLayer, check out our Cloud How-To webcast series.
Source: Thoughts on Cloud

Cognitive computing is driving ROI on cloud

A driving force in cloud adoption is the cost savings gained by eliminating physical servers and migrating applications and data into the cloud. This reduces up-front capital expenditures (CAPEX) and enables budget planning based on operating expenditures (OPEX) and on-demand service provisioning.
Undoubtedly, this propels pricing competition among major cloud service providers, who are striving to reduce their price per virtual machine (VM); the reduction averages 10 percent a year, or roughly 5 cents an hour per virtual instance. The continuous price reductions enabled by multi-tenant public cloud have contributed to the exponential growth of the cloud market over the past five years.
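To put that figure in perspective, a 10 percent annual cut compounds over time. A short sketch, where the starting hourly price is a nominal assumption rather than a quoted provider rate:

```python
# Illustrative compounding of a ~10 percent annual price decline.
# The starting price is an assumption for the sketch, not a quoted rate.
start_price = 0.05       # assumed $/hour per virtual instance
annual_decline = 0.10    # 10 percent reduction per year

def price_after(years: int) -> float:
    """Hourly VM price after compounding the annual decline."""
    return start_price * (1 - annual_decline) ** years

# After five years of 10% yearly cuts, the price falls by about 41 percent.
print(round(price_after(5), 4))  # 0.0295
```

Even at that pace, the savings flatten out, which is the article's point: price cuts alone are a diminishing source of competitive advantage.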
However, from an enterprise user perspective, is the velocity of cost savings enough to be highly competitive in the long term? The convergence of mobile, social, Internet of Things (IoT), cognitive computing and cloud has already disrupted industries by creating innovative solutions around smart cars, smart buildings, smart health care and smart education.
App-based ride sharing companies are now multi-billion-dollar concerns. How can existing taxi service providers around the world sustain their business? Obviously, cost savings from cloud migration is not enough, but they may be compelled to create innovative solutions using cognitive capabilities on cloud.
“Cognitive on cloud” refers to cognitive services running in the cloud and that are available to be consumed via representational state transfer (REST) APIs. These services are available as part of platform-as-a-service (PaaS) offerings such as Bluemix so they can be easily bound to an application while coding.
For example, cognitive analytics such as voice (tone analyzer, speech-to-text) and video (face detection, visual recognition) capabilities enable users to analyze petabytes of unstructured data generated by mobile devices every day.
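As a hypothetical sketch of what consuming such a service looks like, the snippet below assembles a REST call for a tone-analysis-style API. The endpoint, payload shape and credentials are invented for illustration, not an actual Watson API:

```python
# Hypothetical sketch: endpoint, credential scheme and payload are invented,
# not the real interface of any IBM Watson service.
import json

SERVICE_URL = "https://example-cloud.invalid/v1/tone"  # placeholder endpoint

def build_tone_request(text: str, api_key: str) -> dict:
    """Assemble the pieces of an HTTP POST for a tone-analysis-style service."""
    return {
        "method": "POST",
        "url": SERVICE_URL,
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        "body": json.dumps({"text": text}),
    }

req = build_tone_request("Our customers love the new app!", "demo-key")
```

An application would hand a request like this to any HTTP client; in a PaaS environment such as Bluemix, binding the service to the application typically supplies the URL and credentials.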
Developing cognitive applications to run on mobile devices will provide new insights and help organizations create totally new revenue streams.  The convergence of cognitive computing and cloud is driving this cognitive-oriented digital economy. The potential return is seemingly unlimited.

From a return-on-investment (ROI) perspective, suppose two organizations are each planning their cloud adoption. One picks the lowest-priced cloud service provider based on cloud commodity services and total cost of ownership (TCO) in the cloud (Org 1 in the chart). The other picks a cloud service provider using one more evaluation criterion in addition to the TCO comparison: cognitive capability (Org 2 in the chart).
As shown in the chart above, by leveraging cognitive capabilities in the cloud, Org 2 will achieve a higher ROI through continuous innovation. The difference between the two organizations is that Org 2 sees cloud as an innovation platform for high ROI rather than merely a route to cost savings based on a TCO comparison.
Continuous innovation will also include added values, such as IoT and blockchain powered by the cloud. However, the return may not be immediately quantifiable. Therefore, the gap between the two organizations can be enormous depending on how effectively an organization creates innovative solutions via cognitive applications and other value-added services.
When it comes to selecting the right service provider, ROI on cloud requires more than just a TCO comparison based on the number of VMs, storage capacity, hypervisor, operating system and so on. In addition to this basic analysis, an organization must consider whether a cloud is cognitive-enabled at the PaaS layer. Achieving a high ROI requires a cognitive-enabled cloud as a foundation. More than 30 IBM Watson services on cloud, with a supporting foundation, have become key to solutions for organizations across many industries.
Learn more about cognitive cloud and Watson services at IBM World of Watson.  
Source: Thoughts on Cloud