Majesco teams with IBM to bring cognitive cloud to insurance

The insurance business is all about anticipating and preparing for what may happen in the future.
That could be why Majesco, which provides core insurance software, consulting and services for providers across the world, has formed a five-year partnership with IBM to offer a cognitive, cloud-based platform to give insurers the power to develop new customer services that use predictive analytics.
Here’s how SD Times described what will be on offer:
IBM will contribute Watson and other cognitive application programming interfaces (APIs) that will run on IBM Cloud. This will allow insurance companies to better analyze, price and understand business risks using new data sources and add an engaging and personalized advisory interface to their services.
Having these cognitive abilities at hand to analyze customer data will help reduce risks, improve pricing and increase efficiency.
Another piece of the joint offering is a secure, global incubator which will enable insurance companies to develop and launch cognitive, cloud-based products and services.
For more about the partnership between Majesco and IBM, check out the full SD Times article.
The post Majesco teams with IBM to bring cognitive cloud to insurance appeared first on news.
Quelle: Thoughts on Cloud

Bringing cloud storage into the hybrid era

An interesting anomaly has quietly evolved in large enterprises in recent years. As organizations have rightly come to revere the cloud as the path to true scalability, efficiency, and economics, one corner of it — storage — has been left to wallow in the silos of yesteryear, with limited advances or innovation.
Researchers continue to predict that the digital universe will expand at a rapid rate, topping 44 zettabytes by 2020 (that’s over 44 billion terabytes), up from 1.8ZB just five years ago.
To help cope with this growth, enterprises are turning to hybrid cloud computing, which delivers management and support of on-premises and cloud-based computing infrastructures and applications. In fact, such obvious, tangible benefits as improved scalability, IT flexibility, and economics have most researchers predicting continued growth for the hybrid cloud market, with one estimating an annual growth rate of 27 percent.
Storage, however, has not experienced the same advances. When it comes to data storage, organizations have been left to manage their growing data volumes much as they have been for the past 10 years — either on in-house devices, or in cloud storage, but not together.
Until now.
The release of IBM Cloud Object Storage last week enables organizations for the first time to scale large unstructured data volumes across on-premises storage systems, public or private clouds, or hybrid combinations of the two. Enterprises finally have, at the data storage level, the capabilities they’ve leveraged with hybrid cloud computing: dramatic increases in flexibility and availability, and better economics.
For the first time, they will be able to quickly and easily store, manage and access their object data, which some researchers say makes up 80 percent of the digital universe, across their hybrid clouds.
But it doesn’t stop there. We’ve gone to great lengths to bring advanced capabilities to the world of hybrid cloud storage, like encrypted data slicing. With this innovation, the system automatically breaks up incoming data into slices and stores the different pieces across geographically dispersed systems. As a result, the data is not only secure, but available in the event of a breach, hack or natural disaster.
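To make the dispersal idea concrete, here is a toy sketch in Python: the data is split into slices plus a single XOR parity slice, so any one lost slice can be rebuilt from the survivors. This is an illustration of the principle only; the actual IBM Cloud Object Storage system uses proper erasure coding with encryption across many geographically dispersed slices, and every name below is hypothetical.

```python
import functools
import operator

def make_slices(data: bytes, k: int = 3) -> list:
    """Split data into k equal-length slices plus one XOR parity slice."""
    step = -(-len(data) // k)  # ceiling division
    slices = [data[i * step:(i + 1) * step].ljust(step, b"\x00") for i in range(k)]
    # Parity byte for each column is the XOR of that column across all slices.
    parity = bytes(functools.reduce(operator.xor, column) for column in zip(*slices))
    return slices + [parity]

def recover_slice(slices: list, lost_index: int) -> bytes:
    """Rebuild one lost slice by XORing all the surviving slices together."""
    survivors = [s for i, s in enumerate(slices) if i != lost_index]
    return bytes(functools.reduce(operator.xor, column) for column in zip(*survivors))

slices = make_slices(b"hello world!")          # 3 data slices + 1 parity slice
rebuilt = recover_slice(slices, lost_index=1)  # pretend slice 1 was lost
print(rebuilt)  # b'o wo'
```

With each slice stored in a different region, losing any single site leaves the data both recoverable and unreadable in isolation, which is the property the paragraph above describes.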
As the hybrid cloud platform advances ever more quickly, and adoption rates climb, it was about time that storage was brought along for the ride. Enterprises around the world will reap the benefits.
_______________________________
Related Stories:
Why Business Shouldn’t Settle for Just Any Storage – Thoughts on Cloud
Beyond Four Walls: Rocketing Into Hybrid Cloud – In the Making
Partner Perspectives:
Hybrid Matters – Panzura
A Platform for Hybrid Cloud Enterprise Services – CTERA
IBM Cloud Object Storage – Nasuni
IBM COS: The Foundation of the Digital & Cognitive Eras – Mark III Systems

A version of this post originally appeared on the THINK blog.
The post Bringing cloud storage into the hybrid era appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

The Business Value of Red Hat CloudForms

When talking to IT leaders about Red Hat CloudForms, we often point out the time and cost savings that CloudForms can have on their organization. While we have several customer success stories that highlight the various benefits of CloudForms to each organization, we wanted a more formal study of the business value that CloudForms could bring to an organization. To that end, Red Hat commissioned a study, conducted by IDC, to look at the business value of CloudForms. This blog post will highlight some of their findings, with IDC’s complete report available for review.
For this study, IDC conducted in-depth interviews with seven organizations, of various sizes and from a variety of industries, that had deployed CloudForms. The organizations were running application workloads across physical, virtualized, and private cloud environments, with some also using public cloud or container-based environments. All of the organizations cited the pressure to manage these diverse environments as a key reason to implement CloudForms. The interviews sought to uncover both quantitative and qualitative impacts that could be traced to the implementation of CloudForms. Let’s take a closer look at some of IDC’s findings.
Increasing IT Agility
In most organizations, responding to IT service requests requires significant staff time and can take weeks to deliver on a request. This makes the IT operations team a bottleneck for new initiatives required by the business. Organizations in the study showed a significant increase in their ability to deliver on service requests in a more timely manner. On average, they took 89% less calendar time to deliver on requests and requests required 92% less staff time. This improvement has allowed IT organizations to increase the number of requests that they can process, handling more than 20 times more requests on average. This in turn has allowed the IT organization to become more agile, supporting application development teams in their bid to accelerate application development.

Accelerating Application Delivery
In fact, IDC found that CloudForms can have a dramatic impact on application development teams and their ability to support the business. The efficiency CloudForms brings to the service delivery process translates into greater productivity for application development teams because they spend less time waiting for resources. Organizations using CloudForms were able to reduce the time to deliver a new application by 37% and almost double the number of new applications delivered in a year (93% more on average). Delivering more applications, faster, should be what every business strives for today.

Improving Business Results
Increasing the application development team’s efficiency and ability to deliver means that the business can respond faster and more effectively to changes in the business environment. IDC found that CloudForms contributed in this area as well. Approximately half of the organizations in the study reported that they were generating more revenue with CloudForms than without. The average revenue increase was $3.85 million per year, a result of the businesses being able to respond more quickly to customer demands. It’s not often that you find an IT operations solution that can contribute to your organization’s top line.
Business Value of CloudForms
These are just some of the key highlights. IDC collected all of the benefits they found and categorized them into four main areas: IT staff productivity benefits, business productivity benefits, risk mitigation, and IT infrastructure cost reductions. When summed up, they found that, on average, organizations could realize $3.46 million in benefits per year (or $11,937 per 100 users per year). And when factoring in costs, they found that CloudForms offered an ROI of 436% and a payback period of only 8 months.
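As a sketch of how ROI and payback figures of this kind relate arithmetically, here are the standard formulas in Python. The dollar inputs below are hypothetical, chosen only to be consistent with the reported 436% and 8 months; they are not IDC's actual cost and benefit figures.

```python
def roi_percent(total_benefits: float, total_costs: float) -> float:
    """Return on investment: net benefit as a percentage of total cost."""
    return (total_benefits - total_costs) / total_costs * 100

def payback_months(total_costs: float, monthly_benefit: float) -> float:
    """Months until cumulative benefits cover the total costs."""
    return total_costs / monthly_benefit

# Hypothetical numbers chosen only to show the arithmetic:
print(round(roi_percent(536.0, 100.0), 1))   # 436.0 -> "436% ROI"
print(payback_months(100.0, 12.5))           # 8.0   -> "8-month payback"
```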

This report has given us new insights into the benefits that CloudForms can have on organizations, beyond just the IT operations team. It has also provided a more methodical study of the benefits. I invite you to take a look at the complete report and determine how CloudForms can benefit your organization.
Quelle: CloudForms

Securing Kubernetes

Security is a complicated topic. Unlike in the wild west of computing that exists outside OpenShift/Kubernetes, if you take the time to learn the constructs provided by the platform, you can feel comfortable having a security conversation with almost anyone. I’ll try to take you through the basics.
Quelle: OpenShift

Not born on the cloud yesterday: Easing into continuous deployments with blueprints

As traditional IT enterprises embrace the cloud to handle continuous delivery, they face challenges posed by their existing legacy systems handling complex applications and environments. For example, mainframe-based systems (typically located within the firewall) often have tighter restrictions on security and data management, which can lead to slower iteration cycles.
According to a recent ADT Mag survey of IT executives, nearly two-thirds of respondents are integrating legacy applications with new mobile or front-end applications. Not surprisingly, managing complex environments was the top challenge when deploying applications that touch both legacy and new systems.
Check out this summary video of the main findings.
As these companies look at adopting some of the benefits offered by the cloud, they might want to start slowly to help address the differences in speeds between different environments. This is otherwise referred to as “multi-speed IT,” where user-facing (mobile, web) development is often executed at a greater velocity than traditional development on mainframe and database systems. This might be an impediment to some, but for those who embrace hybrid cloud, it’s a challenge easily overcome.
When it comes to continuous deployment, companies can achieve the most success in the early stages of adopting the cloud by moving some development and test efforts to the cloud. There, they can get early feedback on an application in a cloud environment that can be easily created and torn down. This can prevent the common backlog in which development teams need operations teams to create or customize an environment to address a new development feature.
During the later stages of delivery — whether it’s Q/A, staging or production — companies can still rely on traditional IT resources or more secure hybrid or on-premises cloud solutions. One downside of this approach is maintaining consistency in the infrastructure as the application progresses through the stages of the delivery cycle, whether you are using cloud or different virtualization environments.
One of the newest solutions to the problem of handling cloud complexity is the UrbanCode Deploy Blueprint Designer. A cloud blueprint is a document that describes the full stack of both the application content and the infrastructure as a service (IaaS) environment required to provision the application. Check out this article on the developerWorks blog about Blueprint Designer to find out more.
Many of our enterprise customers are using blueprints to develop their infrastructure and application layers across different cloud environments. They can quickly use a blueprint to deploy, test and destroy environments as needed. UrbanCode Deploy Blueprint Designer is especially useful for customers who want to get early feedback in their development process on their latest application updates, leveraging the notion of “infrastructure as code” provided through the blueprint designer to create, provision, and manage cloud environments for their applications.
Consider this use case from one of our enterprise customers: User error resulted in the inadvertent deletion of a large portion of the customer’s data center – around 250 configured virtual machines. Fortunately, since this infrastructure had been initially created with cloud blueprints, the customer was able to simply re-provision the blueprints to rebuild its data center, and all its VMs were back online within a day.
The benefits of the UrbanCode Deploy Blueprint Designer include:

Portability, in that the same blueprint can be created once and provisioned with minor configuration changes across different clouds. Today, supported cloud providers include OpenStack (including IBM Blue Box), IBM SoftLayer, VMware vCenter, Microsoft Azure and Amazon Web Services.
Optimization for OpenStack using the Heat (HOT) language, an open-source, industry-standard format for orchestrating infrastructure and applications.
Support of “infrastructure as code” through blueprint versioning via built-in integration with Git.
Support for multiple software deployment tools, including UCD and Chef, or use of existing automation scripts in a blueprint.
A rich graphical editor that lets you drag and drop infrastructure components such as virtual machines, storage volumes and networks, drop in application components, and build a blueprint in an easy-to-use interface.
Composite blueprints that enable the separation of roles within an organization by allowing different teams to create blueprints for their area of specialty (compute, networking, application), and combine them into a single, deployable blueprint.

While complexity is still the biggest challenge facing enterprises today, a cloud blueprint from UrbanCode Deploy can help organizations with legacy applications take those first steps toward the cloud.
The post Not born on the cloud yesterday: Easing into continuous deployments with blueprints appeared first on news.
Quelle: Thoughts on Cloud

All about OpenStack and why it matters

I recently attended a week of presentations about OpenStack, looking to get a deeper understanding of what it is all about and why it’s important.
Here’s what I learned. An OpenStack cloud is driven by a collection of open source software that allows organizations to build and manage cloud environments via the OpenStack API. The underlying software comprises core services (such as Nova, Neutron and Cinder) and optional services (such as Horizon, Magnum and Heat).
Like much open source software, you simply deploy what you need and nothing more. Since the software is open source, it’s driven by a large community of developers and maintainers, a number of whom are fellow IBMers I was lucky enough to meet and hear from during the week.
OpenStack is an important part of the cloud landscape, since it provides a common, open standard through its API and allows portability between cloud environments.
The move to hybrid cloud
In a world where businesses are increasingly seeing the value of hybrid cloud, a common API that enables the IT department to deploy services across a number of suppliers and manage them from a single pane of glass is imperative. One of the biggest mind shifts hybrid cloud has introduced is that all of an organization’s IT eggs no longer need to be in one basket, as was often the case with the traditional hosting services model. Instead, the enterprise can host and manage its IT in the locations that derive the greatest business value, however the organization chooses to measure it.
Consistent deployment and management tooling is key to successfully maintaining a hybrid cloud. An IT department that needs to develop multiple orchestration scripts to deploy even a simple service, let alone complex ones, will soon become inefficient. Nobody wants to deploy multiple management and monitoring systems.
The OpenStack API has become a standard for the industry because it enables tooling to access multiple environments and allows the IT department to take a “build once, deploy anywhere” approach. Avoiding lock-in is important to the enterprise, and it’s good for the cloud industry as a whole. It encourages innovation and competition among providers, who can demonstrate that switching will give a competitive edge. Open standards make this possible.
IBM is committed to open source software, and IBM developerWorks Open is now officially one year old. Learn more about how IBM is helping to grow OpenStack and other important open source projects.

The post All about OpenStack and why it matters appeared first on news.
Quelle: Thoughts on Cloud

Total Cost of Ownership: AWS TCO vs OpenStack TCO Q&A

The post Total Cost of Ownership: AWS TCO vs OpenStack TCO Q&A appeared first on Mirantis | The Pure Play OpenStack Company.
Last month, Amar Kapadia led a lively discussion about the Total Cost of Ownership of OpenStack clouds versus running infrastructure on Amazon Web Services.  Here are some of the questions we got from the audience, along with the answers.
Q: Which AWS cost model do you use? Reserved? As you go?
A: Both. We have a field that can say what % are reserved, and what discount you are getting on reserved instances. For the webinar, we assumed 30% reserved instances at 32% discount. The rest are pay-as-you-go.
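The blended rate described in this answer is straightforward to compute. The sketch below assumes the webinar's figures of 30% reserved instances at a 32% discount; the function and parameter names are illustrative, not taken from the actual calculator.

```python
def blended_hourly_cost(on_demand_rate: float,
                        reserved_fraction: float = 0.30,
                        reserved_discount: float = 0.32) -> float:
    """Blend reserved and pay-as-you-go pricing into one effective rate."""
    reserved_part = reserved_fraction * on_demand_rate * (1 - reserved_discount)
    on_demand_part = (1 - reserved_fraction) * on_demand_rate
    return reserved_part + on_demand_part

# Under these assumptions, a $1.00/hr on-demand instance costs
# about $0.904/hr on a blended basis:
print(round(blended_hourly_cost(1.00), 3))  # 0.904
```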
Q: How does this comparison look when considering VMware’s newly announced support for OpenStack? Is that OpenStack support with VMware only with regards to supporting OpenStack in a “Hybrid Cloud” model? Please touch on this additional comparison. Thanks.
A: In general, a VMware Integrated OpenStack (VIO) comparison will look very different (and show a much higher cost) because they support only vSphere.
Q: Can Opex be detailed as per the needs of the customer? For example, if he doesn’t want an IT/Ops team and datacenter fees included, as the customer would provide their own?
A: Yes, please contact us if you would like to customize the calculator for your needs.
Q: Do you have any data on how Opex changes with the scale of the system?
A: It scales linearly. Most of the Opex costs are variable costs that grow with scale.
Q: What parameters were defined for this comparison, and were the results validated by any third party, or just by user/organisation experience?
A: Parameters are in the slide. Since there is so much variability in customers’ environments, we don’t think a formal third party validation makes sense. So the validation is really through 5-10 customers.
Q: How realistic is it to estimate IT costs? Size of company, size of deployment, existing IT staff (both firing and hiring), each of these will have an impact on the cost for IT/OPs teams.
A: The calculator assumes a net new IT/OPS team. It’s not linked to the company size, but rather the OpenStack cloud size. We assume a minimum team size of about 3.5 people and linear growth after that as your cloud scales.
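The staffing model described here — a floor of about 3.5 people, then linear growth — can be sketched as follows. The `nodes_per_admin` ratio is a made-up figure for illustration, not a number from the Mirantis calculator.

```python
def ops_team_size(cloud_nodes: int,
                  base_team: float = 3.5,
                  nodes_per_admin: float = 100.0) -> float:
    """Minimum team of ~3.5 people, with linear growth as the cloud scales.

    nodes_per_admin is a hypothetical ratio chosen for illustration only.
    """
    return max(base_team, cloud_nodes / nodes_per_admin)

print(ops_team_size(60))    # 3.5  (small cloud: the minimum team applies)
print(ops_team_size(1000))  # 10.0 (linear growth has taken over)
```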
Q: Shouldn’t sparing add more to the cost, since you will need more hardware for high availability (HA)?
A: Yes, sparing is included.
Q: AWS recommends using 90% utilization, and if you are using 60%, it’s better to downgrade the VM to ensure 90% utilization. In the case of provisioning 2500 VMs with autoscaling, this should help.
A: Great point, however, we see a large number of customers who do not do this, or do not even know what percentage of their VMs are underutilized. Some customers even have zombie VMs that are not used at all, but they are still paying for them.
Q: With the hypothesis that all applications can be “containerized”, will the comparison outcomes remain the same?
A: We don’t have this yet, but a private cloud will turn out to have a much better TCO. The reason is that we believe private clouds can run containers on bare metal, while public clouds have to run containers in VMs for security reasons. So a private cloud will be a lot more efficient.
Q: This is interesting. Can you please add replication cost? This is what AWS does free of cost within an availability zone. In the case of OpenStack, we need to take care of replication.
A: I assume you mean for storage. Yes, we already include a 3x factor to convert from raw storage to usable storage, to account for (3-way) replication.
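The raw-to-usable conversion mentioned in this answer is simple arithmetic; here is a minimal sketch (the function names are illustrative, not from the calculator):

```python
REPLICATION_FACTOR = 3  # 3-way replication, as in the answer above

def raw_storage_needed(usable_tb: float) -> float:
    """Raw capacity you must provision to get a given usable capacity."""
    return usable_tb * REPLICATION_FACTOR

def usable_storage(raw_tb: float) -> float:
    """Usable capacity remaining after 3-way replication."""
    return raw_tb / REPLICATION_FACTOR

print(raw_storage_needed(360.0))  # 1080.0 (e.g., 360 TB usable needs 1080 TB raw)
```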
Q: Just wondering how secure is the solution as you have mentioned for a credit card company? AWS is PCI DSS certified.
A: Yes this solution is PCI certified.
Q: Has this TCO calculator been validated against a real customer workload?
A: Yes, 5-10 customers have validated this calculator.
Q: Do you think that these costs apply to other countries, or is this US based?
A: These calculations are US based. Both AWS and private cloud costs could go up internationally.
Q: Hi, thank you for your time in this webinar. How many servers (compute, controller, storage) are you using, and which model do you use for your calculations? Thanks.
A: The node count is variable. For this webinar, we assumed 54 compute nodes, 6 controllers, and 1080GB of block storage. We assumed commodity Intel and SuperMicro hardware with 3 year warranty.
Q: Can we compare different models, such as AWS vs VMware private cloud/public cloud with another vendor (not AWS)?
A: These require customizations. Please contact us.
Quelle: Mirantis

Next week in Barcelona

Join us next week in Barcelona for OpenStack Summit. We’ll be gathering from around the world to celebrate the Newton release, and plan for the Ocata cycle.

RDO will have a table in the Red Hat booth, where we’ll be answering your questions about RDO. And we’ll have ducks, as usual.

On Tuesday evening, join us for an evening with RDO and Ceph, with technical presentations about both projects, as well as drinks and light snacks.

And, throughout the week, RDO enthusiasts are giving a wide variety of talks about all things OpenStack.

If you’re using RDO, please stop by and tell us about it. We’d love to meet you, and find out what we, as a project, can do better for you and your organization.

See you in Barcelona!
Quelle: RDO