Making an automation platform stronger through technical partnership

Automation is at the forefront of the next wave of digital transformation. Robotic process automation (RPA) is one of the key technology contributors with its ability to quickly introduce automation and achieve business value through increased efficiency and employee productivity at low cost and with near-zero risk.
Analysts who observe the RPA market see companies using RPA to achieve desired business outcomes as they digitally transform with the help of automation. Three conditions are driving this investment:

Some companies are deploying RPA bots in attended and unattended modes to automate front-office and back-office tasks, so they need an RPA platform that offers both capabilities.
Some companies are deploying larger transformational automation initiatives and need an enterprise-class RPA solution that is scalable and able to meet any security and compliance requirements.
Some companies have limited IT resources and want a no-code RPA solution that can be easily deployed by business users without significant involvement from IT.

To address these drivers, we’ve been enabling the IBM digital business automation platform to interoperate with more RPA solutions to give clients freedom of choice to execute according to their objectives. Our newest collaboration is with UiPath, a market-leading RPA vendor.
IBM and UiPath have jointly developed API connectors, which will help companies seamlessly integrate the UiPath RPA task automation technology with the IBM Digital Business Automation platform’s low-code tools. This technical collaboration will help customers digitize their operations and drive new efficiencies.
By integrating the core RPA technology of UiPath with IBM Digital Business Automation capabilities, customers can get the benefits of compliance and scale without any technical integration efforts. We wanted to ensure that technical skill gaps didn’t get in the way of building enterprise-class automation applications.
To see how this integration works, watch the following video showing how an insurance company can automate claims processing by using UiPath RPA along with IBM automation capabilities such as Business Automation Workflow, Operational Decision Manager and Datacap.

The above demo combines UiPath RPA with automation capabilities from the IBM Digital Business Automation platform so that organizations can integrate all of the elements — people, systems, content and bots — into one seamless process. This level of collaboration can shorten project lifecycle times, accelerate time to value for automation investments and boost return on investment.
Freedom of choice is a good thing, especially when it comes to RPA platforms. Schedule a no-charge consultation session to unlock greater value from your UiPath RPA investment.
The post Making an automation platform stronger through technical partnership appeared first on Cloud computing news.
Source: Thoughts on Cloud

Reaching for the Stars with Ansible Operator

In this post I will show you how to use Roles published to Ansible Galaxy as an Operator to manage an application in Kubernetes. Reusing a Role in this way provides an example of how to create an Operator that simply installs an application with the flexibility to expand and customize the behavior organically as […]
The post Reaching for the Stars with Ansible Operator appeared first on Red Hat OpenShift Blog.
Source: OpenShift

6 lesser-known hybrid cloud benefits

The most commonly talked-about hybrid cloud benefits are flexibility and control.
A hybrid environment gives companies the freedom to store some of their data and applications in a low-maintenance public cloud while entrusting other resources to a more closely managed private cloud. While this perk remains the most important selling point, there are several other advantages that a hybrid cloud strategy can offer over a public or private cloud on its own.
According to an IBM Institute for Business Value report, 98 percent of organizations plan to adopt multicloud architectures, which often include a hybrid component, by 2021. If your organization is among the many considering the transition to hybrid cloud, it might be helpful to learn what other advantages a hybrid cloud can offer so you can start reaping the rewards sooner rather than later.
Here are six lesser-known hybrid cloud benefits you may not have considered but should.
1. Network optimization.
Using hosted cloud services gives you the opportunity to move your network processes off-premises, improving the availability and reliability of the connection. By tapping into the cloud, you can also use real-time analytics to help prevent system outages.
Additionally, with load balancing, you can improve latency across geographic locations. This ability could be very relevant for organizations with multiple office branches because it allows you to deliver infrastructure from a location that’s as close as possible to the user rather than a central data center.
2. Other cost savings.
General cost savings may not sound so surprising. Cost-efficiency in relation to capital expenditure and paying only for what you need have long been considered hybrid cloud benefits.
However, there are other, lesser-known cost advantages a hybrid cloud can provide, such as operations and maintenance savings. With fewer on-premises servers, your energy consumption and energy bills will be lower. Less on-premises management can also lead to cost savings in terms of employee time, allowing those hours to be spent on tasks that generate more revenue.
3. Risk management and security.
There are a few ways a hybrid cloud can help you manage risk and improve security. For example, a hybrid model lets you test a few noncritical workloads before moving more critical applications to the cloud.
You can also keep sensitive data on-premises on a dedicated infrastructure. This gives you more control and can increase your network security with a direct connection instead of a public connection. Having a more varied cloud environment helps you avoid vendor lock-in, so it’s easier to migrate to another provider if and when you need to.
4. DevOps.
You may want both off-premises and in-house infrastructure for development and test workloads. For example, an in-house model may be ideal for established applications that require high utilization and data bandwidth, which could make the public cloud a less-than-ideal situation.
Due to their elastic nature, many development and test workloads can benefit from being in a hosted cloud so developers can scale capacity to match demand and only pay for what they use. In a cloud environment, developers can also use cloud-based containers, which facilitate greater portability from one data center to another in the event of a system failure. This limits the potential for disruption for app users and helps facilitate faster app development.
A hosted cloud environment also gives developers the ability to create infrastructure-independent code to optimize workload placements and deliver a more consistent user experience.
5. Backup and disaster recovery.
Hybrid cloud implementation can help organizations cost-effectively use cloud resources for backup and disaster recovery. Cloud service providers offer managed backup and recovery solutions as part of their standard offerings, saving the cost and time that organizations often spend on these functions in-house.
A hybrid cloud can also improve the overall performance and reliability of a disaster recovery solution, especially when compared with traditional methods such as restoring from tapes. In organizations where any recovery downtime is unacceptable, some cloud service providers can even provide a zero-downtime recovery environment.
6. Innovation.
A hybrid cloud can encourage and support greater innovation because of the flexible environment it provides. It also reduces the risks associated with innovation.
Rather than expending significant capital on the hardware and software, you can create an application in the cloud, test it, release it and measure its success. Pay-as-you-go pricing for test projects without any long-term commitment gives you more room to experiment with less skin in the game. The more you attempt innovation, the more likely you are to succeed.
With all these benefits of moving to a hybrid cloud strategy, the odds of a good outcome are stacked in your favor. However, one of the most critical parts of realizing these advantages will be selecting the right cloud service provider.
Not all providers are created equal, so it’s important to find an experienced partner that can meet your specific use cases and support the advantages that are most important to your organization.
Register to download your free guide on the next generation of cloud operation models.
The post 6 lesser-known hybrid cloud benefits appeared first on Cloud computing news.
Source: Thoughts on Cloud

IBM Cloud revenue up 12 percent in 2018

IBM this week announced 2018 fourth quarter earnings that outpaced analyst expectations, including total cloud revenue for the year of $19.2 billion. That’s a 12 percent increase over IBM Cloud revenue in 2017.
The New Economy reported, “The optimism surrounding the Q4 earnings report owes much to IBM’s cloud computing business, which along with social, mobile and analytics made up half of the company’s revenue in 2018.”
In the first few weeks of 2019, IBM Cloud has announced several client agreements and milestones that continue its momentum, including:

A just-announced, $260 million, multi-year services agreement with the Bank of the Philippine Islands by which IBM will provide IT infrastructure services to support an agile IT and hybrid cloud, as well as digital development capabilities.
An agreement with France’s largest bank, BNP Paribas, to make IBM its premier cloud provider and increase its use of IBM Cloud services.
A $550 million agreement with Vodafone to provide managed services to Vodafone Business’ cloud and hosting unit.
A $325 million agreement with Juniper Networks to assist in the management of multiple cloud environments.
Red Hat shareholders voted to approve its proposed acquisition by IBM.

The New Economy quoted KeyBanc Capital Markets’ analyst Arvind Ramnani as saying the planned Red Hat acquisition “propels IBM as a leading cloud provider”.
For more on IBM fourth quarter earnings, read the full story at The New Economy.
The post IBM Cloud revenue up 12 percent in 2018 appeared first on Cloud computing news.
Source: Thoughts on Cloud

Why business leaders should master automation

For someone who bought a house 10 or 20 years ago, applying for a mortgage likely took around a month. Buyers typically had to complete lots of paper forms and field a few dozen calls while wondering when they’d be approved and what the interest rate would be.
Today, next-generation mortgage companies offer mobile apps that allow users to apply for a loan virtually anywhere and get approved very quickly. They’re built on a foundation that automates the heavily regulated process from start to finish. Customers often choose these apps for the convenience, immediacy and transparency made possible by the automation of alerts programmed into the system, the connections to data from credit-monitoring companies and lenders, the workflows designed to eliminate unnecessary steps, and more.
Multiplying opportunities with automation
At its core, automating work is about offering better experiences at speed and scale – for customers, employees, users and others. It’s about improving those multistage, multitask processes that when done manually are slow, costly and frustrating for both employees and customers.
Successful automation isn’t singular or always easy. Big success doesn’t come from one project or solution. It’s an iterative process that adapts to changing markets and business goals. Successfully automated companies often share the following traits:

They focus on the needs of their customers.
They recognize the importance of scalability while still being able to offer each customer a personalized experience.
They keep the automation system as flexible as they can by building on an extensible platform and ensuring alignment between business and IT.
They’ve taken everything that can be made efficient and made it efficient.

Automation is more accessible now, permeating every type of work. Business leaders should master what automation can and can’t do for their company to thrive in a world where automation enables competitors to be easier and faster to work with.
Mastering automation at Think 2019: A “tsunami of innovation”
Automation at Think 2019 aims to bring attendees the latest in automation strategy and innovations. The event will be held in San Francisco from 12 through 15 February.
At a 2018 event in New York City, an IBM Automation client said that “the tsunami of innovation and initiatives erupting from the IBM Automation team is impressive”.
If you’re going to spend the time and money to attend an in-person technology event, it better be a tsunami of something, right? If not a tsunami, then a surfable wave of insights and experiences that can lead to something strategically important for your company and career.
For those interested in mastering automation, here are six can’t-miss things at Think:

Discover what others are doing to create competitive customer and employee experiences. Join key IBM customers and executives to learn how intelligence from aggregated data can be applied to core business operations to deliver notable customer experiences.
Join IBM executives Gene Chao and Mike Gilfix as they share their vision and viewpoints on intelligent automation and using artificial intelligence (AI) to innovate.
Go beyond the bot. Learn how the IBM view of digital labor integrates robotics and AI with all types of automation, including workflows, content capture and decision management, extending the benefits of robotic process automation (RPA) even further.
Explore how organizations can drive intelligent automation at scale and accelerate digital transformation with an integrated automation platform that makes it easier to create, deploy and manage intelligent automation.
Find out how to handle the full spectrum of workflows from highly scalable, structured work to ad hoc and case-based workflows in a solution tightly integrated with content sources.
Preview the roadmap. Find out what’s next in the evolution of IBM automation software technology.

Finally, don’t miss the opportunity to grow your skills. You can get advance demos of new products and attend hands-on labs on topics ranging from AI and RPA to workflow and data capture. Get certified. Go from basic to intelligent automation. All from the premier IBM technology event: Think 2019.
For more detail on session content and special events, take a look inside Automation at Think 2019 and register to attend Think 2019.
To stay connected throughout the year, sign up for the IBM Automation Insider newsletter.
The post Why business leaders should master automation appeared first on Cloud computing news.
Source: Thoughts on Cloud

Multi-node Kubernetes with KDC: A Quick and Dirty Guide

Kubeadm-dind-cluster, or KDC, is a configurable script that enables you to easily create a multi-node cluster on a single machine by deploying Kubernetes nodes as Docker containers (hence the Docker-in-Docker (dind) part of the name) rather than VMs or separate bare metal machines.  It even enables you to easily create multiple clusters on the same machine.
In this article we’ll look at how to use KDC and at some of the simple ways to configure it for more complicated use cases.
Deploying a multi-node Kubernetes cluster with KDC
At its core, deploying Kubernetes with KDC is a simple matter of downloading the script and executing it:
$ wget https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.1.0/dind-cluster-v1.13.sh
(You’ll notice that the script includes a version number that happens to match the latest version of Kubernetes. As you might have guessed, that’s no coincidence. KDC supports versions 1.10 through 1.13 of Kubernetes, and to change versions you simply need to change the script version. So to deploy Kubernetes 1.12 you would use dind-cluster-v1.12.sh instead of dind-cluster-v1.13.sh.)
Once you’ve got the script, make sure it’s executable, then run it:
$ chmod +x dind-cluster-v1.13.sh
$ sudo ./dind-cluster-v1.13.sh up
The script can take a few minutes to run. During that time, it’s performing several steps, including:

Pulling in the most recent DIND images
Running kubeadm init to create the cluster
Creating additional containers to act as Kubernetes nodes
Joining those nodes to the original cluster
Setting up CNI
Creating management, service and pod networks
Bringing up the Kubernetes dashboard for the new cluster

When it’s finished running, you will see the URL for the Dashboard, as in:

* Bringing up coredns and kubernetes-dashboard
deployment.extensions/coredns scaled
deployment.extensions/kubernetes-dashboard scaled
………………………..[done]
NAME          STATUS   ROLES    AGE     VERSION
kube-master   Ready    master   3m49s   v1.13.0
kube-node-1   Ready    <none>   2m32s   v1.13.0
kube-node-2   Ready    <none>   2m33s   v1.13.0
* Access dashboard at: http://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
You can then pull that up in your browser and see the brand new empty cluster.

You can also go ahead and work with the cluster from the command line.  First make sure to fix your $PATH; KDC downloads an appropriate version of kubectl for you and places it in the ~/.kubeadm-dind-cluster directory:
$ export PATH="$HOME/.kubeadm-dind-cluster:$PATH"
Then you can see the nodes in the cluster:
$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
kube-master   Ready    master   8m40s   v1.13.0
kube-node-1   Ready    <none>   7m23s   v1.13.0
kube-node-2   Ready    <none>   7m24s   v1.13.0
You can also see the actual Docker containers corresponding to the nodes:
$ sudo docker ps --format '{{ .ID }} - {{ .Names }} -- {{ .Labels }}'
c4d28e8b86d8 - kube-node-2 -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_final=1,mirantis.kubeadm_dind_cluster_runtime=
8009079bde24 - kube-node-1 -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_final=1,mirantis.kubeadm_dind_cluster_runtime=
39563d1fb241 - kube-master -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_final=1,mirantis.kubeadm_dind_cluster_runtime=

As you can see, with a single step you have created a three-node Kubernetes cluster. But what if you want a second cluster? Fortunately, since the nodes are just Docker containers, you can go ahead and create additional instances without them interfering with each other.
Creating multiple clusters with KDC
Creating an additional cluster is as straightforward as setting a new CLUSTER_ID and re-running the script.  For example:
$ sudo CLUSTER_ID="2" ./dind-cluster-v1.13.sh up

…………………[done]
NAME                    STATUS   ROLES    AGE     VERSION
kube-master-cluster-2   Ready    master   3m58s   v1.13.0
kube-node-1-cluster-2   Ready    <none>   2m43s   v1.13.0
kube-node-2-cluster-2   Ready    <none>   2m41s   v1.13.0
* Access dashboard at: http://127.0.0.1:32770/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
As you can see, you wind up with a completely separate cluster, with a completely separate dashboard.
You can also set the DIND_LABEL, as in:
$ sudo DIND_LABEL="edge_test" ./dind-cluster-v1.13.sh up
The advantage here is that you get a random CLUSTER_ID automatically, so you don’t have to worry about collisions. Also, while CLUSTER_ID must be an integer, DIND_LABEL can be a human-readable string.
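Since the identifier names the cluster’s containers, the same value presumably has to accompany any later command that operates on that cluster. A hedged sketch (exact behavior may vary by script version):

```shell
# Assumed: the script picks which cluster's containers to act on
# from CLUSTER_ID (or DIND_LABEL) in the environment.
sudo CLUSTER_ID="2" ./dind-cluster-v1.13.sh up      # start (or restart) cluster 2
sudo CLUSTER_ID="2" ./dind-cluster-v1.13.sh down    # stop cluster 2 only, keeping volumes
sudo CLUSTER_ID="2" ./dind-cluster-v1.13.sh clean   # remove cluster 2 entirely
```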
Customizing a KDC Kubernetes deployment
To change the behavior of the KDC script you just have to change various variables. To see the available variables, check the config.sh file, which you can find here: https://github.com/kubernetes-sigs/kubeadm-dind-cluster/blob/master/config.sh
We’ve already seen this in action when creating additional clusters:
$ sudo DIND_LABEL="edge_test" ./dind-cluster-v1.13.sh up
For example, to create a cluster with 5 nodes, you would use the NUM_NODES variable:
$ sudo NUM_NODES=5 ./dind-cluster-v1.13.sh up
Another variable you might want to change is the networking framework. By default, KDC bridges together the various containers, but you also have the option to use flannel, calico, calico-kdd, or weave.  For example, if you were to use calico, you would start your cluster with
$ sudo CNI_PLUGIN="calico" ./dind-cluster-v1.13.sh up
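Because these are ordinary environment variables, they should compose in a single invocation. For example, a sketch of a five-node, calico-backed cluster carrying a human-readable label:

```shell
# Combine several configuration variables in one run (values here are
# illustrative): 5 nodes, the calico CNI plugin, and a readable label
# to keep this cluster separate from others on the same machine.
sudo NUM_NODES=5 CNI_PLUGIN="calico" DIND_LABEL="edge_test" \
     ./dind-cluster-v1.13.sh up
```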
Of course, there’s one more thing we need to take care of: cleaning up.
Starting, stopping, and cleaning up
KDC also gives you the ability to stop, restart, and delete a deployment.  For example, to restart the cluster, you would execute:
$ sudo ./dind-cluster-v1.13.sh up
just as before, but the process is much faster the second time because images don’t have to be downloaded, and so on.
To shut down and remove a cluster, use the down command:
$ sudo ./dind-cluster-v1.13.sh down
This command removes the containers, but the volumes that back them remain so that you can start the cluster back up. On the other hand, if you want to completely remove the cluster, including volumes, you need to clean:
$ sudo ./dind-cluster-v1.13.sh clean
If you’re going to change Kubernetes versions, you’ll want to run the clean command first.
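Putting that together, a sketch of moving from Kubernetes 1.12 to 1.13 (assuming the 1.12 script follows the same naming and release pattern shown above):

```shell
# Remove the old cluster completely, containers and volumes included.
sudo ./dind-cluster-v1.12.sh clean

# Fetch the script for the new version and bring the cluster up.
wget https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.1.0/dind-cluster-v1.13.sh
chmod +x dind-cluster-v1.13.sh
sudo ./dind-cluster-v1.13.sh up
```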
So that’s what you need to know to get started. What do you plan to build?
The post Multi-node Kubernetes with KDC: A Quick and Dirty Guide appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

3 steps to a successful cloud migration strategy

If you’re planning a cloud migration strategy, it’s unlikely that the “cloud migration” part is an issue. It’s the “strategy” part that often serves as the stumbling block.
Many organizations have moved to the cloud in fits and starts, without an underlying strategy to guide those moves. If you find yourself in that situation, here are three steps to create a successful cloud migration strategy.
1. Ensure leadership buy-in.
Though migrating to the cloud is a business imperative, too many business leaders and C-suite executives take a hands-off approach to cloud adoption, seeing it solely as a problem to be solved by the IT team. They should rethink this stance.
Shifts to the cloud often involve not only IT changes, but also cultural changes. Cloud migration means thinking about business in new ways and can even drive significant changes in job responsibility. Employees may need to gain new skills, learn new tools and use new processes. Business leaders play a role in alleviating unnecessary concerns by supporting education and training efforts as employees gain proficiency in those areas.
Part of the leadership role is publicly supporting the changes being made by explaining how they will benefit the company, its customers and its employees. IT teams can help business leaders by offering clearly defined use cases or proof-of-concept trials. Successful early projects instill confidence and help leaders visualize the implications of larger migrations.
2. Choose projects based on business impact.
As part of your cloud migration strategy, assess the business impact of the migration. This step involves the heaviest lifting within your organization. First, determine how different workloads are classified (testing, development, production) and how each classification affects migration requirements. Understand how each data workload affects the overall business, as well as which core services each workload touches. Also factor in the relative importance of each project: mission-critical processes carry more weight than nonessential ones.
Urgency is another factor. Will a move to the cloud improve the speed of processes? If so, would that give your business a competitive edge? It’s important to assess the appropriate cloud type (public, private, hybrid) for hosting workloads, then determine the correct delivery model: software, infrastructure, platform and other as-a-service options. No cloud migration strategy will succeed without performing the hard work necessitated by this step. Build a string of small successes and use those as the impetus to take on larger, more critical projects.
3. Measure it to manage it.
It’s an old business saying because there’s truth in it: you can’t manage what you don’t measure. This step goes hand in hand with choosing projects based on business impact. If you thought the move would make an impact on your business, the only way to know for sure is to measure the impact.
Part of your strategy should include figuring out what benefits resulted from the migration. For example, you can identify benchmarks that can demonstrate business agility has improved. Choose a few key operational costs and track how the cloud helps you lower them. Evaluate how much time your IT team has been spending on maintenance tasks so you can ensure it’s decreasing after your cloud migration.
Measurement also reveals opportunity. If results don’t come back as expected, make tweaks to improve the results or refine strategies.
Knowing these answers also leads back to step one of ensuring leadership buy-in. When the C-suite sees bottom-line improvement to the business, it’s more likely that support for these migrations will continue.
Looking to start your organization’s cloud migration journey down a well-directed path? Check out our white paper on how using VMware solutions with IBM Cloud can help you create the right strategy for your organization and migrate to the public cloud with confidence.
The post 3 steps to a successful cloud migration strategy appeared first on Cloud computing news.
Source: Thoughts on Cloud

SilverHook Powerboats makes waves with IoT technology

Racing powerboats at high speed is exhilarating, but it’s also hazardous and mechanically complex. To get to the finish line safely, drivers and their teams rely on their intuition and expertise, but even the most experienced racers still make mistakes.
For boats traveling at more than 150 miles per hour in rough seas, even the smallest change in speed, direction or engine pressure can have dramatic consequences. For instance, if one of the two engines on a powerboat suddenly cuts out, or if the boat hits an unexpected change in current, it could be hurled into the air, spin out or capsize.
Not only does traveling fast generate problems for drivers, it also makes accurately judging races difficult. At big events such as the Trinidad & Tobago Great Race, the stakes are high — and for the good of the sport, it’s vital that officials always make the right call.
At SilverHook Powerboats, we consider it our mission to make boat racing safer for competitors, easier for officials to adjudicate and even more of a thrill for fans to watch.
The first hurdle
Because boats and hydroplanes travel so quickly, collecting and relaying data on the position, speed, revolutions per minute (RPM) and engine pressure of any given boat has been a huge hurdle, until now.
We spotted an opportunity with the Internet of Things (IoT). We wanted to see whether it would be possible to analyze data in real time from smart sensors attached to a boat. If we could do this, we could help drivers spot hazards and empower officials to make evidence-based decisions during a race.
First, we needed to find the right technology to develop a solution. We knew we needed a robust, scalable infrastructure, but we didn’t have the resources or expertise required to build and maintain a data center ourselves.
Enter IBM Cloud. We were an early adopter of the IBM Cloud because we saw from the beginning that it would give us the flexibility, scalability and compute power that we needed to start building our solution. Without IBM Cloud, we doubt whether we would have been able to launch this project in the first place.
Record-breaking precision racing
After establishing the cloud architecture for our application, the next step was for us to source the rapid analytics tools that would sit at the heart of our solution.
Combining the machine learning and analytics capabilities of IBM Watson IoT and the reliable performance of the IBM Cloud, we developed a solution that we call t3lemetry. The solution collects data from sensors placed on a powerboat and automatically relays this information to the IBM Watson IoT platform hosted in the IBM Cloud.
Watson learns more about our 77 Lucas Oil SilverHook powerboat with each journey and makes even more accurate predictions about the potential dangers ahead.
Already, insights from IBM Watson IoT have helped us avoid disaster when we set the world record for the fastest journey between Key West, Florida, and Havana, Cuba.
Oceans of possibility
So far, we have focused on testing t3lemetry on our award-winning 77 Lucas Oil SilverHook powerboat, though we have plans to extend it to other teams and boats in forthcoming hydroplane racing events.
With a larger pool of data on the performance of different types of boats, we can refine the precision of t3lemetry and see how the solution performs in different arenas. We anticipate that the solution will help race officials take a more empirical approach to scoring events, as they will no longer need to rely on traditional, inaccurate measures of speed, such as radar guns.
What’s more, we’ve seen a great improvement in the engine life of our 77 Lucas Oil SilverHook because Watson now alerts us before we push the engines beyond their absolute limit. The manufacturer of our engines, Mercury Racing, was surprised to see that they were still in great condition even after a series of grueling races.
We also expect to see improvements in engine life on hydroplanes that use t3lemetry. This will have a huge impact on boat racers, helping them make significant savings on engine repairs and maintenance.
Looking ahead, we plan to enhance data taken from t3lemetry with a visualization engine, creating engaging, informative graphics that give spectators the chance to jump onboard virtually and follow each boat every step of the way through our satellite system from Satcom Direct, no matter how fast or how far out in the ocean the boats are traveling.
With IBM supplying excellent technical support and expertise, we feel confident that we can take the next step in our journey to share the excitement of powerboat and hydroplane racing with even larger audiences.
Read the case study for more details.
The post SilverHook Powerboats makes waves with IoT technology appeared first on Cloud computing news.
Source: Thoughts on Cloud

Introducing IBM Cloud Private Experiences: Find your path to AI

Artificial intelligence (AI) is no longer the next big thing. It’s a present reality of IT, and it’s only becoming more important.
In a recent study by MIT Sloan Management Review, 91 percent of business leaders said they expected AI to deliver new business growth by 2023. The study also showed that firms that had invested in AI were already pulling ahead.
Despite the clear opportunity that AI presents, many organizations are struggling to wrangle the data required for AI applications.
To help enterprises gain more of a self-service option for AI, IBM built IBM Cloud Private Experiences, a guided, no-download-needed journey through collecting, organizing and analyzing data with IBM Cloud Private and IBM Cloud Private for Data. Users can build cloud-native, AI-powered apps over the course of seven days of access to a hosted environment.
To help explain the power of this tool, we’ll walk through each of the experience’s paths using an industry example of machine learning applications for banking.
Easily collect data sources
Users can connect to existing databases for quick and easy access to data no matter where it lives. For example, in the banking industry, enterprises can use the integrated connectors to connect to the bank’s database, then discover and select the data needed to build a mortgage prediction model.
Organize data
IBM Cloud Private Experiences can help turn data into trusted data. Automated discovery can help users import, analyze and classify data. They can define business terms so that data is consistent, as well as apply rules and policies to make data compliant with regulations. They can transform the data to make it useful, and, most importantly, make the data easy for users to find by publishing it to the enterprise data catalog.
Analyze and build
Quickly analyze relevant data and build cloud-native apps. For example, again within the banking industry, users can build, deploy, and publish a machine learning model to predict whether clients will repay their mortgage or default on it. The model can be further trained with different classification techniques and insights can be easily seen through visual dashboards.
Enterprises can start from any of these points to easily build custom AI-powered applications.
Choose your path 
Test the tool by visiting the IBM Cloud Private Experiences website and learn more from the introductory video.
Check back for updates on the next path focused on the IBM Cloud Private catalog to experience how companies can easily deploy a cloud-native application using microservices and containers, and drive towards improved scalability, portability and resiliency.
The post Introducing IBM Cloud Private Experiences: Find your path to AI appeared first on Cloud computing news.
Source: Thoughts on Cloud

IBM signs agreement to help Juniper in journey to cloud

Networking vendor Juniper Networks and IBM Services announced a seven-year, $325 million agreement this week through which IBM will help enhance Juniper’s infrastructure, applications and IT services.
As part of the agreement, IBM Services will use IBM Services Platform with Watson to help manage support systems including data and voice networks, data centers and the help desk.
Juniper will also use IBM AI technology to reduce costs and create a more agile IT environment, thereby enhancing its journey to the cloud. “To facilitate this, IBM will introduce the ‘Factory Development’ concept for application management, which uses automation and cognitive tools to drive efficiency,” ARN reports.
Integrating cloud solutions with Juniper’s existing IT investments “gives them the opportunity to generate more value from existing infrastructure, along with helping them manage strategic services that are critical to their business”, said Martin Jetter, senior vice president of IBM Global Technology Services.
For more information about this agreement, read the full story at ARN.
The post IBM signs agreement to help Juniper in journey to cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud