4 questions to ask when considering SaaS for business automation

Most companies today use some form of automation to transform their business. Automating work, including processes, decisions, data capture and content management, increases operational efficiency.
Along with implementing these types of automation technologies, many organizations have also started looking into how migrating different workloads to the cloud would impact their business. Part of this includes deciding when and where you should employ software as a service (SaaS).
The SaaS market is expected to grow 17.8 percent in 2019, making it more popular than ever before. The benefits of public cloud adoption with SaaS include enabling better scalability of technologies, increased agility and decreased resource cost.
When considering implementing SaaS with digital business automation technologies, here are some of the key questions you should consider:
1. Are there areas of your organization that have a more urgent need to transform quickly?
Maybe there is a line of business within your organization that has higher rates of customer attrition due to new, agile competitors that can quickly deliver new technologies. Automating business with SaaS means that you can quickly employ new technologies to drive retention, then scale to new areas of the business as needed.
2. Do you fully understand your costs today and how SaaS would impact them?
Understanding the true return on investment (ROI) for SaaS means looking at the complete picture. Consider all labor costs, including those associated with updating and maintaining software in data centers, and sunk costs from investing in servers that may sit idle after migrating to the cloud. Also weigh the costs of an inability to act quickly, such as customer attrition losses or reputational risk.
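As a toy illustration of framing that comparison, consider the sketch below. Every figure and cost category is a hypothetical placeholder, not a benchmark; substitute your own numbers.

```typescript
// Toy annualized cost comparison: on-premises software vs. SaaS.
// All figures are hypothetical placeholders.
interface CostModel {
  softwareFees: number;     // yearly license or subscription fees
  laborMaintenance: number; // staff time spent updating and maintaining software
  infrastructure: number;   // servers, data center space, power
  delayRisk: number;        // estimated losses from moving slowly (e.g., attrition)
}

const annualCost = (c: CostModel): number =>
  c.softwareFees + c.laborMaintenance + c.infrastructure + c.delayRisk;

const onPrem: CostModel = { softwareFees: 100_000, laborMaintenance: 80_000, infrastructure: 60_000, delayRisk: 50_000 };
const saas: CostModel = { softwareFees: 150_000, laborMaintenance: 10_000, infrastructure: 0, delayRisk: 10_000 };

console.log(annualCost(onPrem) - annualCost(saas)); // 120000: toy yearly savings
```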
3. How will SaaS impact the role of IT in your organization?
With SaaS, you are gaining the benefits associated with a managed software solution, but also giving up control of your infrastructure. Some companies find this hard to do, but those prepared for this mindset change usually succeed faster. SaaS also allows you to reallocate your people resources to more strategic initiatives.
Consider Enterprise Holdings, an IBM customer that migrated from an on-premises version of IBM automation software to a SaaS version. With SaaS, the company can allocate a full-time employee to a more strategic role rather than managing on-premises operations.
4. What qualities should you look for in an automation vendor?
When choosing a partner, look for a solution that can scale with your growing needs and fits within your overall business automation goals. Ideally, a partner should offer a platform of solutions that can be added to your technology stack as you continue to move forward in your cloud journey.
With a SaaS platform built for business automation, IBM has made it easier than ever to get started on your automation journey.
For more information, register to attend the webinar, “Curing the productivity crisis with cloud-based automation,” and learn more about the IBM Automation Platform for Digital Business.

Shifting from the monolith to the cloud with microservices

Rapid application deployment is vital for companies to meet consumer demands, compete in the market and scale for the future. Quickly delivering software to support your organization across these facets is a tall order.
Developers often spend their days building something new or debugging something that’s broken, which is why they want a solution that simplifies the development process, making it faster and easier. The faster they can improve existing apps or find errors in the code, the more time they have to learn new skills.
The benefits of microservices — agility, shorter development time, flexibility — help developers build something more robust faster and with fewer problems. The challenge some developers face, though, whether due to the culture or ingrained processes within an organization, is making the shift to building in a microservices architecture.
For developers advocating for the adoption of microservices, a few things can make the shift smoother: learning microservices development best practices, optimizing which languages to use, using an opinionated stack and preconfigured pipeline, and testing apps using continuous deployment.
A major mental shift
When shifting to microservices, there are best practices developers must adopt. It’s important to understand that building microservices isn’t merely breaking things into pieces. It’s also about automation and the method for developing software.
Take continuous deployment, for example. When building a monolithic application, you must build all of the pieces together. However, if you need to make updates or perform maintenance, the entire application must be rebuilt and deployed. This downtime could negatively affect the user experience and create more fire drills for your team because you’re scrambling to make the changes as fast as possible.
Microservices eliminate this, because each microservice is a small piece of an application. To make an update, you may only need to look at one or two microservices, making it easier to implement changes. Take advantage of this capability and practice continuous deployment to refine the app faster.
Choose the best programming language
Unlike a monolith in which everything must be written in the same language, microservices provide the freedom to choose the programming language best suited for an app.
For example, Microclimate, an end-to-end, cloud-native platform, currently supports Java, Node.js, Swift or bring-your-own template. Developers build microservices using Docker containers. This flexibility means you can have a team or a single person working on one piece of an application using Java, and another team can work on a different piece coding with Node. The various parts of the application still function cohesively, and you can build higher-quality apps.
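As a rough illustration, a minimal Node.js microservice might look like the sketch below (written in TypeScript with the Express framework; the route, port and payload are illustrative, not Microclimate-generated code):

```typescript
// A minimal Node.js microservice sketch. Route, port and payload are illustrative.
import express from "express";

const app = express();

// Each microservice owns one narrow capability; here, a product-catalog lookup.
app.get("/products/:id", (req, res) => {
  // A real service would query its own datastore.
  res.json({ id: req.params.id, name: "example product" });
});

// Read the port from the container environment (e.g., set in the Dockerfile).
const port = Number(process.env.PORT) || 8080;
app.listen(port, () => console.log(`catalog service listening on ${port}`));
```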
Speed ahead with an opinionated stack
An opinionated stack provides a predefined framework or code base that’s built using specific design patterns that follow best practices.
This capability simplifies specific development tasks because it provides a predesigned path. Developers don’t need to make as many decisions and spend time on setting up, leaving them with more time to focus on app optimization.
Move into production faster with a preconfigured pipeline
Many tools used to push an app into production require setup first, adding a step to the process. A preconfigured pipeline, such as the Jenkins pipeline built into Microclimate, removes that extra step, saving time and effort, getting code into the environment faster and simplifying build and deployment.
This agile approach, combined with a microservices architecture focused on one piece of a broader application, lends itself well to continuous integration and deployment. The faster companies get an app into production, the faster developers can receive feedback to improve it.
Release and test one thing, not the whole thing
Testing a monolithic app is complicated because test environments must be created to simulate what an application might do when live. This extra step makes implementing changes slower and more challenging. Also, there is a greater risk of downtime in a monolithic architecture because developers have to rerelease the entire application. A bug in one module could impact the availability of the entire app.
The beauty of microservices is that developers can test one part of the application in a live environment without affecting the entire app. They can also choose among different testing methods, such as A/B, blue/green, canary or geolocation testing.
For example, if developers are doing geolocation testing, they can release live code for a new feature in Canada. If it runs smoothly, they can then release it globally. While the new version is running in Canada, the old version of the same app is still running everywhere else. This dual deployment lowers the risk of widespread downtime if something goes wrong.
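As a hypothetical sketch of that geolocation example, a gateway might pick which live version serves each request based on the caller's country. The country codes, rollout list and service URLs below are illustrative assumptions:

```typescript
// Hypothetical geolocation-based routing between two live versions of a service.
type Version = "v1" | "v2";

// The new feature is live only in Canada during the test.
const ROLLOUT_COUNTRIES = new Set(["CA"]);

function pickVersion(countryCode: string): Version {
  return ROLLOUT_COUNTRIES.has(countryCode) ? "v2" : "v1";
}

// A gateway would forward each request to the matching deployment.
const upstreams: Record<Version, string> = {
  v1: "http://search-v1.internal:8080", // old version, running everywhere else
  v2: "http://search-v2.internal:8080", // new version, Canada only
};

console.log(upstreams[pickVersion("CA")]); // new version for Canadian traffic
console.log(upstreams[pickVersion("DE")]); // old version for everyone else
```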
It’s about what’s best for your needs
While microservices provide compelling evidence for adoption, there are some cases where it makes sense to integrate microservices with a monolith. Ultimately, the choice should be made based on what’s best for your organization.
If you choose to adopt a microservices architecture, remember you don’t need to do it all at once. An excellent way to get started is to evaluate your monolith, identify a service that can be built as a microservice, build it, redirect traffic to the new service (as in the sketch below), then move on to the next service. This approach, along with an integrated end-to-end development platform such as Microclimate, can smooth the transition from monolith to microservices.
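Here is a minimal sketch of the “redirect traffic” step, assuming a Node.js reverse proxy sits in front of the monolith; the http-proxy-middleware package and the URLs are illustrative choices, not a Microclimate requirement:

```typescript
// Sketch: strangler-style traffic redirection in front of a monolith.
// Package choice and URLs are illustrative assumptions.
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// Requests for the extracted service go to the new microservice...
app.use("/api/orders", createProxyMiddleware({ target: "http://orders-service:8080", changeOrigin: true }));

// ...while all remaining routes still reach the monolith untouched.
app.use("/", createProxyMiddleware({ target: "http://legacy-monolith:9080", changeOrigin: true }));

app.listen(8000, () => console.log("edge proxy listening on 8000"));
```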
Learn more and get started with Microclimate.

Thanos: long-term storage for your Prometheus Metrics on OpenShift

Thanos is a project that turns your Prometheus installation into a highly available metric system with unlimited storage capacity. From a very high-level view, it does this by deploying a sidecar to Prometheus, which uploads the data blocks to any object storage. A store component downloads the blocks again and makes them accessible to a […]

Zenfolio uses IBM Cloud Object Storage to deliver picture-perfect customer service

Managing data storage can be hard for just one person, so imagine trying to do it for thousands of users.
With tens of thousands of subscribers regularly uploading photos and HD videos, Zenfolio’s on-premises storage infrastructure was becoming time-consuming to manage, difficult to scale and was fast approaching capacity. Making the move to the IBM Cloud unlocked practically limitless scalability and will save the company up to $1 million over the next three years.
Giving customers what they want
Amateur and professional photographers, from wedding to wildlife specialists, use Zenfolio to upload, display, share and sell their photos and photography services. The photo-hosting platform comes with prebuilt photo galleries to display work, as well as integrated marketing and e-commerce tools, all backed by our “unlimited upload” promise.
Zenfolio currently hosts more than 2 billion images, which amount to around 12 petabytes of data. That number just keeps on growing. Our platform is becoming more popular, and more subscribers means more and more photos.
We had used on-premises storage infrastructure, housed in four separate data centers, for several years. But increasing data volumes and the physical constraints of those sites meant we were fast reaching the limits of our capacity. We were struggling to scale the storage environment to meet growing requirements, which made data center management a heavy burden on our IT operations team, not just in infrastructure maintenance, but also in the time spent on capacity and floorspace planning.
Releasing the pressure
We decided to move to IBM Cloud Object Storage because it offers practically endless scalability, without us having to procure and manage physical storage systems. Managing our existing on-premises storage was challenging. Adding new capacity to meet growing demand would make data center planning even more complex and put more pressure on our IT team.
IBM Cloud Object Storage represented the best of both worlds for us: cloud-level scalability and flexibility without requiring the time, effort and resources to procure and manage the technology.
To make sure this was the right decision, we ran a total cost of ownership (TCO) analysis comparing IBM Cloud Object Storage to competing cloud storage solutions. We concluded that it offered the best value, coming in 45 percent and 33 percent more cost effective than the competitors we assessed. In particular, IBM offered the best pricing for “cold vault” storage, where data is accessed only infrequently, which represents a large proportion of our stored data. Cold vault storage offers a great price per gigabyte, but data is still available in milliseconds if needed.
We’re working with IBM to migrate data from our on-premises infrastructure to the IBM Cloud; all 12 petabytes. The support from the IBM team has been excellent, helping us to maintain our existing infrastructure until the migration is complete. We’re also directing all new photo uploads to IBM Cloud Object Storage.
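As an illustration only (not Zenfolio's actual implementation), directing an upload to IBM Cloud Object Storage from a Node.js application might look like the following sketch. It assumes the ibm-cos-sdk package with its S3-compatible interface, and the endpoint, bucket, key and credential values are placeholders:

```typescript
// Hedged sketch: uploading a photo via the S3-compatible ibm-cos-sdk package.
// Endpoint, bucket, key and credentials are placeholders.
import * as COS from "ibm-cos-sdk";
import { readFileSync } from "fs";

const cos = new COS.S3({
  endpoint: "s3.us-south.cloud-object-storage.appdomain.cloud",
  apiKeyId: process.env.COS_API_KEY,
  serviceInstanceId: process.env.COS_INSTANCE_CRN,
});

cos.putObject({
  Bucket: "photo-uploads",
  Key: "galleries/123/photo.jpg",
  Body: readFileSync("photo.jpg"),
}).promise()
  .then(() => console.log("upload complete"))
  .catch((err) => console.error("upload failed", err));
```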
Checking all the boxes
Moving from on-premises infrastructure to IBM Cloud Object Storage will save Zenfolio considerable time, effort and money. We’re still running our on-premises solutions while we migrate the historical data to the IBM Cloud, but we’ll soon be able to shut down our data centers, which will generate huge savings.
We’re expecting data volumes to grow to 24 petabytes in the next five years. That growth would have been very costly and tricky to deal with using traditional, on-premises storage. We’re very happy that we no longer need to worry about procuring new infrastructure or floorspace planning or configuring and servicing storage systems.
Once the migration is complete, our IT staff won’t have to think about storage infrastructure at all. This will relieve the pressure on our lean IT operations team and free up their time to focus on delivering more value to customers. Knowing that all our data will be stored in secure IBM data centers that are staffed around the clock gives us complete confidence in the solution.
So far, we’ve been very impressed with IBM Cloud Object Storage and the support we’ve received. We’re now looking at how we can use the integrated Aspera high-speed data transfer feature to accelerate uploads, improve our service and become the first-choice hosting platform for photographers.
To read the full story now, check out the IBM case study.
Learn more about IBM Cloud Object Storage.

Setting up microservices in OpenShift using Service Mesh and Kubevirt

In this blog post, we’re going to see how the KubeVirt upstream community project and Red Hat OpenShift Service Mesh co-exist on the Red Hat OpenShift Container Platform (OCP), and how they interact with the existing container pods in a microservices world. As much as we would like to think everything is in containers, it is to […]

All the News from the OpenShift Commons Gathering at Kubecon Seattle

The OpenShift Commons Gathering at KubeCon Seattle last week was packed with information on the past, present and future of Red Hat OpenShift in all its forms. More than 350 people from over 115 companies around the world gathered at the event to hear about the future of the platform. The event even […]

OpenShift Commons Gathering at Seattle KubeCon 2018 Recap with Video and Slides

With more than 350 attendees from over 115 companies, and more than 25 speakers drawn from community members, upstream project leads, contributors, end users and Red Hatters, the OpenShift Commons Gathering in Seattle this past week was a great place to learn about the future of Kubernetes, OpenShift and cloud native infrastructure. Seattle Shines Spotlight on […]

Workflow automation: 25 years of tried-and-true success

1993 was a year of dynamic changes in politics and culture. It was also the year that IBM released its flagship enterprise workflow automation software: IBM FlowMark.
Since then, workflow automation technology has emerged as the leading front-end solution for enterprise digital transformation initiatives. IBM continues to provide leading solutions for designing and managing workflows to help drive growth.
A brief history of workflow

Early 1990s. The first workflow automation software solutions, based on workflow engines, were released. This software replaced basic, paper-based processes with electronic ones, turning manual task-routing activities into automated electronic-form processes.
Late 1990s. Features such as modeling tools, business rules and more were added to analyze, model and describe business processes. These capabilities helped companies analyze a graphical view of the “as-is” processes in an organization and contrast it with “to-be” processes to make them more efficient.
2005. The modern era of workflow automation began with the introduction of business process management (BPM) methodology and tools.

IBM has been part of it all, starting with the launch of IBM FlowMark in 1993 and continuing through today. A quarter of a century later, Business Automation Workflow is at the forefront of enterprise automation, improving productivity, visibility, time to market and accountability.

The future of workflow
Workflow automation software has already reached the cloud, making it easier for people to collaborate within different cloud-based applications, wherever they are. What’s next?
Three words: intelligent workstream automation. Based on our interactions with clients, I believe the future of workflow will be intelligent, meaning the technology will continue to evolve to reduce the burden of working out which work is most important and which resource (human, robot or system API) is best suited to complete it. This significantly reduces the reliance on manual intervention, because the system can “learn” from human work patterns and adapt workstreams automatically.
In the next five to 10 years, I anticipate companies are likely to see advancement in three key areas:
1. Machine learning and workflow 
Artificial intelligence (AI) is taking its strategic place in both operational and strategic business process management, changing the ways we do things. For example, robotic process automation (RPA) is already being used to automate routine, manual tasks to help reduce errors and increase speed. Going forward, it’s likely that more intelligence will be added to enhance RPA’s ability to observe and learn from human patterns to optimize front- and back-office experiences. Likewise, machine learning is poised to revolutionize workflow, helping to enable companies to trigger new processes, reroute running processes and make action recommendations based on predictions.
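As a hypothetical sketch of that last capability, a workflow integration might reroute a running case when a model predicts trouble. The scoring function, threshold and queue names below are illustrative, not a real IBM API:

```typescript
// Hypothetical sketch: rerouting a running workflow case based on a prediction.
interface WorkCase {
  id: string;
  amount: number;
  customerTenureYears: number;
}

// Stand-in for a trained model; a real one would be trained and served separately.
function predictEscalationRisk(c: WorkCase): number {
  return c.amount > 10_000 && c.customerTenureYears < 1 ? 0.9 : 0.2;
}

function routeCase(c: WorkCase): "standard-queue" | "specialist-review" {
  // Send high-risk cases to a human specialist instead of the default queue.
  return predictEscalationRisk(c) > 0.7 ? "specialist-review" : "standard-queue";
}

console.log(routeCase({ id: "42", amount: 25_000, customerTenureYears: 0.5 }));
// -> "specialist-review"
```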
2. Low-code or no-code workflow software
Workflow software requiring minimal or no coding will continue to be a strategic priority, making process automation more accessible to the entire organization, especially lines of business. According to a 2017 survey, three-fourths of organizations report that at least some of their applications are developed by employees outside the IT department.
3. Text-message-based workflow
The de facto communication method for the new workforce is likely to be text messaging. In some instances, we’re already seeing this. For example, SMS marketing is on the rise due to high read and response rates. Text-message-based workflow is thus likely to be a big driver of the next generation of productivity gains in the enterprise.
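As a toy sketch of how a workflow step might notify a worker over SMS from Node.js, assuming the twilio package (the credentials, phone numbers and task text are placeholders):

```typescript
// Toy sketch: notifying a task assignee via SMS using the twilio package.
// Credentials, phone numbers and the task text are placeholders.
import twilio from "twilio";

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

client.messages
  .create({
    body: "Task #1042 (invoice approval) is waiting for you. Reply APPROVE or REJECT.",
    from: "+15550000000", // placeholder sending number
    to: "+15551111111",   // placeholder assignee number
  })
  .then((msg) => console.log(`notification sent: ${msg.sid}`));
```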
IBM Business Automation Workflow continues to evolve to meet emerging demands, just as it has for 25 years, moving from workgroup-style and document-oriented workflows to workflow automation on the cloud and on toward more intelligent automation.
Here’s to another 25 years of relevant innovation and improvement.
Learn more about how IBM can help you design and manage start-to-finish workflows at scale.

Industry experts weigh in on the future of automation and DevOps

Application teams need reporting. They specifically need pipeline analytics to measure and optimize metrics, such as deployment frequency and deployment success, to find and fix practices that are slowing them down and to orchestrate releases. In its latest Continuous Delivery and Release Automation Wave report, Forrester noted that reporting can be a “weak spot” for much of the continuous delivery and release automation segment.
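As a hypothetical sketch of such pipeline analytics, deployment frequency and success rate can be computed from a list of deployment events pulled from a CI/CD tool. The event shape and sample data below are illustrative:

```typescript
// Hypothetical sketch: computing two pipeline metrics from deployment events.
interface Deployment {
  finishedAt: Date;
  succeeded: boolean;
}

function deploymentMetrics(events: Deployment[], periodDays: number) {
  const successes = events.filter((e) => e.succeeded).length;
  return {
    frequencyPerDay: events.length / periodDays,
    successRate: events.length === 0 ? 0 : successes / events.length,
  };
}

// Example: three deployments observed over a 30-day window.
console.log(
  deploymentMetrics(
    [
      { finishedAt: new Date("2018-12-01"), succeeded: true },
      { finishedAt: new Date("2018-12-05"), succeeded: false },
      { finishedAt: new Date("2018-12-09"), succeeded: true },
    ],
    30,
  ),
); // { frequencyPerDay: 0.1, successRate: 0.666... }
```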
Thoughts on Cloud recently asked a virtual roundtable of DevOps industry experts and analysts to share their views on the future of release automation for enterprises. Participants included Torsten Volk, managing research director for containers, DevOps, machine learning and AI at Enterprise Management Associates; Tony Flath, a tech and media consultant and president of TmanSpeaks, Ltd.; and Matthew P. Skelton, head of consulting for Conflux and co-author of the forthcoming book Team Topologies.
Thoughts on Cloud: A recent IBM Institute for Business Value survey found that 72 percent of C-suite executives in large enterprises are creating their own disruption to protect their market share from being eaten by a nimbler startup. How do deployment automation and continuous delivery solutions like IBM UrbanCode facilitate proactive disruption? 
Torsten Volk: While implementing continuous delivery is relatively simple in a greenfield cloud native environment, the transformation toward a state of continuously releasing and validating new software features is much harder within an enterprise context, with its traditional hairball of legacy technologies and rigid processes that often have formed over years or decades to minimize operational risk and cost. UrbanCode embraces this enterprise use case by providing a universal toolkit for gradually implementing DevOps principles without dramatically increasing operational risk and also without replacing the existing toolchain.
Tony Flath: Three things pop right to mind: speed, security and cloud. Agreed, organizations need to be able to create and deploy applications that better utilize emerging technologies and advanced artificial intelligence capabilities to gain or maintain a competitive advantage. UrbanCode enables quicker time to market, with enterprise-scalable security built in on a continuous hybrid multicloud delivery design.
ToC: How important is release orchestration, the ability to bring numerous disjointed pipelines together under one release management capability, to digital transformation?
Torsten Volk: Developers often are in love with their individual tools and will not be friendly to anyone who tries to rip them out of their hands. Therefore, successful release orchestration needs to harness and further fuel this developer enthusiasm by providing a set of universal “guardrails” to optimally synchronize DevOps pipelines through universal observability, orchestration and automation.
Tony Flath: True digital transformation comes down to economics or business value delivered by technology. Having the capability to quickly deploy applications en masse with prebuilt templates, artifact repositories and maps really amps up capabilities at a macro level to manage numerous application nodes made up of prebuilt web, app and database designs. This is a true enabler and game changer for multicloud hybrid control; it’s true digital transformation.
Matt Skelton: Coupling at deployment time is really another kind of monolith, so we should generally avoid aiming for large orchestration of releases. For managing complicated external dependencies, release orchestration can be helpful, but our aim should be to reduce the need for orchestration.
ToC: IBM UrbanCode is offered as four separate products: Build, Deploy, Velocity and Release. Does this add complexity or flexibility to enterprise buying decisions?
Torsten Volk: Enterprises do not buy an entire DevOps platform, but require certain universal tools to alleviate their current pain points. Splitting UrbanCode into four products enables customers to start wherever there is the most pain and then, gradually, optimize the rest of their pipeline.
Tony Flath: UrbanCode adds flexibility that makes it possible for an organization to take DevOps to a whole new level to scale and deploy applications that harness the capabilities of cloud, big data and analytics, then deploy across cloud, on premises, edge and mobile. Key benefits to an enterprise include inventory management, artifact repository, turnkey hybrid cloud enterprise modeling, enterprise security and audit trail. Big app controls best harness the capability of big data.
ToC: What does the future of DevOps for the enterprise look like? How is IBM UrbanCode positioned to support it?
Torsten Volk: The future of DevOps requires what I like to call “continuous everything”. This means that security, compliance, performance, usability, cost and all other critical software components are automatically and continuously implemented without slowing down the release process. In short, the optimal DevOps process is fully automated and directly synchronized with rapidly changing corporate requirements.
Tony Flath: Think of DevOps this way: old DevOps facilitated the small community where developers provided lots of hands-on support in an isolated, difficult-to-move environment. The future of DevOps becomes like a booming metropolis of information with many automated tools, procedures and prebuilt structure components that enable ease and flexibility to move, test and change quickly and securely to keep up with the pace of change.
Matt Skelton: In some organizations, DevOps has come to mean largely just infrastructure automation. The future of DevOps for enterprises is really a return to the first principles of targeted collaboration, well-defined team boundaries, and loosely-coupled, high-cohesion APIs between teams and software systems.
UrbanCode client case studies and industry resources
Learn more about UrbanCode by reading the client references below, which demonstrate a range of quantifiable business benefits.

NBCUniversal scales DevOps across a large multi-speed IT enterprise
Bendigo and Adelaide Bank improves agility with DevOps
Rabobank uses UrbanCode and Rational Test to gain agility on IBM Z

Register for your free copy of “The Forrester Wave: Continuous Delivery and Release Automation, Q4 2018” report.

Intelligence Retail drives consumer packaged goods sales with AI insights

When you next go shopping, glance at the shelves around you: the options are endless. Today’s consumers face so many choices that it can be hard for retailers and consumer packaged goods (CPG) companies to keep track of their products.
The challenge for these stakeholders is ensuring that popular goods are always in stock and positioned where customers are most likely to buy them. The ability to make on-the-spot stock decisions dictates which companies stay ahead of the curve.
Unlocking the mystery of store shelves
At Intelligence Retail, we wanted to provide fast, accurate product information to help retailers and CPG producers achieve competitive advantage. We developed a solution that uses visual recognition technology to analyze photographs of store shelves. Photos taken on smartphones and tablets are analyzed to identify products, and machine learning algorithms calculate key performance indicators (KPIs) around presence, position, promotion and pricing.
To ensure that the solution could provide useful results fast, we needed a powerful infrastructure platform and an intuitive user interface. Our existing cloud platform simply wasn’t up to the task, so we began our search for an alternative.
Ramping up performance
Intelligence Retail migrated its solutions to IBM Cloud bare metal infrastructure, featuring GPU servers designed to handle compute-intensive workloads. Our testing found that the IBM Cloud GPU servers outperformed the competition by 40 percent. Working with IBM gives us access to the latest technology developments at a very reasonable price. We also gain expert advice on developing our offering.
With leading-edge GPU computing supporting our solutions, Intelligence Retail can offer users very short response times. Complex photographs of store shelves are analyzed in no more than 10 seconds, with an accuracy level of 95 percent.
To present insights to clients, we deployed IBM Cognos Analytics. It powers customizable dashboards for sales performance and price, so users can drill down into KPIs to discover their best-selling products and strategies. They can aggregate KPIs by region, retailer, store, brand or visitor.
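As an illustrative sketch of that kind of aggregation (the record shape and field names are hypothetical, not Intelligence Retail's schema), per-shelf results could be rolled up by any dimension like so:

```typescript
// Illustrative sketch: averaging a shelf KPI by region, retailer or brand.
interface ShelfKpi {
  region: string;
  retailer: string;
  brand: string;
  shareOfShelf: number; // fraction of facings occupied by the brand
}

function averageBy(records: ShelfKpi[], key: "region" | "retailer" | "brand") {
  const groups = new Map<string, { total: number; count: number }>();
  for (const r of records) {
    const entry = groups.get(r[key]) ?? { total: 0, count: 0 };
    entry.total += r.shareOfShelf;
    entry.count += 1;
    groups.set(r[key], entry);
  }
  // Average share of shelf per group.
  return new Map([...groups].map(([k, v]) => [k, v.total / v.count]));
}

const sample: ShelfKpi[] = [
  { region: "EU", retailer: "StoreA", brand: "BrandX", shareOfShelf: 0.3 },
  { region: "EU", retailer: "StoreB", brand: "BrandX", shareOfShelf: 0.2 },
];
console.log(averageBy(sample, "region")); // Map { "EU" => 0.25 }
```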
Winning the retail game
Up and running in the IBM Cloud, we’re ready to take Intelligence Retail global. With rapidly delivered insights from Intelligence Retail, retailers and CPG producers can increase sales by two to three percent and cut audit costs by up to 80 percent.
Two years into our partnership with IBM, we’ve extended our reach across Europe, and greater expansion is on the horizon. Thanks to the scalability afforded by IBM Cloud solutions, we can serve companies of any size at short notice.
Next, we plan to explore the demand forecasting capabilities of IBM Watson. We want to build on our solutions’ insights to provide advice on what customers should do next. By giving retailers and CPG producers another tool to get ahead of the competition, we’ll stand out in the market even more.
Read the case study for more details.