Will Edge Computing Reverse Network Virtualization Momentum?

This week we released a Kubernetes-based virtual appliance to help service providers build edge clouds. We made a bet on Kubernetes as the resource scheduler and combined it with a project called Virtlet to make it possible to run both containers and VMs. The point of this blog isn’t to describe the architecture of our solution, though. In fact, I will be the first to admit that neither Mirantis nor anybody else in the telco industry today has a very clear idea of what the right architecture for the service provider edge should look like. The point of this blog is to explain the bigger problem we aim to solve by releasing a possibly incorrect, yet real, tangible, downloadable virtualization substrate for the service provider edge.
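
For the curious, here is roughly what the containers-plus-VMs combination looks like from the Kubernetes side. This is a minimal sketch in Python based on the Virtlet project's public examples, not a description of the MCP Edge architecture itself; the annotation key, node label and virtlet.cloud image prefix are taken from those examples and may vary between Virtlet versions.

    # Minimal sketch: declaring a Virtlet-managed VM as an ordinary Kubernetes pod
    # with the official Python client. The annotation, node label and image prefix
    # follow the Virtlet example manifests and are assumptions, not MCP Edge specifics.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a cluster

    vm_pod = client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(
            name="cirros-vm",
            annotations={
                # Hands the pod to Virtlet (via its CRI proxy) so it boots as a VM
                # instead of a container.
                "kubernetes.io/target-runtime": "virtlet.cloud",
            },
        ),
        spec=client.V1PodSpec(
            # Only nodes running the Virtlet runtime should receive VM pods.
            node_selector={"extraRuntime": "virtlet"},
            containers=[
                client.V1Container(
                    name="cirros-vm",
                    # The virtlet.cloud/ prefix marks the "image" as a VM image to be
                    # resolved through Virtlet's image translation, not a Docker image.
                    image="virtlet.cloud/cirros",
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=vm_pod)

The payoff is that the same scheduler, networking and tooling can handle network functions packaged as VMs and as containers side by side.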

For the last 5 years, network function virtualization (NFV) has been one of the biggest trends in the telco space. Top telcos like AT&T announced their intent to go all in on network virtualization, and the rest of the world followed. For those not familiar, NFV is a new way to build service provider networks: instead of investing in expensive, carrier-grade hardware appliances, telcos deploy cheap COTS servers and run network functions (routers, session border controllers and so on) as pure software. The outcome is cost savings, less vendor lock-in and, most importantly, the ability to update software network functions and physical hardware on independent cycles. There is plenty of good material out there on NFV, but the key point is that over the last five years telcos invested billions of dollars in various NFV initiatives for the network core, yet when it comes to the network edge we are about to go the other way.

Let me explain. The only thing in telco that is hotter than NFV today is multi-access edge computing (MEC). Projected adoption of IoT devices, connected cars, AR/VR and the like is requiring telcos to bring more and more sophisticated network functions and data processing applications closer to the end user. Unlike NFV initiatives, which are primarily about saving money on carrier-grade gear, MEC is about seizing new opportunities at the network edge. The first to the edge will gain market advantage. And unlike with NFV, where telcos would look to build rather than buy, with MEC telcos are ready to throw money at vendors and get out-of-the-box integrated solutions that get them to the “edge finish line” faster. In effect, the market race for the edge is reversing the NFV momentum that has been developing at the network core for the last 5 years.

You cannot blame either service providers or telecom vendors for prioritizing speed over longer-term efficiencies at this stage of market development. However, since a large chunk of Mirantis revenue is tied to network virtualization use cases, we can’t help but ask what the main obstacles are to virtualizing the network edge, and not just the core, today.

In trying to find the answer we looked at the landscape of incumbent telco vendors and found the following:

all have ready-to-buy gear or solutions for edge (Nokia, Huawei, Juniper);
all of the above solutions have good intentions of being open and standards-based;
none of them actually exists as something you can download and try.

Perhaps the most glaring lack of openly available products or ready-to-use code can be observed among the various open standards bodies established to promote virtualization at the network edge. The ETSI Multi-Access Edge Computing group has been around for over 3 years and has produced numerous white papers and specifications, but none of that work has yet materialized as tangible open source code. Much the same applies to OPNFV Edge, as well as the recently formed Linux Foundation project called Akraino. If you try to download and install any code for an edge virtualization substrate produced as a result of the work of the above standards bodies, you will fail. The software is either not available in the open or not available, period.

Why does it matter? It matters because NFV has momentum at the network core precisely because the two virtualization platforms used for the core (OpenStack and VMware) are real, tangible pieces of software that vendors building virtual network functions can download, install and experiment with. You virtualize your core using either one of these two, and there are plenty of VNFs you can run on top. When it comes to the network edge, however, today we have closed vendor solutions and many open reference architectures, but nothing concrete for an ecosystem of VNF and software vendors to build on top of. Changing that is the first step towards a virtualized edge.

With our release of the K8S-based MCP Edge, we openly admit that we may be making a bet on an edge architecture that isn’t guaranteed to evolve into a standard. However, we hope the broader ecosystem of vendors and telcos will look at it, try it, give us feedback, or maybe even release an alternative version of their own that can be experimented with in the open. Our release of MCP Edge is an open call to the service provider ecosystem: if we want the edge to be decoupled from hardware, the time has come to go beyond standards bodies and reference architectures. The time has come to start experimenting with real software.
Source: Mirantis

Spinnaker Shpinnaker and Istio Shmistio to make a shmesh! (Part 1)

I’m guessing that whenever your manager approaches you and says “We have a problem,” you sort of know that it really means “I have a problem for you to solve.” Such is often the case with our customers, who are frequently attempting to move from a cascading (waterfall) style of delivering application services on bare metal to a more modern way of approaching continuous delivery geared toward cloud native applications.
The most specific problem many of them — and perhaps you as well — face in this regard is the lack of mature tools and services to enable them to combine all of the software delivery elements in play into a robust set of pipelines that represent every aspect of delivery, from baking, to verification, to release, with the ability to repeat the process as often as needed. You also want to be able to incorporate slight variations based on the circumstances of the change being introduced in any given cycle, and you want to do it all without a disruption in service.
In my role at Mirantis, I make it a point to see things from the customer’s side, so I decided early on to document my journey to this land of milk and honey in the hope that my travels will help others who may be faced with the same “problem” to solve for their company.
Chapter 1: The Birth of a Shmesh
I was at my home office, as usual, working on creating an application services environment that would provide everything one would need to produce a cloud native application that can be continuously delivered.
For most people, the starting point is to define a logical path from treating the infrastructure as “pets” to treating it as “cattle.” What I was looking for, I knew, had to provide a portable and immutable infrastructure pattern on which to host the applications.
This led me to Kubernetes as a foundation, and once I had gone “all in” on that concept, I started looking at sets of tools that would sit on top of my portable and immutable infrastructure and fit the needs of my application services environment without placing too much of a burden on the application to provide recoverability and resiliency. I also wanted to satisfy as many of the ease-of-development criteria as possible.
The next element to focus on was a workflow engine that would enable me to piece together the complicated steps required for baking, testing, and releasing the application services in a continuous flow. That’s where the Spinnaker workflow engine comes in. (Or “Shpinnaker”, as I found myself calling it after the 137th viewing of “Shrek” with my grandkids.)
Shpinnaker — I mean Spinnaker — was originally developed by Netflix, but has been picked up by some heavyweight development teams like Google, Capital One, and Mirantis to capture and maintain the steps in the release process over time for continuous delivery of cloud native applications. (More to come on this in later chapters.)
In working through the processes required by the various Development and Operations teams I work with, I have discovered that although the Kubernetes framework addresses several needs such as “self-healing”, “auto-scaling” and “contraction” pretty well, some of the development features, specifically those related to internal integration points used by application developers, had to be recreated repeatedly in separate Pods and Deployments. And of course, since there are many different ways to skin the proverbial cat, each instance of Firewall, Domain Name Service, DHCP and even Load Balancing tended to be handled slightly differently, which made continuous management and delivery more difficult and complex.
And that brings us to Istio (or “Shmistio”, as we were now up to 153 “Shrek” viewings). Istio provides a pluggable service mesh that integrates with the Kubernetes framework, using Envoy as the proxy in the data plane, driven by Istio’s control plane. (Istio, too, will be covered in later chapters.)
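To make that a little more concrete, here is a minimal sketch of the kind of traffic rule a service mesh adds on top of Kubernetes: an Istio VirtualService that splits requests between two versions of a service, created through the Kubernetes custom-objects API in Python. The "reviews" service, the v1/v2 subsets and the 90/10 weights are purely illustrative placeholders, not part of the environment described in this post.

    # Minimal sketch: a weighted traffic split expressed as an Istio VirtualService.
    # The service name, subsets and weights are illustrative placeholders.
    from kubernetes import client, config

    config.load_kube_config()

    virtual_service = {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "VirtualService",
        "metadata": {"name": "reviews"},
        "spec": {
            "hosts": ["reviews"],
            "http": [
                {
                    "route": [
                        # The Envoy sidecars receive this rule from the Istio control
                        # plane and apply it to every request, with no change to the
                        # application code.
                        {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                        {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
                    ]
                }
            ],
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1alpha3",
        namespace="default",
        plural="virtualservices",
        body=virtual_service,
    )

(In a real setup, a matching DestinationRule defining the v1 and v2 subsets would accompany this rule; it is omitted here for brevity.)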
When I deployed my first version of Spinnaker with Istio, I was so pleased with the result that I called to my wife, and said “Look, Bunny, I made a Shmesh!”
She brought a broom and dustpan, looked over the living room that her (admittedly profitable) eBay business has turned into something out of a Salvation Army donation center, and said, “Where is it?”
“No…” I said. “I used Spinnaker with Istio to make an application service mesh… a Shmesh!”
She just shook her head and walked out of the room. (We’ve been married for a long time.)
Bunny’s underwhelmed response aside, there are a few things I want to share about my “Shmesh” to set the stage for the rest of this story. Specifically, you should be familiar with:
Kubernetes: If you’re reading this, you’re probably already familiar with Kubernetes, the container orchestration platform that I will use as the foundation for our project, but if not, please check out this link for a basic idea so you can understand the concepts.
Istio: Don’t worry if you’re not yet familiar with Istio; we’ll be talking about what you need to know as we go along. I will present some of the features and capabilities of Istio as a service mesh in the context of how Istio is applied to the K8s framework to facilitate and accelerate the development and continuous delivery of application services and microservices.
Spinnaker: I will also share with you the ways in which a workflow engine such as Spinnaker can be implemented to “bring everything together” in an automated and repeatable way.
Ultimately, I will provide an actual “use case” where I put all of these tools to work to form a Continuous Delivery Pipeline for one of our applications. The application targets deployment on Google Kubernetes Engine (GKE), where it gets “injected” with Istio framework components to support a variation on load balancing.
In part 2, we’ll talk about microservices and get you set up with a working install of Istio.
Source: Mirantis

Proof of Concept: A waste of time and money?

Very often, moving into a new field brings more questions than answers.
The Customer’s Dilemma
Consider, for example, a company that has ample experience with traditional IT, but is encountering the limitations of the traditional approach and wants to consider moving to cloud computing.
Enter the cloud vendor. The IT team, without much cloud experience, sits in a room while sales and an architect from the cloud vendor present a bunch of slides detailing how everything will get better once the company buys the vendor’s product.
All those unicorns and rainbows sound great while everyone is in the meeting room, but afterwards doubts begin to plague both management and the IT team. How good is this approach, really? Is it going to meet our requirements? Are we going to be able to manage it? How are our internal customers going to feel about it?
And the big one: should we go ahead with it?
What the team really needs at this point is a proof of concept (PoC). But you have to do it right.
What is this PoC thing anyway?
A proof of concept is a crucial part of any cloud project, and it shouldn’t be slapped together just to tick off a checkbox. A PoC can only be successful if both parties agree to put real effort into it: the vendor into creating the PoC cloud, and the IT staff into examining and testing it.
A cloud PoC should be a cloud built on a very small scale, but with all important components in place, so the admin and development teams can see, and more importantly actually try out, the cloud. Developers can see how to onboard an application or two. Tests can be conducted for situations such as instance or component failure. Overall the cost is moderate, especially compared to a large deployment, which will require a large financial commitment.
It’s important that the PoC is not only created appropriately, but deliberately used. Nobody is served if the PoC cloud is just set up and demoed to management; that can be achieved much more easily by demoing the vendor’s cloud environment.
Prerequisites
While the need for a PoC is rarely challenged, conflict over how it is created is common. Customers often think it can be built with three moldy servers from the back room, while vendors want all new hardware.
The truth usually lies somewhere in between.
At Mirantis, we recommend a reasonable approach to PoC setup. The servers don’t have to be new or top-of-the-line, but they should have reasonable specifications and, if they are not new, be well tested, especially if the reason for retiring them is not known. They must also match the requirements from the Mirantis MCP hardware recommendation list. If in doubt about specifications, talking to the vendor is much easier than deploying and then finding out things don’t work.
Preparing the hardware involves ensuring that it is racked and stacked, tested and accessible via IPMI. The customer must also provide for remote access for offsite deployment engineers or a usable workspace for onsite engineers as necessary.
All teams must communicate from the very beginning, and ideally the customer IT crew will be part of the deployment as it happens, at least to the degree their own work schedule allows. After all, if the PoC is successful, they are the ones who will be working with the resulting production cloud.
Deployment and Testing
Once everything is in place, the deployment team starts building. For Mirantis Cloud Platform (MCP), the engineers first build a software model of the PoC cloud in code. Once finished, this model is a complete representation of the PoC cloud.
Engineers can then use the model, together with the foundation node, to deploy all components of the cloud onto the rest of the hardware. Any changes are first made to the model and then deployed in turn, so that the model is always an accurate representation of the actual infrastructure. This is how it works on the large-scale production cloud, so it must work like this on the small scale, too.
Unlike in production, however, the PoC cloud makes it possible to do destructive testing, enabling the deployment team to show how failed parts of the cloud can be rebuilt in short order from the model. Tests should include not only cloud functionality and verification that all requirements have been met, but should also show all involved parties, from administrators to developers to management, how everything works together in real life.
A proof of concept is not a demo conducted by a single person, but a real world, albeit small, cloud environment, showcasing how the subsequent large deployment will ultimately be handled.
The PoC is complete. What now?
The PoC was a success, and the deployment team is hard at work on the large-scale cloud that was designed using the knowledge gained during the PoC.  
Meanwhile, the PoC cloud is still sitting there. Should you go ahead and dismantle it? I say that’s a waste. Here you have a small cloud that has all of the functionality you expect from your production cloud. The work and effort to create it has been spent, so let’s put it to good use.
Option one is to integrate it into the production environment as a staging cloud; after all, you need one anyway! Making use of the PoC cloud means you don’t have to provide extra hardware and effort to set up staging. Changes to production can be tested properly before they are implemented in production. (This is specifically how the MCP workflow has been designed.)
Option two is to have the deployment team create a new staging cloud together with the production environment, and grow the PoC cloud into a test and development environment. The admin team and the developers should already have an established workflow for this cloud from their time testing the PoC. Keys are in place, and software can be deployed using established methodology.
But which option should you choose? It depends on the hardware.
If the hardware for the PoC is similar to what is being deployed in production, it makes sense to consider the staging option, but if the PoC has been built from well-maintained but older hardware, test/dev should be your first choice. Of course, the dev cloud will need considerably more compute resources than the tiny PoC cloud, but this is a welcome opportunity to scale out a cloud environment, encounter the workflow and challenges that scaling out poses, and overcome them in a safe, low-pressure setting.
Do I need a PoC?
Not all customers need to do a proof of concept. Those who already have some cloud experience often opt to jump into production right away. It is not a bad choice, but in many cases even an experienced team can learn and test useful things in the PoC process.
One thing a PoC is not is a waste of time and money. Even in the rare cases where a PoC fails, it is much better to experience that failure with a relatively small financial outlay rather than have a production-ready but unsuitable cloud sitting around, burning money.
And if it is a success, the experience gained, the workflow mechanics learned and the development integration tested will prove invaluable when the production environment is built and goes online.
Besides, because the PoC cloud is repurposed instead of a test/dev or staging cloud being built after the fact, the net financial outlay is in essence minimal.
Source: Mirantis

4 ways IBM has embraced an open, hybrid multicloud approach

Most companies are just getting started on their cloud journey.
They’ve maybe completed 10 to 20 percent of the trek, with a focus on cost and productivity efficiency, as well as scaling compute power. There’s a lot more to unlock in the remaining 80 percent, though: shifting business applications to the cloud and optimizing supply chains and sales, which will require moving and managing data across multiple clouds.
To accomplish those things easily and securely, businesses need an open, hybrid multicloud approach. While most companies acknowledge they are embracing hybrid multicloud environments, well over half attest to not having the right tools, processes or strategy in place to gain control of them.
Here are four of the recent steps IBM has taken to help our clients embrace just that type of approach.
1. IBM to acquire Red Hat.
The IBM and Red Hat partnership has spanned 20 years. IBM was an early supporter of Linux, collaborating with Red Hat to help develop and grow enterprise-grade Linux and more recently to bring enterprise Kubernetes and hybrid multicloud solutions to customers. By joining together, we will be positioned to help companies create cloud-native business applications faster and drive greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management. Read the Q&A with Arvind Krishna, Senior Vice President, IBM Hybrid Cloud.
2. The launch of IBM Multicloud Manager.
When applications and data are distributed across multiple environments, it can be a challenge for enterprises to keep tabs on all their workloads and make sure they’re all in the right place. The new IBM Multicloud Manager solution helps organizations tackle that challenge by improving visibility across all their Kubernetes environments through a single dashboard to maintain security and governance, along with automation capabilities. Learn why multicloud management is becoming critical for enterprises.
3. AI OpenScale improves business AI.
AI OpenScale will be available on IBM Cloud and IBM Cloud Private with the goal of helping businesses operate and automate artificial intelligence (AI) at scale, no matter where the AI was built or how it runs. AI OpenScale heightens visibility, detects bias and makes AI recommendations and decisions fully traceable. Neural Network Synthesis (NeuNetS), a beta feature of the solution, automatically builds neural networks tailored to business data, helping organizations scale AI across their workflows more quickly.
4. New IBM Security Connect community platform.
IBM Security Connect is a new cloud-based community platform for cyber security applications. With the support of more than a dozen other technology companies, it is the first cloud security platform built on open federated technologies. Built using open standards, IBM Security Connect can help companies develop microservices, create new security applications, integrate existing security solutions, and make use of data from open, shared services. It also enables organizations to apply machine learning and AI, including Watson for Cyber Security, to analyze and identify threats or risks.
Learn more about IBM support of open source technology on the cloud.
Source: Thoughts on Cloud

Trung Nguyên Legend Corp. modernizes IT with IBM Cloud to take coffee brand global

Coffee is one of Vietnam’s key exports, and the country produces millions of tons of Robusta and Arabica beans each year. Trung Nguyên Legend Corp.’s mission is to bring Vietnamese coffee culture to the world and become a major global coffee brand.
To make its expansion aims a reality, Trung Nguyên Legend Corp. must be lean and flexible enough to react quickly to market developments and seize new opportunities. To boost business agility, Trung Nguyên Legend Corp. chose to migrate its SAP ERP business applications to IBM Services for Managed SAP Applications.
The leading Vietnamese coffee
We produce, process and sell coffee products in more than 80 countries and territories. Our products include G7, the top instant coffee in Vietnam and a beloved brand all over the world; Trung Nguyên Legend café sữa đá (iced coffee with milk); Special Edition; Classic; and the first and only oxygen-tight, biodegradable coffee capsules in the world. We also operate more than 10,000 coffee shops in Vietnam.
Trung Nguyên Legend is already a well-loved brand all over the world, not just in Vietnam. However, we face stiff competition from established, multinational corporations that have been operating in the sector for decades. To challenge the big players, we knew that we needed to modernize our business processes and systems.
We rely on SAP enterprise resource planning (ERP) applications to manage everything from production to our supply chain and distribution through to sales and finance. We used to run these mission-critical applications on on-premises infrastructure, which was time-consuming to manage, expensive to maintain and difficult to scale.
Choosing cloud computing
After weighing our options, we decided to move from on-premises infrastructure to a cloud model. We didn’t want the hassle of managing physical hardware anymore and, crucially, we wanted the ability to scale up operations rapidly when needed.
We looked at offerings from several cloud vendors and settled on IBM Services for Managed SAP Applications. We love the fact it’s a fully managed service. This means that, as well as taking care of all the back-end infrastructure, IBM also handles the application management and maintenance. The longstanding IBM collaboration with SAP gave us the confidence to entrust them with our most critical business applications.
IBM also helped us to migrate our SAP ERP applications to SAP HANA as part of our cloud modernization project. We worked closely with both IBM and a local SAP partner, Digi-InfoFabrica, to migrate to the SAP HANA platform before moving our entire SAP landscape to the IBM Cloud.
Optimizing operations
Running business processes in the IBM Cloud means we can scale capacity up and down whenever we want. All we need to do is ring up the IBM team. This means that we’re much better positioned to react to fluctuating market conditions and demands from the business. It will also make expanding into new markets so much easier.
Because Managed SAP Applications is a fully managed service, we no longer need to worry about our SAP landscape. This means that, instead of routine admin tasks, our IT team can focus on exciting new projects, including building e-commerce and omnichannel marketing platforms.
Moving to the cloud has also removed the need to invest in and maintain costly hardware systems. We estimate that this will reduce the total cost of ownership of our SAP landscape by 43 percent over the next five years.
We’ve also seen performance increase by a factor of 30 since migrating to SAP HANA on Managed SAP Applications. Reports are available more quickly and applications are running faster, so there’s no more waiting around with long loading times, and employees can work more productively.
Recipe for success
By migrating from on-premises infrastructure to super-flexible cloud computing, we have dramatically increased business agility.
The move to Managed SAP Applications represents a massive shift in the way we operate, making us a leaner, more agile, digital-driven company.
Looking to the future, we’re planning to harness IBM big data analytics, AI and blockchain technologies to forge smarter business operations and accelerate international growth. With support from IBM and SAP, we know we have what it takes to become a global coffee brand.
Read the case study for more details.
Source: Thoughts on Cloud

Why automated data capture is the two-minute drill of digital transformation

The NFL and NCAA football seasons are in full sprint toward the playoffs. More often than not, games are decided by the two-minute drill: an offensive strategy in which a team tries to rapidly move the ball down the field when it’s losing late in a game and time is running out.
Similarly, organizations can use a two-minute drill to make game-winning plays against their competitors. In digital transformation, a team’s ability to execute a winning strategy depends on its ability to develop and execute a smart play that outmaneuvers opponents.
Outrun competitors with automated data capture
Many organizations are using automated data capture as part of their winning digital transformation strategy to gain a competitive edge in their market. Frost & Sullivan’s white paper, “Why Data Capture and Automation Are Key to Digital Transformation: How Advanced Document Capture Can Improve Business Outcomes”, describes best practices enterprises should consider when evaluating intelligent, automated data capture solutions.
Just as coaches, quarterbacks and offenses practice and evaluate plays to ensure crisp execution when it matters, decision makers should evaluate and consider processes and solutions that embrace best practices to improve business outcomes. The critical best practices that leading enterprises follow to execute winning digital transformation strategies with automated capture include identifying critical data, evaluating information, classifying content, extracting key components and using data for critical success.
Similar to a coach’s expertise in selecting the right combination of plays for a situational two-minute drill game plan, Frost & Sullivan found that leading enterprises select and implement the right mix of automated data capture and digital business automation capabilities to “increase productivity and efficiency, drive higher revenue per employee, replace repetitive and mundane processes, and redeploy knowledge workers to focus on value-added tasks.” A successful automated capture game plan based on best practices begins with mirroring how people work and understanding how to integrate with business processes and line-of-business applications.
Build your automated data capture game plan
What game plan should your enterprise consider when evaluating automated data capture capabilities as part of your digital transformation strategy? Three recommendations from “The essential buyer’s guide to data capture and automation: What to look for when considering capture and automation solutions” are:

Evaluate automated capture solutions that use cognitive capabilities. Intelligent capture increases efficiency of data extraction using machine learning to understand and gain contextual intelligence to increase accuracy and insight of unstructured data.
Consider platform-based solutions. Research offerings that encompass a full range of capture capabilities as a key component of a digital business automation platform that includes tasks, content, workflow and decisions to help enterprises drive all types of automation projects at speed and scale.
Require integration with line-of-business applications. Make sure your data capture and automation solution can integrate with existing (and future) line-of-business applications. Integration ensures sharing of captured data in a usable format with all relevant applications. Not only will integration improve availability of data to users and applications to ensure accuracy, but it will also provide investment protection for an enterprise’s existing applications.

Want to execute your own automated capture digital transformation two-minute drill to gain a competitive edge, improve customer experience and increase employee productivity?
Download the Frost & Sullivan white paper, “Why Data Capture and Automation Are Key to Digital Transformation: How Advanced Document Capture Can Improve Business Outcomes” and “The essential buyer’s guide to data capture and automation: What to look for when considering capture and automation solutions”.
Source: Thoughts on Cloud

Pikkart offers mobile augmented reality solution delivered on IBM Cloud

By TechTarget’s definition, augmented reality (AR) is the integration of digital information with the user’s environment in real time. Unlike virtual reality, which creates a totally artificial environment, augmented reality uses the existing environment and overlays new information on top of it.
For example, one of the first commercial applications of AR technology was the yellow “first down” line that began appearing in televised US football games in the late 1990s.
Pikkart S.r.l. is an Italian startup that specializes in AR. It is the only Italian company — and one of the few in the world — to have its own proprietary framework that enables users to add digital content to real content with the aid of mobile devices.
Introducing Pikkart AR Discover
Pikkart’s most recent AR development is called AR Discover. The solution is a mobile app that can recognize objects and environments and link content to them. For example, users could visit a museum, point their mobile device at a piece of artwork and learn more about it with associated content such as information sheets, pictures, videos or 3D elements. Likewise, a factory worker could access information about operating a piece of machinery or perhaps a shopper in a retail store could gain access to exclusive sales.
Developers working with the solution simply take a few reference pictures and load them into the software to automatically create a model that is associated with AR content.
Using a cloud recognition system
Pikkart uses its cloud recognition system (CRS) to perform fast image searches of a dataset of hundreds of thousands of pictures. When a user frames a marker with their camera, the image is sent to cloud servers that find the matching image (if it exists) in less than one second using advanced algorithms.
The CRS scales to hundreds of thousands of images while keeping the app itself lightweight. The mobile device, with its memory and compute limitations, doesn’t have to do the heavy lifting.
Getting started with IBM
Pikkart participated in an IBM Design Thinking workshop, then joined the IBM Global Entrepreneur Program and has been running its business on the IBM Cloud since the beginning. The company determined that it needed the reliability, elasticity and bare metal option that IBM Cloud offered.
Additionally, though Pikkart is still in startup mode, it has customers from around the world. The company must be able to localize its application in the closest cloud to its users to support consistent application performance anywhere in the world.
Realizing real-world benefits
Pikkart has its own proprietary framework, which allows developers to give shape to their ideas in a simple and intuitive way, without the need for any additional technical expertise. Pikkart AR is compatible with both Android and iOS operating systems.
With IBM Cloud, Pikkart customers have the speed and flexibility they need for AR applications from gaming to retail marketing to industrial. Customers can also feel confident about IBM Cloud security and scalability. Both are important factors in the dynamic AR market. Furthermore, running its business on IBM Cloud means Pikkart can be very close to every customer it has, anywhere in the world.
The company counts manufacturers, supermarkets and a tourist board among its customers.
Learn more about IBM Design Thinking.
Source: Thoughts on Cloud

Helvetia Seguros launches Watson-powered virtual assistant to support sales staff

Everyone knows that sinking feeling: to get to the information you want on a website, you’re presented with a long form with dozens of questions.
It’s a real turn-off for customers. In general, around 51 percent of customers will not fill out a long form once they see it. But this is still how many companies handle online sales.
To make our customers happy and get ahead of the competition, Helvetia Seguros has embarked on a wide-reaching digital transformation that touches almost every element of the business. As part of this, we are launching a broad array of digital services and making it easier for customers to interact with us online.
Of course, insurers like us need information about customers to estimate risk and calculate premiums, but we were eager to move away from long questionnaires. We wanted to find a way to make a better first impression and make the customer experience more welcoming.
Assistance to build assistants
We wanted to build a virtual assistant to help customers buy insurance online. We decided to pilot the project on our family protection insurance because it’s one of our least complex policies and is relatively straightforward to price.
We enlisted the help of IBM Business Partner Red Skios. We’ve been working with IBM for decades, and its consultants are always eager to help us try new things and support us if we have challenges or questions.
How it works
When a customer browses the relevant product page, a new window opens offering help from a virtual assistant. If the customer accepts, they simply tell the virtual assistant about the insurance cover they are looking for in ordinary, natural language and without a form in sight.
The virtual assistant uses artificial intelligence from IBM Watson Assistant and IBM Watson Natural Language Understanding to comprehend the responses, capture the relevant data and generate a personalized quote. Thanks to the virtual assistant’s IBM Watson Speech to Text and IBM Watson Text to Speech APIs, the customer can choose to interact either by speaking or typing the relevant information. The Watson APIs are delivered via IBM Cloud.
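For developers wondering what such an exchange looks like in code, here is a minimal sketch using IBM's Python SDK for Watson Assistant. The API key, service URL and assistant ID are placeholders, and the snippet is a generic illustration of the Assistant API rather than Helvetia Seguros' actual integration.

    # Minimal sketch: sending a customer utterance to IBM Watson Assistant with the
    # ibm-watson Python SDK. Credentials and assistant_id are placeholders; this
    # illustrates the public Assistant API, not Helvetia's production code.
    from ibm_watson import AssistantV2
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    authenticator = IAMAuthenticator("YOUR_API_KEY")
    assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
    assistant.set_service_url("YOUR_SERVICE_URL")  # the IBM Cloud endpoint for your instance

    # Each conversation runs inside a session, so context (for example, answers the
    # customer has already given about the cover they want) carries across turns.
    session = assistant.create_session(assistant_id="YOUR_ASSISTANT_ID").get_result()

    response = assistant.message(
        assistant_id="YOUR_ASSISTANT_ID",
        session_id=session["session_id"],
        input={
            "message_type": "text",
            "text": "I'd like family protection cover for two adults and one child",
        },
    ).get_result()

    # The reply (a follow-up question or the data needed to build a personalized
    # quote) comes back as structured output the web front end can render, or hand
    # to the Text to Speech service to read aloud.
    print(response["output"]["generic"])
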
If a customer asks questions about the policy, the virtual assistant provides immediate answers. If the customer is happy with the offer, he or she can accept the quote, purchase the policy online and sign the contract electronically, all without intervention from a salesperson.
If a customer wishes to obtain a quote and then make a decision later, they can request an email copy of the quote to avoid repeating the process when they are ready to proceed with the purchase.
The virtual assistant gives clients a completely new way to purchase insurance online, all without asking them to fill out forms.
By enabling customers to purchase these relatively uniform policies online, we enable our agents and brokers to focus on selling sophisticated policies, which is a much better use of their time and skills.
Happy customers
After a soft launch, the virtual assistant has been a hit with customers. Within just a few weeks it has conversed with 355 customers and generated 138 personalized quotes.
We believe that this form-free approach to buying insurance will attract significant new business from customers, particularly during evenings and weekends when other channels are often unavailable.
Read the case study for more details.
Source: Thoughts on Cloud