Cloud-based receipt management enhances mobile banking

How many times have you looked at your bank statement, noticed a line item like “WITHDRAW / POS 0217 2013 975246 MARSHALLS ” and racked your brain trying to figure out what you bought?
You likely had to dig through your wallet for the paper receipt (if you even kept it) or rummage through your inbox for the electronic copy – but no longer.
Now, when banks incorporate the receipt management capabilities of Sensibill into their mobile apps, customers can see purchase details and more in the palms of their hands. Sensibill integrates with banks’ existing mobile apps and web interfaces for an easier way to keep track of purchases, expenses and budgeting.
How Sensibill works
When a customer takes a picture of a receipt with a smartphone, the picture is sent to the Sensibill service, hosted on the IBM Cloud. Sensibill then extracts the data and creates an enhanced receipt: a smart receipt.
Sensibill’s smart receipt system can remind you about return and warranty information, as well as pull information about the merchant, including a contact number and address.
Sensibill uses a proprietary algorithm to process any receipt with a high level of accuracy. Because a typical receipt contains approximately 150 data points, Sensibill relies on an R&D team of experts whose single mission is to extract and classify receipt data.
Machine learning: An essential component
The R&D team specializes in machine learning techniques – deep learning, specifically – to read and understand receipts. This means identifying items, the categories items belong to, the merchant, the total and so on.
The idea is that as the machine continues to “learn” and become more complex, it will be able to recognize the item “Levi’s 501 jeans”, and categorize it as clothing or men’s apparel. Understanding the true identity of an item means that the machine can provide deeper, enriched insights.
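Sensibill’s models are proprietary, but the core idea of mapping a raw receipt line item to a spending category can be illustrated with a minimal text classifier. The sketch below is purely illustrative and is not Sensibill’s algorithm; the items, categories and training data are invented.

```python
# Illustrative only: a tiny text classifier that maps receipt line items to
# spending categories. This is NOT Sensibill's algorithm; the items,
# categories and training data below are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_items = [
    "LEVIS 501 JEANS 34X32",
    "MENS CREW SOCKS 6PK",
    "ORGANIC BANANAS 1.2LB",
    "WHOLE MILK 2L",
    "HDMI CABLE 6FT",
    "USB-C CHARGER 30W",
]
train_labels = ["clothing", "clothing", "groceries", "groceries",
                "electronics", "electronics"]

# Character n-grams cope reasonably well with the abbreviations and
# truncations that are typical of receipt text.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_items, train_labels)

print(model.predict(["LEVIS 511 SLIM FIT"]))  # expected: ['clothing']
```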
Proprietary technology and advanced algorithms enable Sensibill to take full advantage of machines to perform work at a scale that is just not possible for humans.
Sensibill’s “machine” can also learn to understand historical and habitual purchases. With that information, users can track spending and budget, or banks can use it to present customers with appropriate advice or offers.
Anticipating needs
As the Sensibill system continues to learn and add layers to its hierarchical neural network, it will be able to discern increasingly complex information from customers’ purchase data. Beyond identifying items, it will understand the meanings attached to common spending patterns until it can perform basic inductive reasoning.
Ultimately, the Sensibill system will be able to predict a customer’s needs in real time based on changes in behavior, then alert customers to offers they need and want, maybe before they even know they need or want them. This capability will augment bank services profoundly.
How banks benefit
Sensibill developed its solution to work with banks concerned about fintechs disrupting the industry and eroding their customer bases. The solution has been built from the ground up with security in mind, and offers a robust experience.
Banks often do not employ subject matter experts outside their core offerings, so rather than try to build something like this on their own, they can partner with Sensibill. Incorporating the receipt management solution into its service greatly improves a bank’s ability to serve and retain customers.
Sensibill credits its ability to work well with banks to its receipt management solution running on IBM Cloud. Banks trust IBM and its solid expertise in the financial services industry.
Read about other IBM clients who built their success on the IBM Cloud here.
The post Cloud-based receipt management enhances mobile banking appeared first on news.
Quelle: Thoughts on Cloud

Why GPUs are taking over the enterprise

With IBM InterConnect just a few weeks away, we’re gearing up to showcase how the MapD graphics-processing-unit (GPU)-powered data analytics and visualization platform has evolved over the past year.
When MapD participated in InterConnect last year, we were just starting to ramp up. Our mission then, as now, remains unwavering: to provide the world’s fastest data exploration platform.
So much has changed. Since the last InterConnect, MapD launched its product offering and announced its Series A funding while steadily building on momentum throughout the year, culminating in the release of version 2.0 of the MapD Core database and Immerse visual analytics platform in December.
In the meantime, we were fortunate to pick up some prestigious awards including Gartner Cool Vendor, Fast Company Innovation by Design, The Business Intelligence Group’s Startup of the Year, CRN’s 10 Coolest Big Data Startups and Barclays Open Innovation Challenge.
One major reason for all this attention and praise is that GPUs are taking over the enterprise. We’ve written about this extensively in several blog posts, but GPUs are no longer a technology novelty item. They are mainstream, as Nvidia’s tripled data center revenues attest.
What makes MapD unique is that, from the ground up, we’ve built a querying engine and visual analytics platform that takes advantage of GPUs. With CPUs come the limitations of Moore’s Law. As more and more businesses place an emphasis on massive quantities of data, machine learning, mathematics, analytics and visualization, GPUs are poised to take over.
Here’s a simple example: say you want to run a query against a billion or more records, a pretty common request. With today’s legacy database solutions you should plan a two-martini lunch because that’s how long it’s going to take to run.
You better have your question nailed too, because if you want to modify it, you are going to want to eat dinner before you see the updated query again.
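To make that scenario concrete, here is a minimal sketch of issuing such a query from Python. It assumes the pymapd client and a trips table modelled on the taxi benchmark; the host, credentials and schema are placeholders, not a real deployment.

```python
# Minimal sketch: an aggregation over a large table in MapD Core from Python.
# Assumes the pymapd client and a "trips" table loaded with taxi data;
# host, credentials and schema are placeholders, not a real deployment.
from pymapd import connect

con = connect(user="mapd", password="HyperInteractive",
              host="localhost", port=6274, dbname="mapd")
cur = con.cursor()
cur.execute("""
    SELECT passenger_count, AVG(total_amount) AS avg_fare
    FROM trips
    GROUP BY passenger_count
    ORDER BY passenger_count
""")
for passenger_count, avg_fare in cur.fetchall():
    print(passenger_count, round(avg_fare, 2))
con.close()
```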
The experience is similar to the days of dial-up page load times, which wouldn’t be that problematic if we didn’t know what real speed felt like.
In the past year, we’ve been able to validate our claims in a series of powerful independent benchmarks published by noted database authority Mark Litwintschik. To date, we remain the fastest platform (by a factor of 75) that he has ever tested against the 1.2 billion-row New York City taxi dataset.
We are delighted to be speaking at IBM InterConnect again. It is a homecoming in many ways. We can’t wait to show you how transformative the MapD platform can be when tackling big data problems.
Check out the session “Speed, Scale and Visualization: How GPUs are Remaking the Analytics Landscape in the Cloud,” at InterConnect Thursday, 23 March, from 11:30 AM to 12:15 PM.
Attend this session and more by registering for IBM InterConnect 2017.
We’re looking forward to another great event in Las Vegas. If you’d like to set up an appointment or demo, send us an email at info@mapd.com.
The post Why GPUs are taking over the enterprise appeared first on news.
Quelle: Thoughts on Cloud

RDO Ocata Release Behind The Scenes

I have been involved in 6 GA releases of RDO (from Juno to Ocata), and I wanted to share a glimpse of the preparation work.
Since Juno, our process has tremendously evolved: we refocused RDO on EL7, joined the CentOS Cloud SIG, and moved to Software Factory.

Our release process does not start when upstream announces GA or even a milestone; it starts at the very beginning of the upstream cycle.

Trunk chasing

We have been using DLRN to track upstream changes and continuously build OpenStack as an RPM distribution.
Then our CI, hosted on the CentOS community CI infrastructure, runs multiple jobs on DLRN snapshots. We use the WeIRDO framework to run the same jobs as the upstream CI on our packages.
This allows us to detect integration issues early and get either our packaging or the upstream projects fixed. This also includes installers such as OPM, TripleO or PackStack.

We also create Ocata tags in CentOS Community Build System (CBS) in order to build dependencies that are incompatible with currently supported releases.

Branching

We start branching the RDO stable release around milestone 3 and bootstrapping stable builds. This includes:

registering packages in CBS; I scripted this part for Ocata using the rdoinfo database (see the sketch after this list).
syncing requirements in packages.
branching distgit repositories.
building upstream releases in CBS; this part used to be semi-automated with the rdopkg tool, and Alfredo is consolidating it into a cron job that creates reviews.
tagging builds into -testing repositories; some automation is in preparation.
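To illustrate the first item above, a simplified version of walking the rdoinfo database to find packages that still need an ocata registration could look like this. The file path and the exact YAML layout are assumptions based on the public rdo.yml format, not the actual script used for the release.

```python
# Simplified sketch: list packages from the rdoinfo database (rdo.yml) that
# do not yet carry an "ocata" tag, as candidates for CBS registration.
# The file path and exact YAML layout are assumptions for illustration.
import yaml

with open("rdoinfo/rdo.yml") as f:
    info = yaml.safe_load(f)

missing = []
for pkg in info.get("packages", []):
    tags = pkg.get("tags") or {}
    if "ocata" not in tags:
        missing.append(pkg["name"])

print("%d packages still need an ocata registration:" % len(missing))
for name in sorted(missing):
    print("   %s" % name)
```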

Trunk chasing continues, but we pay attention to keeping promotions frequent, to avoid a gap between tested upstream commits and releases.

GA publication

Since OpenStack projects tag their releases slightly ahead of time, we have most of the GA releases already built in CBS, but some of them come late.
We also trim the final GA repositories and use the repoclosure utility to check that there are no missing dependencies (a sketch of that check follows below).
Before mass-tagging builds into -release, we launch the stable promotion CI jobs, and if they are green, we publish them.
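A wrapper along these lines can gate publication on a clean repoclosure run. The repository URL is a placeholder and the repoclosure options are assumptions, so adjust them to your yum-utils/dnf version.

```python
# Sketch: fail the release job if the candidate GA repository has broken
# dependencies. The repository URL is a placeholder and the repoclosure
# flags are assumptions; adjust them to your yum-utils/dnf version.
import subprocess
import sys

CANDIDATE_REPO = "https://example.org/rdo/ocata-testing/"  # placeholder

cmd = [
    "repoclosure",
    "--repofrompath=rdo-candidate,%s" % CANDIDATE_REPO,
    "--repoid=rdo-candidate",
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    sys.stderr.write("repoclosure reported missing dependencies, aborting\n")
    sys.exit(1)
print("No missing dependencies; the repository can be tagged into -release.")
```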

At this stage, the CentOS Core team creates the final GA repositories and signs the packages.

For Newton, it took 10 hours between the upstream GA announcement and repository publication, with 4 hours up to stable tagging. As for Ocata, all stable builds and CI jobs were finished within 2 hours.

Fun fact: Alan and I were doing the last bits of the Ocata release in the Atlanta PTG hallway and even got to see Doug Hellmann send the GA announcement live (which started the chronometer for us). So we sprinted to have RDO Ocata GA ready as soon as possible (CI included!). We still have room for improvement, but we were the first binary OpenStack distro available!

Thoughts

As of Ocata, there are still areas of improvement:

documenting the release process: many steps are still manual or require specific knowledge. During the Newton/Ocata releases, we enabled Alfredo to do large chunks of the release preparation work.
Together with post-mortems, this helped us clarify the process and prepare to let more people help with the release.
dependencies CI: dependencies are a critical factor in releasing RDO on time. We need to test dependencies against RDO releases, and RDO against CentOS updates, to ensure that nothing is broken. That’s one of our goals for Pike.
tag management: tags are used in CBS to determine where builds are to be published. Unlike Fedora, CBS has no automated pipeline to manage updates, so we have to tag builds manually. I’m currently working on a gerrit-based process to manage tags. The tricky part is how to avoid inconsistencies in repositories (e.g. breaking dependencies, accidental untagging, etc.).
dependency updates: we want dependencies to remain compatible with Fedora packages, since Fedora is the foundation of the next RHEL/CentOS. Some of them are maintained in Fedora, others in RDO common (some carrying basic patches to fix EL7 build issues that are not acceptable in Fedora), and the rest are forks that we effectively maintain (e.g. MariaDB).
As a first step, we want that last set of packages to be maintained in our gerrit instance, to allow maintainers to do builds without any releng support.
more contributions! Our effort to automate the release pipeline also serves the goal of bringing more contributors into the release work, so if you’re interested, just come and tell us. ;-)

I hope this gave you an overview of how RDO is released and what our next steps are for the Pike release.
Quelle: RDO

What’s new in OpenStack Ocata webinar — Q&A

The post What's new in OpenStack Ocata webinar — Q&A appeared first on Mirantis | Pure Play Open Cloud.
On February 22, my colleagues Rajat Jain, Stacy Verroneau, and Michael Tillman and I held a webinar to discuss the new features in OpenStack’s latest release, Ocata. Unfortunately, we ran out of time for questions and answers, so here they are.
Q: What are the benefits of using the cells capability?
Rajat: The cells concept was introduced in the Juno release, and as some of you may recall, it was meant to allow a large number of nova-compute instances to share OpenStack services.

Therefore, Cells functionality enables you to scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. It supports very large deployments.

When this functionality is enabled, the hosts in an OpenStack Compute cloud are partitioned into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker. These capabilities were provided by the nova-cells and nova-api services.
One of the key changes in Ocata is the upgrade to cells v2, which now only relies on the nova api service for all the synchronization across the cells.
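For reference, bootstrapping cells v2 on an Ocata control node comes down to a couple of nova-manage commands. The sketch below drives them from Python purely for illustration; the concrete setup (transport URLs, database connections in nova.conf) is deployment-specific.

```python
# Sketch of bootstrapping cells v2 on an Ocata control node, driven from
# Python purely for illustration. The nova-manage sub-commands exist in
# Ocata, but the concrete setup (transport URLs, database connections in
# nova.conf) is deployment-specific.
import subprocess

def nova_manage(*args):
    """Run a nova-manage command and fail loudly if it errors."""
    subprocess.run(["nova-manage"] + list(args), check=True)

# Map cell0 (for instances that never get scheduled) and create the first
# real cell from the settings already present in nova.conf.
nova_manage("cell_v2", "simple_cell_setup")

# After adding compute nodes, map them into their cell so that nova-api
# can route requests to them.
nova_manage("cell_v2", "discover_hosts")
```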
Q: What is the placement service and how can I leverage it?
Rajat: The placement service, which was introduced in the Newton release, is now a key and mandatory part of OpenStack for determining the optimum placement of VMs. Basically, you set up pools of resources, provide an inventory of the compute nodes, and then set up allocations for resource providers. Then you can set up policies and models for optimum placement of VMs.
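That workflow maps directly onto the placement REST API. The sketch below shows the general shape of those calls with the requests library; the endpoint, token and inventory values are placeholders, and the exact request bodies depend on the placement microversion you negotiate.

```python
# Sketch of the placement workflow described above against the bare REST
# API. The endpoint, token and values are placeholders, and the exact
# request bodies depend on the placement microversion you negotiate.
import requests

PLACEMENT = "http://controller:8778"           # placeholder endpoint
HEADERS = {
    "x-auth-token": "ADMIN_TOKEN",             # placeholder token
    "openstack-api-version": "placement 1.4",  # Ocata-era microversion
}

# 1. Register a resource provider for a compute node, then read back its
#    uuid and generation.
requests.post("%s/resource_providers" % PLACEMENT,
              json={"name": "compute-1"}, headers=HEADERS)
rp = requests.get("%s/resource_providers?name=compute-1" % PLACEMENT,
                  headers=HEADERS).json()["resource_providers"][0]

# 2. Publish the node's inventory of consumable resources.
requests.put(
    "%s/resource_providers/%s/inventories" % (PLACEMENT, rp["uuid"]),
    json={
        "resource_provider_generation": rp["generation"],
        "inventories": {
            "VCPU": {"total": 32},
            "MEMORY_MB": {"total": 65536},
        },
    },
    headers=HEADERS,
)

# 3. The scheduler then consumes this data (plus allocations) to pick an
#    optimal home for each VM.
print(requests.get("%s/resource_providers" % PLACEMENT, headers=HEADERS).json())
```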
Q: What is the OS profiler, and why is it useful?
Rajat: OpenStack consists of multiple projects. Each project, in turn, is composed of multiple services. To process a request — for example, to boot a virtual machine — OpenStack uses multiple services from different projects. If something in this process runs slowly, it’s extremely complicated to understand what exactly goes wrong and to locate the bottleneck.
To resolve this issue, a tiny but powerful library, osprofiler, was introduced. The osprofiler library will be used by all OpenStack projects and their Python clients. It can generate one trace per request, flowing through all of the services involved. This trace can then be extracted and used to build a tree of calls, which can be quite handy for a variety of reasons (for example, in isolating cross-project performance issues).
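In application code, the library boils down to initializing a trace and annotating the interesting spans. A minimal, self-contained sketch (outside of any real OpenStack service, with a placeholder HMAC key) looks roughly like this.

```python
# Minimal osprofiler usage sketch, outside a real OpenStack service. In a
# real deployment the HMAC key comes from configuration and a notifier is
# set up so the trace can be collected and rendered as a tree of calls.
from osprofiler import profiler

profiler.init("SECRET_HMAC_KEY")

@profiler.trace("build-report")
def build_report():
    # Annotate an interesting span inside the request.
    with profiler.Trace("db-query", info={"query": "SELECT 1"}):
        pass  # pretend we hit the database here
    return "done"

build_report()
```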
Q: If I have keystone connected to a backend Active Directory, will I benefit from the auto-provisioning of the federated identity?
Rajat: Yes. The federated identity mapping engine now supports the ability to automatically provision projects for federated users. A role assignment will automatically be created for the user on the specified project. Prior to this, a federated user had to attempt to authenticate before an administrator could assign roles directly to their shadowed identity, resulting in a strange user experience. This is therefore a big usability enhancement for deployers leveraging the federated identity plugins.
Q: Is FWaaS really used out there?
Stacy: Yes it is, but its viability in production is debatable and going with a 3rd party with a Neutron plugin is still, IMHO, the way to go.
Q: When is Octavia GA planned to be released?
Stacy: Octavia is forecast to be GA in the Pike release.
Q: Are DragonFlow and Tricircle ready for Production?
Stacy: Those are young Big Tent projects, but we are pretty sure we will see a big evolution for Pike.
Q: What’s the codename for the placement service, please?
Stacy: It’s just called the Placement API. There’s no fancy name.
Q: Does Ocata continue support for Fernet tokens?
Rajat: Yes.
Q: With a federated provider, can I integrate my OpenStack environment with my on-prem AD and allow domain users to use OpenStack?
Rajat: This was always supported, and is not new to Ocata. More details at https://docs.openstack.org/admin-guide/identity-integrate-with-ldap.html
What’s new in this area is that the federated identity mapping engine now supports the ability to automatically provision projects for federated users. A role assignment will automatically be created for the user on the specified project. Prior to this, a federated user had to attempt to authenticate before an administrator could assign roles directly to their shadowed identity, resulting in a strange user experience.
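Once a project exists for the user (auto-provisioned or pre-assigned), a domain user authenticates like any other Keystone v3 user. A minimal sketch with keystoneauth1 and python-novaclient is shown below; the auth URL, domain, project and credentials are placeholders for your own environment.

```python
# Sketch: a domain user (e.g. backed by AD/LDAP) authenticating against
# Keystone v3 and listing servers. All names, URLs and credentials are
# placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="jdoe",               # the AD/LDAP user
    password="s3cret",
    user_domain_name="CORP",       # the AD-backed domain
    project_name="team-project",   # auto-provisioned or pre-assigned
    project_domain_name="CORP",
)
sess = session.Session(auth=auth)

nova = nova_client.Client("2.1", session=sess)
for server in nova.servers.list():
    print(server.name)
```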

Q: If I’m using my existing domain users from AD in OpenStack, how would I control their rights/roles to perform specific tasks in the OpenStack project?
Rajat: You would first set up authentication via LDAP, then provide connection settings for AD and also set the identity driver to ldap in the keystone.conf. Next you will have to do an assignment of roles and projects to the AD users. Since Mitaka, the only option that you can use is the SQL driver for the assignment in the keystone.conf, but you will have to do the mapping. Most users prefer this approach anyway, as they want to keep the AD as read only from the OpenStack connection. You can find more details on how to configure keystone with LDAP here.
Q: What, if anything, was pushed out of the “big tent” and/or did not get robustly worked on?
Nick:  You can get a complete view of work done on every project at Stackalytics.
Q: So when is Tricircle being released for use in production?
Stacy: Not soon enough.  Being a new Big Tent project, it needs some time to develop traction.  
Q: Do we support creation of SR-IOV ports from Horizon during instance creation? If not, are there any plans there?
Nick: According to the Horizon team, you can pre-create the port and assign it to an instance.
Q: Way to go warp speed, Michael! Good job, Rajat and Stacy. Don’t worry about getting behind, I blame Nick anyway. Then again, I always blame Nick.
Nick: Thanks Ben, I appreciate you, too.

OpenShift Commons Gathering Berlin Adds New Speakers from Google, Atos, Volvo, T-Systems, OCI, CNCF and More!

The OpenShift Commons Gathering brings together experts from all over the world to discuss container technologies, best practices for cloud-native application developers, and the open source software projects that underpin the OpenShift ecosystem, with the goal of taking the ecosystem to the next level in cloud-native computing. The 2017 event will gather 200+ developers, DevOps professionals and sysadmins to explore the next steps in making container technologies successful and secure.
Quelle: OpenShift

Cognitive computing and analytics come to mobile solutions for employees

The Drum caught up with Gareth Mackown, partner and European mobile leader at IBM Global Business Services, at Mobile World Congress this week in Barcelona to ask him how mobile solutions are becoming more vital not only for an enterprise’s customers, but also for employees.
“Today, organizations are really being defined by the experiences they create,” Mackown said in an interview. “Often, you think of that in terms of customers, but more and more we’re seeing employee experience being a really defining factor.”
IBM partnered with Apple to transform employee experiences through mobility, he said, and it’s just getting started. Internet of Things (IoT) technology, cognitive computing and analytics will make those mobile solutions “even more critical” for people working in all kinds of different fields.
Mackown pointed to the new IBM partnership with Santander, announced at Mobile World Congress. “We’re helping them design and develop a suite of business apps to help them transform the employee experience they have for their business customers.”
The video below includes the interview with Mackown, along with mobile business leaders from several other large companies.

Find out more in The Drum’s full article.
The post Cognitive computing and analytics come to mobile solutions for employees appeared first on news.
Quelle: Thoughts on Cloud

Reach for the cloud with financing

Not too long ago, cloud was touted primarily as a cost-cutting tool and means of improving agility through easy access to low cost infrastructure. It’s now gaining visibility in the minds of leaders for its potential to change the dynamics of digital globalization and competition.
One way to drive the new experiences clients are after is to bring together the value of existing investments in data and applications with new methods of engagement — in the form of modern, hybrid mobile apps that can extend the reach of businesses into the hands of the consumer and augment the data you have today with social, weather, Internet of Things and other data that enrich the client’s experience and interactions.
As we move to cloud for more strategic applications, we uncover opportunities to reimagine business processes and entire industry models. Cognitive capabilities can learn from individual client interactions over time, adapt automatically to accommodate changing preferences and buying patterns, and even understand tone of voice to tailor interactions to a person’s state. But that is only the beginning: the ability to interconnect the latest capabilities on cloud, no matter the vendor or source, with speed and without lock-in, can make the difference between lagging and leading.
The extent to which leaders will look into the horizon — and consider the strategic implications of the cloud — will have a significant impact on their long term growth. Is your strategy centered solely on scalable infrastructure? Is it adaptable to your business model and investments? Does it deliver the higher value business applications and industry functions that you need to enable game-changing business models, processes and customer experiences?
Current budgets may restrict acting now or even this year, but the cost of waiting is high, given the urgency to digitally transform your business ahead of the competition.  We can help affordably finance the complete IT life cycle for your cloud infrastructure solution – including services, software, and hardware products.
Visit Financing IBM Cloud to learn about our financing options. And join us, the biggest companies, the most innovative new startups, and the world’s foremost technical experts at IBM InterConnect 2017.
A version of this article appeared on the IBM Global Financing blog.
The post Reach for the cloud with financing appeared first on news.
Quelle: Thoughts on Cloud

While storage is on your mind, considerations to protect against outages

Let’s admit it: Storage is not the sexiest thing to talk about. But when it’s gone or unavailable, people notice, whether it’s your customers, employees or constituents. If you can’t get to your data, nothing else in your system matters.
When it comes to storage of images, audio, video or other unstructured content,  organizations need technology designed for the needs of business, and that’s IBM Cloud Object Storage.
IBM Cloud Object Storage encrypts and slices up data as soon as it comes in, with the slices dispersed across multiple regions automatically. With the “immediate consistency” capability, if a write operation is confirmed, data is protected immediately. That’s really important, because if a region goes down, data can still be delivered from the slices that exist in remaining regions. Applications that rely on that data remain up and running. They can survive regional outages we often hear about.
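The underlying idea of slicing with redundancy, so that losing a region does not mean losing data, can be shown with a toy example. Real dispersed storage uses erasure codes far more sophisticated than the single XOR parity slice sketched here; this is purely illustrative.

```python
# Toy illustration of slice-and-disperse with redundancy: split data into k
# slices plus one XOR parity slice, so any single lost slice can be rebuilt
# from the survivors. Real dispersed storage uses proper erasure codes.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(data: bytes, k: int = 3):
    """Split data into k equal slices plus one XOR parity slice."""
    if len(data) % k:
        data += b"\0" * (k - len(data) % k)   # pad to a multiple of k
    size = len(data) // k
    slices = [data[i * size:(i + 1) * size] for i in range(k)]
    slices.append(reduce(xor_bytes, slices))  # parity slice
    return slices

def rebuild(slices, missing_index):
    """Recover a single missing slice by XOR-ing the surviving ones."""
    survivors = [s for i, s in enumerate(slices) if i != missing_index]
    return reduce(xor_bytes, survivors)

pieces = disperse(b"receipt image bytes ...")
lost = 1                                      # pretend one region went down
assert rebuild(pieces, lost) == pieces[lost]
print("slice", lost, "rebuilt from the remaining regions")
```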
Given the explosion of data in just about every enterprise, managing the storage of data in a reliable and cost-effective way is a business imperative. Protecting data with IBM Cloud Object Storage Cross Region service is actually 30 to 40 percent less expensive than competitors.
With IBM Cloud Object Storage services on the IBM Cloud, business leaders can feel confident that:

The technology, which reflects over 400 patents and has been recognized by Gartner and IDC as top in class, is up to the task
The combination of slicing, geo dispersal and immediate consistency is unique and protects organizations against similar outages
Our offerings are lower priced than competitors
IBM is synonymous with enterprise IT, and Cloud Object Storage is designed around the needs of enterprises

So maybe storage can be sexy.
Learn more about IBM Cloud Object Storage.
The post While storage is on your mind, considerations to protect against outages appeared first on news.
Quelle: Thoughts on Cloud

Environment-Dependent Property Management Strategies for OpenShift Pipelines

How an application expects to read its configurations is completely application-dependent. That said, over the course of several projects we have seen some patterns emerge that we have found to be successful. There is no better or worse approach – it is the responsibility of the pipeline designer to choose the best approach for a given context. This blog post focuses on environment-dependent properties, but the same approaches could be potentially used for all properties, whether or not they are environment-dependent.
Quelle: OpenShift

Use a CI/CD workflow to manage TripleO life cycle

In this post, I will present how to use a CI/CD workflow to manage TripleO deployment life cycle within an OpenStack tenant.

The goal is to use Software Factory to submit reviews to create or update a TripleO deployment. The review process ensures peer validation before executing the deployment or update command. The deployment will be done within OpenStack tenants. We will split each role into a different tenant to ensure network isolation between services.

Tools

Software Factory

Software Factory (also called SF) is a collection of services that provides a powerful platform to build software.

The main advantages of using Software Factory to manage the deployment are:

Cross-project gating system (through user defined jobs).
Code-review system to ensure peer validation before changes are merged.
Reproducible test environments with ephemeral slaves.

Python-tripleo-helper

Python-tripleo-helper is a library that provides a complete Python API to drive an OpenStack deployment (TripleO). It allows you to:

Deploy OpenStack with TripleO within an OpenStack tenant
Deploy a virtual OpenStack using the baremetal workflow with IPMI commands.

TripleO

TripleO is a project aimed at installing, upgrading and operating OpenStack clouds using OpenStack’s own cloud facilities as the foundation.

full article
Quelle: RDO