Managing microservices with Istio

More and more developers are adopting a microservices approach to building their applications.
One of the main drivers for this is the need to build cloud-native applications, which are continuously available and dynamically scalable. This approach helps the developers break the applications into small, manageable pieces that can be developed and managed independently by different teams.
A microservices approach has many benefits, but it can also be complex. Before a service can be deployed into production, many data and control plane issues relating to the operability of the service must be resolved, including:

how to provide service discovery and request routing between different microservices
how to control and secure access to the application and to individual microservices
how to efficiently scale up (and down) microservices while maintaining connectivity and overall application resiliency
how to collect and send logging and monitoring data for later consumption
how to enable DevOps functions, such as Canary deployments, A/B testing and gradual rollouts or roll-backs

Traditionally, much of that functionality had to be invented or rediscovered by every new application team, with support codified into the different microservices. While this may be achievable within the confines of a single application and code base, as applications grow more complex and microservices are implemented using different languages and runtimes, the work becomes tedious and error-prone.

By implementing a common microservices fabric, Istio addresses many of the challenges faced by developers and operators as monolithic applications transition to a distributed microservices architecture.
The initial (0.1) release was just announced at GlueCon 2017. It is the result of a collaboration between IBM, Google and Lyft to provide traffic flow management, access policy enforcement and telemetry data aggregation between microservices. All of this is achieved without requiring any changes to application code. Thus, developers can focus on business logic and quickly integrate new features.
Istio provides an infrastructure-level solution for managing all service-to-service communications. By deploying a special sidecar proxy to intercept and act on traffic between microservices throughout the environment, Istio provides a straightforward way to create a network of deployed services, often referred to as a “service mesh.” Istio automatically collects service metrics, logs and call traces for all traffic within a cluster, including cluster ingress and egress. The use of sidecar proxies enables a gradual and transparent introduction without architectural or application code changes.
The service mesh is configured and managed using Istio’s control plane functionality to deliver the required quality-of-service attributes, such as load balancing, fine-grain routing, service-to-service authentication, monitoring and more. Istio’s Mixer component provides a pluggable policy layer supporting fine-grain access controls, rate limits and quotas. Because Istio controls communication between services, it can enforce authentication and authorization between any pair of communicating services.
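The fine-grain routing that the control plane configures can be pictured with a toy sketch. The snippet below is illustrative only (the version names and percentages are assumptions, not Istio's API): it splits traffic between two service versions by weight, the way a canary routing rule might.

```python
import random

def pick_version(weights, rng=random.random):
    """Choose a service version according to canary-style weights.

    weights: dict mapping version name -> percentage (must sum to 100).
    """
    total = sum(weights.values())
    assert total == 100, "weights must sum to 100"
    point = rng() * total
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return version
    return version  # fallback for floating-point edge cases

# Route roughly 95% of requests to v1 and 5% to the v2 canary.
weights = {"reviews-v1": 95, "reviews-v2": 5}
counts = {"reviews-v1": 0, "reviews-v2": 0}
for _ in range(10_000):
    counts[pick_version(weights)] += 1
```

In Istio itself this logic lives in the sidecar proxies, driven by route rules pushed from the control plane, so the application never sees it.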
Istio is not targeted at any specific deployment environment. During the initial stages of development, and as it currently stands, Istio supports Kubernetes-based deployments. However, it is being built to enable rapid and easy adaptation to other environments, such as VMs and Cloud Foundry.
How we got there and what’s next
Our journey to microservices fabric started with developing and open-sourcing Amalgam8. Amalgam8 provided service discovery, smart routing capabilities and controlled resiliency testing.
Istio is the next step in our journey, bringing more powerful functionality and capability around security, policy management, rate limiting, auditing and basic API management.
We are excited to continue building and extending Istio. One of our goals is to provide security policy enforcement together with data collection and analytics, which can be extremely helpful for achieving compliance in cloud-native deployments.
What do you like about Istio, and what are the main challenges you face in building and operating microservices applications?
Learn more about Istio.
Related articles:

developerWorks: IBM, Google and Lyft give microservices a ride on the Istio Service Mesh by IBM Fellow Jason McGee
Forbes: Google, IBM And Lyft Want To Simplify Microservices Management With Istio
Research blog: Upping the microservices game with Istio: A microservice mesh by IBM Fellow Tamar Eilam

The post Managing microservices with Istio appeared first on Cloud computing news.
Source: Thoughts on Cloud

Training helps users get more out of Watson Developer Cloud

Watson has come a long way since first winning against the Jeopardy! champions in 2011.
Watson was initially delivered via supercomputers, but now, with the evolution of cloud platforms, the power of Watson is pervasive and available to all via the Watson Developer Cloud. Delivered through IBM Bluemix, Watson Developer Cloud provides a suite of cognitive services that empower developers to extend and build next-generation user experiences in applications that can interact with humans.
Watson Developer Cloud provides a set of building blocks for developers to build cognitive solutions. Many Watson Developer Cloud services can be consumed by applications as cognitive services with no training effort required.
For example, the Language Translator or Tone Analyzer services can, as the names suggest, translate from one language to another or analyze the tone of a body of text by simply invoking the service.
Other services on Watson Developer Cloud require training. Examples include Retrieve and Rank, Discovery, and the Conversation service. When working with a service that requires training, insights delivered by the service can only be as effective as the training provided.
The purpose of AI and cognitive systems developed and applied by IBM is to augment human intelligence. When using Watson’s Discovery service on Bluemix, for example, to augment human ability to glean insights from data, we first need to train Watson to understand information specific to areas of focus, such as an industry or scientific discipline.
Users can do this using Watson Knowledge Studio to create a custom model that the Discovery service will use to enrich document content with cognitive metadata. Similarly, when using the Retrieve and Rank service, training a machine learning ranker helps Watson surface the most relevant information from a collection of documents. Training in each instance requires domain-specific expertise with relevant training data. Without this domain expertise and training, Watson cannot reach its full potential with a user’s data.
Watson’s ability to augment human intelligence is dependent on training, domain expertise and, ultimately, human intelligence. Watson Developer Cloud provides access to Watson capabilities with nothing more required than an IBM Bluemix account.
Why not build cognitive capabilities into your applications with IBM Watson on Bluemix?
Get started with Watson Developer Tools.
The post Training helps users get more out of Watson Developer Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

Become a disruptor: Announcing new cloud integration capabilities

Digital business transformation is the biggest business disruptor of our age. Consider that the world’s top accommodations provider owns no real estate, and the world’s biggest taxi company owns no vehicles.
Your business can remain vulnerable to “born on digital” disruptors like these. Or you can embrace the seismic shift toward digital business that comes through a fully integrated cloud computing environment.
IBM recognizes that business is about to take the next great leap forward. The future is built on data, enabled by the cloud and driven by cognitive computing to redefine immersive experiences. Create new partnerships, innovate quickly and do it all with one singular goal—providing a truly unique and timely experience for customers, employees and business partners. Create an experience that disrupts their expectations and makes them see your company in a new light.
The ability to connect to everything just became critical to the success of your business. Most of us live in a multicloud environment where you need to be able to access data from the Internet of Things, business partners, third-party clouds and even your own customers. Integration has evolved from an IT function into a business enabler.
This shift is going to require a more unified view of integration, one that supports many of the latest integration techniques: API, application, message-based and data integration. Integration will be based on multiple clouds, new lightweight architectures, event-driven patterns and new connectivity options. You can move beyond connecting systems to connecting entire ecosystems of data so you can gain insight and get to market faster than your competition.
The right cloud integration solution delivers new ways of accessing and combining information. It’s built on an architecture that addresses challenges around security, governance, performance and scale necessary to support business models designed around cloud and cognitive solutions.
Today, IBM is announcing new capabilities to our market-leading cloud integration solution:

New API monetization functionality built right into API Connect allows you to start seeing revenue directly from your APIs.
New open source API microgateway gives your developers a first-class framework for building their own gateway solution.
New Watson connectors and App Connect on Bluemix expand functionality to build cognitive solutions and connect cloud and on-premises systems.
New Connector Pack with MQ provides the ability to perform a message-driven query into IBM Blockchain for greater insight.

IBM is dedicated to your success. Our new capabilities are just further evidence of our commitment to driving digital transformation and business results for our customers.
Find out more at IBM Cloud Integration.
The post Become a disruptor: Announcing new cloud integration capabilities appeared first on Cloud computing news.
Source: Thoughts on Cloud

4 challenges of shared processes managed by blockchain

Blockchain is already disrupting industries and bringing trust to scenarios where it was previously complex to implement.
Many of these solutions transfer data or assets between network members, but each scenario also represents a process between organizations.
For example, the journey of a shipping container is made up of hundreds of requests, approvals and document transfers. The same is true of a letter of credit: the blockchain solution is a shared business process. The benefits of a solution depend on how much a blockchain can improve these business processes, and they grow as business-network thinking takes hold and trust increases. This makes radically improved ways of working possible.
Let’s take the example of health records on a blockchain. In the future, patients will control how their medical data is shared between healthcare providers, testing labs, and pharmaceutical companies. Currently, each organization stores its own data and manages its own processes.
An electronic health record (EHR) stored on a blockchain would securely share only the data that each party is permitted to see. Adding new test results to the record might automatically schedule an appointment with the patient depending on the test result. Finally, once the appointment is completed, the notes and follow-up actions are added indelibly to the EHR for future reference.
Every blockchain project contains business process change. This could mean an IT change to integrate a system with a blockchain, or, in more transformational solutions, the change is much greater. Some completely change business models or create new ones which can only exist with blockchain. For example, an insurance company could automatically pay claims when the consensus is that an event occurred.

Traditionally, processes are centralized on a private platform that is internal to an organization. Even in scenarios where processes are shared, each network member needs to allow others to access their private platform or delegate management of the process to a trusted third party (for example, in securities settlement). In an “interorganizational process” the process flow is choreographed by the code that runs on the blockchain. Control is handed over to process participants to complete tasks depending on their roles. For example, only a lab can submit test results and only care providers can hold appointments with patients.
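As a thought experiment, the role-based choreography described above can be sketched in a few lines of Python. This is not chaincode from any real blockchain framework; the roles, task names and payloads are illustrative assumptions, and an append-only list stands in for the shared ledger.

```python
class SharedProcess:
    """Toy interorganizational process: each task is restricted to a role."""

    ALLOWED = {
        "submit_test_results": "lab",
        "hold_appointment": "care_provider",
    }

    def __init__(self):
        self.log = []  # append-only record, standing in for the ledger

    def perform(self, member_role, task, payload):
        required = self.ALLOWED.get(task)
        if required is None:
            raise ValueError(f"unknown task: {task}")
        if member_role != required:
            raise PermissionError(f"{task} requires role {required!r}")
        self.log.append((member_role, task, payload))
        return len(self.log)  # position in the shared record

process = SharedProcess()
process.perform("lab", "submit_test_results", {"hba1c": 6.1})
```

The point of the sketch is that the permission check lives in the shared process itself, not inside any one member's private platform.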
However, interorganizational processes pose challenges. Some are familiar, some are new:

Why should some processes be shared and others be private to an individual network member?

How can a network member avoid losing visibility or control of who is doing what and when? Changed processes must improve client trust, satisfaction, cycle times and consistency of experience in a decentralized environment.

How can shared processes be automated in such a way that they can be governed by lawyers and decision makers, without duplication and in a fully compliant way?

What should network members do when something unexpected happens? Who should handle bugs in the contract, disputes, errors and rework, contract changes etc.?

In our next post, we’ll discuss some methods for addressing these challenges.
Learn more about IBM Blockchain.
The post 4 challenges of shared processes managed by blockchain appeared first on Cloud computing news.
Source: Thoughts on Cloud

The cognitive future of customer service with IBM Voice Gateway

What’s the future of customer service? Your call center is frequently where your customers experience your company. So what can you do to make that experience as positive as possible?
In my last blog, I described how cognitive capabilities have transformed interactive voice response (IVR) systems. Cognitive IVR systems rely on artificial intelligence (AI) to understand and communicate with callers. The AI can be trained to detect the caller’s intent, speeding issue resolution. Because the system has been trained on language and acoustic models, it can understand many different voices and contexts. Speech-to-text and automatic speech recognition can handle domain-specific words and dialects.
Recently, IBM announced the IBM Voice Gateway. It essentially turns Watson into a cognitive IVR system. IBM Voice Gateway can be used to build virtual cognitive agents that communicate with customers using natural language. Through the orchestration of several cognitive services including Watson Speech to Text, Watson Text to Speech and Watson Conversation, the new IBM Voice Gateway provides a key integration point between cognitive self-service agents and your call center operations.
How does Voice Gateway accomplish this, exactly? By enabling a callable session initiation protocol (SIP) application that can be connected to from a variety of sources, including SIP trunks (for example, Twilio’s SIP Trunking service), session border controllers, or virtually any enterprise telephony device that communicates using the SIP protocol.
Think of Voice Gateway as a next-generation, cloud-native, standalone, cognitive IVR system. It includes features you would expect from a traditional IVR system, such as touch tone support and the ability to play music on hold. But beyond these traditional features, the solution makes it easy to develop virtual agents that understand natural language.
You may be wondering how Voice Gateway integrates with traditional IVRs that are programmed using VoiceXML from vendors like Avaya or Cisco. IVR systems are typically designed to support transferring out to other SIP endpoints, such as ACDs or specific SIP URIs. Voice Gateway is simply another SIP application: you program an existing IVR to transfer out to it or conference it into a call. You can also route directly into Voice Gateway from a SIP trunk or session border controller.
One challenge to address is how to share context between IVR systems. For example, if a call starts in a traditional IVR that collects information from the caller, it may be necessary to share that information with the cognitive IVR system. With IBM Voice Gateway, you can share contexts through an exchange of metadata in the SIP signaling using the User-to-user (UUI) header. The metadata can contain the actual context or point to the data in a separate context store.
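As a rough illustration of passing context in SIP signaling, the snippet below hex-encodes a small JSON context into a User-to-User header value and decodes it again. The `;encoding=hex` form follows RFC 7433, but the context fields are invented for the example; a real deployment might instead pass a pointer into a separate context store.

```python
import binascii
import json

def encode_uui(context):
    """Serialize caller context as a hex-encoded User-to-User header value."""
    raw = json.dumps(context, sort_keys=True).encode("utf-8")
    return binascii.hexlify(raw).decode("ascii") + ";encoding=hex"

def decode_uui(header_value):
    """Recover the context dict from a User-to-User header value."""
    hex_part = header_value.split(";", 1)[0]
    return json.loads(binascii.unhexlify(hex_part))

# Context collected by a traditional IVR, handed to the cognitive IVR.
ctx = {"account": "12345", "language": "en-US"}
header = encode_uui(ctx)
```

The receiving side simply decodes the header to pick up the conversation where the first IVR left off.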
Callers want personalization, a key feature in next-generation call automation systems. Consumers expect that a company will remember their past interactions and use them to provide better customer support. IBM Voice Gateway can be customized through a service orchestration layer that modifies responses to queries using systems-of-record APIs, adding personalization capabilities.
Through IBM Voice Gateway with Watson services, you can bring next-generation call automation into your organization. You can drive down costs of running traditional IVRs and drastically improve your customers’ satisfaction.
Are you ready to get started? Register for the IBM Voice Gateway webcast on June 6th and discover how you can bring this cognitive call center solution to life in your organization.
The post The cognitive future of customer service with IBM Voice Gateway appeared first on Cloud computing news.
Source: Thoughts on Cloud

Kubernetes: Application Coupling

Applications often consist of more than one executable and have programs that need to be launched together. Learn about different application coupling options in Kubernetes pertinent to ownership, colocation, and communication requirements and see a concrete example with all the characteristics of a real-world setup in the context of application coupling.
Source: OpenShift

How to connect MQ Bridge and Salesforce events

In March, Salesforce and IBM announced a strategic partnership. The partnership reflects that many large corporations have investments in technologies from both Salesforce and IBM. A common scenario at some of these corporations: the applications used by customer-facing staff were not tied to the enterprise systems of record. So a key aim of the partnership is to make it simpler to integrate applications and systems.
Salesforce has an event-based interface. Wouldn’t it be good to have Salesforce applications sending events data into the enterprise IT systems? Could a company benefit from exposing systems of record data directly into a Salesforce application as an external data source?
One of the fruits of the partnership is the MQ Bridge to Salesforce delivered in IBM MQ 9.0.2. The bridge enables event data sent from the Salesforce platform to be re-published as MQ messages that can easily be processed by enterprise applications.
MQ customers are used to events being delivered as messages. So an intuitive way to integrate Salesforce’s events is to turn them into MQ messages.
MQ Bridge connects your Salesforce events to your back-end systems and applications.
There are two kinds of events supported by the MQ bridge: PushTopics events and platform events.
PushTopics are queries you define to receive events when changes are made to data in Salesforce. You specify the kind of data conditions in which you want the events generated and the data you’d like included in the events. For example, you might want an event generated every time a new contact is created.
Platform events are customizable event messages that you define in Salesforce containing whatever data you like. Once defined, you send platform events directly from application code in Salesforce. Platform events are much more flexible. For example, you can include any data in your event that you like and create a distributed application, part of which runs in Salesforce and part of which runs in your enterprise data center.
Each PushTopic and platform event type corresponds to a separate topic that the MQ Bridge to Salesforce can use to receive events.
Bridge the gap with the MQ Bridge
Here’s a hypothetical scenario. Let’s assume you already use Salesforce to manage relationships with your customers. Unfortunately, something has gone wrong with an order, and the customer is expressing their frustration over the phone to someone on your team. How can you make your customer happy again, quickly?
As your team member is logging the call with the customer, they flag a record in Salesforce triggering special treatment for this customer. By registering a PushTopic that matches this flag, an event will flow across to MQ. There, the event can be picked up by a back end system to take an appropriate action, such as applying a promotion for the customer’s next order.
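A back-end consumer of the re-published events might look something like the toy handler below. The message shape and field names are assumptions for illustration; real PushTopic notifications are JSON whose fields depend on what you select in the PushTopic query, and `Special_Treatment__c` is a hypothetical custom flag field.

```python
import json

def handle_event(message):
    """React to a re-published Salesforce event delivered as an MQ message."""
    event = json.loads(message)
    record = event.get("sobject", {})
    if record.get("Special_Treatment__c"):  # hypothetical custom flag field
        # Hand off to the back-end action, e.g. applying a promotion.
        return "apply-promotion:" + record.get("Id", "")
    return None  # nothing to do for this event

# A message as it might arrive from the bridge (fields are illustrative).
msg = json.dumps({"sobject": {"Id": "003xx0001", "Special_Treatment__c": True}})
action = handle_event(msg)
```

Because the bridge delivers events as ordinary MQ messages, the handler needs no knowledge of Salesforce's streaming interface at all.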
With the MQ Bridge, it’s easy to close the gap between the Salesforce applications and back end systems. You can enhance the experience you offer your customers with minimal impact on existing systems.
The MQ Bridge to Salesforce will likely run on the edge of the data center, connecting to Salesforce on the cloud and to an enterprise queue manager. The bridge itself is available on Linux x86-64, but it can connect to any queue manager running MQ 7.0 and later.
If you know MQ and want to experiment with the bridge, you can sign up for a no-cost 30-day trial of Salesforce.
Learn more about how the IBM and Salesforce partnership can help your business.
The post How to connect MQ Bridge and Salesforce events appeared first on Cloud computing news.
Source: Thoughts on Cloud

Announcing General Availability of Red Hat CloudForms 4.5

Today marks the general availability of Red Hat CloudForms 4.5, as announced in the recent Press Release. One of the key highlights of the release is the introduction of Ansible Automation Inside, which provides a simple, powerful, human readable automation language, directly accessible from within CloudForms.
In addition, several enhancements are added to the multi-cloud management platform, including a new storage provider for Amazon Web Services, metrics and container improvements for OpenShift, and additional features for OpenStack. Let’s take a look at some of these improvements.

Ansible Automation Inside
Red Hat CloudForms now comes with Ansible Automation Inside, becoming the first cloud management platform (CMP) to take an automation-based approach to multi-cloud management. Ansible playbooks consist of automation tasks that provision or configure entities across the infrastructure and application stack, from simple configuration to complex service provisioning definitions.
With the 4.5 release, users can now import their Ansible playbooks from within CloudForms and extend cloud management tasks using automation. For example, automation can be exposed to end users in CloudForms as:

Service items, providing an easy way to consume an Ansible playbook from a service catalog item, monitor its output, and manage its lifecycle,
Custom buttons, exposing Ansible playbooks on CloudForms entities to perform configuration tasks,
Control actions, allowing Ansible playbooks to execute automatically when a policy event is triggered,
Control alerts, allowing the use of Ansible playbooks to perform automation on an alert.
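For readers unfamiliar with the format, a minimal playbook of the kind you might import looks like the sketch below. The host group, package and tasks are placeholders chosen for illustration, not anything CloudForms ships:

```yaml
---
- name: Configure web tier
  hosts: webservers        # inventory group supplied at run time
  become: true
  tasks:
    - name: Ensure nginx is installed
      yum:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Once imported, a playbook like this can back any of the four integration points listed above, from a catalog item to a control alert.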

With Ansible Automation Inside, CloudForms automatically benefits from a large collection of modules and roles developed by the Ansible community, allowing simple and faster integration, and assembling advanced multi-tier service deployment definition with ease.
This native integration between CloudForms and Ansible automation simplifies service and policy definition as well as service lifecycle management. It not only provides a way to simply and fully automate IT services, but also gives greater visibility and management of all relevant resources.

Amazon Web Services Provider
CloudForms 4.5 introduces a new storage provider for the Amazon Elastic Block Store (EBS) service. The persistent block storage volumes used by Amazon EC2 instances are now visible under their own storage management provider in CloudForms. This provider extends CloudForms’ ability to manage storage, complementing the OpenStack Swift and Cinder storage providers introduced in a previous release. The inventory of EBS cloud volumes and snapshots, as well as associations with other entities (e.g. instances), is now available within CloudForms.
Red Hat CloudForms 4.5 also brings support for the association and synchronization of AWS labels to CloudForms tags, allowing easier integration, automation and reporting.
Another noticeable enhancement is the addition of Amazon CloudWatch events support, providing greater metrics and allowing CloudForms automation to react on Amazon alerts.

OpenShift Container Platform Provider
CloudForms 4.5 brings new features to the OpenShift Container Platform provider.
CloudForms now provides live ad-hoc metrics accessible from its user interface (UI) by querying Heapster, in addition to graph metrics generated from Hawkular data collection.
The CloudForms inventory is enhanced to show relationships between containers and persistent volumes.
New container management roles for container operators and administrators, as well as specific dashboard widgets and reports, are now available.

OpenStack Cloud Provider
CloudForms 4.5 contains enhancements for OpenStack management.
OpenStack tenants and users are now synchronized, including mapping of object relationships (network and storage).
CloudForms supports the Panko service as an alternative to Ceilometer for eventing.
The inventory of OpenStack floating IPs is now presented in CloudForms.
Performance improvements around graph refreshes, widget generation, and inventory loading optimization are included in this release, contributing to CloudForms responsiveness improvements.

Conclusion
This 4.5 release of Red Hat CloudForms brings an Ansible automation-centric approach to multi-cloud management. This not only makes CloudForms more easily deployable across an organization, but also provides users with a collection of integration points for simple and easy automation for their IT service management tasks.
For additional information, Red Hat CloudForms 4.5 Release Notes and Documentation can be found on the Red Hat Customer Portal where the release images are now also available for download.
Source: CloudForms

Microservices Patterns with Envoy Sidecar Proxy, Part I: Circuit Breaking

This is the first post in a series taking a deeper look at how Envoy Proxy and Istio.io enable a more elegant way to connect and manage microservices. Follow me @christianposta to learn when the next posts are available. In this series I’ll cover: What is Envoy Proxy and how does it work? How to implement some of the basic patterns with Envoy […]
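As a preview of the pattern the series opens with, here is a minimal circuit breaker sketched in Python, independent of Envoy (the threshold and names are illustrative): after a run of consecutive failures the circuit opens, and further calls fail fast instead of being attempted.

```python
class CircuitBreaker:
    """Toy circuit breaker: opens after max_failures consecutive failures."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, func, *args):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success closes the circuit again
        return result

breaker = CircuitBreaker(max_failures=2)
```

Envoy implements this (plus half-open probing, outlier detection and per-cluster thresholds) in the proxy layer, so no such code lives in the services themselves.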
Source: OpenShift