OpenShift Commons Briefing #66: Microservices, .NET Core 1.1, and Kubernetes on OpenShift

In this briefing, Red Hat’s Todd Mancini, Senior Principal Product Manager, and Don Schenck, Developer Advocate for .NET on Linux, discussed some of the new application development highlights in Microsoft’s .NET Core 1.1:

• Over 1,300 new APIs since .NET Core 1.0.
• .NET Core 1.1 Docker images from Red Hat’s container registry.
• Safe side-by-side installation with .NET Core 1.0.
• Performance improvements.
Source: OpenShift

Disaster recovery for applications, not just virtual machines using Azure Site Recovery

Let’s say your CIO stops by one day and asks you, “What if we are hit by an unforeseen disaster tomorrow? Are you confident that we can run our critical applications on the recovery site, and guarantee that our users will be able to connect to their apps and conduct business as usual?” Note that your CIO is not asking about merely recovering your servers or virtual machines; the question is always about recovering your applications successfully. So why do many disaster recovery offerings stop at booting up your servers, with no promise of actual end-to-end application recovery? What makes Azure Site Recovery different, allowing you as the business continuity owner to sleep better?
To answer this, let’s first understand what constitutes an application:
• A typical enterprise application comprises multiple virtual machines spanning different application tiers, and these tiers mandate write-order fidelity for data correctness.
• The application may require its virtual machines to boot up in a particular sequence for proper functioning.
• A single tier will likely have two or more virtual machines for redundancy and load balancing.
• The application may have different IP address requirements: it may use DHCP or require static IP addresses.
• A few virtual machines may require a public IP address or DNS routing for end-user internet access.
• A few virtual machines may need specific ports to be open or have security certificate bindings.
• The application may rely on user authentication via an identity service such as Active Directory.
To recover your applications in the event of a disaster, you need a solution that facilitates all of the above, gives you the flexibility to do further application-specific customizations post recovery, and does everything at an RPO and RTO that meet your business needs.
Using traditional backup solutions to achieve true application disaster recovery is extremely cumbersome, error prone, and not scalable. Even many replication-based products recover only individual virtual machines and cannot handle the complexity of bringing up a functioning enterprise application. Azure Site Recovery combines a unique cloud-first design with a simple user experience to offer a powerful solution that lets you recover entire applications in the event of a disaster.
How do we achieve this? With support for single- and multi-tier application consistency and near-continuous replication, Azure Site Recovery ensures that no matter what application you are running, shrink-wrapped or homegrown, you are assured of a working application when a failover is issued. Many vendors will tell you that a crash-consistent disaster recovery solution is good enough, but is it really? With crash consistency, in most cases the operating system will boot. However, there is no guarantee that the application running in the virtual machines will work, because a crash-consistent recovery point does not ensure the correctness of application data. For example, if a transaction log has entries that are not present in the database, the database software needs to roll back until the data is consistent, significantly increasing your RPO in the process. This will cause a multi-tier application like SharePoint to have a very high RTO, and even after the long wait it is still uncertain whether all features of the application will work properly. To avoid these problems, Azure Site Recovery not only supports application consistency for a single virtual machine (where the application boundary is that virtual machine), it also supports application consistency across the multiple virtual machines that compose the application. Most real-world multi-tier applications have dependencies; for example, the database tier should come up before the app and web tiers.
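As a toy illustration of why a crash-consistent point can inflate RPO (purely hypothetical Python, not real database internals): log entries with no matching row in the database must be rolled back before the application is usable, and those discarded transactions are exactly the data you lose.

```python
# Toy illustration (not real database internals): a crash-consistent
# snapshot may capture log entries whose data never reached the database,
# forcing a rollback that discards recent transactions and raises RPO.

def recover_crash_consistent(db_rows, log_entries):
    """Split the log into entries backed by database rows and entries
    that must be rolled back because their data is missing."""
    kept = [e for e in log_entries if e in db_rows]
    rolled_back = [e for e in log_entries if e not in db_rows]
    return kept, rolled_back

# Snapshot taken mid-write: txn-103 is in the log but not yet in the DB.
db_rows = ["txn-101", "txn-102"]
log_entries = ["txn-101", "txn-102", "txn-103"]

kept, lost = recover_crash_consistent(db_rows, log_entries)
# The 'lost' transactions are the recovery-point data you gave up.
```

An application-consistent recovery point avoids this by quiescing the application before capture, so the log and data already agree.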
The heart and soul of the Azure Site Recovery application recovery promise is extensible recovery plans, which let you model entire applications and organize application-aware recovery workflows. Recovery plans comprise the following powerful constructs:
• Parallelism and sequencing of virtual machine boot-up, to ensure the right recovery order for your n-tier application.
• Integration with Azure Automation runbooks that automate necessary tasks both outside of and inside the recovered virtual machines.
• The ability to perform manual actions to validate recovered application aspects that cannot be automated.
Your recovery plan is what you will use when you push the big red button: a single-click, stress-free, end-to-end application recovery with a low RTO. Another key challenge for many multi-tier applications is network configuration post recovery. With advanced network management options to provide static IP addresses, configure load balancers, or use Traffic Manager to achieve low RTOs, Azure Site Recovery ensures that user access to the application in the event of a failover is seamless.
A common myth around protecting your applications stems from the fact that many applications come with built-in replication technologies; hence the question, why do you need Azure Site Recovery? The simple answer: replication != disaster recovery. Azure Site Recovery is Microsoft’s single disaster recovery product that offers you a choice of first- and third-party replication technologies, while providing a built-in replication solution for applications that have no native replication construct, or whose native replication does not meet your needs. As mentioned earlier, getting application data and virtual machines to the recovery site is only a piece of what it takes to bring up a working application.
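The sequencing-and-parallelism construct above can be sketched in a few lines of Python. This is a hypothetical model for illustration only, not the Azure Site Recovery API or Automation runbook syntax: virtual machines are grouped into boot groups, groups start strictly in order, and machines within a group may boot in parallel.

```python
# Hypothetical sketch of a recovery plan's boot ordering -- not the
# Azure Site Recovery API. Boot groups run in sequence; VMs within a
# group are independent and could boot concurrently.
from dataclasses import dataclass, field


@dataclass
class RecoveryPlan:
    boot_groups: list = field(default_factory=list)

    def add_group(self, vms):
        """Append a boot group (a list of VM names) to the plan."""
        self.boot_groups.append(list(vms))

    def failover(self):
        """Return the ordered list of orchestration events."""
        events = []
        for group in self.boot_groups:
            for vm in group:  # in a real orchestrator, these boot in parallel
                events.append(f"boot {vm}")
            events.append("group healthy")  # wait before the next tier
        return events


plan = RecoveryPlan()
plan.add_group(["sql-1", "sql-2"])  # database tier comes up first
plan.add_group(["app-1", "app-2"])  # then the application tier
plan.add_group(["web-1"])           # web tier last
events = plan.failover()
```

In a real recovery plan, the per-group "wait" step is where Automation runbooks and manual validation actions would hook in.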
Whether Azure Site Recovery replicates the data or you use the application’s built-in capability, Azure Site Recovery does the complex job of stitching the application together, including the boot sequence, network configurations, and so on, so that you can fail over with a single click. In addition, Azure Site Recovery lets you perform test failovers (disaster recovery drills) without production downtime or replication impact, as well as fail back to the original location. All these features work both with Azure Site Recovery replication and with application-level replication technologies. Here are a few examples of application-level replication technologies Azure Site Recovery integrates with:
• Active Directory replication
• SQL Server Always On Availability Groups
• Exchange Database Availability Groups
• Oracle Data Guard
So, you ask, what does this really mean? Azure Site Recovery provides you with powerful disaster recovery application orchestration whether you choose its built-in replication for all application tiers or mix and match native application-level replication technologies for specific tiers, e.g. Active Directory or SQL Server. Enterprises have various reasons to go with one replication choice or the other, e.g. trading off zero data loss against the cost and overhead of an active-active standby deployment. The next time you are asked why you need Azure Site Recovery when you already have, say, SQL Server Always On Availability Groups, make sure you clarify that having application data replicated is necessary but not sufficient for disaster recovery, and that Azure Site Recovery complements native application-level replication technologies to provide a full end-to-end disaster recovery solution. From our enterprise customers, who protect hundreds of applications using Azure Site Recovery, we have learned the most common deployment patterns and popular application topologies.
So not only does Azure Site Recovery work with any application, Microsoft also tests and certifies popular first- and third-party application suites, a list that is constantly growing. As part of this effort to test and provide Azure Site Recovery solution guides for various applications, Microsoft provides a rich Azure Automation library with production-ready, application-specific and generic runbooks for the most common automation tasks that enterprises need in their application recovery plans. Let’s close with a few examples:
• An application like SharePoint typically has three tiers with multiple virtual machines that need to come up in the right sequence, and it requires application consistency across the virtual machines for all features to work properly. Azure Site Recovery solves this with recovery plans and multi-tier application consistency.
• Opening a port, adding a public IP, or updating DNS on an application’s virtual machine, and having an availability set and load balancer for redundancy and load management, are common asks of all enterprise applications. Microsoft solves this with a rich automation script library for use with recovery plans, plus the ability to set up complex network configurations post recovery to reduce RTO, e.g. setting up Azure Traffic Manager.
• Most applications need Active Directory / DNS deployed and use some kind of database, e.g. SQL Server. Microsoft tests and certifies Azure Site Recovery solutions with Active Directory replication and SQL Server Always On Availability Groups.
• Enterprises always have a number of proprietary business-critical applications. Azure Site Recovery protects these with built-in replication and lets you test your application’s performance and network configuration on the recovery site using the test failover capability, without production downtime or replication impact.
With a relentless focus on ensuring that you succeed with full application recovery, Azure Site Recovery is the one-stop shop for all your disaster recovery needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure: to enable not just the elite tier-1 applications to have a business continuity plan, but to offer a compelling solution that empowers you to set up a working end-to-end disaster recovery plan for 100% of your organization’s IT applications. You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Azure Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate, whether it is running on VMware or Hyper-V. To learn more about Azure Site Recovery, check out our How-To Videos. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the Azure Site Recovery User Voice to let us know what features you want us to enable next.
Source: Azure

Red Hat Announces OpenShift Commons Gathering at Summit in Boston on May 1st

This edition of the OpenShift Commons Gathering will focus on talks by end users with production deployments of OpenShift, who will share their use cases, lessons learned, and best practices. Experts from all over the world will discuss container technologies, best practices for cloud-native application developers, and the open source software projects that underpin the OpenShift ecosystem, helping take OpenShift to the next level in cloud-native computing.
Source: OpenShift

Announcing IBM Cloud Product Insights for hybrid cloud

Stressful. Frustrating. Unpleasant.
That’s the trio of adjectives that surface when I think about relocation. Having recently moved from one house to another, those sentiments remain fresh in my memory.
The litany of unpacked boxes still lining my garage is a constant reminder. And until the final coffee mug, picture, and trinket are placed in their designated homes, an unsettled feeling will follow me like a shadow.
When you think about it, information technology (IT) and the professionals who support it are no different. Any change that introduces a major overhaul to how IT applications and the supporting infrastructure are implemented, managed and monitored is far from desired.
It’s especially true when proposing a total “lift and shift” of critical IT workloads from on-premises to the public cloud. The mere mention of it to IT executives is often a conversation stopper.
That’s not to say that enterprises aren’t moving toward the public cloud. They clearly are. Most understand that, in order to accelerate digital transformation, leveraging the power and flexibility of capabilities delivered in the public cloud is necessary for progressing at the required business pace.
But we long ago discovered that the cloud is not an either-or proposition. In other words, hybrid cloud, which enables organizations to unlock new value with speed and flexibility while improving the ROI of existing infrastructure, is the preferred approach.
So, for those who’ve yet to dip their toes into the hybrid pool, where do you start? When, where, and how do you start taking advantage of the innovative, self-service capabilities public cloud vendors are bringing to market at a rapid pace?
For existing IBM software clients, assistance with that journey is provided by IBM Cloud Product Insights. A new service recently released on IBM Bluemix, Product Insights provides IBM clients an entry point to the cloud with new capabilities that optimize an organization’s existing on-premises environments. New capabilities include:
Registration and usage
Multi-product, cloud-based dashboards provide visibility into which software products and versions are deployed. Quick, easy-to-use registration simplifies getting a view of the supported products across your hybrid cloud.
Recommendations for optimization
Intelligent recommendations provide guidance on which cloud services enhance the registered software products used in your hybrid cloud environments. These recommended services can then be deployed, allowing organizations to discover, try and adopt services tailored to improve their existing environments.
Log management service
In addition to the capabilities made available through the initial release of Product Insights, a new experimental release of a Product Insights log management service is available, which provides users with deeper insight into their backend operations through log correlation, management, and SME dashboards.
Today, Product Insights and the experimental log management service support six IBM products:

IBM WebSphere Application Server
IBM WebSphere Liberty
IBM MQ
IBM Integration Bus
IBM Operational Decision Manager
IBM Operational Decision Manager on Cloud

Throughout 2017, support for more IBM software products will be included with Product Insights. Advanced cognitive capabilities powered by IBM Watson will also be introduced, enabling users to proactively address potential problems with their backend operations before they become service-impacting.
The cognitive insights that accompany IBM Watson and Product Insights are part of what differentiates the IBM Cloud from other vendors. The support of hybrid environments is another.
Because let’s face it: picking up and moving everything you own from one place to another isn’t easy. Whether it’s shifting your IT workloads to the cloud or the family to a new neighborhood, challenges surely await.
For me, the physical move was well worth it. Much of that can be attributed to a consumable process that did not include a complete lift and shift at an undesirable pace.
The extra space to give my 7-year-old more room to play football, baseball and soccer was a pretty nice benefit, too.
For more information on Product Insights, click here.
Source: Thoughts on Cloud

How to combine cognitive and rule-based decision management

Jean-Christophe Jardinier, BLogic Software, co-authored this post.

Businesses selling through digital channels have a critical balance to achieve. They must minimize the cost of selling while also maximizing the likelihood of success every time they offer a product or service to a customer.
Cognitive capabilities can help slash cost. Tools such as cognitive chatbots can anticipate the customer’s needs by asking the right questions and understanding the answers. All of this happens at a low operational cost.
For the maximizing success piece of the balancing act, consider rule-based decision management. Successful selling requires delivering a highly-personalized experience to your customer. Once you know what your customer needs, you have to propose an offer that addresses these expressed needs and takes into account what you already know about the customer. And you have to do all of this while complying with regulations, best practices and policies. A quality business rules management system makes this possible.
You also need to customize the messages to each customer. You need to use a language that is likely to resonate with the customer and highlights the features of the offer that matter to her. Consider the following example that shows how rule-based decision management can increase the likelihood of an offer’s success.
Your company is promoting a credit card offer. Your data shows that specific demographic profiles would likely respond well to free rides from popular ride-sharing apps. You can use the rules and customize messages to reach the people who are most likely to respond to a specific offer or benefit. For example, if the customer is young and in a city, then you could mention your credit card offer provides free rides from ride-sharing apps.
By using customized language while combining cognitive capabilities with a rule-based decision management system, the process not only identifies key characteristics that make the customer a great candidate for this offer, but also determines and effectively communicates which features of the offer may be most appealing to her: in this case, free rides, given her age and location.
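The credit card example above can be sketched as a couple of rules. This is a hypothetical Python illustration of the rule-based idea, not IBM ODM rule syntax; the customer attributes and rule thresholds are invented for the example.

```python
# Hypothetical sketch of rule-based offer personalization -- not IBM ODM
# syntax. Each rule pairs a condition on customer data with a tailored
# message highlighting the offer feature most likely to resonate.

def personalize_offer(customer):
    """Return messages for the credit card offer, selected by rules."""
    messages = ["Apply for our new credit card."]
    # Rule: young urban customers respond well to ride-sharing perks.
    if customer["age"] < 30 and customer["location"] == "city":
        messages.append("Enjoy free rides from popular ride-sharing apps.")
    # Rule: frequent travelers care about included travel insurance.
    if customer.get("travels_often"):
        messages.append("Travel insurance is included at no extra cost.")
    return messages


# A young customer in a city triggers the ride-sharing rule.
offer = personalize_offer({"age": 26, "location": "city"})
```

In a production decision management system, these rules would live outside the application code, so business users can add or adjust them without a redeployment.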
Want to dive deeper into using cognitive and decision-management? Join us at IBM InterConnect. Here are two can’t-miss sessions.
Session 1784: Watson + ODM = understanding what your customer needs and propose the right offer
In this session, we’ll show a demo that combines the Watson Conversation service available in Bluemix and IBM Operational Decision Manager (ODM). We’ll cover how you can automate as much as possible with Watson. We’ll also cover how to constantly improve your decisions and adapt to moving market conditions.
Session 2735: A next-gen, low-code, intelligent virtual assistant—context-aware and tied to legacy and cognitive
This session will introduce the next-generation customization intelligent virtual assistant. Companies can address the need to scale expertise without overloading their knowledge workers. Explore how to implement intelligent virtual assistants (IVAs) that leverage reusable assets, cognitive services such as Watson APIs, existing systems of record, decisions and workflows.
And there’s more. InterConnect will bring together more than 20,000 top cloud professionals to network, train and learn about the future of the industry. Register today.
Source: Thoughts on Cloud