The unstoppable trends transforming automation

The key to successful business operations is constant innovation. However, mundane, repetitive tasks and inflexible processes often eat up a knowledge worker’s valuable time, hindering the flow of great ideas, reducing productivity and undermining the creation and sustainability of a great customer experience.
This year, at our keynote session for IBM Process Transformation at InterConnect 2017, our executives and our client, NHS Blood and Transplant, led a discussion around building a powerful automation strategy. The panel detailed the ideal strategy: it effectively combines business process management and decision management capabilities, is powered by flexible cloud and cognitive intelligence, can significantly improve customer-centricity, speeds up response time, reduces errors and can lower costs.
We’re working to arm knowledge workers with the tools they need to make their work more efficient and to focus on providing an optimal customer experience. To this end, we must identify the core components of a successful digital process automation strategy and provide solutions and products that address each cornerstone. Watch the clips below for more detail:

Nothing demonstrates the impact of a product or solution better than a real-life business use case. UK-based NHS Blood and Transplant uses IBM Business Process Manager on Cloud, Operational Decision Manager on Cloud and IBM Blueworks Live. The organization built a rules-based platform for agile development, capturing workflows, and simplifying processes, which enabled it to develop, implement, and change allocation scheme rules on an ongoing basis. Aaron Powell, Chief Digital Officer at NHSBT, elaborates on his organization’s process automation journey in the clip below.

We had an exciting product announcement on the InterConnect stage this year: the IBM Digital Business Assistant. Jim Casey, Product Manager, Business Process and Decision Management at IBM, demonstrates how the Digital Business Assistant removes distractions that keep you from getting your high-value work done.

One important characteristic of IBM Digital Business Assistant, and part of a growing trend in process automation, is that it is low-code to no-code, which makes it highly accessible to knowledge workers and citizen developers. With low-code/no-code being a priority moving forward, we are thrilled to also offer:

An updated version of IBM Process Designer, where you can design process apps from web browsers, with no coding

An experimental version of IBM Decision Composer, where you can build decision models with no coding

It’s clear that this is an exciting time to be exploring and advancing digital process automation opportunities. You can ensure your knowledge workers are able to use their time most effectively. You can simplify complex and influential processes. And of course, you can give your customers the seamless and personalized experience they expect.
These trends are gaining speed, with more and more businesses taking bold steps in their digital transformation journeys. Together, we can continue to make innovative strides in this direction. Interested in learning more? Check out IBM Process Automation here.
Source: Thoughts on Cloud

Distributed tracing for Go

By Jaana Burcu Dogan, Engineer

The Go programming language has emerged as a popular choice for building distributed systems and microservices. But troubleshooting Go-based microservices can be tough if you don’t have the right tooling. Here at Google Cloud, we’re big fans of Go, and we recently added a native Go client library to Stackdriver Trace, our distributed tracing backend, to help you unearth (and resolve) difficult performance problems in any Go application, whether it runs on Google Cloud Platform (GCP) or another cloud.

The case for distributed tracing
Suppose you’re trying to troubleshoot a latency problem for a specific page, and your system is made up of many independent services that generate the page’s data through many downstream calls. You have no idea which of those services is causing the slowdown, and no clear understanding of whether it’s a bug, an integration issue, a bottleneck due to a poor architectural choice, or poor networking performance.

Solving this problem becomes even more difficult if your services are running as separate processes in a distributed system. We cannot depend on the traditional approaches that help us diagnose monolithic systems. We need to have finer-grained visibility into what’s going on inside each service and how they interact with one another over the lifetime of a user request.

In monolithic systems, it’s relatively easy to collect diagnostic data from the building blocks of a program. All modules live within one process and share common resources to report logs, errors and other diagnostics information. Once your system grows beyond a single process and starts to become distributed, it becomes harder to follow a call starting from the front-end web server to all of its back-ends until a response is returned back to the user.

To address this problem, Google developed the distributed tracing system Dapper to instrument and analyze its production services. The Dapper paper has inspired many open source projects, such as Zipkin, and Dapper-style tracing has emerged as an industry-wide standard.

Distributed tracing enabled us to:

Instrument and profile application latency in a large system.
Track all RPCs within the lifecycle of a user request and see integration issues that are only visible in production.
Figure out performance improvements that can be applied to our systems. Many bottlenecks are not obvious before the collection of tracing data.

Tracing concepts
Tracing works on the basic principle of propagating tracing data between services. Each service annotates the trace with additional data and passes the tracing header to other services until the user request is served. Services are responsible for uploading their traces to a tracing backend. Then, the tracing backend puts related latency data together like the pieces of a puzzle. Tracing backends also provide UIs to analyze and visualize traces.

In Dapper-style tracing, each trace is a call tree, beginning with the entry point of a user request and ending with the server’s response, including all RPCs along the way. Each trace consists of small units called spans.

Above, you see a trace tree for a TaskQueue.Stats request. Each row is labelled with the span name. Before the system can serve TaskQueue.Stats, five other RPCs have been made to other services. First, TaskQueue.Auth checks if we’re authorized for the request. Then, QueueService is queried for two reports. In the meantime, System.Stats is retrieved from another service. Once reports and system stats are retrieved, the Graphiz service renders a graph. In total, TaskQueue.Stats returns in 581 ms, and we have a good picture of what has happened internally to serve this call. By looking at this trace, maybe we’ll learn that rendering is taking more time than we expect.

Each span name should be carefully chosen to represent the work it does. For example, TaskQueue.Stats is easily identified within the system and, as its name implies, reads stats from the TaskQueue service.

A span can start new spans when the work it represents depends on other work to complete. These are visualized as child spans of their parent span in the trace tree.
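
To make the parent/child relationship concrete, here is a minimal, hypothetical sketch of the data model behind such a trace tree. The types and fields are illustrative only and are not the trace package’s actual API:

package main

import "fmt"

// Span is an illustrative model of a single unit of work in a trace.
type Span struct {
    Name     string  // e.g., "TaskQueue.Stats"
    TraceID  string  // shared by every span in the same trace
    Parent   *Span   // nil for the root span of the call tree
    Children []*Span // spans started on behalf of this span
}

// NewChild starts a span whose completion this span depends on; it is
// drawn under its parent when the trace tree is visualized.
func (s *Span) NewChild(name string) *Span {
    child := &Span{Name: name, TraceID: s.TraceID, Parent: s}
    s.Children = append(s.Children, child)
    return child
}

func main() {
    root := &Span{Name: "TaskQueue.Stats", TraceID: "trace-1"}
    auth := root.NewChild("TaskQueue.Auth")
    fmt.Println(auth.Parent.Name, len(root.Children)) // TaskQueue.Stats 1
}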

Spans can also be annotated with labels to convey more fine-grained information about a specific request. Request ID, user IDs and RPC parameters are good examples of labels commonly attached to traces. Choose labels by determining what else you want to see in a particular trace tree and what you would like to query from the collected data.
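
For example, the cloud.google.com/go/trace package lets you set labels on a span before it finishes. This is a brief sketch; the endpoint, keys and values are made up for illustration:

span := traceClient.NewSpan("/users")
defer span.Finish()
// Annotate the span so these values appear in the trace viewer and can
// be used to query the collected data later.
span.SetLabel("request_id", "req-12345")
span.SetLabel("user_id", "u-67890")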

Working with Stackdriver Trace
One of the exciting things about GCP is that customers can use the same services and tools we use daily at Google scale. We launched Stackdriver Trace to provide a distributed tracing backend for our customers. Stackdriver Trace collects latency data from your applications, lists and visualizes it in the Cloud Console, and allows you to analyze your application’s latency profile. Your code doesn’t have to run on GCP to use Stackdriver Trace — we can upload your trace data to our backends even if your production environment doesn’t run on our cloud.

To collect latency data, we recently released the cloud.google.com/go/trace package, which lets Go programmers instrument their code by marking spans and adding annotations. Please note that the trace package is still in alpha and we’re looking forward to improving it over time. At this stage, please feel free to file bugs and feature requests.

To run this sample, you’ll need Google Application Default Credentials. First, use the gcloud command line tool to get application default credentials if you haven’t already.
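
For example, on a development machine you would typically run (the exact command may vary with your Cloud SDK version):

gcloud auth application-default login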

Then, import the trace package:
import "cloud.google.com/go/trace"

Create a new trace client with your project ID:
traceClient, err := trace.NewClient(ctx, "project-id") // ctx is a context.Context, e.g., context.Background()
if err != nil {
    log.Fatal(err)
}

We recommend keeping a long-lived trace.Client instance: create the client once and keep using it until your program terminates.

The sample program makes an outgoing HTTP request. In this example, we attach tracing information to the outgoing HTTP request so that the trace can be propagated to the destination server:
func fetchUsers() ([]*User, error) {
    span := traceClient.NewSpan("/users")
    defer span.Finish()

    // Create the outgoing request, a GET to the users endpoint.
    req, _ := http.NewRequest("GET", "https://userservice.corp/users", nil)

    // Create a new child span to identify the outgoing request,
    // and attach tracing information to the request.
    rspan := span.NewRemoteChild(req)
    defer rspan.Finish()

    res, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err
    }

    // Read the body, unmarshal, and return a slice of users.
    // …
}

The User service extracts the tracing information from the incoming request, and creates and annotates any additional child spans. In this way, the trace of a single request can be propagated between many different systems:

func usersHandler(w http.ResponseWriter, r *http.Request) {
    span := traceClient.SpanFromRequest(r)
    defer span.Finish()

    req, _ := http.NewRequest("GET", "https://meta.service/info", nil)
    child := span.NewRemoteChild(req)
    defer child.Finish()

    // Make the request…
}

Alternatively, you can use the HTTP utilities to easily add tracing context to outgoing requests via HTTPClient, and to extract spans from incoming requests with HTTPHandler.

var tc *trace.Client // initialize the client beforehand
req, _ := http.NewRequest("GET", "https://userservice.corp/users", nil)

res, err := tc.NewHTTPClient(nil).Do(req)
if err != nil {
    // TODO: Handle error.
}

And on the receiving side, you can use our handler wrapper to access the span via the incoming request’s context:

handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    span := trace.FromContext(r.Context())
    // TODO: Use the span.
})
http.Handle("/foo", tc.HTTPHandler(handler))

A similar utility to enable auto-tracing is also available for gRPC Go clients and servers.
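
As a rough illustration of what such an interceptor does, a client-side unary interceptor can wrap every outgoing RPC in a span. Note that this is a hand-rolled sketch built from the NewSpan API shown above, not the package’s actual gRPC utility, and it does not propagate the trace context over the wire:

import (
    "context"

    "google.golang.org/grpc"
)

// traceInterceptor opens a span per outgoing unary RPC. Illustrative only:
// it records latency locally, but a real implementation would also inject
// the trace header into the outgoing request metadata.
func traceInterceptor(ctx context.Context, method string, req, reply interface{},
    cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
    span := traceClient.NewSpan(method) // e.g., "/users.UserService/ListUsers"
    defer span.Finish()
    return invoker(ctx, method, req, reply, cc, opts...)
}

The interceptor is then installed as a dial option: grpc.Dial(addr, grpc.WithUnaryInterceptor(traceInterceptor)).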

Please note that not all services need to be written in Go — propagation works across all services written in other languages as long as they rely on the Stackdriver header format to propagate the tracing context. See the Stackdriver Trace docs to learn about the header format.
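
The trace context travels in a single header of the form X-Cloud-Trace-Context: TRACE_ID/SPAN_ID;o=OPTIONS. As a minimal sketch of what a service in any language has to do, here is one way to extract the pieces in Go (assuming the documented format; validation elided):

import (
    "net/http"
    "strings"
)

// parseTraceContext splits the Stackdriver trace header into its parts.
func parseTraceContext(r *http.Request) (traceID, spanID string, sampled bool) {
    h := r.Header.Get("X-Cloud-Trace-Context") // e.g., "0123456789abcdef0123456789abcdef/42;o=1"
    slash := strings.IndexByte(h, '/')
    if slash < 0 {
        return h, "", false
    }
    traceID, rest := h[:slash], h[slash+1:]
    if semi := strings.IndexByte(rest, ';'); semi >= 0 {
        spanID, sampled = rest[:semi], strings.Contains(rest[semi:], "o=1")
    } else {
        spanID = rest
    }
    return traceID, spanID, sampled
}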

Future work
Even though we currently provide a solution for GCP, our goal is to contribute to the Go ecosystem beyond GCP. There are many groups working on tracing for Go, and there’s a lot of work to do to ensure these efforts stay aligned. We look forward to working with these groups to make tracing accessible and easy for Go programmers.

One particular problem we want to solve is enabling third-party library authors to provide out-of-the-box tracing without depending on a particular tracing backend. Then, open-source library developers can instrument their code by marking spans and annotating them to be traced by the user’s choice of tracing backend. We also want to work on reusable utilities to automatically enable tracing anywhere without requiring Go programmers to significantly modify their code.

We’re currently working with a large group of industry experts and examining already-established solutions to understand their requirements and to design a solution that will foster integrations with many tracing backends. With these first-class building blocks and utilities, we believe distributed tracing can become a core, accessible tool for diagnosing Go production systems.
Source: Google Cloud Platform

Alex Jones And The Dark New Media Are On Trial In Texas


AUSTIN, Texas — Halfway through the second official day of his 10-day civil custody trial, Alex Jones reclined in his chair and mopped sweat from his brow while watching a shirtless, pantsless version of himself hawk male vitality supplements on a courtroom television screen. It was hardly the most outlandish moment of the afternoon.

Indeed, the first full day of Jones’ battle to retain custody of his three young children was filled with bizarre allegations — claims that Jones took his shirt off during a joint family counseling session and once blamed his inability to recall basic facts about his children during a pre-trial deposition on having “had a big bowl of chili for lunch.”

The news from the Travis County courtroom — breathless tweets from a gaggle of journalists covering the trial — bled across the internet instantly. Since Sunday evening, when the Austin American-Statesman broke the news that Jones’ attorneys planned to defend his custody on the grounds that his two-plus decades of conspiracy theorizing have been “performance art,” Alex Jones’ name and reputation have unexpectedly become one of the biggest stories in the country.

And while it’s unusual for a contentious family custody case to end up as fodder for late night television hosts (the Jones case got the extended Colbert monologue treatment on Monday evening), Jones’ trial is far larger than his painful and in some ways ordinary family dispute. For the millions on either side who both adore and revile Jones, the case offers the hope of answering a near-impossible question: where does Alex Jones the character end and Alex Jones the person begin?

But the herculean task of untangling Jones from his political views has put the 43-year-old broadcaster at the center of something bigger than himself. Unexpectedly, Jones is now the star of a courtroom drama that feels less like a quotidian family law case and more like a referendum on politics, the internet, and the media in the post-Trump ecosystem.

And that’s because at present Jones and his Infowars media empire sit at the intersection of the thorniest issues across the media landscape. Jones, depending on who you ask, is either a participant in, defender of, or the driving force behind everything from fake news, online harassment and conspiracy theories to the toxic, hyper-partisan politicization of seemingly innocuous events.

Which is what makes Jones’ trial — and his impending trip to the witness stand — so alluring. Perhaps less interesting than knowing exactly what Jones truly believes is the prospect of watching legal experts compel earnest testimony from one of the nation’s top exporters of loose facts, untruths, and partisanship. Jones’ unenviable position then — disavow your lucrative professional views or risk losing your family — feels like a rare shot at the truth at a time when disinformation and professionalized trolling are staples of both sides of the political spectrum.

And while Jones’ verdict will likely set few precedents when it comes to internet conspiracy theorizing, the national scrutiny is bound to imbue even the smallest rulings with added meaning. Jones’ performance art defense resonates deeply during a month in which CNN’s President Jeff Zucker prompted outrage by comparing political news coverage to a sporting event. Similarly, Judge Orlinda Naranjo’s decisions to admit or disallow Jones’ rants into evidence seem, perhaps unfairly, like referendums on fake news. Mix in the trial’s custody element and the case’s questions grow unanswerable and near-existential to an outside observer: can someone who traffics in fake news simultaneously be a good father? Just how amoral are conspiratorial thoughts when they’re published for a wider audience?

And because Jones and the Infowars empire are creatures of the internet, the trial stands to put the engines and platforms of information distribution on trial in unexpected ways. On Tuesday there was confusion in the court as to whether Jones’ impromptu Facebook Live streaming videos — which depict him shouting at protesters and slurring words ahead of Donald Trump’s inauguration — should be classified as part of Jones’ professional life or as videos of a more personal nature, given that they weren’t shot in studio. There’s even a debate to be had over the legal sincerity of certain online threats. Jones’ parenting, for example, was called into question Tuesday by the opposing counsel for bringing his 14-year-old son onto his streamed radio show after having received death threats online during his broadcasts.

And if you’re looking to understand how alternative political and factual universes respond to news about polarizing figures, the first two days of Jones’ trial have been highly instructive. Jones’ attorney’s performance art defense was treated by the mainstream media as an ideological checkmate of sorts, while his defenders reflexively blamed the claims on a deeper, more sinister conspiracy. Jones himself denied the rumors and claimed that the media was doing to him what they claim he does: take a kernel of truth and spin it to fit a convenient fake news narrative. Jones’ critics react incredulously while his fans argue it’s unfair to politicize what should be a private family matter. All sides talk past each other, ignoring the other and assuming they’ve won.

This local custody trial is not supposed to be about Alex Jones. And yet his centrality to the proceedings is unavoidable. Still, despite the fact that this is at its core a family matter and an examination of Jones’ dueling personae, it is hard to shake the feeling that there’s something greater looming over the 10 days. This is the 21st-century media’s Scopes Monkey Trial (we are the lower primates here, not the earnest schoolteacher), and the trial’s symbolic meaning will overshadow its subjects, litigants, and even its verdict. Instead, it stands in as a referendum on a divisive moment, to be interpreted differently by all who follow along. Trials with media personalities highlight this further. Last year’s Hulk Hogan lawsuit against Gawker Media was largely viewed as a condemnation of a bygone era of sensationalist online writing and reporting. And in its own overdetermined way, Jones’ trial, coverage, and fallout feels a bit like a trial for the media — with all its attendant volatility and uncertainty and toxicity — in the year 2017.

We are riveted to our olive green, cushioned seats in the gallery of the Travis County Courthouse this week because of Jones’ profound influence — both intentional and unintended — on our politics, culture, and on a conspiratorial ideology of fear that transcends party lines. The case of Jones v. Jones resonates so deeply at this moment because we are living in a moment that Alex Jones himself ushered in.

Source: BuzzFeed

Mirantis Releases Kubernetes Distribution and Updated Mirantis OpenStack

Mirantis Cloud Platform 1.0 is a distribution of OpenStack and Kubernetes that can orchestrate VMs, containers and bare metal

SUNNYVALE, CA – April 19, 2017 – Mirantis, the managed open cloud company, today announced the availability of a commercially supported distribution of OpenStack and Kubernetes, delivered in a single, integrated package with a unique build-operate-transfer delivery model.

“Today, infrastructure consumption patterns are defined by the public cloud, where everything is API driven, managed and continuously delivered. Mirantis OpenStack, which featured Fuel as an installer, was the easiest OpenStack distribution to deploy, but every new version required a forklift upgrade,” said Boris Renski, Mirantis co-founder and CMO. “Mirantis Cloud Platform departs from the traditional installer-centric architecture and towards an operations-centric architecture, continuously delivered by either Mirantis or the customers’ DevOps team with zero downtime. Updates no longer happen once every 6-12 months, but are introduced in minor increments on a weekly basis. In the next five to ten years, all vendors in the space will either find a way to adapt to this pattern or they will disappear.”

Along with launching Mirantis Cloud Platform (MCP) 1.0, Mirantis is also the first to introduce a unique delivery model for the platform. Unlike traditional vendors that sell software subscriptions, Mirantis will onboard customers to MCP through a build-operate-transfer delivery model. The company will operate an open cloud platform for customers for a period of at least twelve months, with up to a four-nines SLA, before handing off the operational burden to the customer’s team, if desired. The delivery model ensures that not just the software, but also the customer’s team and processes, are aligned with DevOps best practices.

Unlike any other solution in the industry, customers onboarded to MCP have an option to completely transfer the platform under their own management. Everything in MCP is based on popular open standards with no lock-in, making it possible for customers to break ties with Mirantis and run the platform independently should they choose to do so.

“We are happy to see a growing number of vendors embrace Kubernetes and launch a commercially supported offering based on the technology,” said Allan Naim from the Kubernetes and Container Engine Product Team.

“As the industry embraces composable, open infrastructure, the ‘LAMP stack of cloud’ is emerging, made up of OpenStack, Kubernetes, and other key open technologies,” said Mark Collier, chief operating officer, OpenStack Foundation. “Mirantis Cloud Platform presents a new vision for the OpenStack distribution, one that embraces diverse compute, storage and networking technologies continuously rather than via major upgrades on six-month cycles.”

Specifically, Mirantis Cloud Platform 1.0 is:

Open Cloud Software: providing a single platform to orchestrate VMs, containers and bare metal compute resources by:

Expanding Mirantis OpenStack to include Kubernetes for container orchestration.
Complementing the virtual compute stacks with best-in-class open source software defined networking (SDN), specifically Mirantis OpenContrail for VMs and bare metal, and Calico for containers.
Featuring Ceph, the most popular open source software defined storage (SDS), for both Kubernetes and OpenStack.

DriveTrain: Mirantis DriveTrain sets the foundation for DevOps-style lifecycle management of the open cloud software stack by enabling continuous integration, continuous testing and continuous delivery through a CI/CD pipeline. DriveTrain enables:

Increased Day 1 flexibility to customize the reference architecture and configurations during initial software installation.
Greater ability to perform Day 2 operations such as post-deployment configuration, functionality and architecture changes.
Seamless version updates through an automated pipeline to a virtualized control plane to minimize downtime.

StackLight: enables strict compliance with availability SLAs by providing continuous monitoring of the open cloud software stack through a unified set of software services and dashboards.

StackLight avoids lock-in by including best-in-breed open source software for log management, metrics and alerts.
It includes a comprehensive DevOps portal that displays information such as StackLight visualization and DriveTrain configuration settings.
The entire Mirantis StackLight toolchain is purpose-built for MCP to enable up to 99.99% uptime service level agreements with Mirantis Managed OpenStack.

With the release of MCP, Mirantis is also announcing end-of-life for Mirantis OpenStack (MOS) and Fuel by September 2019. Mirantis will be working with all customers currently using MOS on a tailored transition plan from MOS to MCP.

To learn more about MCP, watch an overview video and sign up for the introductory webinar at www.mirantis.com/mcp.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

###

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
Source: Mirantis