Open Technology Summit focuses on contributors

The Open Technology Summit, now in its fifth year, has become an annual state of the union for the established and budding open source projects that IBM supports.
The conclusion drawn at Sunday’s OTS during IBM InterConnect in Las Vegas is that the state of open tech is strong and getting stronger.
The event brought together leaders from some of today’s top open source projects: OpenStack, Cloud Foundry, the Linux Foundation, JS Foundation and the Apache Software Foundation, plus the IBM leaders who support these projects.
“The open source community is only as good as the people who are contributing,” Willie Tejada, IBM Chief Developer Advocate, told the capacity crowd.

“We’ve been systematically building an open innovation platform — cloud, etc.” @angelluisdiaz https://t.co/HHMqWmi3v4 pic.twitter.com/945FkRbkZg
— IBM Cloud (@IBMcloud) March 20, 2017

Judging by the success stories shared on stage, contributor quality appears to be quite high. In short, the open source community is thriving.
Finding success in the open
The Linux Foundation has become one of the great success stories in open source, thanks largely to the huge number of contributors it has attracted. In his talk, the organization’s executive director, Jim Zemlin, told the crowd that across its various projects, contributors add a staggering 10,800 lines of code, remove 5,300 lines of code and modify 1,875 lines of code per day.
Zemlin called open source “the new norm” for software and application development.

“Open source is now the new norm for software development.” – @jzemlin #IBMOTS https://t.co/y3V3IGfcTK pic.twitter.com/83k9yLdJdf
— IBM Cloud (@IBMcloud) March 20, 2017

Cloud Foundry Foundation executive director Abby Kearns stressed her organization’s commitment to bringing forward greater diversity among its community.
“When I think about innovation, I think about diversity,” said Kearns, who took over as executive director four months ago. “We have the potential to change our industry, our countries and the world.”
Like Cloud Foundry, the OpenStack community has seen tremendous growth in its user community thanks to increased integration and cooperation with other open source communities. OpenStack Foundation executive director Jonathan Bryce and Lauren Sell, vice president of marketing and community services, shared their community’s pithy, tongue-in-cheek motto along with the latest numbers on its growth:

“In 2014, there was 323 developers contributing to OpenStack. In 2016, we had 531.” @jbryce #IBMOTS #ibminterconnect pic.twitter.com/6PxYzrVxsL
— IBM WebSphere (@IBMWebSphere) March 20, 2017

The community, which aims to create a single platform for bare metal servers, virtual machines and containers, has seen 5 million cores deployed on it. Contributors have jumped from 323 in 2014 to 531 in 2016.
Sell echoed several of the other speakers when she noted that we’re living in a “multi-cloud world” and that open technologies are enabling it.
IBM: Contributors, collaborators, solution providers
While it’s well known that IBM has helped start and lead many of the open source communities that it supports, the company also offers a robust set of unique capabilities around these technologies and is constantly working to expand those offerings.
For example, IBM Cloud Platform Vice President and CTO Jason McGee previewed the announcement that Kubernetes is now available on IBM Bluemix Container Service.
“This service lets us bring together the power of that project and all of the amazing technology in the engine with Docker and the orchestration layer with Kubernetes and combine it with the power of cloud-based delivery,” McGee said.
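McGee’s combination, the Docker engine underneath with Kubernetes handling orchestration and the cloud handling delivery, is easiest to appreciate from the developer’s side. As a minimal sketch (not IBM tooling, just the official Kubernetes Python client pointed at whatever cluster the service provisions, assuming the kubeconfig has already been downloaded):

```python
from kubernetes import client, config

# Load the kubeconfig the provider's CLI wrote locally; a managed offering
# such as the Bluemix Container Service supplies this file for its clusters.
config.load_kube_config()

v1 = client.CoreV1Api()

# Ask the Kubernetes orchestration layer what it is currently running.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```

The point of a managed service is that everything beneath this snippet, the masters, the nodes and their security patching, is dedicated and operated for you.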
David Kenny, senior vice president, IBM Watson and Cloud Platform, also spoke about “the power of the community to move the technology faster and to consume it and learn from it.”
“We’re very much committed as IBM to be participants,” he said. “Certainly IBM Cloud and IBM Watson are two pretty big initiatives at IBM these days, and both of those have come together around the belief that open source is a key part of our platform.”

“#IBMCloud and Watson have come together around the belief that #opensource is a key part of our platform.” – @davidwkenny #IBMOTS pic.twitter.com/gU9DCzMsoC
— Kevin J. Allen (@KevJosephAllen) March 20, 2017

Moving forward as a community
Looking toward the future of open tech, it was clear that its success will depend on the next generation of contributors.
Tejada went so far as to call the open source movement a religion. “The most important piece is to understand the core premises of the religion.” He identified those as:

Embrace the new face of development
Acknowledge and adapt to the new methodologies of application development
Seize the opportunity to do more with less at an accelerated rate

For more on IBM’s work in open technology, visit developerWorks Open.
Source: Thoughts on Cloud

Blog posts, week of March 20

Here’s what the RDO community has been blogging about in the last week.

Joe Talerico and OpenStack Performance at the OpenStack PTG in Atlanta by Rich Bowen

Last month at the OpenStack PTG in Atlanta, Joe Talerico spoke about his work on OpenStack Performance in the Ocata cycle.

Read more at http://rdoproject.org/blog/2017/03/joe-talerico-and-openstack-performance-at-the-openstack-ptg-in-atlanta/

RDO CI promotion pipelines in a nutshell by amoralej

One of the key goals in RDO is to provide a set of well tested and up-to-date repositories that can be smoothly used by our users:

Read more at http://rdoproject.org/blog/2017/03/rdo-ci-in-a-nutshell/

A tale of Tempest rpm with Installers by chandankumar

Tempest is a set of integration tests to run against an OpenStack cloud. Delivering a robust and working OpenStack cloud is always challenging. To make sure what we deliver in RDO is rock-solid, we use Tempest to perform a set of API and scenario tests against a running cloud using different installers like puppet-openstack-integration, packstack, and tripleo-quickstart. This is the story of how we integrated the RDO Tempest RPM package with those installers so it can be consumed by various CI systems rather than using raw upstream sources.

Read more at http://rdoproject.org/blog/2017/03/a-tale-of-tempest-rpm-with-installers/

An even better Ansible reporting interface with ARA 0.12 by dmsimard

Not even a month ago, I announced the release of ARA 0.11 with a bunch of new features and improvements.

Read more at https://dmsimard.com/2017/03/12/an-even-better-ansible-reporting-interface-with-ara-0-12/

Let rdopkg manage your RPM package

rdopkg is an RPM packaging automation tool written to effortlessly keep packages in sync with a fast-moving upstream.

Read more at http://rdoproject.org/blog/2017/03/let-rdopkg-manage-your-RPM-package/

Using Software Factory to manage Red Hat OpenStack Platform lifecycle by Maria Bracho, Senior Product Manager OpenStack

by Nicolas Hicher, Senior Software Engineer – Continuous Integration and Delivery. Software-Factory is a collection of services that provides a powerful platform to build software. It enables the same workflow used to develop OpenStack: Gerrit for code reviews, Zuul/Nodepool/Jenkins as a CI system, and Storyboard for story and issue tracking. It also ensures a reproducible test environment with ephemeral Jenkins slaves.

Read more at http://redhatstackblog.redhat.com/2017/03/08/using-software-factory-to-manage-red-hat-openstack-platform-lifecycle/
Source: RDO

Taking cognitive to the next level in gaming, fintech and Kubernetes

Imagine you’re playing a video game, but instead of getting frustrated by waiting while the game catches up with you, it keeps pace. It learns your behaviors and suggests weapons that you want, or tips that you need (because you can’t get past that type of puzzle without a cheat).
It’s like playing with your best friend, the one that actually knows you.
Welcome to the era of learning. It’s an era powered by the IBM cognitive cloud. Cognitive gaming goes beyond artificial intelligence or rote machine learning. It gives developers insights in near real-time to make your experience better than you anticipated. These games anticipate your moves and help you level up faster.
To speed along advancements in this era, I am excited to announce three developments aimed at empowering cognitive developers.
Partnership with PlayFab
Our first announcement is a new partnership with PlayFab, which will bring Watson’s analytical abilities to the gaming industry. PlayFab is the most complete backend platform built exclusively for live games. In live gaming, factoring in imagery and making real-time decisions is table stakes; that’s why a game that learns with you cognitively is essential.
IBM Cloud for financial services
Next up is fintech. Fintech companies succeed when their services can draw deep insights while remaining secure. That’s why we are excited to announce a new financial services platform with newly available APIs for investment analysis and pricing that let developers add better insights and recommendations to their financial applications.
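The announcement does not spell out the API surface, so the endpoint, parameters and response fields below are hypothetical placeholders; the sketch only illustrates the request/response pattern a developer would use to pull pricing insights into an application:

```python
import requests

# Hypothetical endpoint and credential, stand-ins for whatever the
# financial services APIs actually expose.
API_BASE = "https://api.example.com/investment/v1"
TOKEN = "replace-with-issued-token"

def price_portfolio(holdings):
    """Ask the (hypothetical) pricing API to value a list of holdings."""
    resp = requests.post(
        API_BASE + "/pricing",
        headers={"Authorization": "Bearer " + TOKEN},
        json={"holdings": holdings},
    )
    resp.raise_for_status()
    return resp.json()

# Example call: price_portfolio([{"symbol": "IBM", "quantity": 100}])
```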
Infrastructure for cognitive made easy
Finally, we focus on infrastructure. Competing in the cognitive era means creating and deploying secure apps quickly. That’s why we’re launching a new Bluemix container service that delivers the first native Kubernetes experience, automatically dedicating and managing the infrastructure needed to create, run and secure an app with containers.
The only cloud built for the cognitive era
Diagnosing diseases. Predicting monsoons. Discovering patterns in human behavior — our needs, our frustrations, our aspirations — that call for a new app. One insight makes all the difference. We’re proud to make the cognitive era a reality for more developers every day. Join us in the cognitive era for free by trying Bluemix, and let me know how it goes @swaneroo.
I can’t wait to play the next cognitive experience that you’ve built.
Source: Thoughts on Cloud

Joe Talerico and OpenStack Performance at the OpenStack PTG in Atlanta

Last month at the OpenStack PTG in Atlanta, Joe Talerico spoke about his work on OpenStack Performance in the Ocata cycle.

Subscribe to our YouTube channel for more videos like this.

Joe: Hi, I’m Joe Talerico. I work on OpenStack at Red Hat, doing OpenStack performance. In Ocata, we’re going to be looking at doing API and dataplane performance and performance CI. In Pike we’re looking at doing mix/match workloads of Rally, Shaker,
and perfkit benchmarker, and different styles, different workloads running concurrently. That’s what we’re looking forward to in Pike.

Rich: How long have you been working on this stuff?

Joe: OpenStack performance, probably right around four years now. I started with doing Spec Cloud development, and Spec Cloud development turned into doing performance work at Red Hat for OpenStack … actually, it was Spec Virt, then Spec Cloud, then performance at OpenStack.

Rich: What kind of things were in Ocata that you find interesting?

Joe: In Ocata … for us … well, in Newton, composable roles, but building upon that, in TripleO, being able to do … breaking out the control plane even further, being able to scale out our deployments to much larger clouds. In Ocata, we’re looking to work with CNCF, and do a 500 node deployment, and then put OpenShift on top of that, and find some more potential performance issues, or performance gains, going from Newton to Ocata. We’ve done this previously with Newton, we’re going to redo it with Ocata.
Source: RDO

Bluemix, Watson and bot mania: The cognitive era plays hard at SXSW 

The IBM activation this past week in downtown Austin earned it the number three slot in AdWeek’s compilation of the top eight topics that had attendees buzzing at South by Southwest.
No wonder. IBM at SXSW 2017 enticed developers to the golden age of cognitive by amping up its Bluemix services offerings, specifically around the APIs used to help Watson engage more sentiently with humans. IBM gave SXSW attendees access to Watson to create a bot, remix a song, design a t-shirt, or get a beer recommendation.
With no required badge, a full open bar and DJs on its mega roof deck, the IBM activation was fueled by a regular flow of deep dives at the Maker’s Garage and live talks with IBM heavyweights including CEO Ginni Rometty and Bob Sutor, IBM vice president of cognitive, blockchain and quantum solutions.
Conversation was elevated from the cloud infrastructure layer to services throughout the entire activation. With Bluemix getting more recognition thanks to the Watson platform, the event spoke heavily to developers looking for a platform to build on and for ways to pull together advanced applications.
Demo areas struck the right tone with non-developers, showing not only how Watson is making the world healthier, more secure, personal, creative and engaged, but also how Watson can now respond to human emotions, preferences and taste palates.
With SXSW interactive dovetailing into the mainstay SXSW music event, the creative aspects of Watson got lots of attention, giving musicians and enthusiasts an opportunity to collaborate and, even better, play with one of the world’s most advanced APIs.
Watson Beat, a research project born out of IBM’s Austin Research Lab, uses cognitive technology to compose and transform music, remixing any piece of music with a mood-driven palette to create a personal piece that suits the user’s emotional space.
Meanwhile, TJBot, an open source project, is designed to enable users to play with Watson Services such as Speech to Text, which teaches it to understand human speech. TJBot also demonstrates how Watson can hold a conversation and even respond to different moods using Personality Insights, which can analyze the emotive content of speech in real time.

Capitalizing on the year of the bot
IBM may indeed have had the edge on SXSW’s fever pitch around bots, thanks to Watson and Bluemix.
In one SXSW featured session, the IBM events team got together with Vancouver’s Eventbase, creators of SXSW’s Go Mobile App, to share perspectives on how mobile apps and, more broadly, human experience can be enhanced with augmented intelligence.
This year, both SXSW and IBM’s Events mobile app (debuting the week of 19 March) feature intelligent, conversational user interfaces that act as personal concierge services.
“IBM sits in a unique position to provide a platform for bots and other customer experiences,” said Ted Conroy, IBM Events lead. “The appetite for bespoke, personalized experiences is voracious, and IBM’s cognitive services definitely can feed it.”
As Conroy pointed out, bots today use a simple, cognitive service to respond to sets of questions. When a service can’t answer, it defaults to scripted answers. Soon, bots will be proactive and able to choose the optimal cognitive service to best answer a broad set of questions without the current context limitations.
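The pattern Conroy describes, a cognitive service answering what it can and scripted responses catching the rest, is simple to sketch. Here ask_watson is a hypothetical stand-in for a call to a real conversation service, and the confidence threshold is an arbitrary example value:

```python
def ask_watson(question):
    """Hypothetical helper: a real bot would call a conversation service
    (e.g. Watson Conversation) and return its best answer plus a confidence."""
    return {"answer": "The keynote starts at 9am.", "confidence": 0.42}

SCRIPTED_FALLBACK = "Sorry, I don't know that yet. Try asking about today's sessions."
CONFIDENCE_THRESHOLD = 0.5  # below this we don't trust the cognitive answer

def reply(question):
    result = ask_watson(question)
    if result["confidence"] >= CONFIDENCE_THRESHOLD:
        return result["answer"]
    # The service can't answer confidently, so default to a scripted answer.
    return SCRIPTED_FALLBACK

print(reply("When is the keynote?"))  # falls back: confidence is only 0.42
```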
Check out how to build a bot in 15 minutes with Watson conversation in this demo.
Learn how to build a TJ bot of your own here.
Source: Thoughts on Cloud

Now’s the time for a multi-cloud strategy

After recent outages from major cloud providers affected thousands of businesses in the US, many CIOs and CTOs are thinking hard about their cloud strategies.
Specifically, they’re looking at the risks of having one service provider across their entire cloud environment, which includes the vulnerabilities of internal, on-premises systems. This is very similar to the design approach network engineers have long used: diversify routes and carriers when connecting your data centers.
At a minimum, corporate technology leaders should be looking at the possibility of a multi-cloud strategy. Simply put, it is a way to avoid putting all your IT eggs in one cloud provider’s basket. By using two or more cloud services, an enterprise can avoid data loss and downtime caused by a breakdown in any single component in hardware, software, storage (the cause of the recent US outage), network or other areas.
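To make the eggs-in-one-basket point concrete, here is a minimal failover sketch; the two backend functions are hypothetical stand-ins for whichever provider SDKs an enterprise actually uses:

```python
def upload_to_primary(key, data):
    # Hypothetical wrapper around the primary provider's SDK; here it
    # simulates the kind of outage described above.
    raise ConnectionError("primary provider outage")

def upload_to_secondary(key, data):
    # Hypothetical wrapper around the secondary provider's SDK.
    print("stored %s on secondary provider" % key)

def resilient_upload(key, data):
    """Try the primary cloud first, then fail over, so no single provider
    breakdown causes data loss or downtime."""
    for backend in (upload_to_primary, upload_to_secondary):
        try:
            return backend(key, data)
        except ConnectionError:
            continue  # this provider is down; try the next one
    raise RuntimeError("all configured providers failed")

resilient_upload("report.pdf", b"...")
```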
Beyond reducing outage risks, a multi-cloud strategy can improve IT performance by avoiding vendor lock-in and using different platforms and infrastructures.
It also helps reduce exposure to software vulnerabilities. The application software stack can be kept independent of an organization’s cloud infrastructure. Not only does this reduce the level of commitment to any one vendor, but it also creates interoperability that enables workloads to migrate quickly.
How should you begin? I recommend you start small but have a strategy that addresses areas such as governance, applications and data, and platforms and infrastructure. Look at the deployments and workloads that might be the “pioneers” settling with a second cloud provider. That provider should have expertise in your industry and excellence in technology. Stay attuned to how your strategy is implemented and be ready to adjust quickly if necessary.
Additionally, multi-cloud management tools can help you address the challenges of working with several cloud environments. Such tools can help you to configure, provision and deploy development environments, as well as integrate service management from a single, self-service interface.
By avoiding the “all or nothing” approach, IT leaders gain greater control over their different cloud services. They can pick and choose the product, service or platform that best fits the requirements of each process or business unit, then integrate those services, thereby avoiding the problems that come when a single provider runs into trouble.
Learn more about the advantages of a multi-cloud environment.
Source: Thoughts on Cloud

A new tool to manage multicloud with speed and control

In this post, we’ll cover the most critical steps to adopting a multicloud strategy. But first, some exciting news: we will be announcing IBM Cloud Automation Manager at InterConnect 2017.
IBM Cloud Automation Manager (CAM) will be released on March 17, 2017. Companies can use CAM to manage their multicloud environment through a single dashboard. It will provide cognitive operations to facilitate deployment of workloads to the cloud based on application requirements. IT operations won’t have to guess ever again.
Why does multicloud management matter?
Three out of four companies today have deployed more than one cloud. Is your organization leveraging multiple clouds to run business applications? Is your IT operations team able to build, maintain and operate these multicloud environments with speed, security, compliance and enterprise-grade quality?
The rapid expansion of business cloud portfolios requires a uniform cloud management platform. Companies need to manage multicloud environments without losing the visibility, governance or operational control that IT teams require to address business needs. They need a solution that provides:

Speed and agility
Control and compliance
A single “pane of glass” to manage multicloud environments
Support for traditional and emerging technologies such as containers, cognitive and analytics

What is a multicloud management solution?
A multicloud management solution will enable you to automatically deploy and manage multicloud environments. At the same time, it will provide easy access for developers to rapidly and securely create applications within company policy.
Multicloud management solutions typically provide automatic provisioning and workflow management capabilities. They also accelerate application deployment and automate manual or scripted tasks to request, change or deploy standardized cloud services. You can execute these tasks across a range of cloud platforms, often leveraging other automation tools such as configuration management.
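As an illustration of what requesting a “standardized cloud service” across providers can look like (a hypothetical sketch, not IBM Cloud Automation Manager’s actual interface):

```python
# Hypothetical provider-specific provisioners; real code would wrap each
# vendor's SDK or API behind the same standardized request shape.
def provision_on_bluemix(spec):
    print("bluemix: creating %(cpus)s-CPU instance '%(name)s'" % spec)

def provision_on_openstack(spec):
    print("openstack: booting %(cpus)s-vCPU server '%(name)s'" % spec)

PROVIDERS = {
    "bluemix": provision_on_bluemix,
    "openstack": provision_on_openstack,
}

def provision(provider, spec):
    """One entry point, the 'single pane of glass', whatever the target cloud."""
    if provider not in PROVIDERS:
        raise ValueError("no provisioner configured for %r" % provider)
    return PROVIDERS[provider](spec)

provision("openstack", {"name": "dev-env-42", "cpus": 4})
```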
What’s stopping enterprises from adopting multicloud management solutions? Many companies manage cloud services as silos of workloads and platforms. They may already have multiple tools to manage their on-premises and off-premises cloud services. Adopting new technology may happen piecemeal, and some IT staff may resist change.
Explore multicloud at InterConnect 2017
As you finalize your InterConnect schedule, check out the following multicloud sessions.

Session: Transform your IT operations with IBM Cloud Automation Manager
Automation is at the heart of cloud management in hybrid cloud environments. Hear about the new IBM Cloud Automation Manager offering from our esteemed panel: Judith Hurwitz, Hurwitz & Associates; Justin Youngblood, IBM; Vishal Rajpal, Perficient; and Markus Echser, SwissRe.
Session: Hybrid cloud management: Trends, opportunities and IBM’s strategy
As businesses procure cloud resources from multiple IT vendors, they are looking for a single tool that can agnostically manage these complex environments with ease. Hear from IBM experts and clients about the IBM role in shaping the future of hybrid cloud management.
Session: Introduction to IBM Cloud Automation Manager
This session will deliver a technical introduction to IBM’s new Cloud Automation Manager offering. The new tool supports the orchestration, automated provisioning and configuration, as well as lifecycle management of resources across a variety of target clouds. Watch the demo of the new solution and get a look into the future.
Session: Hey IT operators: Automation content makes your job easier
In this session, learn from a survey of real-world accomplishments and quantitative results achieved by using automation content. Use cases from different industries will be shared to illustrate how clients could use blueprints and templates to get their jobs done while deploying applications in the cloud.
Session: Integrating hybrid cloud management with other services
IBM Cloud Automation Manager allows you to integrate multicloud services with additional DevOps, configuration, monitoring, logs, events, security, costs and identity management available in your business. Learn how this new capability applies to both on premises and off-premises cloud infrastructure and applications.
InterConnect 2017 is just around the corner; brace yourself for an awesome cloud conference. See you there.
Source: Thoughts on Cloud

Intelligent services for elevators and escalators built with IBM Watson

If elevators and escalators do not work properly, it has a significant impact on the way cities function. People may not get to work in their office buildings. They may even have difficulty getting home.
At KONE, we help people move in and between buildings as smoothly and safely as possible. Globally, we service more than 1.1 million elevators and escalators and move more than 1 billion people every day.
Intelligent services
That’s why KONE launched “24/7 Connected Services,” which uses the IBM Watson Internet of Things (IoT) platform to bring intelligent services to elevators and escalators.
KONE wants to create a completely new experience for customers, with less equipment downtime, fewer faults and detailed information on the performance and usage of their equipment.
For people who use elevators and escalators, it means less waiting time, fewer delays, and the potential for new, personalized experiences.
The company uses the IBM Watson IoT platform and its cognitive capabilities in many different ways. For example, it helps predict the condition of the elevator or escalator, thereby helping customers manage their equipment over its life cycle.
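Device connectivity on the Watson IoT platform is MQTT-based, so a minimal telemetry feed from an elevator controller might look like the sketch below. The organization, device type, device ID and token are placeholders you would get when registering the device; the host and topic follow the platform’s documented device conventions, and the paho-mqtt client library is assumed:

```python
import json
import paho.mqtt.client as mqtt

# Placeholder registration details for the device.
ORG, DEVICE_TYPE, DEVICE_ID, TOKEN = "myorg", "elevator", "unit-0042", "secret-token"

client = mqtt.Client("d:%s:%s:%s" % (ORG, DEVICE_TYPE, DEVICE_ID))
client.username_pw_set("use-token-auth", TOKEN)
client.connect("%s.messaging.internetofthings.ibmcloud.com" % ORG, 1883)

# One condition-monitoring event; analytics on the platform side can learn
# what normal door-close times and motor temperatures look like and flag drift.
event = {"d": {"door_close_ms": 2300, "motor_temp_c": 41.5, "trips_today": 812}}
client.publish("iot-2/evt/status/fmt/json", json.dumps(event))
client.disconnect()
```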

Improving predictability and people flow
By bringing artificial intelligence into services, KONE can help predict and suggest resolutions to potential problems.
KONE can provide individualized services that specifically meet the needs of customers. Customers will get services and outcomes that fit their exact needs. This is significant, as outcomes and results are more important to customers than technological features.
Making machine-to-machine more human
This is just the beginning for KONE. With this platform, KONE can bring new services and innovations for customers and consumers to market faster.
In a first for the industry, KONE is revealing real-time machine conversations between elevators and the IoT cloud. Teams at IBM and KONE worked together to introduce a popular marketing campaign that brings a human touch to intelligent services and demystifies a complex topic.
It’s a fun way to demonstrate what 24/7 Connected Services would be like if elevators could talk.
Learn about other IBM clients who built their success on IBM Cloud.
Source: Thoughts on Cloud

RDO CI promotion pipelines in a nutshell

One of the key goals in RDO is to provide a set of well-tested and up-to-date repositories that can be smoothly used by our users:

Operators deploying OpenStack with any of the available tools.
Upstream projects using RDO repos to develop and test their patches, such as the OpenStack puppet modules, TripleO or Kolla.

To include new patches in RDO packages as soon as possible, in RDO Trunk repos we build and publish new packages whenever commits are merged in upstream repositories. To ensure the content of these packages is trustworthy, we run different tests which help us to identify any problems introduced by the committed changes.

This post provides an overview of how we test RDO repositories. If you are interested in collaborating with us in running and improving it, feel free to let us know in the #rdo channel on Freenode or on the rdo-list mailing list.

Promotion Pipelines

Promotion pipelines are composed of a set of related CI jobs that are executed for each supported OpenStack release to test the content of a specific RDO repository. Currently, promotion pipelines are executed in different phases:

Define the repository to be tested. RDO Trunk repositories are identified by a hash based on the upstream commit of the last built package. The content of these repos doesn’t change over time. When a promotion pipeline is launched, it grabs the latest consistent hash repo and sets it to be tested in the following phases (see the sketch after this list).

Build TripleO images. TripleO is the recommended deployment tool for production usage in RDO and, as such, is tested in RDO CI jobs. Before actually deploying OpenStack using TripleO, the required images are built.

Deploy and test RDO. We run a set of jobs which deploy and test OpenStack using different installers and scenarios to ensure they behave as expected. Currently, the following deployment tools and configurations are tested:

TripleO deployments. Using tripleo-quickstart we deploy two different configurations, minimal and minimal_pacemaker, which apply different settings that cover the most common options.
OpenStack Puppet scenarios. Project puppet-openstack-integration (a.k.a. p-o-i) maintains a set of puppet manifests to deploy different combinations and configurations of OpenStack services (scenarios) on a single server using the OpenStack puppet modules, and runs tempest smoke tests for the deployed services. The services tested in each scenario can be found in the README for p-o-i. Scenarios 1, 2 and 3 are currently tested in RDO CI.
Packstack deployment. As part of its upstream testing, packstack defines three deployment scenarios to verify the correct behavior of the existing options. Note that tempest smoke tests are also executed in these jobs. In RDO CI we leverage those scenarios to test new packages built in RDO repos.

Repository and images promotion. When all jobs in the previous phase succeed, the tested repository is considered good and is promoted so that users can use these packages:

The repo is published using the CentOS CDN at https://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-<release>-tested/
The images are copied to https://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/<release>/delorean/
to be used by TripleO users.
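As referenced in phase one, grabbing the latest consistent hash repo can be as simple as fetching the repo file behind the consistent symlink and recording its hash-pinned baseurl. This is an illustrative sketch only: the release in the URL is a placeholder and the exact baseurl layout may differ:

```python
import re
from urllib.request import urlopen

# Placeholder release tree; RDO Trunk publishes one per OpenStack release.
REPO_URL = "https://trunk.rdoproject.org/centos7-ocata/consistent/delorean.repo"

def latest_consistent_repo():
    """Return the hash-pinned baseurl from the current 'consistent' repo.

    'consistent' tracks the newest repo in which every package built
    successfully; recording its baseurl freezes that exact content for
    the remaining phases of the pipeline.
    """
    content = urlopen(REPO_URL).read().decode("utf-8")
    match = re.search(r"^baseurl=(\S+)", content, re.MULTILINE)
    if match is None:
        raise RuntimeError("no baseurl found in delorean.repo")
    return match.group(1)

print(latest_consistent_repo())
```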

Tools used in RDO CI

Job definitions are managed using Jenkins Job Builder (JJB) via the gerrit review workflow in review.rdoproject.org
weirdo is the tool we use to run the p-o-i and Packstack testing scenarios defined upstream inside RDO CI. It’s composed of a set of ansible roles and playbooks that prepare the environment and deploy and test the installers using the testing scripts provided by the projects.
TripleO Quickstart
provides a set of scripts, ansible roles and pre-defined configurations to deploy
an OpenStack cloud using TripleO
in a simple and fully automated way.
ARA is used to store and visualize the results of ansible playbook runs, making it easier to analyze and troubleshoot them.

Infrastructure

RDO is part of the CentOS Cloud Special Interest Group, so we run promotion pipelines on the CentOS CI infrastructure, where Jenkins is used as the continuous integration server.

Handling issues in RDO CI

An important aspect of running RDO CI is properly managing the errors found in the jobs included in the promotion pipelines. The root cause of these issues sometimes lies in the OpenStack upstream projects:

Some problems are not caught by the devstack-based jobs running in the upstream gates.
In some cases, new versions of OpenStack services require changes in the deployment tools (puppet modules, TripleO, etc.).

One of the contributions of RDO to upstream projects is to increase their test coverage and help identify problems as soon as possible. When we find them, we report them upstream as Launchpad bugs and propose fixes when possible.

Every time we find an issue, a new card is added to the TripleO and RDO CI Status Trello board, where we track the status and the activities carried out to get it fixed.

Status of promotion pipelines

If you are interested in the status of the promotion pipelines in RDO CI, you can check:

CentOS CI RDO view
can be used to see the result and status of the jobs for each OpenStack
release.

RDO Dashboard shows the overall status of RDO packaging and CI.

More info

TripleO quickstart demonstration by trown
Weirdo: A talk about CI in OpenStack and RDO by dmsimard.
ARA blog posts – from dmsimard’s blog
CI in RDO: What do we test? – presentation at the RDO and Ceph Meetup in BCN.

Source: RDO

Detours on the way to microservices

In 2008, I first heard Adrian Cockcroft of Netflix describe microservices as “fine-grained service oriented architecture.” I’d spent the previous six years wrestling with the more brutish, coarse-grained service-oriented architecture, its standards and so-called “best practices.” I knew then that unlike web-scale offerings such as Netflix, the road to microservices adoption by companies would have its roadblocks and detours.
It’s not quite ten years later, and I am about to attend IBM InterConnect, where microservice adoption by business seems inescapable. What better time to consider these detours and how to avoid them?
Conway’s law may be bi-directional
Melvin Conway introduced the idea that’s become known as Conway’s Law: “Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
But I saw it occur in reverse: when enterprise software organizations first decided to adopt microservices and their disciplines, I observed development teams organize themselves around the (micro-)services being built. When constructing enterprise applications by coding and “wiring up” small, independently-operating services, the development organization seemed to adjust itself to fit the software architecture, thereby creating silos and organizational friction.
More than the sum of its parts
When an organization first adopts microservices in its architecture, there are resource shortages. People who are skilled in the ways of microservices find themselves stretched far too thin. And specific implementation languages, frameworks or platforms can be in short supply. There’s loss of momentum, attention and effective use of time because the “experts” must continually switch context and change the focus of their attention.
As is usually the case with resource shortage, the issue is one of prioritization: When there are hundreds or even thousands of microservices to build and maintain, how are allocations of scarce resources going to be made? Who makes them and on what basis?
The cloud-native management tax
The adoption of microservices requires a variety of specialized, independent platforms to which developer, test and operations teams must attend. Many of these come with their own forms of management and management tooling. In one case, I looked through the list of management interfaces and tools for a newly-minted, cloud-native application and discovered more than forty separate management tools in use. These tools covered: the different programming languages; authentication; authorization; reporting; databases; caches; platform libraries; service dependencies; pipeline dependencies; security threat model; audits; workflow; log aggregation and much more. The full list was astonishing.
The benefits of cloud-native architecture do not come without a price: organizations will need additional management tooling and the costs of becoming skilled in those management tools.
Carrying forward the technical debt
When a company embraces cloud migration or digital transformation, a team may be chartered to re-architect and re-implement an existing, monolithic application, its associated data, external dependencies and technical interconnections. Too often, I discovered that the shortcuts and hard-coded aspects of the existing application were being re-implemented as well. There seemed to be a part of the process that was missing when the objective was to migrate an application.
In an upcoming blog post, I’ll consider some of the common detours and look at what practices and technologies are being used to avoid them.
Join me and other industry experts as we explore the world of microservices at IBM InterConnect on March 19 – 23, 2017.
Source: Thoughts on Cloud