New tools boost data governance ahead of GDPR implementation

Protecting sensitive data is more important now than ever. Organizations obviously want to protect customer information and their reputations, but beyond that, new regulations are making data governance mandatory.
With the EU’s General Data Protection Regulation (GDPR) set to go into effect in May 2018, IBM announced major improvements to data governance and data science initiatives at an event in Munich this week. According to a press release, “organizations will gain greater understanding and control of their data, while facilitating their ability to prepare for rising data regulations.”
The new solutions include the Open Data Governance Consortium for Apache Atlas, a Unified Governance Software Platform, advances in machine learning and a new catalog of governance tools to help ease access. The consortium aims to make “robust governance capabilities open and free to the public.”
Another addition is StoredIQ, which helps identify types of data stores within an organization.
Learn more about the new data governance advances in Computer Business Review’s full article.
The post New tools boost data governance ahead of GDPR implementation appeared first on Cloud computing news.
Source: Thoughts on Cloud

Introducing Software Factory – part 1

Introducing Software Factory

Software Factory is an open source, software development forge with an emphasis
on collaboration and ensuring code quality through Continuous Integration (CI).
It is inspired by OpenStack’s development workflow that has proven to be reliable
for fast-changing, interdependent projects driven by large communities.

Software Factory comes with batteries included, is easy to deploy, and fully leverages
OpenStack clouds. Software Factory is an ideal solution
for code hosting, feature and issue tracking, and Continuous Integration,
Delivery and Deployment.

Why Software Factory?

OpenStack, as a very large collection of open source projects with thousands of
contributors across the world, had to solve scale and interdependency problems
to ensure the quality of its codebase; this led to designing new best practices
and tools in the fields of software engineering, testing and delivery.

Unfortunately, until recently these tools were mostly custom-tailored for OpenStack,
meaning that deploying and configuring all these tools together to set up a similar
development forge would require a tremendous effort.

The purpose of Software Factory is to make these new tools easily available
to development teams, and thus help to spread these new best practices as well.
With a minimal effort in configuration, a complete forge can be deployed and
customized in minutes, so that developers can focus on what matters most to them:
delivering quality code!

Innovative features

Multiple repository project support

Software projects requiring multiple, interdependent repositories are very common.
Among standard architectural patterns, we have:

Modular software that supports plugins, drivers, add-ons, etc.
Clients and Servers
Distribution of packages
Micro-services

But being common does not mean being easy to deal with.

For starters, ensuring that code validation jobs are always built from the adequate
combination of source code repositories at the right version can be headache-inducing.

Another common problem is that issue tracking and task management solutions must
be aware that issues and tasks may span several repositories at a time.

Software Factory supports multiple repository projects natively at every step of
the development workflow: code hosting, story tracking and CI.

Smart Gating system

Usually, a team working on new features will split the tasks among its members.
Each member will work on his or her branch, and it will be up to one or more release
managers to ensure that branches get merged back correctly.
This can be a daunting task with long-lived branches, and is prone to human error
on fast-evolving projects. Isn’t there a way to automate this?

Software Factory provides a CI system that ensures master branches are always
“healthy” using a smart gate pipeline. Every change proposed on
a repository is tested by the CI and must pass these tests before
being eligible to land on the master branch. Nowadays this is quite
common in modern CI systems, but Software Factory goes above and beyond to really
make the life of code maintainers and release managers easier:

Automatic merging of patches into the repositories when all conditions of eligibility are satisfied

Software Factory will automatically merge mergeable patches that have been
validated by the CI and at least one privileged repository maintainer (in Software Factory
terminology, a “core developer”). This is configurable and can be adapted to
any workflow or team size.

The Software Factory gating system takes care of some of the release manager’s tasks, namely
managing the merge order of patches, testing them, and integrating them or requiring
further work from the author of the patch.
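The eligibility rule described above (CI green plus core-developer approval) can be sketched in a few lines. This is an illustrative Python sketch only; the class and function names are invented and are not Software Factory’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    ci_passed: bool = False       # did the validation jobs succeed?
    core_approvals: int = 0       # approvals from "core developers"

def is_eligible(patch: Patch, required_approvals: int = 1) -> bool:
    """A patch may enter the gate pipeline once CI is green and enough
    core developers have approved it. The threshold is configurable,
    mirroring how the workflow can be adapted to any team size."""
    return patch.ci_passed and patch.core_approvals >= required_approvals
```

In a real deployment this check runs automatically on every review event; the patch is then queued for the speculative gate described below.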

Speculative merging strategy

Once a patch is deemed mergeable, it is not merged immediately into the
code base. Instead, Software Factory will run another batch of (fully customizable)
validation jobs on what the master code base would look like if that patch, plus any other
mergeable patches, were merged. In other words, the validation jobs are run on a code base
consisting of:

the latest commit on the master branch, plus
any other mergeable patches for this repository (rebased on each other in approval order), plus
the mergeable patch on top

This ensures not only that the patch to merge is always compatible with the current code base,
but also detects compatibility problems between patches before they can do any harm
(if the validation jobs fail, the patch remains at the gate and will need to be reworked by its author).

This speculative strategy greatly reduces the time needed to merge patches, because
the CI assumes by default that all mergeable patches will pass their validation jobs
successfully. And even if a patch in the middle of the currently tested chain of patches fails,
the CI will discard only the failing patch and automatically rebase the others (those
previously rebased on the failed one) onto the nearest passing patch. This is really powerful, especially
when integration tests take a long time to run.
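The queue behaviour described above can be illustrated with a toy simulation. This is a deliberately simplified sketch (the real gate re-tests only the patches behind a failure, while this version re-runs the whole remaining queue): `run_jobs(tree)` stands in for the validation jobs, and all names are invented for illustration.

```python
def gate(queue, master, run_jobs):
    """Speculative gating sketch: test each eligible patch on top of
    master plus every patch ahead of it in the queue. If one fails,
    evict only that patch and re-test the ones behind it; if the whole
    chain passes, merge everything at once."""
    merged = list(master)
    pending = list(queue)
    while pending:
        for i, patch in enumerate(pending):
            tree = merged + pending[:i + 1]   # speculative code base
            if not run_jobs(tree):
                pending.pop(i)                # evict only the failure
                break                         # restack the rest
        else:
            merged.extend(pending)            # every speculation held
            pending = []
    return merged
```

Running `gate(["a", "bad", "b"], ["m"], ...)` with jobs that reject any tree containing `"bad"` merges `a` and `b` while leaving `bad` at the gate, which is exactly the eviction behaviour the text describes.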

Parallel and chained job orchestration

For developers, fast CI feedback is critical to avoid context switching. Software Factory can run
CI jobs in parallel for any given patch, thus reducing the time it takes to assess the quality of
submissions.

Software Factory also supports chaining CI jobs, allowing artifacts from a job
to be consumed by the next one (for example, making sure software can be built
in a job, then running functional tests on that build in the next job).
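Both patterns can be sketched together in standard Python. This is an illustrative sketch, not Software Factory’s actual job runner: independent jobs (lint, unit tests, build) run in parallel, then a chained functional-test job consumes the build job’s artifact. All job names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def lint(patch): return f"lint ok: {patch}"
def unit_tests(patch): return f"unit ok: {patch}"
def build(patch): return f"artifact-{patch}"          # produces an artifact
def functional_tests(artifact): return f"func ok: {artifact}"

def run_pipeline(patch):
    with ThreadPoolExecutor() as pool:
        # Parallel stage: independent jobs run concurrently for
        # faster feedback on the submitted patch.
        futures = [pool.submit(j, patch) for j in (lint, unit_tests, build)]
        results = [f.result() for f in futures]
    # Chained stage: the next job consumes the previous job's artifact.
    artifact = results[2]                              # the build output
    results.append(functional_tests(artifact))
    return results
```

The key point is the hand-off: the functional-test job never rebuilds the software, it reuses the artifact the build job produced.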

Complete control on jobs environments

One of the most common and annoying problems in Continuous Integration
is dealing with job flakiness, or instability, meaning that successive runs of
the same job under the same conditions might not have the same results. This is often
due to running these jobs on the same long-lived worker nodes, which is prone to
interference, especially if the environment is not correctly cleaned up between runs.

Despite all this, long-lived nodes are often used because recreating a test environment
from scratch can be costly in terms of time and resources.

The Software Factory CI system brings a simple solution to this problem by managing
a pool of virtual machines on an OpenStack cloud, where jobs will be executed. This
consists of:

Provisioning worker VMs according to developers’ specifications
Ensuring that a minimum number of VMs are ready for new jobs at any time
Discarding and destroying VMs once a job has run its course
Keeping VMs up to date when their image definitions have changed
Abiding by cloud usage quotas as defined in Software Factory’s configuration

A fresh VM will be selected from the pool as soon as a job must be executed.

This approach has several advantages:

The flakiness due to environment decay is completely eliminated
Great flexibility and liberty in the way you define your images
CI costs can be decreased by using computing resources only when needed

On top of this, Software Factory can support multiple OpenStack cloud providers at
the same time, improving service availability via failover.
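The pool-maintenance logic described above can be sketched as follows. This is a hedged, minimal sketch (the class name, methods, and parameters are invented for illustration): keep a minimum number of fresh VMs ready, hand each job a fresh one, discard it afterwards, and never exceed the cloud quota.

```python
class NodePool:
    def __init__(self, min_ready, quota, boot_vm):
        self.min_ready = min_ready    # VMs kept ready for new jobs
        self.quota = quota            # cloud usage quota (total VMs)
        self.boot_vm = boot_vm        # callable that provisions one VM
        self.ready, self.in_use = [], set()

    def maintain(self):
        # Keep min_ready fresh VMs available without exceeding the quota.
        while (len(self.ready) < self.min_ready
               and len(self.ready) + len(self.in_use) < self.quota):
            self.ready.append(self.boot_vm())

    def acquire(self):
        vm = self.ready.pop(0)        # every job gets a fresh VM
        self.in_use.add(vm)
        return vm

    def release(self, vm):
        self.in_use.discard(vm)       # VMs are destroyed, never reused
        self.maintain()               # replenish the ready pool
```

Because a VM is discarded after a single job, environment decay between runs simply cannot occur, which is the core of the flakiness fix.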

User-driven platform configuration, and configuration as code

In lots of organizations, development teams rely on operational teams to manage
their resources, like adding contributors with the correct access credentials to
a project, or provisioning a new test node. This can induce a lot of delays for
reasons ranging from legal issues to human error, and be very frustrating for
developers.

With Software Factory, everything can be managed by development teams themselves.

Software Factory’s general configuration is stored in a Git repository, aptly named
config, where the following resources among others can be defined:

Git projects/repositories/ACLs
job running VM images and provisioning
validation jobs and gating strategies

Users of a Software Factory instance can propose, review and approve (given the appropriate permissions)
configuration changes, which are validated by a built-in CI workflow. When a configuration
change is approved and merged, the platform
re-configures itself automatically and the configuration change is applied immediately.

This simplifies the work of the platform operator, and gives more freedom, flexibility
and trust to the community of users when it comes to managing projects and
resources like job running VMs.
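The propose-validate-apply loop for the config repository can be sketched as below. This is an illustrative sketch only: the resource keys and function names are invented, and the real validation is a full CI workflow rather than a single check.

```python
def validate(config):
    """Stand-in for the built-in CI check on a proposed change to the
    config repository: require the core resource sections to exist."""
    required = {"projects", "images", "jobs"}
    return required <= set(config)

def apply_change(current, proposed):
    """Apply a reviewed configuration change only if validation passes;
    otherwise the current configuration stays untouched. On success the
    platform re-configures itself from the merged config."""
    if validate(proposed):
        return proposed
    return current
```

The point of the pattern is that a bad change is rejected at the gate, so the running platform only ever reflects configurations that passed review and validation.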

Other features included in Software Factory

As we said at the beginning of this article, Software Factory is a complete
development forge. Here is a list of its main features:

Code hosting
Code review
Issue tracker
Continuous Integration, Delivery and Deployment
Job logs storage
Job logs search engine
Notification system over IRC
Git repository statistics and reporting
Collaboration tools: etherpad, pasteboard, voip server

To sum up

Here is how Software Factory can improve your team’s productivity.
Software Factory will:

help ensure that your code base is always healthy, buildable and deployable
ease merging patches into the master branch
ease managing projects scattered across multiple Git repositories
improve community building and knowledge sharing on the code base
help reduce test flakiness
give developers more freedom towards their test environments
simplify project creation and configuration
simplify switching between services (code review, CI, …) thanks to its navigation panel

Software Factory is also operator-friendly:

100% open source and self-hosted, i.e. no vendor lock-in or dependency on external providers
based on CentOS 7, benefiting from the stability of this Linux distribution
fast and easy to deploy and upgrade
simple configuration and documentation

This article is the first in a series that will introduce the components of
Software Factory and explain how they provide those features.

In the meantime you may want to check out softwarefactory-project.io and review.rdoproject.org
(both are Software Factory deployments) to explore the features we discussed in this article.

Thank you for reading and stay tuned!
Source: RDO

OpenShift Commons Briefing #77: What Does “Monitoring” Mean in a Cloud Native World? with Brian Brazil (RobustPerception)

In this briefing, Brian Brazil, Founder, RobustPerception.io and core Prometheus developer, explains the core concepts behind monitoring in cloud-native environments, and how the interrelated monitoring offerings and OSS projects can help your organization deliver the best monitoring solution for your OpenShift/Kubernetes implementations.
Source: OpenShift

Microservice Builder: Software delivery goes from days to minutes

As the world becomes more connected than ever, your business must be ready to face rising challenges. In a study, Gartner predicts that by 2020 there will be more than 20 billion connected “things,” and the total will grow at an astonishing rate of 5.5 million new devices coming online each day.
So how does your business win in this increasingly connected economy?
To succeed, it is imperative your business focuses on interactions and value exchange across your entire partner ecosystem. At the same time, you need to deliver microservices and applications with greater speed, consistency and reliability. Microservices architectures help give you the agility and stability you need as your operations scale up.
Microservices are gaining traction for their ability to develop and deliver modern, lightweight and composable workloads. They achieve this by loosely coupling composable modules, improving fault isolation and easily scaling development. For developers working in Java or another framework, microservices are the perfect solution. But because they must function in a multicloud environment with millions of connected things, these applications are becoming larger and more complex.
Microservices in a multicloud environment
Today, enterprises need a way to securely develop and deploy containerized applications with the flexibility to run in both a public cloud and on-premises system. We built Microservice Builder to help you solve this challenge. The new tool provides your organization with a complete user experience for creating, testing and deploying applications.
Microservice Builder includes everything a business needs to focus on application development rather than the framework. It provides beta binaries to support building and testing environments, and a low-touch development-to-deployment experience that simplifies DevOps tasks.
The Microservice Builder turnkey approach makes it easier for thousands of IBM WebSphere users to compose modern microservice-based applications and deploy them using DevOps toolchains on existing WebSphere infrastructure. It can help you accelerate the software lifecycle through continuous delivery, and you can adopt modern design thinking approaches while integrating with current infrastructure and assets.
This solution also allows ease of deployment to Bluemix using container services, and ease of problem diagnostics with log analytics and monitoring. Microservice Builder also supports single-sign-on (SSO) for simplified security authentication and authorization as well as access to an on-premises platform for managing containerized applications.
Here’s a quick rundown of ways Microservice Builder can benefit your business:

Offers a continuous delivery pipeline to accelerate software delivery from weeks to days to minutes
Allows for rapid hybrid and cloud-native application development and testing cycles with greater agility, scalability and security
Reduces costs and complexity with seamless portability across popular cloud providers including public, dedicated, private and hybrid clouds
Minimizes downtime and maintains service level agreements with real-time diagnosis and resolution of application infrastructure
Enables future innovation by easily connecting existing applications to new cloud services, like Watson Cognitive

Simply put, Microservice Builder can greatly simplify the software delivery pipeline. Your developers can adopt a more frictionless application lifecycle from development through production.
Check it out for yourself. You can view a quick video demo that walks you through the experience.
Learn how you can get started on an easy path to building containerized apps in your microservices framework with IBM Microservice Builder on the developerWorks page.
The post Microservice Builder: Software delivery goes from days to minutes appeared first on Cloud computing news.
Source: Thoughts on Cloud

Virtualization is changing how we need to do service assurance

Recently Jim Carey blogged about the ever-increasing pace of change that affects operations teams worldwide. This is particularly true of communication service providers (CSPs), who in this dynamic world face new challenges as they look to generate more revenue and improve operational efficiency.
Disruptive over-the-top service providers are now threatening traditional CSP revenue streams and margins. To stay competitive, CSPs need to tackle these disruptive challenges with transformative strategies that continuously evolve their network infrastructure, and it is critical that they explore opportunities to support new, agile service offerings at significantly lower costs.
Two things are really changing how service providers need to manage their services and network infrastructure: virtualization and the automation enabled by virtualization.
The typical goals of network function virtualization (NFV) and software defined networks (SDN) are to:

Reduce the cost of infrastructure and speed operational agility
Get more and more niche services out the door

Companies need a new management and operational paradigm to improve operational efficiencies and the service lifecycle management of cloud-enabled services leveraging NFV and SDN. Services and applications are now increasingly deployed in virtualized environments, resulting in highly distributed and increasingly complex hybrid networks.
This new deployment model requires an agile and dynamic operations management solution. To leverage rapidly evolving technologies, real-time topology can be key to orchestration and service assurance in these hybrid networks. You must acquire data from sources that are constantly in a state of change before you can construct a model of the underlying network and resources.
A new IBM offering called Agile Service Manager (ASM) can help you keep track of these dynamic changes to resources and topology. ASM is a shared resource, supporting other functions in the assurance and orchestration domains. Operations teams can now uncover insights to help analyse the impact of a fault or detect whether the customer experience is being correctly met. For example, operations teams can see the details of what server a virtual machine is running on, which network function it is a part of and what service instance is its parent.
With ASM operations teams get complete up-to-date visibility and control over dynamic infrastructure and services. It is cloud-born and built on secure, robust and proven technologies.
ASM also lets you query a specific networked resource and then presents a configurable topology view of it within its ecosystem of relationships and states, both in real time and within a definable time window. You can visualize complex network topologies in real time, whether updated dynamically or on demand. This allows you to further investigate events, incidents and performance. As a result, you can improve operational efficiency and detect and resolve issues faster. You can reduce false alarms while improving automation and collaboration between operational teams.
Observers built into Agile Service Manager are responsible for obtaining data from a specific data source for a specified tenant. Observers are microservices that read information from a data source and update the topology in the topology service. For example, the OpenStack Observer can render detailed OpenStack topologies, and delve further into their states, histories and relationships. You can see how effective ASM is in keeping track of OpenStack environments here.
Agility and operational trust sometimes place opposing requirements on the systems that manage the lifecycle of services and virtual network functions. You need both to have a credible platform. ASM can help by extending the capabilities of the market-leading Netcool Operations Insight solution. Building on existing analytics and correlations to become truly cognitive, this new capability establishes a foundation for managing SDN and NFV environments by providing timely visibility of highly dynamic infrastructures and services.
Find out more about Netcool and IBM Operations management here.
The post Virtualization is changing how we need to do service assurance appeared first on Cloud computing news.
Source: Thoughts on Cloud

Cedato’s cloud-based programmatic video solution is a matchmaker for advertisers and publishers

When you’re browsing a news website such as CNN, you will no doubt run across ads that autoplay on the sidebar or before you can watch a news report.
Not everyone who visits CNN is shown the same ad. Behind the scenes, there is a complex profiling process to target the viewer with the right video.
Advertisers bid for a “good user” — someone who fits their criteria, perhaps who uses a Macbook to search online for vacations, lives in a specific demographic and makes regular online purchases — in a process called real-time bidding. The highest paying company that matches additional compatibility and verification criteria will win the bid and show the user its ad.
All this happens between the time the user clicks the play button and when they’re presented the ad. This is what’s called “programmatic video advertising.”
State of Digital defines programmatic advertising as the purchase and sale of advertising space in real time using algorithms. Software automates the buying, placement and optimization of media inventory in a bidding system in real time without people doing manual insertions and trading.
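The auction step described above can be sketched as a toy function: filter out advertisers whose targeting criteria don’t match the viewer’s profile, then award the impression to the highest eligible bid. This is a deliberately simplified illustration (real exchanges add verification, pricing rules such as second-price auctions, and strict latency budgets), and all names are invented.

```python
def run_auction(bids, viewer):
    """bids: list of (advertiser, price, criteria) tuples, where
    criteria is a dict of targeting attributes that must all match
    the viewer's profile. Returns the winning (advertiser, price),
    or None if no bid matches."""
    eligible = [
        (advertiser, price)
        for advertiser, price, criteria in bids
        if all(viewer.get(key) == value for key, value in criteria.items())
    ]
    if not eligible:
        return None
    return max(eligible, key=lambda bid: bid[1])  # highest matching bid wins
```

Everything here happens between the click on the play button and the ad appearing, which is why the matching logic must be this mechanical and fast.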
According to eMarketer, more than two-thirds of US digital display ad spending is programmatic, and it’s being rapidly adopted across a variety of channels and ad formats.
Cedato is a company that’s perfectly poised to take advantage of the almost $230 billion per year online digital advertising industry.
How Cedato monetizes video
Cedato powers digital video transactions with its predictive programmatic video advertising software, which is a technology stack that enables advertisers to deliver video and monetize it across any type of screen in any type of content, be it video content, standard web editorial content, apps or other types of content.
Cedato engages with all aspects of video delivery, ad serving, yield optimization, and other requirements to monetize video. In parallel, the company operates a private video marketplace that’s reaching extremely high volumes. The company powers about 15 billion video views every month across two million sites and apps. Cedato is a young company, established in 2015, and is among the fastest growing startups in the ad-tech space.
Disrupting the video advertising market
Many companies in Cedato’s industry niche operate in a revenue-sharing model in which every link in the chain that leads from an advertiser to a publisher takes a small cut out of the budgets that are passing through it. Cedato operates with a totally different approach.
The company positioned itself as a programmatic video technology provider, offering a software-as-a-service (SaaS) model by which it charges based on the actual transactions that go through the platform. This business model is highly attractive and highly differentiated, which enabled the startup to grow quickly.
Because Cedato is media agnostic, it can work with everyone that operates in its ecosystem. Cedato enables publishers as well as platforms to repurpose advertising spaces from display to video. They can perform very effective yield optimization across multiple demand sources, enabling customers who do not have video content to take advantage of the fast-growing video advertising space.
Scaling to meet demand
Because of its great product-market fit, Cedato experienced rapid growth.
As a SaaS company, Cedato’s cloud environment is critical to its business flow. The company needed an “elastic” hosting service that would support its nimble growth and future expansion across the globe. Cedato’s leaders were looking for a trustworthy partner that would provide the technical skills and know-how to push the envelope in terms of performance, availability and reliability.
Cedato chose IBM because of its worldwide data center presence and powerful technology. IBM Bluemix offers Cedato multiple bare metal configurations – far more than other providers – that can be almost instantly provisioned when the order is placed.
The company counts on the strong IBM brand and reputation, which provides substantiation for its customers. Its programmatic video solution runs on an IBM-powered infrastructure, which helps to close deals with some of the industry’s leading publishers. Cedato customers know they have a reliable infrastructure that keeps their data and transactions safe, while they automatically lift engagement and business results.
Learn more about IBM Cloud Video.
The post Cedato’s cloud-based programmatic video solution is a matchmaker for advertisers and publishers appeared first on Cloud computing news.
Source: Thoughts on Cloud