US Army Logistics extends cloud contract and adds Watson

The US Army’s Logistics Support Activity has extended a cloud computing agreement with IBM, first inked in 2012, for an additional three years. The new $135 million contract will include Watson capabilities in addition to the cloud services and software the Army was already using.
“We’re moving beyond infrastructure as a service and embracing both platform and software as a service, adopting commercial cloud capabilities to further enhance Army readiness,” said LOGSA Commander Col. John Kuenzli. Cognitive computing and analytics are of particular interest, Kuenzli said.
The Army’s Logistics Support Activity uses cloud computing to manage hundreds of military bases and other facilities, as well as coordinate the movements of thousands of people and vehicles. Watson will help analyze some 5 billion points of sensor data from jeeps, drones and other military assets to help with predictive maintenance. In an initial trial, the Army and IBM successfully tested IBM cognitive capabilities on 10 percent of the Army Stryker vehicle fleet.
For more about LOGSA renewing its cloud contract with IBM, check out Nextgov’s full article.
The post US Army Logistics extends cloud contract and adds Watson appeared first on Cloud computing news.
Source: Thoughts on Cloud

Is Kubernetes Repeating OpenStack’s Mistakes?

The post Is Kubernetes Repeating OpenStack’s Mistakes? appeared first on Mirantis | Pure Play Open Cloud.
Remember the rise of OpenStack? First there was Amazon and cloud. And then VMware said that cloud can also be private. And then Eucalyptus and CloudStack said that private cloud should be open. And then came Rackspace with OpenStack and said that private cloud should be ever pluggable and flexible. And all the vendors cheered! (And yes, that includes Mirantis). All hail OpenStack, protector of DIY private cloud and conqueror of Amazon!
But in the end, few were able to find refuge from AWS through OpenStack. So everybody ran to find new cover. Today that new cover is starting to look a lot like Containers-as-a-Service (aka unstructured PaaS), propelled forward by Google and Kubernetes. We are effectively observing Cloud_Opinion’s law coming into effect: “Every vendor that can’t compete in Cloud chooses hybrid as their strategy.”
Multi-cloud is the new private. Kubernetes is the new OpenStack. But is there an opportunity to learn from the past and do it better this time around? So far, at least some of the parallels are concerning. Let’s examine them…
Before OpenStack there was Eucalyptus and CloudStack. Both were opinionated implementations of a private cloud reference architecture. Their opinionated nature stifled broad customer adoption, but things were chugging along. Then came OpenStack, which played the unopinionated DIY card and the following happened:

Fast forward to today. There is Cloud Foundry and there is OpenShift. Like Eucalyptus and CloudStack, both are opinionated. So things are chugging along, but neither is a runaway success. Both are swimming against the strong current of enterprises’ desire to DIY.
Along comes a Kubernetes-fueled wave of multi-cloud CaaS and, sure enough, unopinionated DIY wins:

Kubernetes CaaS won. There is no denying it. Mesosphere is now a Kubernetes CaaS. Super opinionated Pivotal Cloud Foundry PaaS is now a Kubernetes CaaS. And even the very conservative Gartner in its May report threw out some not so conservative statements: “Platform-as-a-Service vendors… are pivoting to offer CaaS solutions… Such platforms can ultimately make the multicloud promise…a reality.”   
I think there are two ways to look at it. The optimist in me cheers because the unopinionated multi-cloud CaaS is finally getting the bottom-up developer adoption that structured PaaS could never enjoy. The pessimist in me ponders how, once again, the industry caves in to its thirst for DIY, choosing short-term speed of adoption over long-term operational sustainability. We are heading towards a composable, multi-cloud CaaS that is a plethora of best-of-breed building blocks – Docker, Kubernetes, Helm, Istio, Spinnaker, etc. – developed by a variety of loosely coupled interests, each with its own release cycle. So how will we operate all of this stuff?
Operational challenges are exactly what stifled private cloud and dragged OpenStack down with it. So as we move from structured PaaS to composable CaaS are we not marching down the same exact path again?
Opinionated solutions delivered as software can’t win against opinionated solutions delivered as cloud. So the only way to move the infrastructure market with software is to play the DIY card, which after a while makes operational challenges more acute and, consequently, the cloud delivery model more attractive. Is it a spiral of doom? To get adoption for private IaaS, we made it DIY friendly with OpenStack. But then we stumbled with operations and surrendered to public cloud. So we moved to private PaaS software. To get adoption for private PaaS software, we are now making it DIY friendly by moving to CaaS. You can guess what will happen next.
 
Source: Mirantis

Writing a SELinux policy from the ground up

SELinux is a mechanism that implements mandatory access controls in Linux systems.
This article shows how to create a SELinux policy that confines a standard service:

Limit its network interfaces,
Restrict its system access, and
Protect its secrets.

Mandatory access control

By default, unconfined processes use discretionary access controls (DAC).
A user has all permissions over the objects it owns; for example, the
owner of a log file can modify it or make it world readable.

In contrast, mandatory access control (MAC) enables more fine-grained controls;
for example, it can restrict the owner of a log file to append-only operations.
Moreover, MAC can also be used to reduce the capabilities of a regular
process, for example by denying debugging or networking capabilities.

This is great for system security, but it is also a powerful tool
for controlling and better understanding an application.
Security policies reduce a service’s attack surface and describe
its system operations in depth.

Policy module files

A SELinux policy is composed of:

A type enforcement file (.te): describes the policy type and access control,
An interface file (.if): defines functions available to other policies,
A file context file (.fc): describes the path labels, and
A package spec file (.spec): describes how to build and install the policy.

The packaging is optional but highly recommended since it’s a standard
method to distribute and install new pieces on a system.

Under the hood, these files are written using macro processors:

A policy file (.pp) is generated using: make NAME=targeted -f /usr/share/selinux/devel/Makefile
An intermediary file (.cil) is generated using: /usr/libexec/selinux/hll/pp

Policy development workflow:

The first step is to get the services running in a confined domain.
Then we define new labels to better protect the service.
Finally the service is run in permissive mode to collect the access it needs.

As an example, we are going to create a security policy for the scheduler
service of the Zuul program.

Confining a Service

To get the basic policy definitions, we use the
sepolicy generate
command to generate a bootstrap zuul-scheduler policy:

sepolicy generate --init /opt/rh/rh-python35/root/bin/zuul-scheduler

The --init argument tells the command to generate a service policy. Other
types of policy can be generated as well, such as user application, inetd daemon,
or confined administrator.

The .te file contains:

A new zuul_scheduler_t domain,
A new zuul_scheduler_exec_t file label,
A domain transition from systemd to zuul_scheduler_t when the zuul_scheduler_exec_t is executed, and
Miscellaneous definitions such as the ability to read localization settings.

The .fc file contains regular expressions to match a file path with a label:
/bin/zuul-scheduler is associated with zuul_scheduler_exec_t.

The .if file contains methods (macros) that enable role extension. For example,
we could use the zuul_scheduler_admin method to authorize a staff role to administer
the zuul service. We won’t use this file because the admin user (root) is unconfined
by default and doesn’t need special permission to administer the service.

To install the zuul-scheduler policy we can run the provided script:
$ sudo ./zuul_scheduler.sh
Building and Loading Policy
+ make -f /usr/share/selinux/devel/Makefile zuul_scheduler.pp
Creating targeted zuul_scheduler.pp policy package
+ /usr/sbin/semodule -i zuul_scheduler.pp
Restarting the service should show (using “ps Zax”) that it is now
running with the system_u:system_r:zuul_scheduler_t:s0 context instead of
system_u:system_r:unconfined_service_t:s0.

Looking at the audit.log, it should show many “avc: denied” errors because no
permissions have yet been defined. Note that the service still runs fine because
this initial policy declares the zuul_scheduler_t domain as permissive.
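Each of those denial entries has a regular structure that the later audit2allow step relies on. The following is a minimal sketch, using an illustrative sample entry (the PID, timestamp and file name are made up, not captured from a real system), of how the denied permission and target type can be pulled out:

```shell
# Illustrative AVC denial entry; pid, timestamp and file name are made up.
avc='type=AVC msg=audit(1507000000.123:456): avc:  denied  { read } for  pid=1234 comm="zuul-scheduler" name="zuul.conf" scontext=system_u:system_r:zuul_scheduler_t:s0 tcontext=system_u:object_r:zuul_conf_t:s0 tclass=file'

# Extract the denied permission set (the part between the braces).
perm=$(echo "$avc" | sed -n 's/.*{ \(.*\) }.*/\1/p')

# Extract the target type (third field of the tcontext label).
ttype=$(echo "$avc" | sed -n 's/.*tcontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')

echo "denied: $perm on $ttype"
# -> denied: read on zuul_conf_t
```

In practice, ausearch and audit2allow do this parsing for us, as shown later in the workflow.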

Before authorizing the service’s access, let’s define the zuul resources.

Define the service resources

The service is trying to access /etc/opt/rh/rh-python35/zuul and
/var/opt/rh/rh-python35/lib/zuul which inherited the etc_t and var_lib_t labels.
Instead of giving zuul_scheduler_t access to etc_t and var_lib_t,
we will create new types. Moreover, the zuul-scheduler manages secret keys,
which we can isolate from its general home directory, and it requires two TCP ports.

In the .fc file, define the new paths:
/var/opt/rh/rh-python35/lib/zuul/keys(/.*)? gen_context(system_u:object_r:zuul_keys_t,s0)
/etc/opt/rh/rh-python35/zuul(/.*)? gen_context(system_u:object_r:zuul_conf_t,s0)
/var/opt/rh/rh-python35/lib/zuul(/.*)? gen_context(system_u:object_r:zuul_var_lib_t,s0)
/var/opt/rh/rh-python35/log/zuul(/.*)? gen_context(system_u:object_r:zuul_log_t,s0)
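On a real system, matchpathcon reports which label a path would receive, always preferring the most specific matching entry. As an illustration only, here is a small shell sketch that mimics that lookup for two of the entries above by checking the more specific keys regex first:

```shell
# Illustration of the .fc lookup: the real resolver prefers the most
# specific entry, which we mimic here by testing the keys regex first.
lookup() {
  if echo "$1" | grep -qE '^/var/opt/rh/rh-python35/lib/zuul/keys(/.*)?$'; then
    echo zuul_keys_t
  elif echo "$1" | grep -qE '^/var/opt/rh/rh-python35/lib/zuul(/.*)?$'; then
    echo zuul_var_lib_t
  else
    echo default
  fi
}

lookup /var/opt/rh/rh-python35/lib/zuul/keys/secret   # -> zuul_keys_t
lookup /var/opt/rh/rh-python35/lib/zuul/state         # -> zuul_var_lib_t
```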

In the .te file, declare the new types:
# System files
type zuul_conf_t;
files_type(zuul_conf_t)
type zuul_var_lib_t;
files_type(zuul_var_lib_t)
type zuul_log_t;
logging_log_file(zuul_log_t)

# Secret files
type zuul_keys_t;
files_type(zuul_keys_t)

# Network label
type zuul_gearman_port_t;
corenet_port(zuul_gearman_port_t)
type zuul_webapp_port_t;
corenet_port(zuul_webapp_port_t);

Note that the files_type() macro is important since it grants unconfined domains
access to the new types. Without it, even the admin user could not access the files.

In the .spec file, add the new paths and set up the TCP port labels:
%define relabel_files()
restorecon -R /var/opt/rh/rh-python35/lib/zuul/keys

# In the %post section, add
semanage port -a -t zuul_gearman_port_t -p tcp 4730
semanage port -a -t zuul_webapp_port_t -p tcp 8001

# In the %postun section, add
for port in 4730 8001; do semanage port -d -p tcp $port; done

Rebuild and install the package:
sudo ./zuul_scheduler.sh && sudo rpm -ivh ./noarch/*.rpm

Check that the new types are installed using “ls -Z” and “semanage port -l”:
$ ls -Zd /var/opt/rh/rh-python35/lib/zuul/keys/
drwx——. zuul zuul system_u:object_r:zuul_keys_t:s0 /var/opt/rh/rh-python35/lib/zuul/keys/
$ sudo semanage port -l | grep zuul
zuul_gearman_port_t tcp 4730
zuul_webapp_port_t tcp 8001

Update the policy

With the service resources now declared, let’s restart the service and start
using it to collect all the access it needs.

After a while, we can update the policy using “./zuul_scheduler.sh --update”,
which basically runs: “ausearch -m avc --raw | audit2allow -R”.
This collects all the denied permissions and generates type enforcement rules.
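Conceptually, audit2allow rewrites each denial into a rule of the form “allow <source type> <target type>:<class> <permissions>;”. Here is a minimal shell sketch of that transformation, using an illustrative denial line (the PID is made up, not captured from this system) for a blocked outbound MySQL connection:

```shell
# Illustrative AVC denial (pid made up) for a blocked MySQL connection.
denial='avc:  denied  { name_connect } for  pid=1234 comm="zuul-scheduler" scontext=system_u:system_r:zuul_scheduler_t:s0 tcontext=system_u:object_r:mysqld_port_t:s0 tclass=tcp_socket'

# Pull out the four pieces an allow rule is built from.
stype=$(echo "$denial"  | sed -n 's/.*scontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
ttype=$(echo "$denial"  | sed -n 's/.*tcontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
tclass=$(echo "$denial" | sed -n 's/.*tclass=\([a-z_]*\).*/\1/p')
perm=$(echo "$denial"   | sed -n 's/.*{ \(.*\) }.*/\1/p')

rule="allow $stype $ttype:$tclass $perm;"
echo "$rule"
# -> allow zuul_scheduler_t mysqld_port_t:tcp_socket name_connect;
```

The generated rule matches one of the entries in the final rule list below; the real tool additionally merges duplicate denials and can substitute reference policy interfaces (the -R flag).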

We can repeat these steps until all the required accesses are collected.

Here are the resulting zuul-scheduler rules:

allow zuul_scheduler_t gerrit_port_t:tcp_socket name_connect;
allow zuul_scheduler_t mysqld_port_t:tcp_socket name_connect;
allow zuul_scheduler_t net_conf_t:file { getattr open read };
allow zuul_scheduler_t proc_t:file { getattr open read };
allow zuul_scheduler_t random_device_t:chr_file { open read };
allow zuul_scheduler_t zookeeper_client_port_t:tcp_socket name_connect;
allow zuul_scheduler_t zuul_conf_t:dir getattr;
allow zuul_scheduler_t zuul_conf_t:file { getattr open read };
allow zuul_scheduler_t zuul_exec_t:file getattr;
allow zuul_scheduler_t zuul_gearman_port_t:tcp_socket { name_bind name_connect };
allow zuul_scheduler_t zuul_keys_t:dir getattr;
allow zuul_scheduler_t zuul_keys_t:file { create getattr open read write };
allow zuul_scheduler_t zuul_log_t:file { append open };
allow zuul_scheduler_t zuul_var_lib_t:dir { add_name create remove_name write };
allow zuul_scheduler_t zuul_var_lib_t:file { create getattr open rename write };
allow zuul_scheduler_t zuul_webapp_port_t:tcp_socket name_bind;

Once the service is no longer being denied permissions, we can remove the
“permissive zuul_scheduler_t;” declaration and deploy it in production. To avoid
issues, the domain can be set to permissive at first using:

$ sudo semanage permissive -a zuul_scheduler_t

Too long, didn’t read

In short, to confine a service:

Use sepolicy generate
Declare the service’s resources
Install the policy and restart the service
Use audit2allow

Here are some useful documents:

The reference policy
Object Classes and Permissions
Dan Walsh’s Blog
Writing SELinux Policy presentation by Miroslav Grepl

Source: RDO

Accelerating the benefits of cognitive solutions with cloud managed services

A recent internal IBM survey found that 70 percent of IT leaders are looking to expand the capabilities of their IT landscape by integrating cognitive technology.
As mentioned in previous Thoughts on Cloud blog posts, deploying your IT environment with a cloud managed services solution can free up IT staff to focus more on innovation while also helping organizations avoid the costs associated with an on-premises data center.
These benefits seem particularly relevant for cognitive solutions. Allowing skilled specialists to handle day-to-day IT management can help businesses reach the benefits of cognitive computing faster and more effectively.
With that in mind, here are three examples of companies that have used cloud managed services to help save time and reduce costs, accelerating the benefits of cognitive technology.
1. Capturing the voice of the customer through cognitive on cloud
An insurance company noticed that its customers were increasingly choosing short-term contracts that allowed them to shop around and switch providers more frequently. As a result, the company decided that improving customer satisfaction through a better user experience needed to be a top priority.
To achieve this goal, the company focused on the customer experience offered at its call centers. Its existing IT environment couldn’t analyze the massive amounts of data associated with each customer interaction in a way that could yield usable insight.
In response, the company deployed a natural language processing (NLP) analytics solution in a managed cloud environment. Powered by IBM Watson technology, this solution can collect and analyze hundreds of thousands of records, providing insights that the business can use to prepare for customer queries before they occur.
This approach has resulted in fewer inbound calls from disgruntled customers, increased rates of completed customer calls and improved customer satisfaction. Since implementation, the call center has increased its rate of completed calls by more than 11 percent and received the highest possible satisfaction rating. The company accomplished this while cutting costs by reducing the need for supplemental operators to handle peak call volumes.
2. Harnessing the weather with cloud cognitive solutions
Up-to-the-minute weather updates are nothing new. Users have had that information at their fingertips for years, but one transportation company recently discovered it could use cognitive computing to transform weather data into usable business insight.
An aging infrastructure and outdated enterprise resource planning (ERP) environment prevented the company from competing with newer, more innovative transportation services. To remain competitive, it needed to find new ways to deliver a better, safer rider experience.
The company implemented a fully-managed cloud solution that combines real-time weather analytics and Watson Internet of Things (IoT). By tapping into cognitive-powered applications, the company can now predict and prepare for increased customer demand due to upcoming rain. More critically, they can improve rider safety by quickly identifying potential flood areas, supplying drivers with alternate routes.
Watson IoT helps the company to further boost its customer experience by monitoring and incentivizing preferred driver behaviors and helping them integrate roadside support.
Through a cloud managed services environment, the business freed up capital that it can use for funding new revenue-generating initiatives. It also enabled IT staff to devote time to finding other ways to transform their business.  In addition, the environment is backed by service level agreements (SLAs) that drive high availability rates for new capabilities.
3. Creating new revenue streams with cognitive on cloud
A pharmaceutical company was projecting a steep decline in revenue when its patent on a very successful psychiatric drug was about to expire, leaving an opening for generic versions of the drug to flood the market. In the pharmaceutical industry, this drop-off in revenue is called the “patent cliff.” For this company, the results could have been devastating, as the drug accounted for 40 percent of its annual revenue.
To reduce its dependence on the ebb and flow of patent periods, the pharmaceutical company sought to create a product that would leverage its extensive psychiatric expertise and research. This company developed and deployed a predictive analytics solution in a managed cloud environment that helps psychiatric hospitals better diagnose and treat patients.
The solution analyzes millions of anonymized case records to predict how patients might respond to certain treatments, how long they may stay in the hospital and the likelihood of rehospitalization. By deploying in the cloud, the company can deliver the solution to hospitals without the need for onsite infrastructure.
The company’s new solution helps it avoid a forecast 12 percent drop in revenue by creating a new revenue stream. Its insights are also helping doctors and hospitals improve patient outcomes through more personalized treatment plans and evidence-based policies to improve quality of care and efficiency.
Ready to explore how cloud managed services can free up your IT resources for innovation? Visit the IBM Cloud Managed Services website.
The post Accelerating the benefits of cognitive solutions with cloud managed services appeared first on Cloud computing news.
Source: Thoughts on Cloud

[Podcast] PodCTL #4 – All the Tools in the Kubernetes Toolbox

This week’s show is all about tools, tools, and more tools. It builds upon this great post by our colleague Michael Hausenblas. We look at why there are so many options available to install, update, upgrade, manage and monitor your Kubernetes environment. The discussion looks at why Developers might choose certain tools, DevOps teams a […]
Source: OpenShift