Using blockchain, FreshTurf simplifies package delivery in Singapore

Waiting for a package to be delivered can be frustrating. Deliveries mostly happen during work hours, so customers often miss the delivery window, then worry that their packages aren’t safe.
To make online shopping more convenient, the Singaporean government created a nationwide system of delivery lockers from which the public can retrieve packages ordered online.
FreshTurf, a technology company in Singapore, sought to create a secure way of making these lockers available to any delivery company wishing to use them. Think of it as the Uber of delivery lockers.
To create the solution, FreshTurf teamed up with IBM Bluemix Garage developers and designers to build a blockchain-distributed ledger platform prototype on the IBM Bluemix cloud platform. The blockchain solution is designed to manage transactions between merchants, logistic vendors, locker companies and consumers.
“As a startup, we wanted a partner and mentor who would guide us through the journey of adopting blockchain and building with cloud, while also helping us to grow,” says Jarryl Hong, co-founder, FreshTurf. “During the IBM Design Thinking workshop, we had access to technical expertise, consulting and guidance through working with the IBM Bluemix Garage, which allowed us to quickly build our concept.”
It’s a groundbreaking use of blockchain technology. Blockchain has been readily embraced by the financial services sector, but applying it to the logistics industry has opened up a world of possibility for the technology. FreshTurf intends to see just how far it can take it.
“With this unique solution in development, we’re hoping to secure pilots with a number of industry leaders to test the feasibility and market readiness of the solution in other industries as well,” says Kevin Lim, co-founder and director at FreshTurf.
Using IBM Cloud has given the FreshTurf team the necessary level of agility to build quickly. “Because we’re working on the IBM cloud, we are able to quickly test and deploy various use-cases,” says Lim. “Building with IBM has allowed us access to technical expertise to build products and solutions to scale quickly.”
The solution is a win for customers and delivery companies. Door-to-door deliveries are difficult to perform and add to the cost of delivery, which ultimately drives up prices for consumers.
“We’re creating a marketplace where delivery points such as apartment complexes can provide locker space and anyone can rent them,” says Lim. “This means greater island-wide access to lockers and better utilization of lockers for the delivery point providers. Consumers have more choices and logistics companies can utilize these assets to simplify the ‘last mile’ of delivery.”
Learn more about IBM Blockchain.
The post Using blockchain, FreshTurf simplifies package delivery in Singapore appeared first on news.
Source: Thoughts on Cloud

Data enables proactive healthcare, improving chronic disease management

Coughing, wheezing and tightness in the chest are all symptoms of asthma, a chronic disease estimated to affect 400 million people around the world by 2025.
Asthma is a unique condition. With maintenance therapy, a patient may live a symptom-free life for days or months at a time. Unfortunately, even those who control their symptoms well can encounter a trigger that unexpectedly causes an asthma attack. Within minutes, an attack can become life-threatening.
At Teva Pharmaceutical Industries, a global pharmaceutical company based in Israel, we’ve considered if there are ways to identify early warning signs of an asthma attack. We are developing a digital respiratory disease management system which may enable a proactive, data-driven approach to asthma management.
Many people who live with asthma experience uncontrolled symptoms and frequent attacks, often due to incorrect inhaler use or poor adherence to treatment.
Teva is committed to developing digital respiratory solutions for asthma patients to help them, and their caretakers, control their condition to better manage chronic symptoms. When patients use their digital inhalers and the corresponding software application, they generate data that their doctors can interpret to understand behavior patterns and enable a proactive, systematic and comprehensive approach to chronic disease treatment and management.
The collaboration will combine cloud-connected drug delivery and app technology with more than six billion data points, including integration of data from The Weather Company to incorporate environmental data that could potentially affect asthma patients. Using Watson cognitive processing capabilities and newly developed algorithms, these data may be used to calculate the prospective risk of health events, such as an asthma attack. Teva delivers that information directly to caregivers and their patients via an app or other software so they may take a more proactive approach in managing that risk.
The IBM Watson Health Cloud is designed to comply with operational and security requirements for health data.
When Teva started looking for a digital solution partner, the company considered several cloud and computing providers. It needed to work with a partner that was able to deliver a global cloud for storing, analyzing and communicating patient data. Teva also needed the capability to perform analysis of multiple data points on millions of patients in real time.
Teva chose IBM as its partner because of the unique capabilities of IBM Watson. Both Teva and IBM have the same aspiration to transform healthcare with digital therapeutic solutions designed to fulfill unmet and emerging patient needs, as well as provide the highest level of care to customers around the world.
As one result of Teva’s global partnership with IBM as a Foundational Life Sciences Partner for IBM Watson Health Cloud, Teva is using IBM Watson Health capabilities to help to improve chronic disease management.
Teva’s vision for the future is that patients will be empowered to better understand and manage chronic diseases, including asthma. They will use data to enable a systematic, comprehensive approach to help them take control of their health conditions and proactively seek the right solution before a health crisis.
In doing so, Teva aims to cut treatment costs by providing patients, payers, healthcare providers and caregivers with relatable data and insights that can inform action.
Learn more about IBM Cloud healthcare solutions.
The post Data enables proactive healthcare, improving chronic disease management appeared first on news.
Source: Thoughts on Cloud

Stephen Finucane – OpenStack Nova – What’s new in Ocata

At the OpenStack PTG in February, Stephen Finucane speaks about what’s new in Nova in the Ocata release of OpenStack.

Stephen: I’m Stephen Finucane, and I work on Nova for Red Hat.

I’ve previously worked at Intel. During most of my time working on Nova I’ve been focused on the same kind of feature set, which is what Intel liked to call EPA – Enhanced Platform Awareness – or NFV applications. Making Nova smarter from the perspective of Telco applications. You have all this amazing hardware, how do you expose that up and take full advantage of that when you’re running virtualized applications?

The Ocata cycle was a bit of an odd one for me, and probably for the project itself, because it was really short. The normal cycle runs for about six months. This one ran for about four.

During the Ocata cycle I actually got core status. That was probably as a result of doing a lot of reviews. Lot of reviews, pretty much every waking hour, I had to do reviews. And that was made possible by the fact that I didn’t actually get any specs in for that cycle.

So my work on Nova during that cycle was mostly around reviewing Python 3 fixes. It’s still very much a community goal to get support in Python 3. 3.5 in this case. Also a lot of work around improving how we do configuration – making it so that administrators can actually understand what different knobs and dials Nova exposes, what they actually mean, and what the implications of changing or enabling them actually are.

Both of these have been going in since before the Ocata cycle, and we made really good progress during the Ocata cycle to continue to get ourselves 70 or 80% of the way there, and in the case of config options, the work is essentially done there at this point.

Outside of that, the community as a whole, most of what went on this cycle was again a continuation of work that has been going on the last couple cycles. A lot of focus on the maturity of Nova. Not so much new features, but improving how we did existing features. A lot of work on resource providers, which are a way that we can keep track of the various resources that Nova’s aware of, be they storage, or cpu, or things like that.

Coming forward, as far as Pike goes, it’s still very much up in the air. That’s what we’re here for this week discussing. There would be, from my perspective, a lot of the features that I want to see, doubling down on the NFV functionality that Nova supports. Making things like SR-IOV easier to use, and more performant, where possible. There’s also going to be some work around resource providers again for SR-IOV and NFV features and resources that we have.

The other stuff that the community is looking at, pretty much up in the air. The idea of exposing capabilities, something that we’ve had a lot of discussion about already this week, and I expect we’ll have a lot more. And then, again, evolution of the Nova code base – what more features the community wants, and various customers want – going and providing those.

This promises to be a very exciting cycle, on account of the fact that we’re back into the full six month mode. There’s a couple of new cores on board, and Nova itself is full steam ahead.
Source: RDO

5 highlights from InterConnect 2017

At this year’s InterConnect, I learned so much, met clients from across industries and geographies, and yet again left Las Vegas buzzing with excitement about IBM solutions and partnerships.
Throughout the week, clients told me they’re thrilled to see the IBM commitment to building a cloud that’s enterprise strong, puts data first and has cognitive at its core. They see their own ambitions in this strategy and are excited to work with a cloud services provider with a hybrid cloud strategy that spans platforms, industries and partners, putting client needs first.
As I reflect on the week, I thought I’d share five of my personal highlights, in no particular order:
 
1. David Kenny’s entrance
This wasn’t a tech, client or partner announcement, but it was a truly memorable entrance from the IBM senior vice president of IBM Watson and cloud platform. Though he entered to an exciting adventure theme, there was of course a serious message behind the fun: to show IBM clients just how far we’ll go to get them great pricing on cloud object storage. David shared the stage with Arvind Krishna, senior vice president of hybrid cloud and director of IBM Research, Monday morning as Indiegogo and Arrow Electronics announced a partnership to help bring new Internet of Things (IoT) ideas to life.
Monday’s opening event was also packed with announcements around leading technical innovation, such as the Cognitive Security Operations Centre, IBM Cloud Automation Manager, IBM Cloud for Financial Services and much more.
2. Ginni’s keynote
IBM CEO and Chairman Ginni Rometty took the stage Tuesday morning having returned from China the evening beforehand. She announced the IBM Cloud partnership with Wanda China Cloud and shared the stage with clients and partners such as AT&T, Everledger, H&R Block, Royal Bank of Canada and Salesforce.
Salesforce CEO Marc Benioff stood out as he expressed his excitement around the Einstein/Watson partnership and the insights already being made available to Salesforce clients worldwide.
Joining Ginni on stage to close Tuesday’s keynote was the inspirational Reshma Saujani, founder and CEO of Girls Who Code. Three Girls Who Code students also took the stage to share their stories of coding, equality and opportunity.
3. Partnership announcements
With hybrid at the heart of IBM cloud strategy, partnering to extend key capabilities that offer clients choice with consistency is key.
IBM announced strategic partnerships with Red Hat focused on hosted private cloud and Intel and HyTrust with a focus on security. As a Red Hat certified cloud and service provider, IBM Bluemix now delivers managed, private cloud at scale on the IBM Cloud infrastructure, available with the Red Hat OpenStack platform and Red Hat Ceph storage.
With Intel and HyTrust, IBM will offer the IBM Cloud Secure Virtualization Solution running on VMware Cloud Foundation on IBM Cloud. The Secure Virtualization solution safely and securely reduces the barriers to cloud adoption, spanning various vertical specific compliance standards, including the EU General Data Protection Regulation (GDPR).
4. The concourse
As a regular InterConnect attendee over the years, I’ve seen the exhibition area grow in size and improve in quality year over year, and 2017 was no exception. At 350,000 square feet and renamed the concourse, 200-plus exhibitors from across IBM, business partners and the IBM Cloud ecosystem hosted stands manned by subject matter experts to share insight and experience with clients that were keen to learn.
Personal highlights included the Bluemix Garage presence, the Dev/Zone and of course, the cloud adoption leaders’ Cloud Confidence Center, where I spent much of my time.
5. The steps
You haven’t experienced IBM InterConnect if you haven’t clocked at least 15,000 steps on your pedometer each day. With the vast Mandalay Bay Conference Center taken over by IBM, the sheer scale of InterConnect is an experience you won’t forget.
From the hands-on labs and certification hall on the third floor, to the various theatres and breakout rooms on the second floor, not to mention the concourse, exhibition centre and event arena on the ground floor, no trip to InterConnect is complete without a daily workout. Of course, the exercise tends to go unnoticed given the excitement and learning that takes place daily.
I’m writing this post as I travel from Las Vegas to Newark on the first leg of my journey home to Dublin, Ireland. As I recap the week that’s just passed, I’m making notes of clients that’ll be excited to learn more about InterConnect and the plethora of announcements and capabilities IBM just launched.
Feel free to contact me if you’d like to learn more about InterConnect 2017 and if you’re not already, start to make plans for InterConnect 2018 at Mandalay Bay, 18 to 22 March.
The post 5 highlights from InterConnect 2017 appeared first on news.
Source: Thoughts on Cloud

Deploying 2048 OpenShift nodes on the CNCF Cluster

By Jeremy Eder, Red Hat, Senior Principal Software Engineer. Overview: The Cloud Native community has been incredibly busy since our last set of scaling tests on the CNCF cluster back in August. In particular, the Kubernetes (and by extension, OpenShift) communities have been hard at work pushing scalability to entirely new levels. As a significant […]
Source: OpenShift

Benefits of increasing use of cloud managed services

This is the second part in a two-part interview series with Lynda Stadtmueller, vice president of cloud services for the analyst firm Frost & Sullivan. In part one, we discussed why 80 percent of US companies plan to increase their use of cloud managed services. Today, she offers perspective on the benefits that cloud managed services can deliver.
Thoughts on Cloud (ToC): Based on your experience, what are the biggest benefits for a company when moving to a managed cloud environment or increasing its use of cloud managed services?
Lynda Stadtmueller, vice president of cloud services, Frost & Sullivan: The primary benefits are increased application performance, improved efficiency in infrastructure utilization and faster speed to market.
But another benefit that might be surprising is that managed services can help manage costs. When I work with enterprises to create a total-cost-of-ownership (TCO) analysis, we often find that managed services can pay for themselves when compared to on-premises deployment and management of some applications, or even with a “do-it-yourself” approach.
In my last survey, 45 percent of businesses using a “do-it-yourself” approach to cloud — meaning they are deploying their own infrastructure as a service — said that for every dollar they spend on the infrastructure itself, they spend three to five dollars managing it. That’s a huge amount. If you’re using a managed service provider, you can offload a lot of that internal cost.
ToC: Do cloud managed services tend to deliver more benefits for large enterprises?
LS: I’m actually seeing more growth in the midmarket. My personal theory is that midmarket chief information officers (CIOs) often come from large enterprises, which means that they recognize the value of technology for making the company more competitive and increasing revenue.
Midmarket businesses are less willing to spend on “break-fix” types of managed services. They want more value. So I’m seeing higher cloud managed services adoption rates in the midmarket segment because they need those to compete, and they see the value these services can provide.
ToC: What’s the number one recommendation you make to clients that are considering moving to cloud managed services?
LS: When we look at what chief executive officers demand from IT, it’s not just about reduced costs. It’s also about making the business run faster and raising productivity. Because of that, I encourage CIOs to do a TCO analysis that includes more than the top-line expenses.
The cost of downtime is a good example. If you have a managed services partner with availability service level agreements (SLAs) and a tested backup and recovery plan, you can quantify the cost savings for lack of downtime.
When businesses can use dollar figures to quantify the workloads across the entire enterprise that are going to your managed service provider, it helps the CIO better understand the value they’re driving for the company. It also helps them speak in a language that their business colleagues understand.
ToC: If companies choose not to use cloud managed services, what’s the alternative for taking advantage of the benefits of cloud?
LS: Our 2016 study on cloud-based managed services found that 91 percent of businesses hire somebody to do something, but it’s not necessarily managed services. Instead, it might be services to help design and implement a cloud strategy.
But problems can arise when companies forget about applications after they’re launched. Costs start increasing because there’s nobody to keep things running efficiently. The people who uploaded the workload are off doing the next piece of technology and nobody’s minding the store.
To estimate your annual savings from implementing cloud managed services, try the Cost Benefits Estimator.
The post Benefits of increasing use of cloud managed services appeared first on news.
Source: Thoughts on Cloud

Scaling with Kubernetes DaemonSets

The post Scaling with Kubernetes DaemonSets appeared first on Mirantis | Pure Play Open Cloud.
We’re used to thinking about scaling from the point of view of a deployment; we want it to scale up under different conditions, so it looks for appropriate nodes, and puts pods on them. DaemonSets, on the other hand, take a different tack: any time you have a node that belongs to the set, it runs the pods you specify.  For example, you might create a DaemonSet to tell Kubernetes that any time you create a node with the label app=webserver you want it to run Nginx.  Let’s take a look at how that works.
Creating a DaemonSet
Let’s start by looking at a sample YAML file to define a DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
Here we’re creating a DaemonSet called frontend. As with a ReplicationController, pods launched by the DaemonSet are given the label specified in the spec.template.metadata.labels property — in this case, app=frontend-webserver.
The template.spec itself has two important parts: the nodeSelector and the containers.  The containers are fairly self-evident (see our discussion of ReplicationControllers if you need a refresher) but the interesting part here is the nodeSelector.
The nodeSelector tells Kubernetes which nodes are part of the set and should run the specified containers.  In other words, these pods are deployed automatically; there’s no input at all from the scheduler, so schedulability of a node isn’t taken into account.  On the other hand, Daemon Sets are a great way to deploy pods that need to be running before other objects.
Let’s go ahead and create the DaemonSet.  Create a file called ds.yaml with the definition in it and run the command:
$ kubectl create -f ds.yaml
daemonset "frontend" created
Now let’s see the DaemonSet in action.
Scaling capacity using a DaemonSet
If we check to see if the pods have been deployed, we’ll see that they haven’t:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
That’s because we don’t yet have any nodes that are part of our DaemonSet.  If we look at the nodes we do have…
$ kubectl get nodes
NAME        STATUS    AGE
10.0.10.5   Ready     75d
10.0.10.7   Ready     75d
We can go ahead and add at least one of them by adding the app=frontend-node label:
$ kubectl label node 10.0.10.5 app=frontend-node
node "10.0.10.5" labeled
Now if we get a list of pods again…
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          19s
We can see that the pod was started without us taking any additional action.  
Now we have a single webserver running.  If we wanted to scale up, we could simply add our second node to the Daemon Set:
$ kubectl label node 10.0.10.7 app=frontend-node
node "10.0.10.7" labeled
If we check the list of pods again, we can see that a new one was automatically started:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          1m
frontend-rp9bu              1/1       Running   0          35s
If we remove a node from the DaemonSet, any related pods are automatically terminated:
$ kubectl label node 10.0.10.5 --overwrite app=backend
node "10.0.10.5" labeled

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-rp9bu              1/1       Running   0          1m
Updating Daemon Sets, and improvements in Kubernetes 1.6
OK, so how do we update a running DaemonSet?  Well, as of Kubernetes 1.5, the answer is “you don’t.” Currently, it’s possible to change the template of a DaemonSet, but it won’t affect the pods that are already running.  
Starting in Kubernetes 1.6, however, you will be able to do rolling updates with Kubernetes DaemonSets. You’ll have to set the updateStrategy, as in:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
Once you’ve done that, you can make changes and they’ll propagate to the running pods. For example, you can change the image on which the containers are based:
$ kubectl set image ds/frontend webserver=httpd
If you want to make more substantive changes, you can edit or patch the Daemon Set:
kubectl edit ds/frontend
or
kubectl patch ds/frontend --patch "$(cat ds-changes.yaml)"
(Obviously you would use your own DaemonSet names and files!)
So that’s the basics of working with DaemonSets.  What else would you like to learn about them? Let us know in the comments below.
Source: Mirantis

What’s new in Kubernetes 1.6 — a focus on stability

The post What’s new in Kubernetes 1.6 — a focus on stability appeared first on Mirantis | Pure Play Open Cloud.
Kubernetes 1.6 is forecast to be released this week. Major themes include new capabilities for DaemonSets, the beta release of Kubernetes federation, new scheduling features, and new networking capabilities. You can get an in-depth look at all of the new features in the Kubernetes 1.6 release notes, but let’s get a quick overview here.
DaemonSet rolling updates
You’re probably used to dealing with Kubernetes in terms of creating a Deployment or a ReplicationController and having it manage your pods, making certain that you always have a particular number of instances spread among the nodes that are available.  DaemonSets, on the other hand, look at things from the opposite perspective.
With DaemonSets, you specify the nodes to run a particular set of containers, and Kubernetes will make certain that any nodes that satisfy those requirements will run those pods. With Kubernetes 1.6, you now have the option to update those DaemonSets with a new image or other information.  (For more information on DaemonSets, see this article, which explains how and why to use them.)
Kubernetes Federation
As Kubernetes takes hold, the likelihood of running into situations in which users have multiple large clusters to deal with increases. Federation enables you to create an infrastructure in which users can use, say, the closest cluster to them, or the one that has the most spare capacity.
Now in beta, kubefed “supports hosting federation on on-prem clusters, [and] automatically configures kube-dns in joining clusters and allows passing arguments to federation components.”
Authentication and access control improvements
Role-Based Access Control (RBAC), which makes it possible to define roles for control plane, node, and controller components, is now in the beta phase.  (It also defines default roles for these components.) There are numerous changes from the alpha version (such as a change from using * for all users to using system:authenticated or system:unauthenticated) so make sure to check out the release notes for all the details.
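As a rough illustration, a beta RBAC policy pairs a Role (what can be done) with a RoleBinding (who can do it). This is a minimal sketch; the role name, namespace, and user (jane) are hypothetical:

```yaml
# Grant read-only access to pods in the default namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]               # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# Bind the role to a hypothetical user
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                    # illustrative user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```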
Attribute-Based Access Control (ABAC) has also been tweaked, with wildcards defaulting to authenticated users. The kube-apiserver and the authentication API have also seen a number of improvements.
Scheduling changes
Now in beta is the ability to have multiple schedulers, with each controlling a different set of pods. You can also set the scheduler you want for a particular pod in the pod spec, rather than as an annotation, as in the alpha version.
Also in beta are node and pod affinity/anti-affinity. This capability enables you to intelligently schedule pods that should, or shouldn’t be, on the same piece of hardware.  For example, if you have a web application that talks to a database, you might want them on the same node.  If, on the other hand, you have a pod that needs to be highly available, you might want to spread different instances over different nodes as a safeguard against failure. You can specify the affinity field on the PodSpec.
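For instance, the high-availability case might be sketched with pod anti-affinity like this; the pod name, label, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      # Refuse to schedule this pod on a node that already runs an app=web pod
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["web"]
        topologyKey: kubernetes.io/hostname   # "same hardware" = same node
  containers:
  - name: nginx
    image: nginx
```

Swapping podAntiAffinity for podAffinity expresses the opposite preference, such as co-locating the web application with its database.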
Kubernetes 1.6 also includes the beta release of taints and tolerations, and some improvements to that functionality from the alpha version.  Taints enable you to dedicate a node to a particular kind of pod, similar to the way in which you might use flavors in OpenStack. Unlike OpenStack, however, you can tell Kubernetes to try to avoid scheduling pods that aren’t explicitly allowed (read: tolerated) on that node, but if it has no choice, it can go ahead. This functionality also enables you to specify a period of time a pod may run on this node before being “evicted.”
And speaking of being evicted, Kubernetes 1.6 now enables you to override the default five-minute period during which a pod remains bound to a node if there are problems, so you can specify that a pod either finds another node more quickly, or is more patient and waits even longer.
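Both ideas can be sketched in a pod spec fragment. The dedicated=database taint and the 1.6-era notReady taint key below are illustrative, not prescriptive:

```yaml
# First, taint the node (shell):
#   kubectl taint nodes node-1 dedicated=database:NoSchedule
# Then only pods carrying a matching toleration may land on it:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "database"
  effect: "NoSchedule"
# Eviction override: abandon a not-ready node after 60 seconds
# instead of the default five minutes
- key: "node.alpha.kubernetes.io/notReady"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 60
```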
The Container Runtime Interface is now the default
While it’s natural to assume that containers running on Kubernetes are Docker containers, that’s not always true.  Kubernetes also supports rkt containers, and in fact the goal is to enable Kubernetes to orchestrate any container runtime. Up until now, that’s been difficult, because the container runtimes were coded into the kubelet component that runs the actual containers.
Now, with Kubernetes 1.6, the beta version of the Docker Container Runtime Interface is enabled by default (you can turn it off with --enable-cri=false), making it easier to add new runtimes.  The old non-CRI architecture is deprecated in 1.6 and is scheduled for removal in Kubernetes 1.7.
Storage improvements
Kubernetes 1.6 includes the general availability release of StorageClasses, which enable you to specify a particular type of storage resource for users without exposing them to the details.  (This is also similar to flavors in OpenStack.)
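A StorageClass definition might look like the following sketch; the class name and the GCE provisioner are just one example, as each cloud ships its own provisioners:

```yaml
apiVersion: storage.k8s.io/v1        # the GA API group in 1.6
kind: StorageClass
metadata:
  name: fast                         # hypothetical class name
provisioner: kubernetes.io/gce-pd    # cloud-specific; GCE shown for illustration
parameters:
  type: pd-ssd                       # provisioner-specific detail hidden from users
```

Users then request storage by class name in a PersistentVolumeClaim, without ever seeing the pd-ssd detail.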
Also now in GA are the ability to populate environment variables from a configmap or a secret, as well as support for writing and running your own dynamic PersistentVolume provisioners.
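Populating environment variables from a ConfigMap or a Secret looks roughly like this; the ConfigMap and Secret names and keys are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["env"]                # just prints the injected environment
    env:
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:            # value sourced from a ConfigMap
          name: app-config          # hypothetical ConfigMap
          key: db.host
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:               # value sourced from a Secret
          name: app-secrets         # hypothetical Secret
          key: db.password
  restartPolicy: Never
```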
Note that StorageClasses will change the behaviors of PersistentVolumeClaim objects on existing clouds, so be sure to read the Release Notes.
Networking improvements
You now have added control over DNS; Kubernetes 1.6 enables you to set stubDomains, which define the nameservers used for specific domains (such as *.mycompany.local), and to specify what upstreamNameservers you want to use, overriding resolv.conf.
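These settings live in the kube-dns ConfigMap. A minimal sketch, with made-up domain and nameserver addresses:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # Queries for *.mycompany.local go to an internal nameserver (illustrative IP)
  stubDomains: |
    {"mycompany.local": ["10.150.0.1"]}
  # All other external queries bypass resolv.conf and use these servers
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
```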
Digging deeper, the Container Network Interface (CNI) is now integrated with the Container Runtime Interface (CRI) by default, and the standard bridge plugin has been validated with the combination.
Other changes
Kubernetes 1.6 includes a huge number of changes and improvements, some of which will only be of interest to operators, as opposed to end users, but all of which are important. Some of these changes include:

By default, etcd v3 is enabled, enabling clusters up to 5000 nodes
The ability to know via the API whether a Deployment is blocked
Easier logging access
Improvements to the Horizontal Pod Autoscaler
The ability to add third party resources and extension API servers with the edit command
New commands for creating roles, as well as determining whether you can perform an action
New fields added to describe output
Improvements to kubeadm

Definitely take a look at the full release notes to get the details.
Source: Mirantis

How’s your cloud confidence?

I learned a lot at IBM InterConnect this year, but most importantly, I learned how cloud and having super powers are surprisingly similar concepts.
I had a great time speaking with clients at the Cloud Confidence Center on the concourse. The central feature of the space was our cloud story string board, which was a real draw and a fantastic conversation starter to boot.
Cloud adoption leaders asked attendees to talk about their roles, their experiences with cloud so far, what they would like to do with cloud technology, what areas of cloud they would like to learn more about, and what super power they either have or would like to have.
I noticed interesting patterns regarding the super powers the people wanted. Agility came to the fore for those who were interested in how cloud could help them speed up internal processes. Invisibility was key for those who saw how cloud was helping their IT systems stay up and running. Those interested in finding out more about Watson were overall looking for super intelligence as their super power.
Using the board, we were able to take conversations to the next level. The area featured each component of IBM Cloud Technical Engagement, so as well as cloud adoption leaders, we had people available from the Bluemix Garage, Cloud Professional Services, Solution Architecture and our support teams ready to help clients expand their stories, gain deeper knowledge and find paths forward.
We tracked hundreds of stories and I noticed a few trends from the people that I spoke to. To start, more people have begun their journey onto the cloud, so fewer are looking for help with the first steps. Many are now looking for help with fully adopting the cloud in their organizations.
While there was still a lot of focus on moving existing workloads to the cloud, there were also many people who were looking to create their first “born-on-the-cloud” applications as well as use the cloud to improve business processes and extend their on-premises infrastructure with hybrid cloud.
The most popular points for further learning centered around Watson, which is perhaps unsurprising, as organizations are now starting to
realize the power of cognitive within their applications. They’ve been seeing the ease with which Watson APIs can be implemented into Bluemix applications.
Blockchain and Internet of Things (IoT) were big topics of conversation, along with DevOps, process transformation and Bluemix Infrastructure. I had many conversations about containers and microservices, too, with customers keen to understand how they can take advantage of technologies such as Docker, Kubernetes and OpenWhisk within their organization.
The best thing about the board was that it was a real talking point and a focus for visitors, who were themselves taking a minute to look at the patterns that were emerging. I think it may also have been the most photographed exhibit.
Missed an InterConnect keynote or want to watch again? Catch up on IBMGO.
The post How’s your cloud confidence? appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

The journey of a new OpenStack service in RDO

When new contributors join RDO, they ask for recommendations about how to add new services and help RDO users adopt them. This post is not an official policy document nor a detailed description of how to carry out each activity, but it provides some high-level recommendations to newcomers based on what I have learned and observed over the last year working on RDO.

Note that you are not required to follow all these steps, and you may even have your own ideas about them. If you want to discuss them, let us know your thoughts; we are always open to improvements.

1. Adding the package to RDO

The first step is to add the package(s) to the RDO repositories as shown in the RDO documentation. This typically includes the main service package, the client library and perhaps a package with a plugin for Horizon.

In some cases new packages require general-purpose libraries. If these are not in the CentOS base channels, RDO imports them from Fedora packages into a dependencies repository. If you need a new dependency which already exists in Fedora, just let us know and we'll import it into the repo. If it doesn't exist, you'll have to add the new package to Fedora following the existing process.
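The registration itself is a small metadata entry in the rdoinfo database. A hypothetical sketch of what such an entry looks like (the project name and upstream URL are placeholders; check the RDO documentation for the exact schema and current fields):

```yaml
# Illustrative rdoinfo entry for a new service
- project: mynewservice
  conf: rpmfactory-core
  upstream: git://git.openstack.org/openstack/mynewservice
```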

2. Create a puppet module

Although there are multiple deployment tools for OpenStack based on several frameworks, Puppet is widely used by different tools, or even directly by operators, so we recommend creating a Puppet module to deploy your new service, following the Puppet OpenStack Guide. Once the Puppet module is ready, remember to follow the RDO new package process to get it packaged in the repos.
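At its core, such a module installs the RDO package and manages the service. A heavily simplified sketch, with illustrative names (real modules follow the conventions in the Puppet OpenStack Guide, including separate classes for config, database and keystone resources):

```puppet
# Minimal sketch of a deployment class; names are placeholders
class mynewservice::api (
  $package_ensure = 'present',
  $enabled        = true,
) {
  package { 'openstack-mynewservice-api':
    ensure => $package_ensure,
  }

  service { 'openstack-mynewservice-api':
    ensure  => $enabled ? { true => 'running', default => 'stopped' },
    enable  => $enabled,
    require => Package['openstack-mynewservice-api'],
  }
}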

3. Make sure the new service is tested in RDO-CI

As explained in a previous post, we run several jobs in RDO CI to validate the content of our repos. Most of the time, the first way to get a new service tested is to add it to one of the puppet-openstack-integration scenarios, which is also the recommended way to get the Puppet module tested in the upstream gates. An example of how to add a new service to p-o-i is in this review.
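Wiring a service into a p-o-i scenario is usually a matter of a few lines in the chosen scenario manifest, roughly like this (class name is illustrative; the linked review shows a complete real example):

```puppet
# In the scenario manifest, pull in the new service's Puppet class
include ::mynewservice::api
```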

4. Adding deployment support in Packstack

If you want to make it easier for RDO users to evaluate a new service, adding it to Packstack is a good idea. Packstack is a Puppet-based deployment tool used by RDO users to deploy small proof-of-concept (PoC) environments to evaluate new services or configurations before deploying them in their production clouds. If you are interested, take a look at these two reviews, which added support for Panko and Magnum in the Ocata cycle.
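Once Packstack support is merged, users toggle the service through the answer file. A sketch using Magnum as the example (a new service would add its own analogous CONFIG_*_INSTALL option):

```shell
# Generate an answer file, enable the service, then deploy
packstack --gen-answer-file=answers.txt
sed -i 's/CONFIG_MAGNUM_INSTALL=n/CONFIG_MAGNUM_INSTALL=y/' answers.txt
packstack --answer-file=answers.txt
```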

5. Add it to TripleO

TripleO is a powerful OpenStack management tool able to provision and manage cloud environments with production-ready features such as high availability and extended security. Adding support for new services in TripleO will help users adopt them in their cloud deployments. The TripleO composable roles tutorial can guide you through how to do it.
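In TripleO's composable service model, each service is described by a small Heat template that exposes role_data for the deployment to consume. A heavily simplified sketch with placeholder names (real templates also define parameters, config_settings and upgrade tasks; see the tutorial for the full structure):

```yaml
# Illustrative composable service template for a new API service
heat_template_version: ocata
description: Sketch of a composable service template
outputs:
  role_data:
    description: Role data for the new API service
    value:
      service_name: mynewservice_api
      step_config: |
        include ::tripleo::profile::base::mynewservice::api
```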

6. Build containers for new services

Kolla is the upstream project providing container images and deployment tools to operate OpenStack clouds using container technologies. Kolla supports building images for the CentOS distro using the binary method, which uses packages from RDO. Operators using containers will have an easier time if you add containers for new services.
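With an RDO package available, building a CentOS binary image for the service is typically a single command (the service name is a placeholder; Kolla must already have a Dockerfile definition for it):

```shell
# Build a CentOS-based image from RDO binary packages
kolla-build --base centos --type binary mynewservice
```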

Other recommendations

Follow OpenStack governance policies

RDO's methodology and tooling are conceived according to the OpenStack upstream release model, so following the policies on release management and requirements is a big help in maintaining packages in RDO. It is especially important to create branches and version tags as defined by the releases team.

Advertise your work to the RDO community

Making potential users aware of the availability of new services or other improvements is a good practice. RDO provides several ways to do this, such as sending mail to our mailing lists, writing a post on the blog, adding references in our documentation, creating screencast demos, and more. You can also join the RDO weekly meeting to let us know about your work.

Join RDO Test Days

RDO organizes test days at several milestones during each OpenStack release cycle. Although we do continuous integration testing in RDO, it is good to verify that new services can be deployed by following the instructions in the documentation. You can propose new services or configurations in the test matrix and add a link to the documented instructions on how to do it.

Upstream documentation

RDO relies on the upstream OpenStack Installation Guide for deployment instructions, so keeping it up to date is recommended.
Quelle: RDO