Avoid these mistakes when migrating applications to the cloud

It must be nice to be a startup, with no legacy infrastructure weighing you down. Write a great app, provision yourself some space in the cloud, and your business is global, just like that.
For the rest of the world, however, life is not so straightforward. Any business of consequence, or older than 10 years, has plenty of baggage: proprietary technology, complex IT architectures, and aging applications. And there is one rather intimidating process standing between these companies and the benefits of the cloud: migration.
“Approximately 65 to 75 percent of the applications that exist in enterprises right now, today, will benefit from moving to the cloud … depending on the age of the company, how well the applications are architected, those sorts of things,” says David Linthicum, senior vice president of Cloud Technology Partners. Linthicum recommends identifying and categorizing enterprise applications, then determining what the cost and effort would be to move them to the cloud. Often a proof-of-concept trial is helpful, from which you can build a business case for migrating applications you’ve identified as good candidates, he says. “It’s something every enterprise should go through these days. Because you may be missing the boat, you may be leaving millions and millions of dollars on the table that can be allocated to more innovative things in your enterprise.”
As you consider moving applications to the cloud, it’s important to learn from others’ migration mistakes. And there are many of them. Here is a list of five common mistakes, both mental and technological, to avoid when embarking on your migration to the cloud:
1. Incompatibility
The transition to the cloud will go much more smoothly if the cloud service provider offers the same operating systems and middleware environments (down to the version number) that you are using in your own data center. The benefit of this compatibility is that applications can be moved to the cloud without requiring significant rewrites, saving time, money, and frustration. Some enterprises might want to make performance enhancements to take maximum advantage of the cloud service provider's hardware capabilities, but the fewer application changes that must be made, the better.
2. Not enough/too much security
As with almost everything related to IT these days, security should be top of mind. Moving any application out of the relative safety of the data center to a third-party environment demands security consideration. The amount of security you apply to a cloud-based application should depend on the sensitivity of the data it handles. For applications that deal with highly sensitive information, consider access controls, authentication, and encryption. And don't forget about compliance requirements, which apply regardless of where the application is running. However, applications that deal with more mundane data won't need as much protection; in fact, adding too many security layers may slow down processes and frustrate users.
3. Fools rush in
Deciding to move most of your application portfolio to the cloud may seem like a proactive stance that benefits the company, but you might regret it. A better approach is to start slowly by moving one application (preferably a non-critical one that doesn't deal with highly sensitive data) to the cloud. Pick an application that has a high chance of migrating smoothly (for example, a recently developed mobile app, as opposed to a 20-year-old back-end application) and that will make a positive business impact: save money, boost productivity, reach new customers, and so on. Once the ROI is achieved, promote these benefits within the organization. Repeat.
4. All or nothing
Despite the many benefits that migrating applications to the cloud can offer, it's not necessarily the right choice for every application. For example, complex legacy applications might take more work and investment to move to the cloud than the move is worth. However, for each new application that's developed or deployed, consider the cloud and weigh the pros and cons against developing or deploying it in house. Chances are the cloud will emerge as the right choice.
5. Subscribe and forget
One of the top benefits of signing on with a cloud service provider is being able to wash your hands of the management and maintenance of migrated applications, but that doesn't mean you are without responsibility. This is particularly true if you've decided to take a hybrid cloud approach, for example where an application runs in the cloud but makes calls to a database located in your data center. Someone needs to be on top of that integration and communication, so make sure you have experienced staff who can not only help develop and implement your cloud strategy, but also keep it running smoothly.
The post Avoid these mistakes when migrating applications to the cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

What’s new in Kubernetes 1.7, and should you care?

Kubernetes 1.7 is out, focusing on production features such as security, extensibility, and stateful applications. Do you need it?  Well, let’s look at what it does for you.
Security
Especially in this age of cyberattacks, security is on everyone's mind, so it's no surprise that the Kubernetes community has been working on solutions in this area. In Kubernetes 1.7, these features mostly sit "below the surface", meaning that users won't interact with them directly, though not entirely.
The Network Policy API, which had been introduced in previous versions, is now classified as "stable". This API enables users to create rules that determine which pods can communicate with each other. By default, pods are "non-isolated", but you can create "isolated" pods by creating a NetworkPolicy resource, ensuring that unauthorized services can't communicate with your pods.
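To make that concrete, here is a minimal sketch of a NetworkPolicy; the names, labels, and port are hypothetical examples, and a network plugin that enforces policy (such as Calico) is assumed:

```yaml
# Minimal sketch: isolate database pods so only API pods can reach
# them on the database port. Labels, names, and port are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
spec:
  # Apply this policy to pods labeled app=db; they become "isolated".
  podSelector:
    matchLabels:
      app: db
  ingress:
  # Only pods labeled role=api may connect, and only on TCP 5432;
  # all other ingress to the selected pods is dropped.
  - from:
    - podSelector:
        matchLabels:
          role: api
    ports:
    - protocol: TCP
      port: 5432
```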
For data that remains "at rest", such as Secrets and other resources stored in etcd, you now have the ability to protect that information from prying eyes (and programs) with encryption. (Encryption for at-rest data is considered "alpha", so you'll probably want to wait before using it on production systems.)
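As a sketch of what that alpha feature looked like in 1.7 (the file format and flag were experimental and subject to change; the key shown is a placeholder):

```yaml
# Illustrative encryption config, passed to the API server in 1.7 via
# the alpha --experimental-encryption-provider-config flag.
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      # Encrypt new Secret writes with AES-CBC; the value below is a
      # placeholder for a base64-encoded 32-byte key.
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>
      # Fall back to reading data written before encryption was enabled.
      - identity: {}
```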
Moving a little deeper into the "guts" of Kubernetes, you can now improve security by rotating client and server certificates for kubelet TLS bootstrapping, and audit logs now support event filtering and webhooks, providing greater customization and richer data to keep track of what's going on in your system.
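As an illustration of that richer audit control (the advanced audit API was alpha in 1.7, so treat this audit.k8s.io/v1alpha1 policy as a sketch):

```yaml
# Illustrative audit policy (alpha in 1.7), loaded by the API server
# with --audit-policy-file; rules are matched in order.
apiVersion: audit.k8s.io/v1alpha1
kind: Policy
rules:
  # Record secret access at the Metadata level only, so secret
  # payloads never end up in the audit log.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Don't log read-only requests for everything else.
  - level: None
    verbs: ["get", "list", "watch"]
```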
Finally, Kubernetes 1.7 addresses a security problem you might not have known you have: the kubelet on one node accessing secrets, pods, or other resources on other nodes. Kubernetes now includes the node authorizer and admission control plugin, which ensure that kubelet instances only access the appropriate objects.
Application development and stateful workloads
If you're more of a user than an operator, you'll be interested in features that are aimed at making Kubernetes a more hospitable place for real-world applications.
For many people, "real world" is code for "stateful" applications such as databases, etcd, ZooKeeper, and other "pets". Even though containers were originally targeted at "cattle", or stateless applications, StatefulSets (originally called "PetSets") were introduced a few months ago. At the time, they enabled users to create stateful applications on containers, but they didn't have the same robust update capabilities as their stateless counterparts, so updating these applications was more tedious and involved potential downtime. Kubernetes 1.7 fixes that problem by providing StatefulSet Updates. In addition, if your application's pods can be fired up in any order, you can now burst them for scaling using the Pod Management Policy, improving performance.
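A sketch of what those two additions look like on a StatefulSet spec (apps/v1beta1 at the time; the names and image are hypothetical):

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  # New in 1.7: automated rolling updates for StatefulSets.
  updateStrategy:
    type: RollingUpdate
  # New in 1.7: launch and scale pods in parallel rather than one at
  # a time, for applications that don't depend on startup order.
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: example/db:2.0  # hypothetical image
```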
Also on the subject of updates, it was already possible to update DaemonSets, which fire up when a node first starts, but if there was a problem, you couldn’t roll the DaemonSet back again. Now you can.
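A sketch, with hypothetical names: a DaemonSet opted in to rolling updates, which is what makes a rollback meaningful:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  # RollingUpdate replaces pods automatically when the template
  # changes (the 1.7 default was still OnDelete).
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: example/log-agent:1.4  # hypothetical image
```

If an update misbehaves, something like kubectl rollout undo daemonset/log-agent should return it to the previous revision.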
Getting back to persistence, however, there are also some improvements in storage. Because containers are designed to be transitory, it's important to understand how your data is being stored. One option available in Kubernetes 1.7 is the new StorageOS Volume plugin, which provides highly available, cluster-wide storage volumes, making it possible to deploy databases and other applications that require persistent data.
Also, while storage is normally provided through the use of volumes, it's not always easy to provision storage volumes, particularly in dev and test situations. That's one reason developers have been asking for the ability to create local storage volumes; because they can be accessed as standard Persistent Volumes using StorageClasses, changing to a different type of volume doesn't require recoding later.
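The decoupling works roughly like this (a sketch; the claim and class names are hypothetical): the application only ever references a claim, and the class behind it can change without touching application code.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  # Swap "fast-local" for a StorageOS or cloud-backed class later and
  # the application consuming this claim needs no changes.
  storageClassName: fast-local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```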
Extensibility
Unless you’re a power-user, you’re unlikely to need the extensibility features, but they’re pretty exciting, so it’s good to know they’re there in case they’ll solve a problem you run up against.
At the lowest level, there are enhancements to the Container Runtime Interface (CRI), which is what enables Kubernetes to run different kinds of containers.  (It’s not ALL about Docker, after all.) You can even run non-containers with projects such as Virtlet!
According to the Kubernetes 1.7 announcement from Mirantis’ Ihor Dvoretskyi and Google’s Aparna Sinha, “Container Runtime Interface (CRI) has been enhanced with New RPC calls to retrieve container metrics from the runtime. Validation tests for the CRI have been published and Alpha integration with containerd, which supports basic pod lifecycle and image management is now available. Read our previous in-depth post introducing CRI.”
API aggregation
The most powerful (and potentially interesting) new feature in this release, however, is called API aggregation. According to the documentation, “The aggregation layer enables installing additional Kubernetes-style APIs in your cluster. These can either be pre-built, existing 3rd party solutions, such as service-catalog, or user-created APIs like apiserver-builder, which can get you started.”
What this means is that you (or your Kubernetes distribution vendor) can add new functionality that is accessed in the same way as the core Kubernetes features.
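Registering such an add-on API with the aggregation layer looks roughly like this (a sketch; the group, service, and namespace are hypothetical, and TLS verification is skipped only for brevity):

```yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1alpha1.catalog.example.com
spec:
  group: catalog.example.com
  version: v1alpha1
  # The in-cluster service that actually serves this API group.
  service:
    name: catalog-apiserver
    namespace: catalog
  # Ordering hints used when multiple groups/versions are available.
  groupPriorityMinimum: 1000
  versionPriority: 15
  # A production setup would supply a caBundle instead.
  insecureSkipTLSVerify: true
```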
In addition, Kubernetes 1.7 includes an alpha version of external admission controllers, which make it possible to add custom logic that the API server executes to modify objects when they're created. It also includes Policy-based Federated Resource Placement, which lets you create placement policies for federated clusters. This way, if you have specific regulations you need to follow, or if you want to segregate resources based on pricing, performance, or other factors, you can use these policies to accomplish that.
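For the admission piece, registering a hook looked roughly like the sketch below in the 1.7 alpha (the admissionregistration.k8s.io/v1alpha1 API has since evolved; the names and service are hypothetical):

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ExternalAdmissionHookConfiguration
metadata:
  name: policy-hooks
externalAdmissionHooks:
  - name: pods.policy.example.com
    # Call the webhook whenever a pod is created.
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    # Don't block API requests if the webhook is unreachable.
    failurePolicy: Ignore
    clientConfig:
      service:
        name: policy-webhook
        namespace: default
      caBundle: <BASE64-ENCODED-CA-CERT>
```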
All of this makes it possible to extend Kubernetes beyond its core capabilities; will vendors take advantage of these features to create differentiated versions of the technology?  And if they do, will it create its own form of vendor lock-in?
Only time will tell.
The post What’s new in Kubernetes 1.7, and should you care? appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

How to unbox your dynamic operations

Today's smartphones frequently offer larger screens with smaller bezels; their manufacturers promise an uninterrupted view of the world. Conceptually, this "unboxing" is exactly what's needed to manage dynamic infrastructure and services.
Looking back for a moment, service delivery used to happen at a slower pace. IT teams would spend a lot of time designing services, getting the hardware to run them and finally bringing them into a long-running steady state with a bit of assurance tagged on. Combined with how businesses structured themselves, this approach resulted in pronounced silos of IT, network, storage and app management. Staffing, procedures and tooling were managed within those silos, often only integrated by phone calls between teams. Still, at least we knew what had to be managed within those silos and their discontinuous views of the world.
Then, along came the great enablers of modern services: virtualization, containerization, cloud services and line-of-business autonomy. Now the teams in those silos face some problems:

Programmatic control, automation and ease of choice mean that infrastructure, services or tooling can be very unpredictable due to things like autoscaling. How can operations teams be confident that they're working from the right view of the world if it can change in a heartbeat?
The ability to consume traditionally specialized workloads, such as network or storage, in new ways means that those siloed teams may be insufficient to fully manage such services given their additional IT dependencies. One network service provider I spoke to put it this way: “To our network SMEs, virtual routers look no different to their physical counterparts, but they can behave very differently when under load. Which means getting the IT guys involved as the network team doesn’t understand IT virtualization.” The net result is that organizations are having to restructure their operations teams.
Service delivery schedules are compressed because the time from conceiving a service to having to manage it is greatly reduced. Assurance must be integral to the design and implementation of a service; a lack of it cannot be allowed to inhibit rapid service delivery. Inline management Virtual Network Functions (VNFs) are a good example of how assurance can be woven into the fabric of a service.

Does your environment look like this?
Operations teams now have additional issues to contend with: cross-pollination of infrastructure and workload, increased sets of dependencies, unpredictable behavior, and dilution of tribal knowledge. Couple all of this with pressure from the business to restructure their teams and improve collaboration, and there's a great need for trustworthy context and surety across silos, even when developers or other change agents in the environment are not keeping operations in the loop.
In my next blog post, I’ll look at what an unboxed and uninterrupted view needs.
If you’re interested in learning more about IBM Netcool Agile Service Manager and how it can deliver an unboxed view, check out this blog post and this analyst report.
The post How to unbox your dynamic operations appeared first on Cloud computing news.
Source: Thoughts on Cloud

Service Discovery on OpenShift Using Multicast

Take advantage of the new multicast features introduced in OpenShift Origin 1.5 and OpenShift Container Platform 3.5, and learn how easy it is to get multicast up and running, with guidance on annotating your project and enabling the ovs-multitenant SDN plugin.
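For reference, the moving parts are small; here is a sketch with a hypothetical project name, showing the annotation that enables multicast on a project's NetNamespace:

```yaml
# Sketch: with the ovs-multitenant SDN plugin active, multicast is
# enabled per project via this annotation on its NetNamespace,
# typically applied with:
#   oc annotate netnamespace myproject \
#     netnamespace.network.openshift.io/multicast-enabled=true
apiVersion: v1
kind: NetNamespace
metadata:
  name: myproject  # hypothetical project
  annotations:
    netnamespace.network.openshift.io/multicast-enabled: "true"
netname: myproject
```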
Source: OpenShift

Announcing IBM Cloud private for Microservice Builder

Last week IBM announced the launch of Microservice Builder, a powerful new technology stack squarely aimed at simplifying development of microservices. Microservice Builder enables developers to easily learn about the intricacies of microservice app development and quickly compose and build innovative services. Then developers can rapidly deploy microservices across environments by using a pre-integrated DevOps pipeline. And there’s step-by-step guidance for developers along the way.
Today, IBM is releasing IBM Cloud private, a further enhancement of the enterprise developer experience led by Microservice Builder.
Users will be able to deploy Microservice Builder on the IBM Cloud private platform to develop and deploy cloud-native applications in a secure, private cloud environment. Based on Docker containers and Kubernetes, IBM Cloud private enables both integrated private infrastructure as a service (IaaS) and platform as a service (PaaS) models. Our cloud platform allows clients to build, orchestrate and manage microservices-based applications. This is ideal for companies facing stringent regional, governmental or industry-based regulations that require strict control over applications and data.
IBM Cloud private boosts Microservice Builder in three key ways:

Developing, deploying and running microservice-based applications
Creating a true hybrid cloud by securely integrating and using data and services from both public and on-premises environments
Refactoring and modernizing legacy enterprise applications with microservices

While Microservice Builder maintains flexibility in deployment options, IBM Cloud private provides improved access to production application services, including data analytics, messaging and caching, all essential for enterprise developers working to iterate quickly based on business needs.
Our WebSphere Application Server team conceived and drove the Microservice Builder initiative. The team leveraged its experience and insight into what enterprise developers need to eliminate many of the challenges faced when adopting a microservice architecture. It begins with a speedy getting-started experience and a pre-integrated DevOps pipeline that supports continuous delivery of applications.
And now with IBM Cloud private as a client-controlled and secure deployment platform, Microservice Builder can truly deliver a complete user experience geared for the enterprise developer. The platform supports rapid hybrid and cloud-native application development and delivery with greater agility and scalability. And as a result, Microservice Builder promotes closer collaboration between lines of business, development and IT operations.
Ready to try it out? Get ahead of the microservices revolution today by visiting our developerWorks page.
Learn more about the many benefits Microservice Builder can provide your organization by visiting the marketplace page.
The post Announcing IBM Cloud private for Microservice Builder appeared first on Cloud computing news.
Source: Thoughts on Cloud

Network slicing and 5G and wireless, oh my!

If you’re not in the telecom business, you probably haven’t given much thought to the upcoming 5G standard, except perhaps to wonder when your phone will have faster data. But the time is coming when you may find yourself immersed in it — not just because it’s on your phone, but because it’s everywhere, and it affects every industry you deal with on a daily basis.
Let’s set expectations up front, however: as of this writing, there is no “5G Standard”. There’s lots of work going on, and there have been a few trials, but there isn’t anything definitively settled yet.
That said, there are a few things that you should know.
5G is going to be much faster than anything we have now, with much less latency.
Current cellular speeds hover around 4-12 Mbps, with peak download speeds of 50 Mbps if you're lucky. According to the Next Generation Mobile Networks Alliance, 5G should be able to achieve 100 Mbps in metropolitan areas. As for latency, the European Commission's Horizon 2020 program suggests that in order to be successful, 5G should target latency of 5 ms, significantly faster than the average 120 ms seen in a study of 4G carriers.
Considering that you’ll be able to download a movie in about 4 seconds, you might even find yourself wanting to use your 5G connection rather than your home wifi.
5G is going to be more complicated than what we have now, with many more pieces.
Whereas current cellular technologies rely on the occasional cell tower to provide signal, that's not going to be practical for 5G, for a number of reasons. First off, the spectrum that's been allocated for 5G has a much shorter range than current bands, so instead of one big tower every few miles, 5G will involve many, many smaller routers in various places. For example, a business might have several 5G routers on its premises, enabling nearby employees to transmit data to each other at as much as 1 GB/sec.
5G will also have to accommodate as many as 100 devices per square meter, without increasing latency, in order to be practical for serving the exploding Internet of Things. Because latency is partly a function of processing power, it will be necessary to inject additional processing power into the network.
5G is going to be more like the physical networks we have now, in that it will be more programmable.
The last few years have seen an explosion in networking power due to Software Defined Networking (SDN), and more recently, Network Functions Virtualization (NFV). For the most part, however, these capabilities have been limited to physical networks — as in, non-wireless based.
In 5G, we’ll have the opportunity to change that. Here at Mirantis, we’ve joined the 5G Transformer project, which is working on bringing SDN and NFV to the 5G space, making it possible to create programmable virtual wireless networks on top of physical wireless networks, just as we’ve been creating programmable virtual networks on top of physical networks in the wired space.
That’s where network slicing comes in.
What is network slicing?
In the OpenStack world, we’re used to partitioning a single network into multiple virtual networks, using them to isolate traffic from each other in order to provide multiple users and clients with their own network. We’re also used to creating different levels of service for different users, such as using different flavors for instances or volumes.
Network slicing enables us to do both. With network slicing, we can create different virtual networks that provide different levels of performance and different SLAs. For example, a hospital’s personnel communications might have different technical requirements than a car company trying to run autonomous vehicles.
What is 5G Transformer?
The 5G Transformer project aims to create the technology necessary for making network slicing in 5G not just feasible, but standard. Its mission is to make it possible for various verticals to define standard “flavors” of network slices, called “customized Mobile Transport and Computing Platform (MTP) slices”. Companies should then be able to request these slices in a matter of minutes.
The project is also working on a Service Orchestrator that will handle federating and coordinating all of the resources needed to make these end-to-end connections work.
5G Transformer is focusing on three specific vertical industry use cases:

Automotive, including Autonomous Cruise Control (ACC) enforcement, Collaborative Advanced Driver Assistance Systems (ADAS) and Remote Vehicle Interaction (RVI)
Health care, including municipal emergency communication
Media, with a specific focus on applications for stadiums. (Several telecoms are poised to roll out 5G demos for the 2018 and 2020 Olympic games.)

When will we see 5G?
Mirantis works with a number of different telcos, and we have our hands deep into NFV, so we’ve had our eye on 5G for some time. That’s one reason we joined the 5G Transformer project. That said, it does tend to take about 10 years between “generations” of mobile data, which puts us on track for a 5G debut in 2022, but with demos expected to be rolling out for the next two Olympic Games, we may not have to wait that long.
Regardless, work has already begun, and it’s likely that we’ll be seeing the fruits of those labors sooner rather than later.
The post Network slicing and 5G and wireless, oh my! appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis