RDO Newton Released

The community is pleased to announce the general availability of the RDO build for OpenStack Newton for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Newton is the 14th release from the OpenStack project, which is the work of more than 2,700 contributors from around the world.

The RDO community project curates, packages, builds, tests, and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premises, public, or hybrid clouds. At latest count, RDO contains 1157 packages.

All work on RDO, and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

Getting Started

There are three ways to get started with RDO.

To spin up a proof-of-concept cloud quickly and on limited hardware, try the RDO QuickStart. You can run RDO on a single node to get a feel for how it works.

For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.

Finally, if you want to try out OpenStack but don’t have the time or hardware to run it yourself, visit TryStack, where you can use a free public OpenStack instance, running RDO packages, to experiment with the OpenStack management interface and API, launch instances, configure networks, and generally familiarize yourself with OpenStack.

Getting Help

The RDO Project participates in a Q&A service at ask.openstack.org. For more developer-oriented content, we recommend joining the rdo-list mailing list; remember to post a brief introduction about yourself and your RDO story. You can also find extensive documentation on the RDO docs site.

The #rdo channel on Freenode IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS mailing lists and in the CentOS IRC channels (#centos and #centos-devel on irc.freenode.net); however, the RDO venues have a more focused audience.

Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO community pages and the CentOS Cloud SIG page. See also the RDO packaging documentation.

Join us in #rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. If you prefer Facebook, we’re there too, and also on Google+.

And, if you’re going to be in Barcelona for the OpenStack Summit two weeks from now, join us on Tuesday evening at the Barcelona Princess, 5pm – 8pm, for an evening with the RDO and Ceph communities. If you can’t make it in person, we’ll be streaming it on YouTube.
Quelle: RDO

Announcing Internet Protocol Version 6 (IPv6) support for Amazon CloudFront, AWS WAF, and Amazon S3 Transfer Acceleration

Internet Protocol Version 6 (IPv6) is a new version of the Internet Protocol that uses a larger address space than its predecessor, IPv4. With IPv6 support, you will be able to meet the requirements for IPv6 adoption set by governments, remove the need for IPv6-to-IPv4 translation software or systems, and benefit from IPv6 extensibility, simplicity in network management, and additional built-in support for security.
Quelle: aws.amazon.com

Set expiration date for VMs in Azure DevTest Labs

In scenarios such as training, demos, and trials, you may want to create virtual machines and delete them automatically after a fixed duration so that you don’t incur unnecessary costs. We recently announced a feature that lets you do just that: set an expiration date for a lab VM.

This feature is currently available only through our APIs, which you can use via an Azure Resource Manager (ARM) template, the Azure PowerShell SDK, or the Azure CLI.

You can create a lab VM with an expiration date using an ARM template by specifying the expirationDate property for the VM. You can check out a sample Resource Manager template in our public GitHub repository. You can also modify any of the existing sample Resource Manager templates for VM creation (names starting with 101-dtl-create-vm) by adding the expirationDate property.
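As a rough illustration, a DevTest Labs VM resource with an expiration date might look like the fragment below. Only the expirationDate property is the point here; the other property and parameter names are illustrative of a typical lab VM resource, not copied from the sample repository:

```json
{
  "apiVersion": "2016-05-15",
  "type": "Microsoft.DevTestLab/labs/virtualmachines",
  "name": "[concat(parameters('labName'), '/', parameters('vmName'))]",
  "location": "[resourceGroup().location]",
  "properties": {
    "size": "Standard_DS1_v2",
    "userName": "[parameters('userName')]",
    "password": "[parameters('password')]",
    "expirationDate": "2016-12-01T17:00:00Z"
  }
}
```

The expiration date is given in UTC (ISO 8601 format); once it passes, the lab deletes the VM automatically.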

For more details on this feature and what’s coming next, please check out the post on our team blog.

Please try this feature and let us know how we can make it better by sharing your ideas and suggestions at the DevTest Labs feedback forum. Note that this feature will soon be available in the Azure portal as well.

If you run into any problems with this feature or have any questions, we are always ready to help you at our MSDN forum.
Quelle: Azure

Amazon Cognito introduces administrator creation of users in Your User Pools

With this launch, administrators and developers can create users in an Amazon Cognito user pool. A user pool is a fully managed user directory that makes it easy to add sign-up and sign-in to your mobile and web apps. Use the new AdminCreateUser API to set up accounts for new users and send them a customized invitation with their user name and a temporary password. You can also use this feature to control membership in a user pool by only allowing administrators to create users or also allowing users to sign themselves up.
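As a sketch of what such a call looks like from the AWS SDK for Python (boto3), the helper below assembles the request parameters for AdminCreateUser; the pool ID, user name, and password values are placeholders:

```python
def build_admin_create_user_params(user_pool_id, username, email, temp_password):
    """Assemble parameters for Cognito's AdminCreateUser API call.

    The new user receives an email invitation containing a temporary
    password that must be changed on first sign-in.
    """
    return {
        "UserPoolId": user_pool_id,
        "Username": username,
        "UserAttributes": [{"Name": "email", "Value": email}],
        "TemporaryPassword": temp_password,
        "DesiredDeliveryMediums": ["EMAIL"],  # deliver the invitation by email
    }

# With valid AWS credentials, the actual call would be:
# import boto3
# client = boto3.client("cognito-idp")
# client.admin_create_user(
#     **build_admin_create_user_params(
#         "us-east-1_EXAMPLE", "jdoe", "jdoe@example.com", "Temp#Pass1"))
```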
Quelle: aws.amazon.com

What’s brewing in Visual Studio Team Services: October 2016 Digest

This post series provides the latest updates and news for Visual Studio Team Services and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure. With the rapidly expanding list of features in Team Services, teams can start to leverage it more efficiently for all areas of their Azure workflow, for apps written in any language and deployed to any OS.

Git best practice with Team Services: Branch Policies

How can you ensure you are finding bugs before they’re introduced into your codebase while still ensuring you have the right people reviewing? Branch policies can go a long way to enhancing your Pull Requests workflow.

Becoming more productive with Git: Tower and Team Services

Working with Git in Visual Studio Team Services and Team Foundation Server just became even easier: the popular Git desktop client Tower now comes with dedicated integrations for these services.

One-click Import of Git repositories into Team Services

Teams can now import a Git repository from GitHub, BitBucket, GitLab, or other locations. You can import into either a new or an existing empty repository.

Enable continuous deployment to App Stores with Team Services

Whether you build apps for iOS, Android, or Windows, Team Services has app store extensions that make it easy to publish your app and set up continuous deployment.

New features released in September 2016

Two rounds of Team Services updates in September empower your team to get stuff done faster so you can enjoy those pumpkin spice lattes and the crisp autumn air. Custom work item types, more static analysis options in builds, and a new feedback option in the Exploratory Testing extension are just a few of the many delightful new updates.

Build and Release pricing update

Release Management is coming out of trial mode in Team Foundation Server “15”. Learn how teams will be billed for releases in Team Services and in TFS after the TFS “15” release.

New build queue tab

A redesign of the Queued builds experience in the Build hub brings richer details of your queued/running builds in a more intuitive way.

Changes to the way you log into Team Services

The new screens simplify login for users in organizations that use Azure Active Directory, bringing the experience more in line with the way you log in to Azure, Office 365, etc.

Quelle: Azure

Using the CloudForms Topology View

When working with complex provider environments with many objects, the topology widget in Red Hat CloudForms can be extremely useful to quickly view and categorize information. The topology view provides the ability to view a container provider’s objects plus their details, such as the properties, status, and relationships to other objects on the provider. The topology view is also quite useful for showing cross-links between objects — all of which can be very difficult to visualize when only viewing an object’s summary page.

First introduced in CloudForms 4.0, the topology view was previously available only for containers providers. As of CloudForms 4.1, the topology view can also be used for network providers for viewing objects such as cloud subnets, floating IPs, and network routers:

In addition to topology view for network providers, CloudForms 4.1 also adds a search bar so that objects can be easily searched by name.
The topology view for containers is accessed through the Compute menu, by navigating from Compute > Containers > Topology. The network providers topology is accessed from Networks > Topology. In addition, you can access the topology view from the provider summary screen by clicking the icon in the Topology section.

Simplifying details with the Topology View
First of all, the topology view allows you to simplify information.
In the example below, the OpenShift Enterprise environment comprises multiple objects, which can make it difficult to track down specific information such as the OpenShift nodes. The topology view can help sort out the objects so you can focus only on the relevant ones.

To view only the nodes, click each object except for Nodes in the top bar to toggle their display off. When the display is off for these objects, they appear greyed out. As a result, in the next screenshot, all you see are the container provider’s nodes:

Alternatively, you can also rearrange the topology diagram by dragging the objects into a manageable layout. For example, if you want to isolate one node and see its relationships, click on the node and pull it to one side of the topology map.
The following screenshot shows one node isolated from the rest, so you can easily see its relationships:

Searching with the Topology View
To make finding objects even simpler, Red Hat CloudForms 4.1 adds a search bar to the topology widget that allows you to search for an object by name. Search for a name, and any unrelated objects are greyed out, so that only the objects you’ve searched for are highlighted.
In the following example, a search for the name “ruby-hello-world” reveals a container and a service by that name:

Note that you must cancel the search by clicking the X button before running your next search.
Identifying Relationships with the Topology View
Let’s say you wanted to find out which cloud network provider your floating IPs were attached to. Your network provider setup is fairly complicated, with many cross-linked relationships:

To make viewing your floating IPs much easier, hide the objects you aren’t interested in by clicking them on the top menu bar; in this case, hide everything but Floating IPs and Cloud Networks.
As a result, you can easily see that 10 of your floating IPs are attached to the cloud network on the left, 7 are attached to the cloud network on the right, and one is not attached to any cloud network. To find out more about which cloud network is connected to 10 floating IPs and which to 7, you can either hover over a cloud network icon or double-click it to open a summary page. Right-clicking an icon brings up a dialog with additional actions you can perform on the item.

Troubleshooting with the Topology View
The topology view is also very useful for identifying objects that are not functioning properly. Objects that are active and running correctly are displayed with a green outline, while objects with problems are outlined in red, and a grey outline signifies an unknown status. Let’s look at this containers topology example again:

This image shows four non-functional nodes. After identifying these nodes, you can find more information by either:

Hovering over the node: This will show the object’s name, type (node), and its status.
Double-clicking on a node: This opens a summary page listing the node’s properties, labels, relationships, conditions, and Smart Management tags.
Toggling the Display Names checkbox: This shows all displayed objects’ names.

You can then quickly narrow down where there may be a problem, and troubleshoot from there.
For more information about working with these provider types, see the Red Hat CloudForms 4.1 Managing Providers guide:

For containers providers, see Chapter 5, Containers Providers
For network providers, see Chapter 4, Network Providers

Quelle: CloudForms

6 causes of application deployment failure

Today’s businesses operate in an environment of accelerated transformation and rapidly changing business models. It is critical for concerned IT leaders to reduce the risk of failure.
It’s no secret that application deployment failures and slow deployment timelines lead to massive financial losses. Potential damage to a business’s reputation and, ultimately, the loss of customers make preventing failure a top priority at every management level, from CEOs to IT directors, according to a recent ADT report.
The costs alone are intimidating. Infrastructure failures can cost as much as $100,000 per hour. Production outages cost roughly $5,000 per minute. Critical applications can cost organizations $500,000 to $1 million per hour in some cases.
Why all the problems? Based on my 13 years of IT experience working with clients of all sizes across various industries, these are some key causes of application deployment failure:
1. Process inadequacy.
Operational resilience means more than the ability to recover from failure. It also includes the ability to prevent failures and take actions to avoid them. Many organizations do not have the appropriate operational resilience maturity required for their IT and business. It is practically impossible to prevent application failures completely, but it is important that organizations take the time to find, predict and fix them.
2. Lack of consistency in the release pipeline.
Many organizations experience a mismatch of software deployment models through their IT systems. This results in failures because systems are typically interconnected in IT landscapes.
3. Process complexity.
Some environments are complicated by the myriad of different toolsets and deployment procedures used by development and operational teams. The vast array of tools creates multiple tooling domains with embedded manual processes between the domains, which results in process complexity. In addition, there are examples where the provisioning and deployment processes are very different at the opposite ends of the release pipeline.
4. Deviance.
A lack of standardization and flexibility throughout the development and release process commonly shows up in application vulnerability scanning. These weaknesses are caused by development teams not carrying out the appropriate security testing because they lack the appropriate governance measures. In some cases, testing can be viewed as expensive and time-consuming, leading to a tendency to minimize efforts.
5. Lack of skills.
Every organization has its hero developers or operations experts who can single-handedly solve every problem. Over time, processes are built around these individuals, making those processes difficult to run when they move on. It is crucial to have processes that are not just built around one or two critical resources, but that also scale and are repeatable and automated to meet the changing demands of the organization.
6. Uncertainty.
A lack of proper communication and interoperability between the demand and supply sides of IT (development and operations teams) results in situations in which actions taken in isolation seem sensible but, put together end to end, result in failure. In many organizations, the majority of changes are incremental additions or alterations, and these changes often attract less oversight and control than major projects.
How can one avoid the big bad six? An easy way is to discover faults that could result in failure early on in the release cycle. Doing so will reduce the cost of fixing the faults and eliminate the cost that could have been incurred from an application deployment failure.
In my next post, I will discuss approaches to resolve these causes of failure.
Connect with IBM Cloud Advisor Osai Osaigbovo on LinkedIn.
Quelle: Thoughts on Cloud