Containers are Linux

Containers are Linux. The operating system that revolutionized the data center over the past two decades is now aiming to revolutionize how we package, deploy and manage applications in the cloud. Of course you’d expect a Red Hatter to say that, but the facts speak for themselves.
Quelle: OpenShift

What’s New in OpenShift 3.5 – Enhanced Usability

The team continues to process feedback and turn it into improvements to the OpenShift experience, and the 3.5 release is no different. There are too many to list in a single blog post, so we'll highlight a few here, such as: "create from URL", improved feedback messages, support for more Kubernetes resources, and pipeline samples.
Quelle: OpenShift

What’s New in OpenShift 3.5 – Cluster Management

One of the things I hear while visiting customers is how much they love the fact that we continue to release new software features in OpenShift at the pace of one release every quarter. OpenShift Container Platform 3.5 is now our 6th "minor" release of OpenShift, with countless errata releases (on average about every 3 weeks) since 2015. What you might not have noticed is that all our OpenShift and RHEL engineers pull double duty during releases. While we were up late at night getting OpenShift 3.5 ready to release, they were also finishing up Kubernetes 1.6. That pace of innovation and passion is only possible by working in an open community.
Quelle: OpenShift

DevOps: What your application management is missing

DevOps and Application Performance Management (APM) go hand in hand. I want to take you through a simple journey which shows why APM is such a key part of DevOps today. Let’s take a look at typical types of metrics that need to be tracked and measured, as well as the key features needed in APM to help in the DevOps environment.
When we talk about DevOps today, we often also mean cloud, microservices, and cloud-level availability, like 99.999 percent, or about 26.3 seconds of downtime per month. Microservice behavior is critical to DevOps success. In a DevOps environment, microservices must be able to report the following about themselves:

Am I healthy?
What is my latency?
How many times do I connect to my dependent systems?
What is the latency of each of those dependent connections?
How many of those dependent connections succeed and fail?
Am I doing the work I am supposed to do?
How many customers do I have?
Am I gaining new customers?

Microservices need to be built carefully so that these types of metrics are available for each of the microservice instances. Why? If you want to hit that availability figure of less than 26.3 seconds of downtime per month, these metrics will help you to restore service faster. Some of these are easier to measure. But capturing “am I doing the work I am supposed to do” may need some development depending on what your microservice does.
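The instrumentation behind metrics like "how many dependent connections succeed and fail" and "what is their latency" can be sketched with a small wrapper. The following is an illustrative Ruby sketch, not from the original post; the class and method names are invented for the example.

```ruby
# Illustrative sketch: record, per dependent system, how many calls
# succeed or fail and how long they take -- the raw material for the
# self-reported metrics listed above.
class DependencyMetrics
  Stats = Struct.new(:success, :failure, :total_latency_ms)

  def initialize
    # One stats record per named dependency, created on first use.
    @stats = Hash.new { |h, k| h[k] = Stats.new(0, 0, 0.0) }
  end

  # Wrap any call to a dependent system; counts outcome and latency.
  def record(name)
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    result = yield
    @stats[name].success += 1
    result
  rescue StandardError
    @stats[name].failure += 1
    raise
  ensure
    elapsed = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - started) * 1000.0
    @stats[name].total_latency_ms += elapsed
  end

  # Summarize one dependency for a health/metrics endpoint.
  def report(name)
    s = @stats[name]
    { name: name, success: s.success, failure: s.failure,
      avg_latency_ms: s.total_latency_ms / [s.success + s.failure, 1].max }
  end
end
```

A microservice would call `metrics.record('billing-db') { query(...) }` around each outbound call and expose the `report` output on its metrics endpoint.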
Let’s now talk about two key features that an APM solution must have.
First, developers tend to record indicators in the application log, such as how many times a certain kind of error occurred. This can be problematic because logs reach the server with higher latency than metrics, and at these demanding availability levels, every second counts. A better practice is to measure latency at the microservice code level and push it to the APM as custom metrics. The APM system then accepts these custom metrics and transports them to the server, where the latency can be analyzed and visualized just like the regular APM metrics.
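The push-based practice described above can be sketched as follows. This is a hedged example: the collector URL and payload shape are assumptions for illustration, since real APM products typically ship their own client API for custom metrics.

```ruby
require 'net/http'
require 'json'
require 'time'
require 'uri'

# Build the metric document the (hypothetical) collector will receive.
def metric_payload(name, value, tags = {})
  { metric: name, value: value, tags: tags,
    timestamp: Time.now.utc.iso8601 }.to_json
end

# POST one custom metric to the collector endpoint, so it can be
# analyzed alongside the APM's built-in metrics.
def push_metric(collector_url, name, value, tags = {})
  uri = URI(collector_url)
  request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
  request.body = metric_payload(name, value, tags)
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(request)
  end
end
```

A microservice might call `push_metric(url, 'checkout.latency_ms', 12.5, 'service' => 'cart')` right after timing a request, instead of only writing the number to its log.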
Second, if something fails in the cloud, the standard response is to restart the component in an automated fashion. However, one particularly nasty class of problems cannot be solved by restarts: when the latency of your microservice suddenly degrades. This is where the APM tool makes a difference, capturing a broad range of metrics like the ones mentioned above. APM can show metrics from different microservices and help users isolate the faulty microservice or other kinks in the process.
If you run serious production deployments, your development team has likely already embedded monitoring into the process. If not, it is time to get it done. Without APM, it is much more difficult to guarantee 99.999 percent service levels.
Want to dig deeper? Check out this blog from my colleague, Mike Mallo, who explores how to drive DevOps transformation when developers own application monitoring. And read APM and DevOps: A Winning Combination to learn more.
The post DevOps: What your application management is missing appeared first on news.
Quelle: Thoughts on Cloud

Overloaded? Digital assistants to the rescue

Data is empowering us like never before. But there’s a flip side to having access to so much valuable data: information overload. That’s when data causes more pain than gain, and it all starts to feel like a bit too much.
The catch-22 of “knowledge work”
The term “knowledge workers” generally describes anyone with a desk job. That’s tens of millions of people across the globe. Each day, these workers must process, analyze and manage information to solve problems and innovate. Already, information overload is hurting workplace productivity.
IDC reports that digital data will surge to 163 zettabytes by 2025. That’s 10 times the 16.1 zettabytes of data generated in 2016. At the same time, the number of knowledge workers is shrinking: the McKinsey Global Institute predicts a shortage of 80 million knowledge workers worldwide. So while workloads are increasing, the workforce is decreasing.
What’s the solution to this dilemma? Intelligent digital assistants.

Bring on the robots
Meet the next disruptor: the intelligent digital assistant. Enabled by artificial intelligence, these digital helpers can automate complex data work, helping employees do higher-value work. Sifting through mounds of data, prioritizing projects and managing tedious tasks are just a few of the many activities digital assistants can do.
Digital assistants in action: A usage scenario
How might a digital assistant make knowledge work easier? Let’s walk through a scenario.
Imagine Rob, a software account rep with more than 40 accounts, is having trouble staying on top of them. He’s overloaded with information and tools, including Salesforce, Gmail, Slack, Google Sheets, and LeadLander, among others. Rob needs to constantly check these disparate systems and synthesize information to get the insights he needs to effectively serve his customers.
A digital assistant could do a lot of this work for him. For example, Rob could have his assistant monitor product usage and Salesforce to detect customers who are up for renewals in the next three months, but haven’t been actively using the product. The assistant can send Rob notifications, enabling him to resolve any issues that may be occurring and increase the customer’s chances of renewing.
He could also have his assistant watch for new job postings from his clients on Indeed.com and LinkedIn. If the assistant finds a job ad from one of his clients, it can notify Rob that the customer may need additional software licenses for the new employees. The digital assistant can even recommend additional tasks Rob could offload to it, helping him work more proactively. Collectively, these actions could add hours back to Rob’s work week while helping him better meet his goals.
Rob frequently works with Alice in customer support, who could also benefit from a digital assistant. To provide proactive service to customers, she could train her assistant to monitor new support tickets. If three or more customers report the same problem with the same product within a week, the assistant can automatically send the engineering team a high-priority ticket that includes a summary of the related tickets. The assistant can also send ongoing status updates to management, saving the support team significant time.
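The escalation rule in Alice's scenario is concrete enough to sketch. The following Ruby snippet is purely illustrative (the `Ticket` structure and thresholds are invented for the example): group recent support tickets by product and problem, and flag any combination reported by three or more customers within a week.

```ruby
require 'date'

# Minimal ticket record for the sketch; a real system would pull these
# from the support-ticket API.
Ticket = Struct.new(:customer, :product, :problem, :opened_on)

# Return the [product, problem] pairs that three or more distinct
# customers reported within the last window_days.
def escalations(tickets, today: Date.today, window_days: 7, threshold: 3)
  recent = tickets.select { |t| (today - t.opened_on) <= window_days }
  recent.group_by { |t| [t.product, t.problem] }
        .select { |_, group| group.map(&:customer).uniq.size >= threshold }
        .keys
end
```

Each flagged pair would become one high-priority engineering ticket summarizing the related customer reports, as described above.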
As these scenarios illustrate, there are three key capabilities that can make or break the effectiveness of a digital assistant app:
Usability
Many digital assistants require IT intervention because of their complexity. That’s not exactly a motivator for adoption. Workers should look for a digital assistant that’s intuitive to set up and use, allowing them to easily define the situations to detect and the actions to take. The technology should also include a catalog of pre-built skills that can be personalized, to help users get started quickly.
Customization
Employees should look for a digital assistant that can work with the systems they use and accommodate their unique key performance indicators (KPIs) and work processes. In other words, the digital assistant should be as useful as a human assistant that they might train.
Intelligence
Detecting situations that matter, delivering context-driven notifications at the right times and automating actions are all intelligent capabilities. But after a while, the digital assistant should be able to learn from how workers use it, and make proactive recommendations. That’s what users would expect their human assistants to do. So employees should look for a digital assistant that they don’t have to micromanage.
To see how you can start optimizing your productivity with help from an intelligent digital assistant, check out this video.
The post Overloaded? Digital assistants to the rescue appeared first on news.
Quelle: Thoughts on Cloud

Account Executive

The post Account Executive appeared first on Mirantis | Pure Play Open Cloud.
We are transforming the industry and you will be helping us lead the charge. As an account executive at Mirantis you will develop and execute a strategic and comprehensive business plan for your territory, including identifying core customers and mapping the benefits of OpenStack to customers’ business requirements. You will take full responsibility for accurate forecasting, regular quarterly revenue delivery and facilitation of sales enablement, and will oversee the implementation of agreed account and business plans. Your overall focus areas will be prospecting, developing business, responding to RFPs, developing proposals for presentation to customers, and selling services and products. Cross-functional teams from Mirantis’ Marketing, Solutions Engineering, Professional Services, and Product Development functions will provide support and tools for you to leverage to attain and exceed sales performance goals.

Primary Responsibilities

Pipeline Generation – acquires new customers by calling into high levels within prospect organizations, networking and working various customer account lists. Participates in campaigns and conferences, works with the marketing team to understand new offers and leads in the assigned region, generates leads independently and follows up appropriately.
Solution Selling – consults with clients to determine their needs and works with application sales specialists to generate multi-product/service solutions. Takes the initiative to learn new offers and products as they become available, and is able to apply technology knowledge in business development efforts.
Proposal/Presentation Generation – incorporates executive summary, ROI analysis and solution design to develop customer-specific proposals and presentations.
Develop Scope of Work – works with the customer and engineering team to define and document the project scope.
Relationship Management – develops and manages relationships with current clients to develop additional business as well as ensure a high level of client satisfaction.
Accurate Forecasting – captures activity information on a timely basis as client interactions occur to ensure accurate product and services forecasting.

Requirements

Advanced selling skills with a demonstrated track record of selling into complex organizations with multiple layers of decision makers.
10+ years of selling experience with telecom and other technology products and solutions such as Cisco, EMC (storage), VMware, NetApp, Oracle and managed services.
Market knowledge (i.e. industry knowledge relevant to the geographic area) and technical knowledge are necessary; if assigned to vertical markets, knowledge of the public sector is required.
Business experience to analyze client business requirements and develop creative solutions, as well as the ability to utilize technical resources to complete an accurate and technically assured sales order.
Exceptional communication skills.
Ability to accept constructive criticism, and to maintain and develop positive team cohesiveness.
Ability to work constructively across cultural boundaries in a globally distributed organization.

What We Offer

Work in Silicon Valley with established leaders in their industry.
Work with exceptionally passionate, talented and engaging colleagues.
Be a part of the cutting edge of open-source innovation since Linux.
High-energy atmosphere of a young company, with a competitive compensation package, strong benefits plan and stock options.
Lots of freedom for creativity and personal growth.
Quelle: Mirantis

Posting CloudForms Notifications to Slack

Keeping the whole IT team informed about events or actions in your IT infrastructure can be challenging. Many IT teams have turned to team messaging applications, like Slack, to improve internal team communications. CloudForms, with its flexible integration capabilities, can be connected to Slack to notify the team whenever important events happen.
The Event Switchboard in CloudForms exposes provider events to Automate, allowing automation based on any events. In this example, we will post Slack messages to a team channel whenever Kubernetes events are raised.

 
Slack Webhook and Ruby
In order to post messages to Slack, we first need a webhook and the associated access token, which is created via a Slack Button. During this process, the CloudForms-bot user account is given the proper permissions to post messages. Next, we need client-side code to interface with Slack. In this case, we use the slack-ruby-client gem, which will need to be installed on the appliance along with any prerequisites.
gem install slack-ruby-client

Reading the Event Stream and Posting a Message
The current event data is contained within the event_stream object and can be read easily, for example:
event_stream = $evm.root['event_stream']
data = event_stream.full_data

p data[:event_type]
p data[:timestamp]
p data[:message]
Configuring CloudForms to post an event to Slack involves writing a short Ruby method to interface with Slack and then connecting that method to the events we would like posted. The channel we are posting to is defined in the calling method. Here is an example method used to connect CloudForms to Slack.
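The original post showed the method itself only as an embedded screenshot, so as a stand-in, here is a hedged sketch of what such an Automate method can look like. It posts the event details to a Slack incoming webhook using only the Ruby standard library; the webhook URL and channel name below are placeholders, and the slack-ruby-client gem's Web API client could equally be used for the actual posting.

```ruby
require 'net/http'
require 'json'
require 'uri'

# Placeholder: replace with the webhook URL issued when the Slack
# Button was installed.
SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/REPLACE/ME'.freeze

# Build the Slack message from the event_stream data read above.
def slack_payload(data, channel: '#cloudforms-events')
  text = "CloudForms event: #{data[:event_type]} at #{data[:timestamp]} -- #{data[:message]}"
  { channel: channel, username: 'CloudForms-bot', text: text }
end

# Post the message to the team channel via the incoming webhook.
def post_to_slack(payload)
  uri = URI(SLACK_WEBHOOK_URL)
  Net::HTTP.post(uri, payload.to_json, 'Content-Type' => 'application/json')
end

# Inside CloudForms Automate, $evm is provided by the appliance:
if $evm
  event_stream = $evm.root['event_stream']
  post_to_slack(slack_payload(event_stream.full_data))
end
```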
Next, we need to call this method whenever an event is triggered. To do this, we update the schema to invoke it for all events. In Automate explorer, navigate to the ‘Kubernetes’ Class, edit the Schema and add a relationship value to call the new method, ‘rel1’ in the example below.

Results
Now CloudForms posts a message to the team Slack channel whenever an event is triggered.

This is just one example of how CloudForms can be integrated into IT notification systems. Other examples could include raising SNMP traps, sending emails or creating ServiceNow records based on events.
Quelle: CloudForms

Jupyter on OpenShift Part 2: Using Jupyter Project Images

The quickest way to run a Jupyter Notebook instance in a containerised environment such as OpenShift is to use the Docker-formatted images provided by the Jupyter Project developers. Unfortunately, the Jupyter Project images do not run out of the box under the typical default configuration of an OpenShift cluster.

In this second post of this series about running Jupyter Notebooks on OpenShift, I am going to detail the steps required in order to run the Jupyter Notebook software on OpenShift.
Quelle: OpenShift