Jupyter on OpenShift Part 4: Adding a Persistent Workspace

To preserve any work done, notebooks and data files must be copied from the image into the persistent volume the first time the image is started with that volume. In this blog post I describe how the S2I-enabled image can be extended to do this automatically, and go into some other issues related to saving your work.
Source: OpenShift

Get visibility and control across your hybrid cloud

Executives at some companies may have a vision of moving their business entirely to the public cloud. But the reality is that most will need a hybrid cloud approach to reach their business goals.
The hybrid cloud model uses a mix of public and private cloud services as well as on-premises applications and back-end systems to deliver the complete cloud offering.
The good news is that the hybrid approach provides many benefits: faster deployment, dynamic scalability, lower development and operational costs, secure back-end transactions and data retention. Unfortunately, some may not realize until much later in the deployment cycle that a hybrid cloud approach presents a few unique challenges as well. Service availability and performance can impact service level agreement (SLA) adherence, which could lead to lost revenue—or worse—lost customers.
Let’s look at an example. Consider a company with a hybrid cloud offering using a public cloud storefront site to host their huge catalog of products, which includes pictures and videos. Based on loads and seasonal spikes, additional instances of the catalog are spun up to meet demand.
Once their customer places items in the cart, they are directed to a secure, private cloud to enter sensitive information that completes the transaction. Finally, the transaction is sent to the on-premises back-end applications that process the order and execute fulfillment.
Now let’s assume something went wrong: the order was not placed. So, where is the problem? Where do you begin to look? Is the problem within the public cloud, private cloud or on-premises applications? Maybe it is a combination of problems.
What you need is visibility and control of the entire hybrid cloud environment. You need tools that encompass the full hybrid cloud spectrum for the entire lifecycle. Then you can remain in control, no matter where your applications or services are running.
Companies may have bits and pieces of these application performance and user-monitoring tools in place today, but they may not all work together. These tools may be for cloud-only services, or only work with on-premises offerings. The tools may cover high-level application monitoring of cloud services, on-premises applications and middleware, but lack transaction tracking and application deep-dive diagnostics. As a result, you might not see what real customers are experiencing as it relates to response times in navigating and using your software.
There is a real danger in a partial or mixed tool approach that could lead to gaps in the visibility and control of your hybrid cloud offerings. If an outage or poor performance were to occur, your business could be severely impacted.
Businesses should look at software that can provide complete application and user-experience monitoring across the entire hybrid cloud environment. You need visibility and insights at both a high level and at the transaction level. Then your DevOps and IT operations organizations can better identify, isolate and resolve problems before they impact your customers and your business.
Learn more about IBM Application Performance Management here.
Source: Thoughts on Cloud

How do you build 12-factor apps using Kubernetes?

It’s said that there are 12 factors that define a cloud-native application. It’s also said that Kubernetes is designed for cloud native computing. So how do you create a 12-factor application using Kubernetes? Let’s take a look at exactly what twelve-factor apps are and how they relate to Kubernetes.
What is a 12 factor application?
The Twelve Factor App is a manifesto on architectures for Software as a Service created by Heroku. The idea is that in order to be really suited to SaaS and avoid problems with software erosion — where over time an application that’s not updated gets out of sync with the latest operating systems, security patches, and so on — an app should follow these 12 principles:

Codebase
One codebase tracked in revision control, many deploys
Dependencies
Explicitly declare and isolate dependencies
Config
Store config in the environment
Backing services
Treat backing services as attached resources
Build, release, run
Strictly separate build and run stages
Processes
Execute the app as one or more stateless processes
Port binding
Export services via port binding
Concurrency
Scale out via the process model
Disposability
Maximize robustness with fast startup and graceful shutdown
Dev/prod parity
Keep development, staging, and production as similar as possible
Logs
Treat logs as event streams
Admin processes
Run admin/management tasks as one-off processes

Let’s look at what all of this means in terms of Kubernetes applications.
Principle I. Codebase
Principle 1 of a 12 Factor App is “One codebase tracked in revision control, many deploys”.
For Kubernetes applications, this principle is actually embedded in the nature of container orchestration itself. Typically, you create your code using a source control repository such as a git repo, then store specific versions of your images in Docker Hub. When you define the containers to be orchestrated as part of a Kubernetes Pod, Deployment, or DaemonSet, you also specify a particular version of the image, as in:

spec:
  containers:
  - name: acct-app
    image: acctapp:v3

In this way, you might have multiple versions of your application running in different deployments.
Applications can also behave differently depending on the configuration information with which they run.
Principle II. Dependencies
Principle 2 of a 12 Factor App is “Explicitly declare and isolate dependencies”.
Making sure that an application’s dependencies are satisfied is something that is practically assumed. For a 12 factor app, that includes not just making sure that the application-specific libraries are available, but also not counting on, say, shelling out to the operating system and assuming system libraries such as curl will be there. A 12 factor app must be self-contained.
That includes making sure that the application is isolated enough that it’s not affected by conflicting libraries that might be installed on the host machine.
Fortunately, both of these requirements are handily satisfied by containers: the container includes all of the dependencies on which the application relies, even specific or unusual system requirements, and it also provides a reasonably isolated environment in which the container runs. (Contrary to popular belief, container environments are not completely isolated, but for most situations, they are Good Enough.)
For applications that are modularized and depend on other components, such as an HTTP service and a log fetcher, Kubernetes provides a way to combine all of these pieces into a single Pod, for an environment that encapsulates those pieces appropriately.
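As a concrete sketch of that pattern (the image names and paths here are illustrative, not from any particular project), such a Pod might pair the two containers around a shared volume:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-fetcher
spec:
  containers:
  - name: web
    image: example/web-app:v1        # hypothetical app image; assumed to write logs to /var/log/app
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-fetcher
    image: example/log-fetcher:v1    # hypothetical sidecar; assumed to ship whatever lands in /logs
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}                     # shared scratch space that lives and dies with the Pod

Each container still declares its own dependencies in its own image; the Pod just gives them a shared, encapsulated environment.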
Principle III. Config
Principle 3 of a 12 Factor App is “Store config in the environment”.
The idea behind this principle is that an application should be completely independent from its configuration. In other words, you should be able to move it to another environment without having to touch the source code.
Some developers achieve this goal by creating configuration files of some sort, specifying details such as directories, hostnames, and database credentials. This is an improvement, but it does carry the risk that someone will check a config file into the source control repository.
Instead, 12 factor apps store their configurations as environment variables; these are, as the manifesto says, “unlikely to be checked into the repository by accident”, and they’re operating system independent.
Kubernetes enables you to specify environment variables in manifests via the Downward API, but as these manifests themselves do get checked into source control, that’s not a complete solution.
Instead, you can specify that environment variables should be populated by the contents of Kubernetes ConfigMaps or Secrets, which can be kept separate from the application.  For example, you might define a Pod as:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
    - name: CONFIG_VERSION
      valueFrom:
        configMapKeyRef:
          name: redis-app-config
          key: version.id
As you can see, this Pod receives three environment variables, SECRET_USERNAME, SECRET_PASSWORD, and CONFIG_VERSION, the first two from referenced Kubernetes Secrets, and the third from a Kubernetes ConfigMap. This enables you to keep them out of configuration files.
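For completeness, here is one way the referenced Secret and ConfigMap might be defined (the values are placeholders, not from the original article):

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:            # stringData lets you skip manual base64 encoding
  username: admin      # placeholder value
  password: s3cr3t     # placeholder value
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-app-config
data:
  version.id: "1.0.3"  # placeholder value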
Of course, there’s still a risk of someone mishandling the files used to create these objects, but it’s easier to keep them together and institute secure handling policies than it is to weed out dozens of config files scattered around a deployment.
What’s more, there are those in the community who point out that even environment variables are not necessarily safe. For example, if an app crashes, it may save all of the environment variables to a log or even transmit them to another service. Diogo Mónica points to a tool called Keywhiz that you can use with Kubernetes to create secure secret storage.
Principle IV. Backing services
Principle 4 of the 12 Factor App is “Treat backing services as attached resources”.
In a 12 Factor app, any services that are not part of the core application, such as databases, external storage, or message queues, should be accessed as a service — via an HTTP or similar request — and specified in the configuration, so that the source of the service can be changed without affecting the core code of the application.
For example, if your application uses a message queuing system, you should be able to change from RabbitMQ to ZeroMQ (or ActiveMQ or even something else) without having to change anything but configuration information.
This requirement has two implications for a Kubernetes-based application.
First, it means that you must think about how your applications take in (and give out) information. For example, if you have a backing database, you wouldn’t want to have a local MySQL instance, even if you’re replicating it to other instances. Instead, you would want to have a separate container that handles database operations, and make those operations callable via an API. This way, if you needed to change to, say, PostgreSQL or a remotely hosted MySQL server, you could create a new container image, update the Pod definition, and restart the Pod (or more likely the Deployment or StatefulSet managing it).
Similarly, if you’re storing credentials or address information in environment variables backed by a ConfigMap, you can change that information and replace the Pod.
Note that both of these examples assume that though you’re not making any changes to the source code (or even the container image for the main application) you will need to replace the Pod; the ability to do this is actually another principle of a 12 Factor App.
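One hedged sketch of what that configuration might look like: keep the backing-service endpoints in a ConfigMap, so that swapping RabbitMQ for ZeroMQ, or a local database for a hosted one, is a configuration change rather than a code change (all names and URLs below are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: backing-services
data:
  DATABASE_URL: "postgres://db.example.com:5432/orders"   # swap providers by editing this value
  QUEUE_URL: "amqp://mq.example.com:5672"                  # likewise for the message queue

A container can then pull all of these in at once with an envFrom reference to the ConfigMap, and be replaced whenever the values change.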
Principle V. Build, release, run
Principle 5 of the 12 Factor App is “Strictly separate build and run stages”.
These days it’s hard to imagine a situation where this is not true, but a twelve-factor app must have a separate build stage. In other words, you should be able to build or compile the code, then combine that with specific configuration information to create a specific release, then deliberately run the release.
Releases should be identifiable. You should be able to say, “This deployment is running Release 1.14 of this application” or something similar, the same way we say we’re running “the OpenStack Ocata release” or “Kubernetes 1.6”. They should also be immutable; any changes should lead to a new release. If this sounds daunting, remember that when we say “application” we’re no longer talking about large, monolithic releases. Instead, we’re talking about very specific microservices, each of which has its own release, and which can bump releases without causing errors in consuming services.
All of this is so that when the app is running, that “run” process can be completely automated. Twelve factor apps need to be capable of running in an automated fashion because they need to be capable of restarting should there be a problem.
Translating this to the Kubernetes realm, we’ve already said that the application needs to be stored in source control, then built with all of its dependencies. That’s your build process. We talked about separating out the configuration information, so that’s what needs to be combined with the build to make a release. And the ability to automatically run the application — or multiple copies of the application — is precisely what Kubernetes constructs like Deployments, ReplicaSets, and DaemonSets do.
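As a sketch of how this can look in practice (image and label names are illustrative, and the apps/v1 API shown is from Kubernetes versions newer than the 1.6 mentioned above), a Deployment pins an exact, immutable release tag:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: acct-app
  labels:
    release: "1.14"                  # identifiable: you can say which release is running
spec:
  replicas: 3
  selector:
    matchLabels:
      app: acct-app
  template:
    metadata:
      labels:
        app: acct-app
        release: "1.14"
    spec:
      containers:
      - name: acct-app
        image: example/acctapp:1.14  # immutable: never :latest; any change means a new tag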
Principle VI. Processes
Principle 6 of the 12 Factor App is “Execute the app as one or more stateless processes”.
Stateless processes are a core idea behind cloud native applications. Every twelve-factor application needs to run in individual, share-nothing processes. That means that any time you need to persist information, it needs to be stored in a backing service such as a database.
If you’re new to cloud application programming, this might be deceptively simple; many developers are used to “sticky” sessions, storing information in the session with the confidence that the next request will come to the same server. In a cloud application, however, you must never make that assumption.
Instead, if you’re running a Kubernetes-based application that hosts multiple copies of a particular pod, you must assume that subsequent requests can go anywhere. To solve this problem, you will want to use some sort of backing volume or database for persistence.
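A minimal sketch, assuming a Redis instance is available as the backing store (the names and the REDIS_URL variable are illustrative): the web replicas hold no state of their own, so any of them can serve any request.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                                  # identical, share-nothing replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:v2
        env:
        - name: REDIS_URL                      # session state lives in the backing service,
          value: "redis://session-redis:6379"  # not in the pod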
One caveat to this principle is that Kubernetes StatefulSets can enable you to create Pods with stable network identities, so that you can, theoretically, direct requests to a particular pod. Technically speaking, if the process doesn’t actually store state, and the pod can be deleted and recreated and still function properly, it satisfies this requirement — but that’s probably pushing it a bit.
Principle VII. Port binding
Principle 7 of the 12 Factor App is “Export services via port binding”.
In an environment where we’re assuming that different functionalities are handled by different processes, it’s easy to make the connection that these functionalities should be available via a protocol such as HTTP, so it’s common for applications to be run behind web servers such as Apache or Tomcat. Twelve-factor apps, however, should not depend on an additional application in that way; remember, every function should be in its own process, isolated from everything else. Instead, the 12 Factor App manifesto recommends adding a web server library or something similar to the app itself, so that the app can await requests on a defined port, whether it’s using HTTP or another protocol.
In a Kubernetes-based application, this is done partly through the architecture of the application itself, and partly by making sure that the application has all of its dependencies as part of the creation of the base containers on which the application is built.
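For example (names and ports are illustrative), the app itself binds port 8080 via its embedded server library, and a Kubernetes Service exports that port under a stable name:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80            # the port other services connect to
    targetPort: 8080    # the port the app process itself binds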
Principle VIII. Concurrency
Principle 8 of the 12 Factor App is to “Scale out via the process model”.
When you’re writing a twelve-factor app, make sure that you’re designing it to be scaled out, rather than scaled up. That means that in order to add more capacity, you should be able to add more instances rather than more memory or CPU to the machine on which the app is running. Note that this specifically means being able to start additional processes on additional machines, which is, fortunately, a key capability of Kubernetes.
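One way to express this in Kubernetes (a sketch with illustrative names and thresholds) is a HorizontalPodAutoscaler, which adds pod instances rather than resizing the machine:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 20                       # scale out by adding processes...
  targetCPUUtilizationPercentage: 70    # ...when average CPU crosses this line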
Principle IX. Disposability
Principle 9 of the 12 Factor App is to “Maximize robustness with fast startup and graceful shutdown”.
It seems like this principle was tailor-made for containers and Kubernetes-based applications. The idea that processes should be disposable means that at any time, an application can die and the user won’t be affected, either because there are others to take its place, because it’ll start right up again, or both.
Containers are built on this principle, of course, and Kubernetes structures that manage multiple instances and maintain a certain level of availability even in the face of problems, such as ReplicaSets, complete the picture.
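A hedged sketch of the relevant knobs on a container spec (the paths, ports, and timings are illustrative):

spec:
  terminationGracePeriodSeconds: 30        # bounded, graceful shutdown window
  containers:
  - name: web
    image: example/web:v2
    readinessProbe:                        # traffic only reaches pods that report ready
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 2               # fast startup is expected
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]  # brief pause so in-flight requests can drain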
Principle X. Dev/prod parity
Principle 10 of the 12 Factor App is “Keep development, staging, and production as similar as possible”.
This is another principle that seems like it should be obvious, but is deeper than most people think. On the surface level, it does mean that you should have identical development, staging, and production environments, inasmuch as that is possible. One way to accomplish this is through the use of Kubernetes namespaces, enabling you to (theoretically) run code on the same actual cluster against the same actual systems while still keeping environments separate. In some situations, you can also use tools such as Minikube or kubeadm-dind-cluster to create near-clones of production systems.
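Namespaces themselves are trivially cheap to create, which is part of their appeal here; for example:

apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production

The same manifests can then be applied to each namespace, keeping the environments as close to identical as the cluster allows.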
At a deeper level, though, as the Twelve-Factor App manifesto puts it, it’s about three different types of “gaps”:

The time gap: A developer may work on code that takes days, weeks, or even months to go into production.

The personnel gap: Developers write code, ops engineers deploy it.

The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.

The goal here is to create a Continuous Integration/Continuous Deployment situation in which changes can go into production virtually immediately (after testing, of course!), deployed by the developers who wrote it so they can actually see it in production, using the same tools on which the code was actually written in order to minimize the possibility of compatibility errors between environments.
Some of these factors are outside of the realm of Kubernetes, of course; the personnel gap is a cultural issue, for example. The time and tools gaps, however, can be helped in two ways.
For the time gap, Kubernetes-based applications are, of course, based on containers, which are built from code stored in version-control systems, so they lend themselves to CI/CD. They can also be updated via rolling updates that can be rolled back in case of problems, so they’re well-suited to this kind of environment.
As far as the tools gap is concerned, the architecture of Kubernetes-based applications makes it easier to manage, both by making local dependencies simple to include in the various images, and by modularizing the application in such a way that external backing services can be standardized.
Principle XI. Logs
Principle 11 of the 12 Factor App is to “Treat logs as event streams”.
While most traditional applications store log information in a file, the Twelve-Factor app directs it, instead, to stdout as a stream of events; it’s the execution environment that’s responsible for collecting those events. That might be as simple as redirecting stdout to a file, but in most cases it involves using a log router such as Fluentd and saving the logs to Hadoop or a service such as Splunk.
In Kubernetes, you have at least two choices for automatic logging capture: Stackdriver Logging if you’re using Google Cloud, and Elasticsearch if you’re not. You can find more information on setting Kubernetes logging destinations here.
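The classic minimal example (adapted from the pattern in the Kubernetes documentation) is a pod that just emits events on stdout and lets the cluster’s collector do the rest:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

Running kubectl logs counter then tails the stream, and a router such as Fluentd can pick it up without the app knowing or caring.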
Principle XII. Admin processes
Principle 12 of the 12 Factor App is “Run admin/management tasks as one-off processes”.
This principle involves separating admin tasks such as migrating a database or inspecting records from the rest of the application. Even though they’re separate, however, they must still be run in the same environment and against the same base code and configuration as the application, and their code must be shipped alongside the application to prevent drift.
You can implement this in a number of different ways in Kubernetes-based applications, depending on the size and scale of the application itself. For example, for small tasks, you might use kubectl exec to operate on a specific container, or you can use a Kubernetes Job to run a self-contained application. For more complicated tasks that involve orchestration of changes, however, you can also use Kubernetes Helm charts.
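As a sketch of the Job approach (the image, command, and ConfigMap names are illustrative), note that the one-off task runs against the same image and configuration as the application itself:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 2                       # retry a couple of times, then give up
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: example/acctapp:1.14     # same release as the running app
        command: ["python", "manage.py", "migrate"]   # illustrative admin command
        envFrom:
        - configMapRef:
            name: backing-services      # same config as the app, to prevent drift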
How many of these factors did you hit?
Unless you’re still building desktop applications, chances are you can feel good about hitting at least a few of these essential principles of twelve-factor apps. But chances are you also found at least one or two you can probably work a little harder on.
So we want to know: which of these factors are the biggest struggle for you? Where do you think you need to work harder, and what would make it easier for you?  Let us know in the comments.
Thanks to Jedrzej Nowak, Piotr Shamruk, Ivan Shvedunov, Tomasz Napierala and Lukasz Oles for reviewing this article!
Check out more Kubernetes resources.
Source: Mirantis

Protection in the cloud with a two-pronged approach

Hacking often conjures images of malicious criminals breaking into computer systems to steal and sell data for monetary gain. More recently, hacking has become known as a weapon that can cause public embarrassment and wreck reputations.
Cyber-extortion and the threat to expose sensitive information are on the rise. In a recent IBM report, “Ransomware: How consumers and businesses value their data,” almost half of executives (46 percent) reported that they’ve had some experience with ransomware attacks in the workplace, and 70 percent of those respondents paid to get data back.
Such threats and the news stories they generate are primary reasons why security is a top concern for businesses. However, executives at the forefront of implementing hybrid cloud strategies say they are overcoming this challenge, according to the report “Growing up hybrid: Accelerating digital transformation.” Extending the same level of security controls and best practices they have in place for traditional IT to the cloud is one way to reduce risk. Assigning business-critical work to on-premises resources is another. In fact, 78 percent of these executives say that hybrid cloud is actually improving security.
As the benefits of cloud — agility, innovation and efficiency — become undeniable, information security is taking on greater importance. The best defense is to cultivate security as a behavior. Technology alone cannot protect companies. There must be a cultural change in the way all employees think about security.
Expanding cloud computing
Market demands are shifting the way CIOs think about data storage and security, says Roy Illsley, chief analyst with the digital consultancy firm Ovum. Most companies still store their most sensitive data in mainframes. However, Illsley expects that to change in the next five to 10 years as consumers insist on instant access to data.
“If you’ve got everything in a mainframe and it’s stored in Frankfurt, all nice and secure, but you’ve got customers all over the world, the latency of that from someone traveling in China is probably too great to be of any use to them,” he says.
Andras Cser, an analyst with Forrester Research, says financial concerns also weigh heavily. “You can’t choose to have a legacy system because of the cost,” he says. “The cloud is so much more inexpensive. The question isn’t whether a company should move to the cloud, it’s how.”
Cloud computing is no longer limited to just the world of computer servers, data storage and networking. Increasingly, it is core to mobile devices, sensors, cognitive and the Internet of Things (IoT). As innovations like cognitive and IoT become widespread, cloud computing is seemingly everywhere.
Outsmarting the hackers
As more information is digitized, security awareness needs to increase. Hackers gain entry to secure systems via phishing attacks in which employees click on malicious attachments or visit websites that download malware onto their machines. Organizations must take a two-pronged approach to security that uses tech-based solutions, and requires workers to change how they use technology.
For instance, encryption protects email, but employees also need to be careful about what they say in their emails. It goes back to human behavior. Is that the right medium for that communication or should you just pick up a phone? That decision is made by an individual.
Many high-profile email attacks that splashed across headlines were partly the result of inadequate technology, such as not using the right email signatures, and a misguided use of the medium. It all boils down to each user’s level of security consciousness and the best practices that he or she has internalized as a behavior. The adversary is getting cannier, so security relies upon individual actions and decisions.
Security requires a strong technical defense as well. Standard defenses include encryption and geofencing, which builds a virtual fence around data and monitors employees’ comings and goings. It’s not enough to merely have such technologies on hand, however. The key is to examine how well they are configured.
Organizations need people who not only know how to secure the system, but also stay ahead of emerging threats. One positive trend: companies are beginning to work together to fight hackers.
Such cooperation shows that security has become a global issue. Never before have businesses and public personalities had a better reason to work collaboratively to thwart cybersecurity threats.
Source: Thoughts on Cloud

DevOps Engineer

Mirantis is looking for a highly qualified candidate with experience in systems integration, release management, and package development in DEB format. The Infrastructure team takes code from the Open Source community and applies fixes and patches generated both internally and from external contributors to deliver OpenStack. Experience handling large-scale upgrades, zero-downtime maintenance, and contingency planning is highly desirable.

Responsibilities:
Define and manage test environments required for different types of automated tests
Drive cross-team communications to streamline and unify build and test processes
Track hardware utilization by CI/CD pipelines
Provide and maintain specifications and documentation for Infrastructure systems
Provide support for users of Infrastructure systems (developers and QA engineers)
Produce and deliver technical presentations at internal knowledge transfer sessions, public workshops and conferences
Deploy new slaves and standalone servers using Puppet/Salt/Ansible

Required Skills:
Linux system administration – package management, services administration, networking, KVM-based virtualization
Scripting with Bash and Python
Experience with the DevOps configuration management methodology and tools (Puppet, Ansible, Salt)
Ability to describe and document systems design decisions
Familiarity with development workflows – feature design, release cycle, code-review practices
English, both written and spoken

Will Be a Plus:
Knowledge of CI tools and frameworks (Jenkins, Buildbot, etc.)
Release engineering experience – branching, versioning, managing security updates
Understanding of release engineering and QA practices of major Linux distributions
Experience in test design and automation
Experience in project management
Involvement in major Open Source communities (developer, package maintainer, etc.)

What We Offer:
Challenging tasks, providing room for creativity and initiative
Work in a highly distributed international team
Work in the Open Source community, contributing patches to upstream
Opportunities for career growth and relocation
Business trips for meetups and conferences, including OpenStack Summits
Strong benefits plan
Medical insurance
Source: Mirantis

Comcast Business offers up IBM Cloud dedicated links

Comcast Business has partnered with IBM to offer its customers a direct, dedicated network link to IBM Cloud and its global network of 50 data centers in 19 countries.
Using those direct links, customers will be able to access the network at speeds of up to 10 gigabits per second.
The partnership gives enterprise customers “more choices for connectivity so they can store data, optimize their workloads and execute mission-critical applications in the cloud, whether it be on-premise, off-premise or a combination of the two,” said Jeff Lewis, vice president of data services at Comcast Business.
Enterprises can also gain greater speed, reliability and security with dedicated links than with a standard, open internet connection. Services will be backed by a service-level agreement.
“Enterprises definitely need help with cloud implementations and anything that can make it easier for them is a good thing,” Jack Gold, an analyst at J. Gold Associates, told ComputerWorld.
Read more in ComputerWorld’s full article.
Source: Thoughts on Cloud

Jupyter on OpenShift Part 3: Creating a S2I Builder Image

In the prior post in this series I described the steps required to run the Jupyter Notebook images supplied by the Jupyter Project developers. When run, these notebook images provide an empty workspace with no initial notebooks to work with. Depending on the image used, they include a range of pre-installed Python packages, but they may not have every package a user needs.
Source: OpenShift

Operations Engineer (STO Team)

Mirantis is the leading global provider of software and services for OpenStack™, a massively scalable and feature-rich Open Source cloud operating system. OpenStack is used by hundreds of companies, including AT&T, Cisco, HP, Internap, NASA, Dell, GE, and many more.

As a leading global provider, Mirantis offers the leading OpenStack technology platform coupled with a unique, cost-effective global services delivery model founded on years of deep software engineering experience for demanding Fortune 1000 companies.

Mirantis is inviting enthusiastic operations engineers who will be extending OpenStack to support enterprise-grade private IaaS platforms for the company’s customers. We need talented engineers who are willing to work at the intersection of IT and software engineering, are passionate about open source, and are not afraid of maintaining a huge codebase written by the best developers in the field.

Responsibilities:
System administration on Linux (Ubuntu, CentOS, etc.)
Technical support of OpenStack products for customers
Testing components of cloud applications using Python in case of alarm conditions
Troubleshooting OpenStack installations and fixing bugs in OpenStack components
Participating in public activities: user groups, conferences, the company’s blog, both in Russia and the USA

Requirements:
Excellent Linux system administration and troubleshooting skills
Good knowledge of Python
Good understanding of networking concepts and protocols

Nice to have:
Experience working with and maintaining large Python codebases
Experience working with virtualization solutions (KVM, Xen)
Understanding of NAS/SAN
Awareness of distributed file systems (Gluster, Ceph)
Experience configuring and extending monitoring tools (Nagios, Ganglia, Zabbix)
Experience working with configuration management tools (Chef, Puppet, Cobbler)
Experience deploying and extending OpenStack is a plus
Fluent English

We offer:
Competitive salary (discussed after interview)
Career and professional growth
20 working days of paid vacation, 100% paid sick leave
Medical insurance
Benefit program
Source: Mirantis

Account Development Rep

We are looking for an energetic, enthusiastic, well-organized team player to join our team. This individual will spend much of the day on the phone prospecting, managing inbound and outbound calls, and qualifying prospective buyers. This is a great entry-level opportunity.

Key Responsibilities:
Generate sales-ready leads through a combination of cold calling, following up on nurturing marketing campaigns, and prompt response to inbound inquiries
Manage inbound leads
Execute outbound calls to targeted accounts
Understand the Mirantis products and effectively map the prospects’ needs to the solution
Enter, update, and maintain daily activity and prospect information in Salesforce.com
Achieve and exceed monthly quotas and objectives
Work closely with sales and marketing team members to achieve company goals
Align workflow and sales objectives with management
Follow up on campaigns and provide detailed feedback on the success of each campaign
Ensure that all new leads are processed within agreed-upon SLAs
Schedule meetings within targeted accounts and qualified prospects

Requirements:
Dynamic, high-energy person with a desire to break into the enterprise B2B software and/or services market
Demonstrated success working independently, with a proven track record of achieving goal attainment (e.g., meeting or exceeding targets)
Desire to work in an environment that is measured by having secured a high volume of qualified meetings for the field sales team
Strong organizational skills with the ability to manage time, territory and resources effectively
Superb communication skills with the ability to consultatively highlight product benefits/advantages/features
Self-directed, a quick study and an enthusiastic team player
Detail- and results-oriented person who works well under pressure
Excellent interpersonal and presentation skills
Bachelor degree preferred

Key Skills and Qualifications:
Excellent oral and written communication skills
Demonstrated self-starter
Demonstrable and consistent over-achievement of targets
An innate hunger for personal and company success combined with strong interpersonal skills
Ability to change priorities quickly and capacity to handle multiple tasks

Perks:
If you enjoy working for an innovative and growing company with a clear mission, apply today! We are located in the heart of Silicon Valley in California. Mirantis offers a generous benefits package to take care of you and your family that includes medical, dental, and vision coverage, paid time off, 401K, life insurance and disability plans, and stock options.
Source: Mirantis

KubeCon Europe 2017: Bigger and Broader

At the end of March I attended CloudNativeCon + KubeCon Europe in Berlin, and compared with last year’s event in London, I think two words describe it best: bigger and broader. With over 1,200 attendees the event was impressive, but it still felt like a place where you can have meaningful discussions with peers.
Source: OpenShift