Payroll Specialist- Contract

The post Payroll Specialist- Contract appeared first on Mirantis | Pure Play Open Cloud.
Mirantis, Inc. is the number one pure-play OpenStack company. We deliver all the technology, integration, training and support required for companies to succeed with production-grade open source cloud. We are transforming the industry, and you will be helping us lead the charge. If you’re an ambitious Payroll Specialist and thrive on working on tough, real-world problems, you want to work here.

Summary

As the Payroll Specialist at Mirantis, you will be part of a high-performance accounting team supporting the hyper-growth of the company. This position manages payroll and employee expense reimbursement payments for US employees.

Responsibilities (3-4 month contract):

Process semi-monthly payroll for approximately 200 US employees
Process and audit data entry and import/export interfaces for employee changes, and make corrections/adjustments as appropriate
Review, audit and balance payroll-related input to ensure accurate and timely payment of salaries and benefits, compliant with federal and state regulations
Create payroll journal entries
Research and respond to tax agency inquiries and tracers as required
Respond to employees on payroll-related inquiries in a timely manner and provide excellent customer service
Enter and audit all tax levies, child support orders and garnishments on an ongoing basis
Prepare manual checks for out-of-cycle payments
Assist in stock administration
Reconcile taxes, garnishments, stock option exercises and 401(k) contributions
Create and update payroll procedures and run ad hoc reports as necessary
Assist with special projects related to the payroll function and streamline processes
Process accurate and timely year-end reporting (W-2, W-2c, etc.)
Work closely with HR to update pay rate adjustments and bonus payouts accurately and on time
Process employee expense reimbursements in accordance with company reimbursement policy
Reconcile Concur expense reports to NetSuite

Qualifications:

5+ years of payroll experience in a mid-to-large, multi-state tech company
Experience with Paylocity preferred
Experience with NetSuite and Concur preferred
Complete understanding of payroll processes and US payroll tax laws
Excellent analytical and problem-solving skills: ability to define problems, collect data to establish facts, draw valid conclusions and report on findings for resolution
Self-starter with the ability to work independently with minimal supervision
Proficient PC skills, including the ability to perform pivots, VLOOKUPs and other functions within Excel
Working knowledge of payroll best practices
Strong work ethic and team player
High degree of professionalism
Ability to deal sensitively with confidential material
Strong interpersonal (verbal and written) communication skills
Decision-making, problem-solving and analytical skills
Organizational, multi-tasking and prioritizing skills
Quelle: Mirantis

RDO Ocata released

The community is pleased to announce the general availability of the RDO build for OpenStack Ocata for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux.
RDO is suitable for building private, public, and hybrid clouds. Ocata is the 15th release from the OpenStack project, which is the work of more than 2500 contributors from around the world (source).

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG.
The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO, and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

Interesting things in the Ocata release include:

Significant Improvements to Tempest and Tempest plugin packaging in RDO

The OpenStack-Ansible project now supports deployment on top of CentOS with the help of RDO-packaged dependencies

For cloud operators, RDO now provides packages for some new OpenStack Services:

Tacker: an ETSI MANO NFV Orchestrator and VNF Manager
Congress: an open policy framework for the cloud
Vitrage: the OpenStack RCA (Root Cause Analysis) Service
Kolla: The Kolla project provides tooling to build production-ready container images for deploying OpenStack clouds

Some other notable additions:

novajoin: a dynamic vendordata plugin for the OpenStack nova metadata service to manage automatic host instantiation in an IPA server
ironic-ui: a new Horizon plugin to view and manage baremetal servers
python-virtualbmc: VirtualBMC is a proxy that translates IPMI commands to libvirt calls, allowing projects such as OpenStack Ironic to test IPMI drivers using VMs.
python-muranoclient: a client for the Application Catalog service.
python-monascaclient: a client for the Monasca monitoring-as-a-service solution.
Shaker: the distributed data-plane testing tool built for OpenStack
Multi-architecture support: aarch64 builds are now provided through an experimental repository – enable the RDO ‘testing’ repositories to get started

From a networking perspective, we have added some new Neutron plugins that can help Cloud users and operators to address new use cases and scenarios:

networking-bagpipe: a mechanism driver for Neutron ML2 plugin using BGP E-VPNs/IP VPNs as a backend
networking-bgpvpn: an API and framework to interconnect BGP/MPLS VPNs to OpenStack Neutron networks
networking-fujitsu: FUJITSU ML2 plugins/drivers for OpenStack Neutron
networking-l2gw: APIs and implementations to support L2 Gateways in Neutron
networking-sfc: APIs and implementations to support Service Function Chaining in Neutron

From the Packstack side, we have several improvements:

We have added support for installing Panko and Magnum
Puppet 4 is now supported, and we have updated our manifests to cover the latest changes in the supported projects

Getting Started

There are three ways to get started with RDO.

To spin up a proof-of-concept cloud quickly on limited hardware, try the All-In-One Quickstart. You can run RDO on a single node to get a feel for how it works.
For a production deployment of RDO, use the Quickstart and you’ll be running a production cloud in short order.
Finally, if you want to try out OpenStack, but don’t have the time or hardware to run it yourself, visit TryStack, where you can use a free public OpenStack instance, running RDO packages, to experiment with the OpenStack management interface and API, launch instances, configure networks, and generally familiarize yourself with OpenStack. (TryStack is not, at this time, running Ocata, although it is running RDO.)

Getting Help

The RDO Project participates in a Q&A service at ask.openstack.org; for more developer-oriented content, we recommend joining the rdo-list mailing list. Remember to post a brief introduction about yourself and your RDO story. You can also find extensive documentation on the RDO docs site.

The rdo channel on Freenode IRC is also an excellent place to find help and give help.

We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (centos, centos-devel, and tripleo on irc.freenode.net); however, we have a more focused audience in the RDO venues.

Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO community pages and the CentOS Cloud SIG page. See also the RDO packaging documentation.

Join us in rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. If you prefer Facebook, we’re there too, and also Google+.
Quelle: RDO

Universal access: An early prototype for graphical VNC console in Ironic

The post Universal access: An early prototype for graphical VNC console in Ironic appeared first on Mirantis | Pure Play Open Cloud.
Ideally, users should be able to have a universal experience when it comes to accessing their nova instances, but when integrated as part of OpenStack deployment, nova instances deployed on Ironic baremetal nodes have certain limitations compared to standard virtual machines created by nova. In particular, it is not currently possible to access the graphical VNC console of these instances via the Horizon Dashboard.
To fix this problem, the ironic community has started to work on introducing a framework for graphical console access for baremetal nodes. Because each hardware vendor implements a different way of providing graphical console access, the framework is planned to be quite generic, leaving details of the actual graphical console configuration and enablement to a proposed GraphicalConsole interface of an ironic driver.
One interesting type of hardware to consider in this regard is Dell servers supporting iDRACv7 or newer (PDF). The iDRAC firmware on such servers supports native access to the server’s graphical console over the OpenVNC-compatible protocol directly, without the need for proprietary VNC proxies or clients. An administrator who has appropriate access to the iDRAC configuration can enable this built-in VNC server and set the password, connection timeout and SSL encryption options.
In order to test the VNC capabilities of such hardware, I have implemented a prototype of a graphical console interface for the DRAC driver. It uses the WS-MAN HTTP API (as do the rest of the DRAC-specific driver interfaces) to toggle the VNC server feature on and off and set its properties. I have also created a prototype of get_vnc_console method for the Ironic virt-driver in Nova. As a result, I was able to get access to the graphical console in the Horizon Dashboard for the nova instance deployed on top of a Dell R630 server managed by Ironic.

Of course, no prototype is complete without a few bugs and problems discovered during testing. Here is what I’ve been hitting my head against and hacking around while making this work:

This prototype was done prior to the generic graphical console framework implementation done in ironic. Thus the prototype implementation is, for now, overriding the existing serial console interface in an Ironic driver that was specifically created for this purpose. That means that currently it is not possible to have both a serial console and graphical console.
Conveniently, though, the proposed base GraphicalConsole interface will have the same API as the current Console (SerialConsole in the future) interface. This means that once the generic framework for graphical console interfaces is implemented in Ironic, this prototype can be plugged in as a graphical console interface basically as-is.
The interface implementation uses the low-level WS-MAN Python client calls for now, because support for managing the iDracCardService is still lacking in python-dracclient. The work to enable this functionality is already ongoing in the community, though.
The Ironic virt-driver changes are rather specific to this particular case and are meant exclusively to let me test this functionality quickly. After the generic graphical console is implemented in Ironic and the required complementary functionality is available in python-ironicclient, this will change.
The OpenVNC implementation in iDRAC does not seem to be complete, as noVNC cannot properly connect to it. The result is an apparently connected console with no graphical output (issue#). Resolving this problem involves disabling a single passed encoding parameter in the noVNC code. For now, I have had to patch noVNC, but I have not yet determined the implications of these changes on access to a standard VM graphical console.
In order for noVNC to connect, you must set the password on the VNC server, because noVNC cannot accept an empty password in its password prompt, and setting the password for the iDRAC VNC server to None/empty string still results in the VNC server requesting a password on connection. I am not sure if this should be considered a bug in the iDRAC VNC server or in noVNC.
I have not tested yet how the iDRAC VNC server works with noVNC when SSL is enabled in iDRAC VNC Server.
The iDRAC VNC server is limited to a single VNC session at a time, so it is not really suitable for a multi-user setup. On the other hand, this still might suffice for undercloud-like use cases such as TripleO.
Note that in the current prototype, all nodes running the nova-novncproxy service (or the single one specified as “vncserver_proxyclient_address” in the config for nova-compute with the Ironic virt-driver) must effectively have access to the BMC network, as the built-in iDRAC VNC server is serving from its own BMC IP address. Take care to set up such proxying securely in a clustered nova deployment.

As you can see, there’s still a way to go before this functionality is available in a production capacity. Nevertheless, this seems like an interesting and promising development in the hardware market. I consider it yet another small step toward closing the gap between baremetal and virtual servers in OpenStack, and enabling a unified user experience for the compute service.
Quelle: Mirantis

How to use GPUs in OpenShift and Kubernetes

Running general-purpose compute workloads on Graphics Processing Units (GPUs) has recently become popular in a wide range of application domains, mirroring the increased ubiquity of deploying applications in Linux containers. Thanks to community participant Clarifai, Kubernetes gained the ability to schedule GPU-dependent workloads beginning with version 1.3, enabling us to develop applications on the cutting edge of both trends with Kubernetes or OpenShift.
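In the Kubernetes releases this refers to (the 1.3-era alpha feature), a pod requested GPUs through the alpha resource name rather than today's device-plugin mechanism. Below is a minimal sketch of such a pod spec; the image name is an arbitrary example, and the resource key should be checked against your cluster version, since the API changed in later releases:

```yaml
# Sketch of a 1.3-era pod requesting one NVIDIA GPU via the alpha resource
# name. The image is a placeholder for any CUDA-enabled container.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda-app
    image: nvidia/cuda        # assumption: any CUDA-enabled image
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1
```

The scheduler then places the pod only on nodes that advertise that GPU resource.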
Quelle: OpenShift

3 types of companies harnessing the power of cloud

What if you could redefine your competitive position by enhancing your products or services?  What if you had access to unlimited computing resources to scale your business? What if you could invent a new customer need and own a new market?
Cloud is the enabler that makes these scenarios possible.
Five years ago, only a third of senior business leaders in our study “The Power of Cloud” said they had a solid plan for adopting cloud. In 2016, more than three-fourths of the executives we spoke with for our “Tailoring Hybrid Cloud” study described their cloud initiatives as part of a coordinated program or fully integrated as part of an overall strategic transformation.

Cloud-driven revolution
In “The Power of Cloud,” we developed a “cloud enablement framework” which characterizes the impact of an organization’s cloud-enabled business strategy. This framework revealed three archetypes that represent how organizations can use cloud to improve their customer value proposition while adapting their company and industry value chains:

Optimizers
Optimizers use cloud to incrementally enhance customer experiences and improve organizational efficiency. They tend to deepen customer relationships without risking the potential failure that comes with radical new business models. While optimizers can expand the value they offer through improved products and services, enhanced customer experiences, and augmented channel delivery options, they tend to bring about lower revenue and market share gains than innovators or disruptors.
Innovators
Innovators significantly improve customer experiences through cloud adoption, resulting in new revenue streams. This alters their role within an existing industry ecosystem. In doing so, they transform the organization’s role within an industry or enter an adjacent market or industry space. By extending and transforming, innovators can reconfigure elements of their value chains and value propositions to gain competitive advantage.
Disruptors
Disruptors wield cloud to invent radically new customer experiences, generating new customer needs and segments while creating new industry value chains. They provide customers with products or services they didn’t even know they wanted. They capture a unique competitive advantage by creating a new market or disrupting an existing industry. By taking risks, disruptors can gain a “first-mover” advantage that confers higher rewards.
Whether companies choose to become cloud-enabled optimizers, innovators or disruptors depends on a variety of factors, including how much risk they are willing to take and their current competitive context. Business leaders should carefully assess their organizations to determine which archetype they most closely match today, as well as which one they aspire to be. Only then can they determine how cloud can help them create new business models that promote long-term growth and profit.

Getting started
You can start these three initiatives today to begin capturing value from cloud-enabled business models:

Establish shared responsibility for cloud adoption across line-of-business executives and IT management to help ensure cloud strategy and governance permeates business objectives.
Look within and beyond the organization’s borders and ecosystem boundaries to envision the optimal value that can come from cloud adoption.
Determine whether the organization seeks to be an optimizer, innovator or disruptor, and use cloud to reconstruct the business model to realize that potential.

Will your organization become a cloud optimizer, innovator or disruptor?
For more on what it takes to become a cloud optimizer, innovator or disruptor, including more steps to get started, please read “The Power of Cloud: Driving Business Model Innovation.”
For more on how hybrid cloud can answer an enterprise’s unique needs, please read “Tailoring Hybrid Cloud:  Designing the Right Mix for Innovation, Efficiency and Growth.”
The post 3 types of companies harnessing the power of cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

The cloud stories you can’t miss at InterConnect

Should you stay home from an event where more than 20,000 representatives from potential customers and partners gather in one place? Of course not. That’s why you absolutely must join me and my colleagues for IBM InterConnect.
There are far too many benefits to name, but if you still need convincing that InterConnect is a must for your business, take a look at my top four cloud sessions.
1. Session 7354: A discussion with Girls Who Code founder Reshma Saujani
Does your business have a mission bigger than its next quarter? As technology leaders, we have a responsibility to do more than innovate. We should help solve the pressing social issues of our day. As part of the all-new Innovation Talks, IBM will welcome Reshma Saujani, founder and CEO of Girls Who Code.
Saujani will advocate a new model of female leadership focused on embracing risk and failure, promoting mentorship and boldly charting your own course. How can your organization change the world?
2. Session 6186: American Airlines: Booking automation using IBM ODM
If you’re striving to transform your business processes or build a seamless automated experience, this session might be the answer. Hear how one of the world’s largest airlines, American Airlines, harnessed the power of IBM Operational Decision Manager (ODM) to transform internal processes and automate bookings.
3. Session 3069: Using cognitive to shift from reactive to proactive IT operations: A client panel
Do you ever feel like your business is constantly rushing to solve the latest problem? Do you wish you could address problems before they became urgent? Join an expert panel of IBM clients to learn how to unleash the power of cognitive computing to extract deep insights from IT systems. It’s time to stop putting out fires and start being more proactive.
4. Session 7286A: Trends in microservices architectures for deploying cloud apps
As your business needs increase, you may struggle to succeed unless you continue to innovate. This session will help empower your organization to bring faster and more flexible responses to market dynamics. Microservices can be a key strategic software architecture pattern to successfully expand customer engagement and increase the realization of wider cloud application adoption. Join IBM experts to learn how to transition to a microservices-based architecture.
These are just a few highlights of the many exciting sessions at the conference. Don’t miss this world-class opportunity. If you’re still not signed up for InterConnect 2017, be sure to register today.
The post The cloud stories you can’t miss at InterConnect appeared first on news.
Quelle: Thoughts on Cloud

Cloud-based video analytics drive actionable insights

Keeping up with hundreds, thousands or even tens of thousands of hours of video can be a difficult task for any enterprise, institution or service provider. Beyond just storage, extracting valuable insights from this seemingly unending and rapidly expanding pool of data might seem beyond impossible.
Whether videos are generated in marketing campaigns, training videos, lectures, interviews, video surveillance, B-roll, video production or even from Internet of Things (IoT) devices or cameras using computer vision, it’s safe to say that the amount of video will continue to skyrocket as digital initiatives continue to expand.
One of the powerful advantages of video is that it can be an incredibly effective and versatile engagement and monitoring tool across many use cases in all industries. The downside is that most enterprises and institutions must devote a huge amount of manpower to monitoring and tagging those videos with descriptive metadata if they are to extract any actionable insights or information. In addition, turnaround times for people watching these videos can be days, weeks or even months, which can limit the effectiveness of actionable insights, since they might be needed on a short timeline.
As a result, most video is left completely untapped and business-impacting insights that could be generated from these videos are left on the enterprise’s “cutting room floor.”
Imagine if you could teach a cognitive service to watch and listen to a video, recognize what is in it, and do so in a fraction of the time.
How VideoRecon analyzes content
That’s exactly what our cloud-based video analytics platform, VideoRecon, enables for our users. Using cognitive computing technology in the IBM Cloud, it automatically sees, listens and understands video content in a fraction of the actual viewing time needed by a human.

VideoRecon was built by our digital development company, BlueChasm, which specializes in building open digital and cognitive platforms on IBM Bluemix. VideoRecon was originally born out of one of BlueChasm’s prototypes but grew organically into a full enterprise platform, due to the incredible feedback of our clients over the last year.  
In a nutshell, VideoRecon watches and listens to videos; identifies key objects, themes or events within the footage; tags and timestamps those events; and returns the metadata to the user via an API, our VideoRecon portal or one of our flexible integrations (such as Box). It also supports full audio transcription capabilities for use cases that need more than just basic concepts and descriptors.

VideoRecon at work on Bluemix
From a Bluemix backend services perspective, VideoRecon temporarily uploads new video footage into the IBM Cloud Object Storage platform where our other VideoRecon microservices (deployed via IBM Containers and mostly written in server-side Swift) can access and analyze the content. Next, the IBM Watson Visual Recognition API and some of our unique services analyze the footage and identify its contents.

BlueChasm also leans on the IBM Watson Speech to Text API to convert audio content from the videos into text. Transcripts are fed into text analytics algorithms to identify specific words or phrases and determine the most relevant keywords across all the spoken dialogue.
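As an illustration of the kind of post-transcription keyword analysis described above, here is a deliberately naive frequency-based extractor. This is not BlueChasm’s actual algorithm; the stopword list and the scoring are assumptions made purely for the sketch:

```python
from collections import Counter

# Minimal stopword list -- an assumption for this sketch, not a real lexicon.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

def top_keywords(transcript, n=3):
    """Return the n most frequent non-stopword terms in a transcript."""
    words = [w.strip(".,;!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

transcript = ("The engine inspection found the engine mount loose; "
              "engine repair is scheduled.")
print(top_keywords(transcript))  # 'engine' ranks first
```

A production system would replace the frequency count with a proper relevance model, but the input/output shape (transcript in, ranked keywords out) is the same.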
Whenever an interesting visual, audio object or event is detected, the VideoRecon service creates a tag denoting what is recognized and a timestamp of the point in the video when either the object was recognized or the event occurred. The tags are stored in the IBM Cloudant fully managed JSON document store before they are provided back to the user. You can view a basic demo of our original prototype of VideoRecon here.
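To make the shape of that tag-and-timestamp output concrete, here is a hypothetical sketch of such records together with a small helper that pulls out every moment a given object appears. The field names are assumptions for illustration, not the real VideoRecon API:

```python
# Hypothetical tag/timestamp records, as a detection service like the one
# described might return them. Field names are invented for this sketch.
tags = [
    {"tag": "forklift", "timestamp": 12.5},
    {"tag": "person",   "timestamp": 14.0},
    {"tag": "forklift", "timestamp": 97.2},
]

def timestamps_for(tag_name, records):
    """Return the timestamps (in seconds) at which a tag was detected."""
    return [r["timestamp"] for r in records if r["tag"] == tag_name]

print(timestamps_for("forklift", tags))  # [12.5, 97.2]
```

A client consuming the API could use exactly this kind of filter to jump straight to the relevant points in hours of footage.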

Video analytics in the real world
BlueChasm is continuing to work and collaborate with clients and partners of VideoRecon in many industries, including oil and gas, media and entertainment, retail and distribution, government, and higher education.
Because VideoRecon is a cognitive platform that improves with custom training, there is typically upfront and ongoing work required to train the platform on industry-specific data and industry-specific use cases. For example, a retailer might teach VideoRecon to identify specific types of products, while a law enforcement organization might teach it to recognize certain types of vehicles, weapons or other objects of interest. A media and entertainment company might use VideoRecon to “watch” and classify hours of B-roll or video footage so that they can much more effectively and swiftly execute on their video production process.

Working with IBM cognitive analytics and cloud data services gives BlueChasm the power to transform the way today’s industries handle all kinds of tasks, from monitoring to content management to actionable insights.
Learn more about IBM and BlueChasm.
The post Cloud-based video analytics drive actionable insights appeared first on news.
Quelle: Thoughts on Cloud

Use cloud integration to build customer relationships

It’s no secret that the business world is changing rapidly. Sometimes it can seem like it’s changing faster than you can keep up. This pace is only going to continue to increase. New companies, and even new industries, appear every day. To stay competitive, you’ll need to create inventive and interactive customer experiences that bring you closer to your target audience.
It’s not just about selling anymore. Customers are increasingly searching for a connection with businesses where they spend their money. Traditional commerce is not set up to facilitate these kinds of relationships, but that’s exactly where the digital economy shines. Transforming into a digital business might seem like a daunting undertaking, but it’s all about connections, whether it’s connections between people or processes. IBM Cloud Integration can help you achieve these connections by allowing you to move data easily across your business network and by providing the flexibility you need to serve your customers.
The emergence and evolution of cognitive APIs is fueling this economic atmosphere by lowering the barrier of entry for new businesses and enabling existing businesses to transform. The goal of this evolution is to achieve a point of presence, which is when a business connects with a customer at the perfect moment with the perfect offer. The point of presence allows customers and businesses to transcend the buyer/seller relationship and enter a mutually beneficial partnership.
Cognitive APIs provide the capability to easily connect applications and data wherever they exist. By combining cognitive services with information from Internet of Things devices, such as weather and traffic data, complex processes like supply chain become more intelligent. You get a whole new view where potential problem areas are identified and integrated. Cognitive services can even predict problems and recommend changes before they pose a risk to businesses or the public.
Imagine a dairy shipment headed from Wisconsin to California on a refrigerated truck that’s due to have a layover in Nevada for a night. Hours before the truck arrives, a sensor in the facility’s cooler shows that the equipment is not functioning properly. Rather than waiting to find out about the problem when the cargo is lost, businesses can use a supply chain solution to locate and contact an alternative facility. The driver is seamlessly rerouted and the business avoids any compromise to the cargo or the schedule. That’s just one small glimpse into the power of a fully integrated, cognitive API-fueled digital business.
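The rerouting decision in that scenario can be sketched in a few lines. This is a toy illustration of the decision step only; the facility names and the temperature threshold are invented for the example:

```python
# Toy cold-chain reroute check: if the planned facility's cooler reading
# breaches the threshold, fall back to the first in-spec alternative.
MAX_COOLER_TEMP_C = 4.0  # assumed dairy storage limit, for illustration

facilities = {
    "reno-1": {"cooler_temp_c": 9.5},  # sensor shows a failing cooler
    "reno-2": {"cooler_temp_c": 2.8},
}

def choose_facility(planned, alternatives, readings):
    """Return the planned facility if in spec, else an in-spec alternative."""
    if readings[planned]["cooler_temp_c"] <= MAX_COOLER_TEMP_C:
        return planned
    for alt in alternatives:
        if readings[alt]["cooler_temp_c"] <= MAX_COOLER_TEMP_C:
            return alt
    raise RuntimeError("no facility within temperature spec")

print(choose_facility("reno-1", ["reno-2"], facilities))  # reno-2
```

In a real integrated supply chain, the readings would stream in from IoT sensors and the reroute would trigger notifications to the driver and the facilities; the point here is simply that the decision logic itself is small once the data is connected.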
The Trends and Directions session at InterConnect 2017 will demonstrate the importance of points of presence. It will dive into the emerging technology that creates these crucial integrations and enables them to interact both inside and outside of your organization. You’ll learn not only what you can accomplish with IBM Cloud Integration, but also how far-ranging and significant integration can help you achieve next-generation customer experiences.
Join me Tuesday, March 21 from 3:45 PM to 4:30 PM in Bayside A. If you still haven’t registered for InterConnect, be sure to sign up today.
The post Use cloud integration to build customer relationships appeared first on news.
Quelle: Thoughts on Cloud

Network Deployment Engineer

The post Network Deployment Engineer appeared first on Mirantis | Pure Play Open Cloud.
We are looking for a talented OpenStack Network Deployment Engineer who is willing to work at the intersection of IT and software engineering, is passionate about open source, and is able to design and deploy cloud network infrastructure built on top of open-source components.

Responsibilities:

Plan and deploy networks / SDNs for OpenStack and Kubernetes cloud solutions for our customers
Work with NFV components to deliver end-to-end network solutions for our customers
Extend functionality for OpenStack networking and support developers in a network architecture
Facilitate knowledge transfer to the customers during deployment projects
Work with geographically distributed international teams on technical challenges and process improvements
Contribute to Mirantis’ deployment knowledge base
Continuously improve the tooling and technology set

Minimum requirements:

At least 1 year of practical administration or monitoring experience with Linux (RHEL, CentOS, Ubuntu) as a server platform. Experience with the Linux operating system itself as well as with production-level software and hardware is required, as is practical experience in organizing highly available clusters
At least 3 years of practical administration experience in legacy networks at CCNP level minimum (certification NOT required)
At least 2 years of practical experience with Bash, the conventional Linux administrator’s scripting language
Ability to understand and troubleshoot code written in Python
English language at an intermediate level
Ability to travel abroad for 3-6 months if needed

Will be a plus:

Practical experience in Python programming
Practical experience with a configuration automation tool (Puppet, Ansible, Salt)
Knowledge and experience of SDN and NFV
CCNP or CCIE certifications (or similar)
Knowledge of OpenStack
Knowledge of Juniper Contrail
Knowledge of Linux containers

We offer:

High-energy atmosphere of a young company
Build large-scale, innovative systems for mission-critical use
Collaborate with exceptionally passionate, talented and engaging colleagues
Competitive compensation package with a strong benefits plan
Lots of freedom for creativity and personal growth

DON’T PANIC. JUST BUILD, OPERATE, TRANSFER and APPLY!
Quelle: Mirantis