Solutions Delivery Executive

The post Solutions Delivery Executive appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis, Inc. is looking for an experienced Solutions Delivery Executive to help lead our clients on their journey to the cloud. This highly visible, senior leadership role within the Mirantis Services organization is a functional peer to the Enterprise Sales executives aligned to our most strategic global accounts.

Your top-level responsibilities will include: overall ownership of the end-to-end service delivery experience, building and executing multi-year client account plans, establishing and maintaining corporate governance, driving cross-functional collaboration and communications with client executives and business stakeholders, and ensuring operational excellence and successful business outcomes for the client.

Candidates considered for this role must have a good mix of strong operational and business skills combined with strategic thought leadership and a mind for tactical execution. Acting as a liaison between the client and Mirantis worldwide, you should be a strong advocate for the client, but with the goals of sound business judgment and mutually assured success for both parties.

Primary Responsibilities

- Lead the global service delivery experience; act as the single point of ownership and accountability for all client service delivery related activities
- Build and maintain trusted advisor relationships with influential client decision-makers for the successful adoption and deployment of cloud services and technologies
- Work in collaboration with the client Sales team to create and execute multi-year business plans to accelerate the adoption of cloud across the client's business units, exceed revenue goals, and drive client referrals and references
- Manage the client-level P&L; drive revenue recognition and achieve and/or exceed quarterly PS revenue, cost, utilization and profitability objectives
- Ensure client-specific operational, change management and compliance practices are implemented and adhered to; continually seek to improve processes, reduce complexity and drive predictability for clients
- Act as escalation lead for all service delivery-related issues that could impact the client relationship
- Participate in contract and financial negotiations (MSAs, SOWs, ELAs, T&Cs)

Qualifications

- 10+ years of experience in an infrastructure/cloud solutions services company, ideally as an executive within a large enterprise IT organization, consulting firm or global systems integration company
- Bachelor's degree (Business, Science, Technology, Engineering, Math) or equivalent experience
- Analytical decision-making and detail-oriented thinking combined with strong management skills
- Demonstrated experience managing large, cross-functional teams within matrix organizations
- Superior interpersonal, written, verbal, listening and presentation skills; ability to communicate cross-functionally with the most senior-level executives
- Highly organized, able to track multiple concurrent tasks and activities simultaneously; first-hand Change Management and Business Process Mapping experience
- History of leading successful business transformations using cloud and related technologies
- In-depth knowledge of OpenStack or similar cloud technologies (AWS, Azure, CloudStack)
- Ability to travel freely between client sites and Mirantis HQ as needed

What We Offer

- Partner with exceptionally passionate, talented and engaging colleagues.
- Implement cloud solutions for some of the best-known brands in the industry for use in mission-critical applications.
- High-energy atmosphere of a young company, competitive compensation package with a strong benefits plan and stock options.
- Environment that fosters creativity and personal growth.
Source: Mirantis

Senior CI Engineer

The post Senior CI Engineer appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis is the leading global provider of software and services for OpenStack(TM), a massively scalable and feature-rich open source cloud operating system. OpenStack is used by hundreds of companies, including AT&T, Cisco, Symantec, NASA, Dell, PayPal and many more.

Mirantis has more experience delivering OpenStack clouds to more customers than any other company in the world. We build the infrastructure that makes OpenStack work. We are proud to serve on the OpenStack Foundation Board and to be one of the top contributors to OpenStack.

Mirantis is looking for a qualified candidate with experience in continuous integration, release engineering, or quality assurance to join our CI Services team, which designs and implements CI/CD pipelines to build and test product artifacts and deliverables of the Mirantis OpenStack distribution.

Responsibilities:

- design and implement CI/CD pipelines,
- develop a unified CI framework based on existing tools (Zuul, Jenkins Job Builder, Fabric, Gerrit, etc.),
- define and manage test environments required for different types of automated tests,
- drive cross-team communications to streamline and unify build and test processes,
- track and optimize hardware utilization by CI/CD pipelines,
- provide and maintain specifications and documentation for CI systems,
- provide support for users of CI systems (developers and QA engineers),
- produce and deliver technical presentations at internal knowledge transfer sessions, public workshops and conferences,
- participate in the upstream OpenStack community, working together with the OpenStack Infra team on common CI/CD tools and processes.

Required Skills:

- Linux system administration: package management, services administration, networking, KVM-based virtualization;
- scripting with Bash and Python;
- experience with the DevOps configuration management methodology and tools (Puppet, Ansible);
- ability to describe and document systems design decisions;
- familiarity with development workflows: feature design, release cycle, code-review practices;
- English, both written and spoken.

Will Be a Plus:

- knowledge of CI tools and frameworks (Jenkins, Buildbot, etc.);
- release engineering experience: branching, versioning, managing security updates;
- understanding of release engineering and QA practices of major Linux distributions;
- experience in test design and automation;
- experience in project management;
- involvement in major open source communities (developer, package maintainer, etc.).

What We Offer:

- challenging tasks, providing room for creativity and initiative,
- work in a highly distributed international team,
- work in the open source community, contributing patches upstream,
- opportunities for career growth and relocation,
- business trips for meetups and conferences, including OpenStack Summits,
- strong benefits plan,
- medical insurance.
Source: Mirantis

Automate bare metal server provisioning using Ironic (bifrost) and the ansible deploy driver

The post Automate bare metal server provisioning using Ironic (bifrost) and the ansible deploy driver appeared first on Mirantis | The Pure Play OpenStack Company.
On our team, we mostly conduct various research in OpenStack, so we use bare metal machines extensively. To make our lives somewhat easier, we've developed a set of simple scripts that enable us to back up and restore the current state of the file system on a server. It also lets us switch between different backups very easily. The set of scripts is called multi-root (https://github.com/vnogin/multi-root).
Unfortunately, we had a problem: in order to use this tool, we had to have our servers configured in a particular way, and we faced several issues with manual provisioning:

It is not possible to set up more than one bare metal server at a time using a Java-based IPMI application
The Java-based IPMI application does not properly handle disconnection from the remote host due to connectivity problems (you have to start installation from the very beginning)
The bare metal server provisioning procedure was really time consuming
For our particular case, in order to use multi-root functionality we needed to create software RAID and make required LVM configurations prior to operating system installation

To solve these problems, we decided to automate bare metal node setup, and since we are part of the OpenStack community, we decided to use bifrost instead of other provisioning tools. Bifrost was a good choice for us as it does not require other OpenStack components.
Lab structure
This is how we manage disk partitions and how we use software RAID on our machines:

As you can see here, we have the example of a bare metal server, which includes two physical disks.  Those disks are combined using RAID1, then partitioned by the operating system.  The LVM partition then gets further partitioned, with each copy of an operating system image assigned to its own partition.
This is our network diagram:

In this case we have one network to which our bare metal nodes are attached. Also attached to that network is the Ironic server. A DHCP server assigns IP addresses to the various instances as they're provisioned on the bare metal nodes, or prior to the deployment procedure (so that we can bootstrap the destination server).
Now let's look at how to make this work.
How to set up bifrost with ironic-ansible-driver
So let's get started.

First, add the following line to the /root/.bashrc file:
# export LC_ALL="en_US.UTF-8"

Ensure the operating system is up to date:
# apt-get -y update && apt-get -y upgrade

To avoid issues related to MySQL, we decided to install it prior to bifrost and set the MySQL password to "secret":
# apt-get install git python-setuptools mysql-server -y

Using the following guideline, install and configure bifrost:
# mkdir -p /opt/stack
# cd /opt/stack
# git clone https://git.openstack.org/openstack/bifrost.git
# cd bifrost

We need to configure a few parameters related to localhost prior to the bifrost installation. Below, you can find an example of an /opt/stack/bifrost/playbooks/inventory/group_vars/localhost file:
cat << EOF > /opt/stack/bifrost/playbooks/inventory/group_vars/localhost
---
ironic_url: "http://localhost:6385/"
network_interface: "p1p1"
ironic_db_password: aSecretPassword473z
mysql_username: root
mysql_password: secret
ssh_public_key_path: "/root/.ssh/id_rsa.pub"
deploy_image_filename: "user_image.qcow2"
create_image_via_dib: false
transform_boot_image: false
create_ipa_image: false
dnsmasq_dns_servers: 8.8.8.8,8.8.4.4
dnsmasq_router: 172.16.166.14
dhcp_pool_start: 172.16.166.20
dhcp_pool_end: 172.16.166.50
dhcp_lease_time: 12h
dhcp_static_mask: 255.255.255.0
EOF
As you can see, we're telling Ansible where to find Ironic and how to access it, as well as the authentication information for the database so state information can be retrieved and saved. We're specifying the image to use, and the networking information.
Notice that there's no default gateway for DHCP in the configuration above, so I'm going to fix it manually after the install.yaml playbook execution.
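A typo in any of these addresses tends to surface only later, as a failed PXE boot. As a quick, purely illustrative sanity check (not part of bifrost), you can verify that the router and the DHCP pool fall inside the subnet implied by dhcp_static_mask; the values below mirror the example file:

```python
import ipaddress

def check_dhcp_config(router, pool_start, pool_end, mask):
    """Return True if the DHCP pool and router share one subnet."""
    network = ipaddress.ip_network(f"{router}/{mask}", strict=False)
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    return start in network and end in network and start < end

# Values from the example localhost file above
print(check_dhcp_config("172.16.166.14", "172.16.166.20",
                        "172.16.166.50", "255.255.255.0"))  # True
```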
Install ansible and all of bifrost's dependencies:
# bash ./scripts/env-setup.sh
# source /opt/stack/bifrost/env-vars
# source /opt/stack/ansible/hacking/env-setup
# cd playbooks

After that, let's install all the packages that we need for bifrost (Ironic, MySQL, RabbitMQ, and so on)…
# ansible-playbook -v -i inventory/localhost install.yaml

… and the Ironic staging drivers, with the already merged patches that enable the Ironic ansible driver functionality:
# cd /opt/stack/
# git clone git://git.openstack.org/openstack/ironic-staging-drivers
# cd ironic-staging-drivers/

Now you're ready to do the actual installation.
# pip install -e .
# pip install "ansible>=2.1.0"
You should see typical "installation" output.
In the /etc/ironic/ironic.conf configuration file, add the "pxe_ipmitool_ansible" value to the list of enabled drivers. In our case, it's the only driver we need, so let's remove the other drivers:
# sed -i '/enabled_drivers =*/cenabled_drivers = pxe_ipmitool_ansible' /etc/ironic/ironic.conf
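As an aside, if you prefer not to rely on sed for config surgery, the same edit can be scripted with Python's configparser. This is just an illustrative alternative; the demonstration below operates on a throwaway copy rather than the real /etc/ironic/ironic.conf:

```python
import configparser
import os
import tempfile

def set_enabled_drivers(path, drivers):
    """Rewrite enabled_drivers in the [DEFAULT] section of an ini-style conf."""
    conf = configparser.ConfigParser()
    conf.read(path)
    conf["DEFAULT"]["enabled_drivers"] = ",".join(drivers)
    with open(path, "w") as f:
        conf.write(f)

# Demonstrate on a throwaway copy, not the real /etc/ironic/ironic.conf
path = os.path.join(tempfile.mkdtemp(), "ironic.conf")
with open(path, "w") as f:
    f.write("[DEFAULT]\nenabled_drivers = pxe_ipmitool,agent_ipmitool\n")

set_enabled_drivers(path, ["pxe_ipmitool_ansible"])

check = configparser.ConfigParser()
check.read(path)
print(check["DEFAULT"]["enabled_drivers"])  # pxe_ipmitool_ansible
```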

If you want to enable cleaning and disable disk shredding during the cleaning procedure, add these options to /etc/ironic/ironic.conf:
automated_clean = true
erase_devices_priority = 0

Finally, restart the Ironic conductor service:
# service ironic-conductor restart

To check that everything was installed properly, execute the following command:
# ironic driver-list | grep ansible
| pxe_ipmitool_ansible | test |
You should see the pxe_ipmitool_ansible driver in the output.
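If you script this check, you can parse the table output instead of eyeballing it. A small sketch (the sample line mimics the output above; the parsing is illustrative, not an official client API):

```python
def drivers_from_table(output):
    """Extract driver names from the first column of an ironic CLI table."""
    names = []
    for line in output.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip blank lines, separator rows, and empty first columns
        if len(cells) >= 2 and cells[0] and not set(cells[0]) <= set("-+ "):
            names.append(cells[0])
    return names

sample = """
| pxe_ipmitool_ansible | test |
"""
print(drivers_from_table(sample))  # ['pxe_ipmitool_ansible']
```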
Finally, add the default gateway to /etc/dnsmasq.conf (be sure to use the IP address for your own gateway).
# sed -i '/dhcp-option=3,*/cdhcp-option=3,172.16.166.1' /etc/dnsmasq.conf

Now that everything's set up, let's look at actually doing the provisioning.
How to use ironic-ansible-driver to provision bare-metal servers with custom configurations
Now let's look at actually provisioning the servers. Normally, we'd use a custom ansible deployment role that satisfies Ansible's requirements regarding idempotency, to prevent issues that can arise if a role is executed more than once; but because this is essentially a spike solution for us to use in the lab, we've relaxed that requirement. (We've also hard-coded a number of values that you certainly wouldn't in production.) Still, by walking through the process you can see how it works.

Download the custom ansible deployment role:
curl -Lk https://github.com/vnogin/Ansible-role-for-baremetal-node-provision/archive/master.tar.gz | tar xz -C /opt/stack/ironic-staging-drivers/ironic_staging_drivers/ansible/playbooks/ --strip-components 1

Next, create an inventory file for the bare metal server(s) that need to be provisioned:
# cat << EOF > /opt/stack/bifrost/playbooks/inventory/baremetal.yml
---
server1:
  ipa_kernel_url: "http://172.16.166.14:8080/ansible_ubuntu.vmlinuz"
  ipa_ramdisk_url: "http://172.16.166.14:8080/ansible_ubuntu.initramfs"
  uuid: 00000000-0000-0000-0000-000000000001
  driver_info:
    power:
      ipmi_username: IPMI_USERNAME
      ipmi_address: IPMI_IP_ADDRESS
      ipmi_password: IPMI_PASSWORD
      ansible_deploy_playbook: deploy_custom.yaml
  nics:
    -
      mac: 00:25:90:a6:13:ea
  driver: pxe_ipmitool_ansible
  ipv4_address: 172.16.166.22
  properties:
    cpu_arch: x86_64
    ram: 16000
    disk_size: 60
    cpus: 8
  name: server1
  instance_info:
    image_source: "http://172.16.166.14:8080/user_image.qcow2"
EOF

# export BIFROST_INVENTORY_SOURCE=/opt/stack/bifrost/playbooks/inventory/baremetal.yml
As you can see, above we have added all the information required for bare-metal node provisioning using IPMI. If needed, you can add any number of bare-metal servers here, and all of them will be enrolled and deployed later.
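When you have more than a handful of machines, hand-editing that YAML gets error-prone. Below is a hypothetical helper that builds the same per-node structure for any number of servers; the field names follow the inventory format above, while the MACs, IPMI addresses and credentials are placeholders you would substitute. To write the file, you could dump the resulting dict with a YAML library such as PyYAML.

```python
def make_node(name, index, mac, ipmi_addr, ipmi_user, ipmi_pass):
    """Build one bifrost inventory entry mirroring the example above."""
    return {
        name: {
            "ipa_kernel_url": "http://172.16.166.14:8080/ansible_ubuntu.vmlinuz",
            "ipa_ramdisk_url": "http://172.16.166.14:8080/ansible_ubuntu.initramfs",
            "uuid": f"00000000-0000-0000-0000-{index:012d}",
            "name": name,
            "driver": "pxe_ipmitool_ansible",
            "driver_info": {
                "power": {
                    "ipmi_address": ipmi_addr,
                    "ipmi_username": ipmi_user,
                    "ipmi_password": ipmi_pass,
                    "ansible_deploy_playbook": "deploy_custom.yaml",
                }
            },
            "nics": [{"mac": mac}],
            "ipv4_address": f"172.16.166.{20 + index}",
            "properties": {"cpu_arch": "x86_64", "ram": 16000,
                           "disk_size": 60, "cpus": 8},
            "instance_info": {
                "image_source": "http://172.16.166.14:8080/user_image.qcow2"
            },
        }
    }

# Placeholder MACs and IPMI addresses; substitute your own
inventory = {}
for i, mac in enumerate(["00:25:90:a6:13:ea", "00:25:90:a6:13:eb"], start=1):
    inventory.update(make_node(f"server{i}", i, mac,
                               f"10.0.0.{i}", "IPMI_USERNAME", "IPMI_PASSWORD"))
print(sorted(inventory))  # ['server1', 'server2']
```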
Finally, you'll need to build a ramdisk for the Ironic ansible deploy driver and create a deploy image using DIB (diskimage-builder). Start by creating an RSA key that will be used for connectivity from the Ironic ansible driver to the bare metal host being provisioned:
# su - ironic
# ssh-keygen
# exit

Next, set the environment variables for DIB:
# export ELEMENTS_PATH=/opt/stack/ironic-staging-drivers/imagebuild
# export DIB_DEV_USER_USERNAME=ansible
# export DIB_DEV_USER_AUTHORIZED_KEYS=/home/ironic/.ssh/id_rsa.pub
# export DIB_DEV_USER_PASSWORD=secret
# export DIB_DEV_USER_PWDLESS_SUDO=yes

Install DIB:
# cd /opt/stack/diskimage-builder/
# pip install .

Create the bootstrap and deployment images using DIB, and move them to the web folder:
# disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 ironic-ansible -o ansible_ubuntu
# mv ansible_ubuntu.vmlinuz ansible_ubuntu.initramfs /httpboot/
# disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 devuser cloud-init-nocloud -o user_image
# mv user_image.qcow2 /httpboot/

Fix file permissions:
# cd /httpboot/
# chown ironic:ironic *

Now we can enroll and deploy our bare metal node using ansible:
# cd /opt/stack/bifrost/playbooks/
# ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml
Wait for the provisioning state to read "available"; a bare metal server needs to cycle through a few states, and can be cleaned, if needed. During the enrollment procedure, the node can be cleaned by the shred command. This process takes a significant amount of time, so you can disable or fine-tune it in the Ironic configuration (as you saw above where we enabled it).
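Rather than re-running ironic node-list by hand, the wait can be scripted. Below is a sketch of a generic poller; in real use, get_state would wrap a call such as ironic node-show, but here it is injected as a plain function so the logic can be demonstrated with a simulated node:

```python
import time

def wait_for_state(get_state, target, timeout=3600, interval=5):
    """Poll get_state() until it returns target, hits 'error', or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state == target:
            return state
        if state == "error":
            raise RuntimeError("node entered error state")
        time.sleep(interval)
    raise TimeoutError(f"node never reached {target!r}")

# Simulated node that becomes available after two polls
states = iter(["enroll", "cleaning", "available"])
result = wait_for_state(lambda: next(states), "available", interval=0)
print(result)  # available
```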
Now we can start the actual deployment procedure:
# ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
If deployment completes properly, you will see the provisioning state for your server as "active" in the Ironic node-list:
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name    | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| 00000000-0000-0000-0000-000000000001 | server1 | None          | power on    | active             | False       |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+

Now you can log in to the deployed server via ssh using the login and password that we defined above during image creation (ansible/secret) and then, because the infrastructure to use it has now been created, clone the multi-root tool from GitHub.
Conclusion
As you can see, bare metal server provisioning isn't such a complicated procedure. Using the Ironic standalone server (bifrost) with the Ironic ansible driver, you can easily develop a custom ansible role for your specific deployment case and deploy any number of bare metal servers simultaneously, in an automated fashion.
I want to say thank you to Pavlo Shchelokovskyy and Ihor Pukha for your help and support throughout the entire process. I am very grateful to you guys.
Source: Mirantis

Bluemix Local System can help banks ease operational pressures and boost innovation

The banking sector is facing some tough challenges.
New entrants into the sector are quickly gaining customers through innovative services, while increasingly stringent regulations add to the pressure. The market's disruptor/disrupted dynamic makes the need for greater operational agility, in response to new business models and rapidly evolving customer preferences, even more urgent.
Many banks' lack of agility in aligning IT operations with business imperatives is particularly glaring given that cloud capabilities promise consistent advantages.
Choice with consistency and hybrid integration are two principles of the IBM Cloud platform. These ideas underline the strong focus of IBM on the hybrid cloud.
Many companies, especially banks, have their cloud strategy still largely articulated around the virtualization phase of the cloud journey. Meanwhile, they’re lagging behind in other areas, such as orchestration and automation in the deployment and management of application environments. This is where IBM Bluemix Local System can play a pivotal role in streamlining the overall governance of the application life cycle while reducing risks and guaranteeing the support of open technologies.
The Bluemix Local System is the evolution of the IBM PureApplication System. As such, it uses the concept of application patterns: they are pre-defined templates of application environments, providing the operating system support, scripting tools and orchestration capabilities needed to fully automate the deployment and management of those environments, no matter how complex.
The adoption of patterns enables a significant acceleration in the life cycle of middleware and applications by automating low-level, trivial tasks. Thus, high-value resources can focus predominantly on building and enhancing applications.
Since the beginning, PureApplication and its patterns have had high affinity with the applications in the industrialized core of organizations, where availability, stability and cost optimization matter most. Whenever a pattern is available (off-the-shelf or built ad-hoc) for such applications, the result is acceleration of their cloudification.
With such capabilities, PureApplication has proven to be the ideal platform for these cloud-enabled applications, as many clients have learned.
The story goes further with the arrival of Bluemix Local System. As the name indicates, this new system can host a local instance of Bluemix, reaching the other end of the application realm: the innovation edge. Here, the main focus is on speed and agility to build new apps, with the objective of engaging more deeply with existing and new clients.
In this arena, traditional, big banks must confront newcomers. These are mainly start-ups with a high propensity to build applications directly on the cloud, thereby taking advantage of all the services offered in it. On IBM Bluemix, they can make use of unique capabilities in the catalogue of services, in areas including cognitive (Watson), analytics, and Internet of Things (IoT) to build applications.
Bluemix Local System is the ideal platform for the coexistence of cloud-enabled and cloud-native applications.

Using this architecture, banks can run their industrialized core on patterns, achieving an agility posture. They can build new apps for their innovation edge with the Bluemix runtime and services. The ultimate goal is not just a coexistence, but rather, a fully symbiotic experience.
The hybrid character of this architecture is augmented because both sides can expand in the public or dedicated cloud (Bluemix public and dedicated, PureApplication Service).
There are a number of pacesetters in the banking industry that are competing with the disruptors because they have evolved. They’ve done so by unleashing the full integration of cloud-native and cloud-enabled applications using Bluemix Local System.
Learn more about Bluemix Local System and patterns.
The post Bluemix Local System can help banks ease operational pressures and boost innovation appeared first on news.
Source: Thoughts on Cloud

Maven Multi-Module Projects and OpenShift

There is no need to move away from Maven's multi-module approach to building and deploying applications when working with OpenShift, if that is a process you're familiar with. It can become a powerful tool for helping break apart existing applications into more consumable microservices, as it goes some way towards enabling each component to have its own lifecycle, regardless of how the source code repository is managed. Sometimes it may require a little bit of customisation to give you the behaviour you need, and hopefully this post gives you some insight into how that customisation is achieved.
Source: OpenShift

How to Install and Run Tempest

Tempest is a set of integration tests to run against an OpenStack cluster. In this blog I'm going to show you how to install Tempest from its git repository, how to install all requirements, and how to run tests against an OpenStack cluster.

I'm going to use a fresh installation of CentOS 7 and an OpenStack cluster deployed by Packstack. Once you have that, follow the instructions below.

Tempest Installation
You have two options for installing Tempest: you can install it through RPM, or you can clone Tempest from the GitHub repository. If you choose installation through RPM, follow this link.

Installation from GitHub repository
Now you can clone upstream Tempest, or you can clone Red Hat's fork of upstream Tempest. Red Hat's fork provides config_tempest.py, a configuration tool that will generate tempest.conf for you, which can be handy.

[1.] Install dependencies:

$ sudo yum install -y gcc python-devel libffi-devel openssl-devel

[2.] Clone tempest:

$ git clone https://github.com/openstack/tempest.git

Or (RedHat’s fork):

$ git clone https://github.com/redhat-openstack/tempest.git

[3.] Install pip, for example:

$ curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
$ sudo python get-pip.py

[4.] Install tempest globally in the system. If you don’t want to do that, skip this step and continue reading.

$ sudo pip install tempest/

Install tempest in a virtual environment
Sometimes you don’t want to install things globally in the system. For this reason you may want to use a virtual environment. I’m going to explain installation through virtualenv and tox.

Setting up Tempest using virtualenv
[1.] Install virtualenv:

$ easy_install virtualenv

Or through pip:

$ pip install virtualenv

[2.] Enter tempest directory you’ve cloned before:

$ cd tempest/

[3.] Create a virtual environment and let’s name it .venv:

$ virtualenv .venv
$ source .venv/bin/activate

[4.] Install requirements:

(.venv) $ pip install -r requirements.txt
(.venv) $ pip install -r test-requirements.txt

NOTE: If problems occur during requirements installation, it may be due to an old version of pip, upgrade may help:

(.venv) $ pip install pip --upgrade

[5.] After dependencies are installed, run following commands, which install tempest within the virtual environment:

(.venv) $ cd ../
(.venv) $ pip install tempest/

Or this command does the same without using pip:

$ python setup.py install

If you need to trigger installation in developer mode, run:

(.venv) $ python setup.py develop

`setup.py develop` comes from limitations of [pbr](http://docs.openstack.org/developer/pbr/). If you are interested, [here is](https://setuptools.readthedocs.io/en/latest/setuptools.html#-mode) an explanation of the difference between `install` and `develop`.

Setting up Tempest using TOX
[1.] Install tox:

$ easy_install tox

Or if you want to use pip:

$ pip install tox

[2.] Install tempest:

$ tox -epy27 --notest
$ source .tox/py27/bin/activate

This will create a virtual environment named `.tox`, install all dependencies (*requirements.txt* and *test-requirements.txt*) and Tempest within it. If you check the `tox.ini` file, you'll see that tox actually runs the Tempest installation in develop mode, which you could run manually as explained above.

Optional:

[3.] If you want to expose system-site packages, tox will do it for you. Deactivate the environment you are currently in (if you followed the previous step) and create another environment:

(py27) $ deactivate
$ tox -eall-plugin --notest
$ source .tox/all-plugin/bin/activate

[4.] Then, if you want to install plugin test packages based on the OpenStack components installed, let this script do it:

(all-plugin) $ sudo python tools/install_test_packages.py
(all-plugin) $ python setup.py develop

Generate tempest.conf
You can read about tempest.conf and what it is used for in this documentation.
If you want to create tempest.conf, let config_tempest.py do it for you. The tool is part of the Tempest RPM (check this documentation), or if you don't want to install Tempest globally, you can clone Red Hat's Tempest fork and install it within a virtual environment as explained above.

Red Hat's Tempest fork
Create a virtual environment as already mentioned, and source the credentials (if you installed the OpenStack cluster with Packstack, the credentials are saved in /root/):

(.venv) $ source /root/keystonerc_admin

And run config tool:

(.venv) $ python tools/config_tempest.py --debug identity.uri $OS_AUTH_URL \
identity.admin_password $OS_PASSWORD --create

After this, `./etc/tempest.conf` is generated.

NOTE:
If you are running OSP, you need to add a new argument to the config_tempest tool:

(.venv) $ ./tools/config_tempest.py object-storage.operator_role swiftoperator

That's because OSP uses a lowercase name for the Swift operator_role; however, Tempest's default value is "SwiftOperator".
To override the default value, run the config tool like this:

$ python tools/config_tempest.py --debug identity.uri $OS_AUTH_URL \
identity.admin_password $OS_PASSWORD \
object-storage.operator_role swiftoperator --create

Running tests
If you've installed Tempest and have a tempest.conf, you can start testing.
To run tests you can use testr or ostestr. If you want to run tempest unit tests, check this out.

Note: the following commands are run within the virtual environment you've created before.
To run specific tests run for example:

$ python -m testtools.run tempest.api.volume.v2.test_volumes_list

Or:

$ ostestr --regex tempest.api.volume.v2.test_volumes_list
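Under the hood, --regex simply filters the discovered test IDs before they are run. A toy illustration of that selection step (the test IDs are made up for the example):

```python
import re

def select_tests(test_ids, pattern):
    """Return the test IDs matching a regex, as ostestr-style filtering does."""
    rx = re.compile(pattern)
    return [t for t in test_ids if rx.search(t)]

all_tests = [
    "tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list",
    "tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_server",
]
selected = select_tests(all_tests, r"tempest\.api\.volume\.v2\.test_volumes_list")
print(len(selected))  # 1
```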

Alternatively you can use tox, for example:

$ tox -efull

Run only tests tagged as smoke:

$ tox -esmoke

Source: RDO

How a mobile development platform can help IT pros cut through clutter

Have you ever felt overwhelmed by the number of mobile gadgets you see every day?
If so, you are not alone. In 2015, the total number of mobile devices worldwide (7.9 billion) eclipsed the world’s population (7.4 billion). Though smartphone manufacturers often pitch their products as if they are fashion accessories, a recent study by the IBM Institute for Business Value uncovered that companies around the globe are driving the adoption of mobile because it makes good financial sense. Sixty-two percent of the executives surveyed as part of the study said that their top mobile initiatives achieved return on investment (ROI) in 12 months or less.
Chief information officers, enterprise architects, software development managers and other information technology professionals should plan for a growing number of mobile projects.
IBM a leader in mobile development platforms
IBM is a recognized leader in helping enterprises launch and accelerate their mobility efforts. The research report “The Forrester Wave: Mobile Development Platforms, Q4 2016,” Forrester Research Inc, 24 October 2016, “included 12 vendors in the assessment” and “evaluated vendors against 32 criteria.”
It states: “The MobileFirst Foundation on-premises offering was once the most full-featured of IBM’s solutions, but the Bluemix cloud solution is now functionally equivalent, driving IBM’s move to the Leaders category.”
Customers increasingly demand conversational interfaces for interactions with brands. Such customer experiences are complex to implement with only traditional application development skills and tools. Companies, like Elemental Path, grand prize recipient of the Watson Mobile Developer Challenge, chose IBM Watson to simplify the task of building conversational interfaces. Availability of IBM Watson technologies for developers is just one of the features that differentiate IBM MobileFirst platform from other mobile development platforms on the market.
Forrester reported that “IBM is best fit for shops that focus on data integration, especially complex integration scenarios.”
The IBM hybrid cloud infrastructure helps mobile application developers ensure a high degree of application and data isolation, security, auditability, and compliance with data privacy and other regulatory requirements. According to the Forrester report, IBM “customers cited the openness of the platform as a reason for purchase, particularly its front-end tooling partnerships with Cordova and Ionic.”
Apple partnership and IBM mobile expertise
IBM expertise in mobile is based in part on the experience of working in partnership with Apple to help global businesses transform enterprise mobility.
As part of the Apple partnership, IBM developed and delivered more than 100 applications using the MobileFirst platform, covering 14 industries. The applications transformed work for professions ranging from wealth advisors to flight attendants.
For example, working with SAS, the largest airline in Scandinavia, IBM developed a Passenger+ app that enabled flight attendants to access a 360-degree view of each passenger’s past flight preferences, interests, and purchasing decisions. With this information, IBM MobileFirst became essential in helping SAS deliver a more elite and personalized flying experience.
Download a copy of “The Forrester Wave: Mobile Development Platforms, Q4 2016” and find out why the IBM Mobile Development Platform is a leader in its field.
The post How a mobile development platform can help IT pros cut through clutter appeared first on news.
Source: Thoughts on Cloud