Hybrid service management: An IBM view of the challenges ahead

As I go through the final stages of preparation for the Gartner IT Operations Strategies & Solutions Summit 2017, I wanted to write this blog post to start a conversation about service management and some of the challenges it faces.
I am looking forward to hearing from Gartner analysts and industry players about what they are seeing in the market. At this event in 2016, the primary focus was on DevOps; in 2017, speakers will focus on infrastructure and operations automation. I see these two topics as very closely related, and the shift is a natural evolution of the IOSS Summit's conversation.
If you are an IT operations professional at any level in your company, it is fair to say that you are tasked with managing an ever-growing level of complexity. It might be that different teams and units at your company are empowered to choose cloud providers and the tools that run operations. Ultimately, however, when something goes wrong, it will probably end up coming back to you, your team and your colleagues.
I’d say that in 2017, the objectives for IT operations have to be:

getting out in front of issues before they occur
guiding the teams and units in how they should implement their application
determining how to consume cloud services

Through a partnership model, the IT operations team can insist that the right levels of automation be put in place. While it might be OK for one team to avoid automation for a single application or service, that approach will not work for you when you are responsible for hundreds of applications and services. This is where IBM wants to help you, the IT operations professional.
IBM can help DevOps team members become agile, giving them the tools to monitor their apps during development and testing. IBM also helps the IT operations team get structured so it can streamline the management of all the infrastructure and applications running across the company.
As the infrastructure and the number of applications grow while the IT operations team does not necessarily expand, the team needs to find ways to become more efficient. They need to employ analytics tools that give them insight into the operations data they are collecting, enabling them to become more proactive. Then they can reach out to the DevOps team or line-of-business manager and flag an issue before it impacts what matters most: their end users, the reputation of their business and bottom-line revenue.
I am really excited to be attending the IOSS Summit again in 2017. My colleague Andrew Hans and I are jointly presenting the IBM strategy for service management. IBM also has two demo booths and countless client meetings planned. So please stop by and say hello. Come see how we can help you to get agile, get structured and get efficient.
Source: Thoughts on Cloud

Cognitive tool helps professionals discover food trends

Like fashion, the world of food is ever-changing. What was trendy last month may not be in favor next month. For chefs, producers, retailers and food services companies, knowing what’s current is a vital part of staying relevant.
That’s why Barcelona-based Reimagine Food, an international hub devoted to boosting food innovation, developed SmartFoodS, a discovery tool that helps food professionals know about all the latest trends, industry news, academic studies, new products, start-up companies and other innovations in cuisine.

The company developed the tool on IBM Cloud, specifically using the Watson Natural Language Understanding API on IBM Bluemix, which provides language detection, keyword extraction and sentiment analysis. It processes natural-language questions to offer up relevant results using the IBM Retrieve and Rank service, the Apache Solr search server and algorithms that get right to the heart of the query. The tool scours 40,000 information sources.
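As a rough illustration of the kind of call involved, a Watson Natural Language Understanding request for keywords and sentiment could look like the following from the command line; the endpoint URL, version date, credentials and sample text are placeholders for illustration, not details confirmed by Reimagine Food:

curl -u "$NLU_USERNAME:$NLU_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"text": "Plant-based burgers are showing up on fast-casual menus across Europe.", "features": {"keywords": {}, "sentiment": {}}}' \
  "https://gateway.watsonplatform.net/natural-language-understanding/api/v1/analyze?version=2017-02-27"

The JSON response includes the detected language, extracted keywords with relevance scores and a sentiment score, which is the kind of signal a discovery tool can aggregate across thousands of sources.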
“By building this discovery tool on the IBM Cloud, we have the ability to easily expand as the tool consumes more and more data, and as more and more people use it,” said Francesc Saldaña, corporate services manager at Reimagine Food.
As they do, professionals and organizations in the food production industry can keep cooking with confidence using a tool from Spain, producer of half the world’s olive oil and home to the world’s oldest restaurant, Sobrino de Botín, according to the Guinness Book of World Records.
Learn more about the power of Watson APIs.
Source: Thoughts on Cloud

OpenStack Operations Engineer

Mirantis is the leading global provider of software and services for OpenStack™, a massively scalable and feature-rich open source cloud operating system. OpenStack is used by hundreds of companies, including AT&T, Cisco, HP, Internap, NASA, Dell, GE, and many more. As a leading global provider, Mirantis offers the leading OpenStack technology platform coupled with a unique, cost-effective global services delivery model founded on years of deep software engineering experience for demanding Fortune 1000 companies.
Mirantis is inviting enthusiastic operations engineers who will extend OpenStack to support enterprise-grade private IaaS platforms for the company's customers. We need talented engineers who are willing to work at the intersection of IT and software engineering, are passionate about open source, and are not afraid of maintaining a huge codebase written by the best developers in the field.
Responsibilities:

System administration on Linux (Ubuntu, CentOS, etc.)
Technical support of OpenStack products for customers
Testing components for cloud applications using Python in case of alarm conditions
Troubleshooting OpenStack installations and fixing bugs in OpenStack components
Participating in public activities: user groups, conferences and the company's blog, both in Russia and the USA

Requirements:

Excellent Linux system administration and troubleshooting skills
Good knowledge of Python
Good understanding of networking concepts and protocols

Nice to have:

Experience working with and maintaining large Python codebases
Experience working with virtualization solutions (KVM, Xen)
Understanding of NAS/SAN
Awareness of distributed file systems (Gluster, Ceph)
Experience configuring and extending monitoring tools (Nagios, Ganglia, Zabbix)
Experience working with configuration management tools (Chef, Puppet, Cobbler)
Experience deploying and extending OpenStack is a plus
Source: Mirantis

3 ways to outpace customer expectations with cloud

What prevents some companies from meeting customer expectations while others are thriving? Knowing the answer to that question could be the difference between failure and success.
According to a Dynatrace survey, 79 percent of users will abandon a mobile app if it fails to meet expectations.
And guess what, an engaging customer experience isn’t just essential for consumer business. It’s also critical to your business-to-business (B2B) customers and partnerships. If you can’t manage expectations for B2B customers fast enough, they too may go elsewhere.
The shift to cloud technology is rewarding the leaders and exposing the laggards. In nearly every industry, the organizations excelling at creating delightful customer experiences are also leading in cloud transformation. More than twice as many high-performing organizations as low-performing ones report having fully integrated cloud initiatives.
If you study these top companies closely, you’ll notice three things every business needs in its cloud strategy. I am convinced that organizations with these characteristics will continue to win business and entice customers with personal, engaging and interactive experiences.
1. Flexibility
Change is the only constant in the digital economy. A hybrid cloud model gives you the flexibility to adapt and grow. This flexibility allows you to go after the big opportunities knowing that the location of your company's data won't be an obstacle. The value is in more than just cost and speed, though these remain key rationales for the public cloud. The hybrid model also enables you to realize new insights across your entire ecosystem and quickly move priorities and resources to meet opportunities.
2. Freedom to choose
Never forget you have options. You need to be able to easily change where an application runs based on your business needs. Which cloud helps you realize the most value: public or private? What if that changes? If you're locked in with a public vendor, for example, moving data without disruption can be very costly and time-consuming. Look for solutions that give you the freedom to choose, with easy application portability regardless of your architectural environment across any cloud. Solutions like IBM WebSphere Application Server Version 9 are built to put clients in control, not cloud providers.
3. Cognitive insights
We’re living in the cognitive era, inseparable from cloud innovation. Cognitive is the way to outthink the competition and make sense of information. What can your data do?  Bring new customer experiences, new applications and even new business models, for starters.
IBM Cloud offers a host of accessible cognitive capabilities which you can build into your applications. IBM WebSphere Connect, for example, provides the ability to seamlessly connect your on-premises apps to hundreds of Bluemix cloud services like IBM Watson, IBM Cloudant and IBM dashDB.
You can rapidly infuse apps with cognitive capabilities to gain operational insights and dazzle your customers. Use these cognitive capabilities to breathe new life into your existing investments and extend their value while still putting the customer first.
If you are looking to learn more about how IBM Cloud can help you maintain a spot at the top of your industry, take the WebSphere Cloud Readiness assessment. You’ll see how to take advantage of hybrid capabilities and receive suggestions on how to cut costs, speed time to market and create new business models with WebSphere on cloud.
 
Source: Thoughts on Cloud

Everything you ever wanted to know about using etcd with Kubernetes v1.6 (but were afraid to ask)

In the world of cloud native applications, etcd is the only stateful component of the Kubernetes control plane. This makes matters simpler for an administrator, but the Kubernetes 1.6 release throws a wrench into the process of maintaining nines of reliability. But don't fear: I'll make sure you're covered.
etcd data store format: v2 or v3?
etcd version 3.0.0 and up supports two different data stores: v2 and v3, but it’s important to know what version you’re using because it impacts your ability to back up your information.
In Kubernetes 1.5, the default data store format was v2, but v3 was still available if you set it explicitly. For Kubernetes v1.6, however, the default data store is etcd v3, but you still need to think about which format the various components that surround it are using.
For example, Calico, Canal, and Flannel can only write data to the etcd v2 data store, so combining their etcd data with the Kubernetes etcd data store can complicate the maintenance of etcd if Kubernetes is using v3.
Users blindly upgrading from Kubernetes v1.5 to v1.6 may be in for a surprise. (Just one reason it's important to always read the release notes!) Kubernetes v1.6 changes the default etcd backend from v2 to v3, so before you start the upgrade, manually migrate your etcd data to v3. To ensure data consistency, the migration requires shutting down all kube-apiservers.
If you don’t want to migrate just yet, you can pin kube-apiserver back to v2 etcd with the following option:
--storage-backend=etcd2
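If you do decide to migrate, a minimal sketch of the offline procedure might look like the following. It assumes a systemd-managed kube-apiserver and the default etcd data directory, and it uses the etcdctl migrate subcommand shipped with etcd 3.x, so check the documentation for your exact etcd and Kubernetes versions before relying on it:

# Stop every kube-apiserver so nothing writes to the v2 store during migration
sudo systemctl stop kube-apiserver

# Convert the existing v2 keyspace into the v3 store (path is the etcd data directory)
ETCDCTL_API=3 etcdctl migrate --data-dir /var/lib/etcd

# Restart kube-apiserver, now using the v3 backend (the 1.6 default)
sudo systemctl start kube-apiserver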
Backing up etcd
All configuration data for Kubernetes is stored inside etcd, so in the event of an irrecoverable disaster, an operator can use an etcd backup to recover all data. Etcd creates snapshots regularly on its own, but daily backups stored on a separate host are a good strategy for disaster recovery for Kubernetes.
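As a sketch of what that strategy might look like in practice (the schedule, paths and backup host below are placeholders, not a recommendation for your environment), a cron entry on an etcd node could create a dated v2 backup and copy it to a separate machine:

# /etc/cron.d/etcd-backup -- illustrative only; adjust paths and destination host
0 2 * * * root etcdctl backup --data-dir /var/lib/etcd/ --backup-dir /var/backups/etcd-$(date +\%F) && rsync -a /var/backups/ backup-host:/srv/etcd-backups/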
Backup methods
etcd has different backup methods for v2 and v3, and each has its own advantages and disadvantages.  The v3 backup is much cleaner and consists of a single, compact file, but it has one major drawback: it won't back up or recover v2 data.
This means that if you have only etcd v3 data (for example, if your network plugin doesn’t consume etcd), you can use the v3 backup, but if you have any v2 data–even if it’s mixed with v3 data–you must use the v2 backup method.
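If you're not sure whether anything is still writing v2 data, one rough check is to list keys through both API versions against the same cluster (a sketch using the default local endpoint; adjust the endpoint and TLS settings to match your deployment). Any output from the v2 listing suggests you still need the v2 backup method:

# v2 keyspace (used by network plugins such as Calico, Canal or Flannel)
ETCDCTL_ENDPOINTS=http://127.0.0.1:2379 etcdctl ls /

# v3 keyspace (Kubernetes 1.6 keeps its data under /registry by default)
ETCDCTL_API=3 ETCDCTL_ENDPOINTS=http://127.0.0.1:2379 etcdctl get /registry --prefix --keys-only | head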
Let’s look at each of these methods.
Etcd v2 backups
The etcd v2 backup method creates a directory structure with a single WAL file. You can perform a backup online without interrupting etcd cluster operations. To back up an etcd v2+v3 data store, use the following command:

etcdctl backup --data-dir /var/lib/etcd/ --backup-dir /backupdir
You can find the official procedure for etcd v2 restore here, but here is an overview of the basic steps. The challenging part is to rebuild the cluster one node at a time.

1. Stop etcd on all hosts
2. Purge /var/lib/etcd/member on all hosts
3. Copy the backup to /var/lib/etcd/member on the first etcd host
4. Start up etcd on the first etcd host with --force-new-cluster
5. Set the correct PeerURL on the first etcd host to the IP of the node instead of 127.0.0.1
6. Add the next host to the cluster
7. Start etcd on the next host with --initial-cluster set to the existing etcd hosts plus itself
8. Repeat steps 6 and 7 until all etcd nodes have joined
9. Restart etcd normally (using existing settings)

You can see these steps in the following script:
#!/bin/bash -e

# Change as necessary
RESTORE_PATH=${RESTORE_PATH:-/tmp/member}

# Extract node data from etcd config
source /etc/etcd.env || source /etc/default/etcd

function with_retries {
  local retries=3
  set -o pipefail
  for try in $(seq 1 $retries); do
    ${@}
    [ $? -eq 0 ] && break
    if [[ "$try" == "$retries" ]]; then
      exit 1
    fi
    sleep 3
  done
  set +o pipefail
}

this_node=$ETCD_NAME
node_names=($(echo $ETCD_INITIAL_CLUSTER | awk -F'[=,]' '{for (i=1;i<=NF;i+=2) { print $i }}'))
node_endpoints=($(echo $ETCD_INITIAL_CLUSTER | awk -F'[=,]' '{for (i=2;i<=NF;i+=2) { print $i }}'))
node_ips=($(echo $ETCD_INITIAL_CLUSTER | awk -F'://|:[0-9]' '{for (i=2;i<=NF;i+=2) { print $i }}'))
num_nodes=${#node_names[@]}

# Stop and purge etcd data
for i in `seq 0 $((num_nodes - 1))`; do
  ssh ${node_ips[$i]} sudo service etcd stop
  ssh ${node_ips[$i]} sudo docker rm -f ${node_names[$i]} || : # Kargo specific
  ssh ${node_ips[$i]} sudo rm -rf /var/lib/etcd/member
done

# Restore on first node
if [[ "$this_node" == ${node_names[0]} ]]; then
  sudo cp -R $RESTORE_PATH /var/lib/etcd/
else
  rsync -vaz -e "ssh" --rsync-path="sudo rsync" "$RESTORE_PATH" ${node_ips[0]}:/var/lib/etcd/
fi

ssh ${node_ips[0]} "sudo etcd --force-new-cluster 2> /tmp/etcd-restore.log" &
echo "Sleeping 5s to wait for etcd up"
sleep 5

# Fix member endpoint on first node
member_id=$(with_retries ssh ${node_ips[0]} ETCDCTL_ENDPOINTS=https://localhost:2379 etcdctl member list | cut -d':' -f1)
ssh ${node_ips[0]} ETCDCTL_ENDPOINTS=https://localhost:2379 etcdctl member update $member_id ${node_endpoints[0]}
echo "Waiting for etcd to reconfigure peer URL"
sleep 4

# Add other nodes
initial_cluster="${node_names[0]}=${node_endpoints[0]}"
for i in `seq 1 $((num_nodes - 1))`; do
  echo "Adding node ${node_names[$i]} to ETCD cluster..."
  initial_cluster="$initial_cluster,${node_names[$i]}=${node_endpoints[$i]}"
  with_retries ssh ${node_ips[0]} ETCDCTL_ENDPOINTS=https://localhost:2379 etcdctl member add ${node_names[$i]} ${node_endpoints[$i]}
  ssh ${node_ips[$i]} "sudo etcd --initial-cluster=$initial_cluster &>/dev/null" &
  sleep 5
  with_retries ssh ${node_ips[0]} ETCDCTL_ENDPOINTS=https://localhost:2379 etcdctl member list
done

echo "Restarting etcd on all nodes"
for i in `seq 0 $((num_nodes - 1))`; do
  ssh ${node_ips[$i]} sudo service etcd restart
done

sleep 5

echo "Verifying cluster health"
with_retries ssh ${node_ips[0]} ETCDCTL_ENDPOINTS=https://localhost:2379 etcdctl cluster-health

Etcd v3 backups
The etcd v3 backup creates a single compressed file. Remember, it cannot be used to back up etcd v2 data, so be careful before using this method. To create a v3 backup, run the command:

ETCDCTL_API=3 etcdctl snapshot save /backupdir/snapshot.db
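As a quick sanity check on the resulting file (a sketch, assuming the same path as above), etcdctl can report the snapshot's hash, revision count and size:

ETCDCTL_API=3 etcdctl snapshot status /backupdir/snapshot.db --write-out=table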

The official procedure for etcd v3 restore is documented here, but as you can see, the general process is much simpler than it was for v2; the v3 restore process is capable of rebuilding the cluster without such granular steps.
The steps required are as follows:

Stop etcd on all hosts
Purge /var/lib/etcd/member on all hosts
Copy the backup file to each etcd host
source /etc/default/etcd on each host and run the following command:

ETCDCTL_API=3 etcdctl snapshot restore BACKUP_FILE \
  --name $ETCD_NAME \
  --initial-cluster "$ETCD_INITIAL_CLUSTER" \
  --initial-cluster-token "$ETCD_INITIAL_CLUSTER_TOKEN" \
  --initial-advertise-peer-urls $ETCD_INITIAL_ADVERTISE_PEER_URLS \
  --data-dir $ETCD_DATA_DIR

Tuning etcd
Because etcd is used to store Kubernetes' configuration information, its performance is crucial to the efficient operation of your cluster. Fortunately, etcd can be tuned to operate better under various deployment conditions. All write operations must be synchronized across the etcd nodes, which leads us to the following functional requirements:

etcd needs fast access to disk
etcd needs low latency to other etcd nodes
etcd needs to synchronize data across all etcd nodes before writing data to disk

Therefore, the following recommendations can be made:

The etcd store should not be located on the same disk as a disk-intensive service (such as Ceph)
etcd nodes should not be spread across datacenters
The number of etcd nodes should be 3; you need an odd number to prevent “split brain” problems, but more than 3 can be a drag on performance
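
If you are unsure whether a given disk is fast enough for etcd, one rough way to check fsync latency is with fio. This is only a sketch: the block size and data size below approximate etcd's small WAL writes, and the target directory is a placeholder that must exist and live on the disk you want to test:

fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-disk-check

Look at the fdatasync latency percentiles that fio reports; a commonly cited guideline is that the 99th percentile should stay below roughly 10 ms for etcd to be comfortable.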

The default etcd settings are not ideal for the low disk I/O typically seen in test environments. In such environments, consider setting the following values:

ETCD_ELECTION_TIMEOUT=5000 #default 1000ms
ETCD_HEARTBEAT_INTERVAL=250 #default 100ms

Note that raising these values has a negative impact on read/write performance and delays elections, because the cluster takes longer to realize something is wrong. If the values are too low, however, the cluster will assume there is a problem and perform re-elections frequently whenever network or disk latency is poor.
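Exactly where these values go depends on how etcd was deployed. As a sketch for an installation that reads an environment file (the same /etc/etcd.env or /etc/default/etcd sourced by the restore script above), it might look like this:

# /etc/default/etcd (or /etc/etcd.env) -- illustrative placement
ETCD_ELECTION_TIMEOUT=5000
ETCD_HEARTBEAT_INTERVAL=250

# Apply the change
sudo service etcd restart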
Troubleshooting etcd
Here are some problems we’ve run into with etcd, and the solutions we came up with to fix them.

Problem: My restore fails and I see "etcdmain: database file (/var/lib/etcd/member/snap/db) of the backend is missing" in my etcd log.
Solution: The etcd v2 backup took place while etcd was writing a snapshot file, so the backup file is not usable. The only solution is to restore from another backup file.

Problem: Why is etcd not listening on port 2379?
Solution: There are several possible reasons. First, ensure that the etcd service is running. Next, check the etcd service logs on each host to see if there are issues with election and/or quorum. A majority of the cluster must be online in order for any data to be read or written; this prevents split-brain problems in which different data is written across the cluster. That means a 3-node cluster must have at least 2 functional nodes.

Problem: Why does etcd perform so many re-elections?
Solution: Try raising ETCD_ELECTION_TIMEOUT and ETCD_HEARTBEAT_INTERVAL. Also, try reducing the amount of load on the host. You can find more information here.
Your turn
So that’s our take on etcd and the issues you need to think of when it comes to Kubernetes. Do you know of any tips we left out, or did we miss your troubleshooting question? Let us know in the comments!
Source: Mirantis

Redefine your digital media strategy around data

The digital media industry has long faced a unique data challenge. Radio, television and film productions generate massive unstructured data sets, which can measure up to hundreds of terabytes total for a single large-scale film project.
The amount of data generated by these digital media productions is growing rapidly. In fact, IDC has projected that the world's total amount of data will grow to 44 zettabytes (a zettabyte is a billion terabytes) by 2020, and a whopping 80 percent of that growth will be from unstructured content, much of it created by the digital media industry.
Managing and storing all this digital content can be both complicated and costly.
Not only do digital media companies manage the data generated by their current projects, they also store and organize all the data associated with their previous broadcasts, shows and movies. In this way, they take on the role of archivists, not only to manage their own assets, but also to archive them in a way that provides historical value to society. You’ve seen this come into play when broadcasters share old audio clips and video footage in order to provide context and historical perspective on current events.
In the past, managing this massive amount of data in a way that makes it easily searchable and accessible has been labor intensive. The early process required that content be manually logged, producing basic details of the footage such as subject matter, date and time. Eventually this was automated using media asset management (MAM) software, which saved on labor costs but did not necessarily deepen broadcasters' understanding of the usable assets in their archives.
Today, we are seeing that the industry is rethinking its data strategy in an effort to redefine the value of content and how that impacts the bottom line. Organizations are discovering new ways to use and manage their data in a way that turns what used to be a costly obligation into a potential revenue-building asset.
Danmon Group, an international broadcast and media solutions provider headquartered in Denmark, delivers solutions ranging from complete turnkey television and radio stations to outside-broadcast vehicles and satellite communications as well as virtual studios, archive & storage solutions and workflow management software.
When Danmon sought a new strategy for helping its customers store and manage their massive collections of unstructured object data, it identified two critical needs: a global, scalable cloud infrastructure and the ability to reach into its data library to quickly pinpoint specific content on demand.
Danmon chose a combination of IBM solutions to fulfill its goals. First, the company taps IBM Aspera High Speed Data Transfer to upload content to the IBM Cloud data center. Then, to efficiently manage all the data, Danmon turns to IBM Cloud Object Storage and IBM Watson.
Data is ingested into the IBM Cloud frame by frame while its metadata is analyzed by IBM Watson Visual Recognition. Danmon uses this Watson API to apply visual recognition to clips and footage and add meta tags on the fly, with a level of granularity that, according to Danmon, far exceeds that of a MAM system.
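For a sense of what such a call looks like, classifying a single frame with the Watson Visual Recognition v3 API could be sketched as follows; the API key, version date and image URL are placeholders, and this is an illustration rather than Danmon's actual pipeline:

curl -s "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify?api_key=$VR_API_KEY&version=2016-05-20&url=https://example.com/frames/frame-000123.jpg"

The JSON response contains classifier labels and confidence scores, which is the raw material for the kind of automatic meta tags described above.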
This could be a game changer, helping Danmon's customers turn a once burdensome data storage and archiving process into a potentially huge asset: a library of easily searchable, accessible content in the cloud.
Another Danish company that has recently turned to IBM Cloud Object Storage and Watson technologies is any.cloud, which provides cloud hosting for customers across a wide range of industries, including digital media.
Any.cloud deployed IBM Cloud Object Storage to manage the massive amounts of unstructured data generated by its digital media customers, and it is currently testing IBM Watson Visual Recognition for a new service. The expansion of any.cloud’s data archive offerings, by combining its 24×7 storage availability on IBM Cloud with Watson services and technology, can help clients derive new insights and value from the data they store on any.cloud, giving both any.cloud and its client base new revenue streams.
Learn more about IBM at NAB 2017, and find out more about IBM Cloud Object Storage.

Source: Thoughts on Cloud

Director of Alliances & Channels

We're disrupting IT infrastructure and rewriting the rules of cloud for enterprises, developers and service providers, and we want your help. As the leader for Mirantis' Alliances and Channels, you'll play a critical role in successfully driving our strategic partnerships to maximize revenue and the reach of Mirantis within and across our ecosystem and the cloud market overall. If this sounds like the challenge you're looking for, then we want you at Mirantis!
You will help define the strategy for the Mirantis global alliances, prioritize our key partnerships, and then ensure Mirantis is executing to plan for each identified alliance, maximizing the Mirantis opportunity within each joint strategy. You will also identify a small set of key channel partners aligned with the corporate strategy, in segments such as managed service providers, to develop joint solutions and drive go-to-market activity in the field. You will own the worldwide channel program and its execution, onboarding identified partners and executing channel business, enablement and programs.
Key Responsibilities:

Own the product marketing for Mirantis Cloud Platform and our MMO offering within our Build/Operate/Transfer strategy
Perform primary and secondary market research (end users, go-to-market partners, resellers) to understand market problems, current solutions, key metrics and buying processes
Develop a detailed go-to-market strategy as part of a larger product and marketing plan for the fiscal year, aligned to the buyer's journey for private cloud
Provide key market segment use cases, requirements and competitive analysis
Ensure messaging is clearly defined and consistently communicated to the market
Develop and deliver extensive customer-facing content, tools and internal information, as well as training and enablement to support the full sales cycle
Develop TCO analysis tools for use by the sales teams against primary competition, and enable sales to effectively use them for business value selling
Help direct product launches
Serve as product evangelist in customer meetings, conferences and other forums

What We're Looking For:

10+ years of experience in global alliances and/or channel partnerships for software technology products and ecosystems, preferably cloud and/or open source; prior OpenStack and/or Kubernetes experience is a plus
Relationships and a proven ability to forge strong partnerships and joint strategies culminating in tangible plans driven globally across two or more companies
Deep understanding of private and public cloud, with an intimate understanding of key business problems and buying processes in the space; experience with key verticals such as service provider, telco, automotive/manufacturing and financial services is a plus
Proven understanding and execution of channel programs and channel partner management
Adept at understanding the compensation specifics of different routes to market when multiple companies are involved in joint solutions, and methodologies for maximizing sales and channel focus on driving desired outcomes
Experience working in and across all aspects of a business to form strategies, align agreement and execute plans: Engineering, Sales, Product Management, Product Marketing, Services, Support, Legal, Finance and Executives
Excellent written and verbal communication skills, including public speaking
Strong interpersonal and team skills, with the ability to interact with customers and partners and foster cross-functional teamwork among sales, marketing and product teams
Strong negotiation skills and the ability to interact with partners in difficult situations, arriving at mutually beneficial agreements in the best interest of Mirantis
Bachelor's degree required; MBA a plus
Source: Mirantis

OpenShift with Artifactory: A Powerful PaaS with a JFrog Stack

If you’re containerizing, cloudifying, and doing DevOps, you want your tools to work together nicely so you don’t have the headache of managing infrastructure. We are making it even easier to make your enterprise-grade devops environment work great with JFrog Artifactory on OpenShift – Red Hat’s container platform based on Kubernetes.
Source: OpenShift

The Next Generation of Red Hat OpenShift Online is Here

Today we announce the initial availability of the next generation of Red Hat OpenShift Online, a cloud-based container platform that enables developers to build, deploy, host, and scale container-based applications. OpenShift Online can dramatically improve developer performance by providing a ready, easy to use container-based platform from any web browser, IDE, and command line – with support for local development. Re-engineered to be built on the same powerful and agile technology as Red Hat OpenShift Container Platform, OpenShift Online is one of the first multi-tenant cloud application platforms powered by docker-format containers and Kubernetes-based container orchestration.
Source: OpenShift

How to build a mobile application in 3 easy steps with Kinetise

Kinetise, a rapid mobile app development platform, is one of the fastest ways in the world to produce mobile applications. It takes just three easy steps:

Put together your complete app in the drag and drop editor.
Generate the native source code.
Modify the code with Xcode / Android studio (if needed).

Using Kinetise to build apps won't take months of development by an army of developers. It requires only a few hours of work by a single developer.
Even better, here's what you can have developed and deployed on your phone in two minutes: a completely custom, advanced, feature-rich, well-performing native mobile application that connects to any RESTful API and is generated from scratch, but without hand-coding.

Here is how the development process works, provided in step-by-step instructions:
1. Put together a complete app in drag and drop editor
The Kinetise editor enables you to put together layout and navigation and, more importantly, create functionality, for example:

Lists, galleries, videos, charts and maps, fed with API data
Forms with text inputs, radio buttons, checkboxes and dropdowns
Reusable, nested screens (detail screens), passing logic and hierarchy when user navigates through them
Dynamically defined, parametrized and authenticated API calls (GET, POST and more)
Local variables
Local database for offline scenarios
User roles
GPS tracking
Animating overlays
Calling, texting, opening files and websites
Camera access and QR-code/barcode scanning
Content caching rules

In short, Kinetise offers the power and flexibility a traditional developer has when coding manually, but without the manual coding. Users can put all of these elements together in any flow and combination to get an app complete and ready for deployment or app store submission.
2. Generate native source code
There are a variety of reasons someone might need to generate the source code:

They want to add a feature (or SDK) that the Kinetise editor doesn't offer.
The enterprise company they work for simply requires the source code as a general policy (for security).
They are developing an app from scratch, outside Kinetise, but they want to take a part of Kinetise code and incorporate it into their project.
They want to learn from it; after all, checking how the pros did it is always a good way for beginners to learn.

Just generate the code and download it to your hard drive.
3. Modify the code with Xcode / Android Studio
What you get are actually two source code bases: Objective-C for iOS and Java for Android. Both are well structured, clean, high-performing and editable as needed. The source code can be tweaked, modified or enhanced with SDKs and the like.
As mentioned before, the source code may not really be needed, because the editor itself allows you to build and compile a complete application. However, having the source code definitely removes any possible lock-in.
Check out Kinetise on Bluemix. Let’s see what you can develop in just a few hours.
NOTE: It is very easy to integrate Kinetise apps with other IBM Web services, provided they expose data via RESTful API. Kinetise editor includes a very powerful request settings feature, which enables quick and dynamic configuring of all the API calls.
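In practice, "expose data via RESTful API" simply means an HTTP endpoint returning JSON that the editor's request settings can point at. As a purely illustrative example (the URL, token and fields below are made up, not part of any real IBM or Kinetise service):

curl -s -H "Authorization: Bearer $API_TOKEN" "https://api.example.com/v1/menu-items?limit=20"
# Returns JSON such as: {"items":[{"id":1,"name":"Espresso","price":2.5}, ...]}
# which a Kinetise list or gallery widget can be configured to display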

A version of this article originally appeared on the IBM Bluemix blog.
Source: Thoughts on Cloud