Container-native Storage for the OpenShift Masses

Learn how Red Hat Container-Native Storage 3.6, released today, reaches a new level of storage capabilities on the OpenShift Container Platform. Container-native storage can now be used for all the key infrastructure pieces of OpenShift: the registry, logging, and metrics services.
Source: OpenShift

Is your own Equifax crisis hiding in your infrastructure?

We may never know why the disastrous security flaw that allowed hackers to steal the personal information of more than 145 million people was still present on Equifax systems two months after it was first discovered. What anybody who works in enterprise IT does know, however, is that even when you do know about security updates that need to be made, once your infrastructure reaches a certain degree of complexity, knowing can be easier than doing.
Looking at the fallout from the Equifax breach, however, it’s hard to make the case for that as a valid excuse. Fortunately, it doesn’t have to be that way; the same technology that makes it possible to create massively complex systems also makes it possible to keep them updated and secure — if you do it right. As we saw with the Equifax breach, however, there are lots of ways to get things wrong.
What happened at Equifax
The vulnerability that enabled hackers to breach Equifax’s security systems wasn’t due to anything that engineers at Equifax did, exactly. Instead, it stemmed from a bug that existed in one of the software packages Equifax used to run the website consumers used to file disputes regarding information on their credit reports.
Once hackers were able to breach the web server, it seems, they were able to work their way to related Equifax systems, making a series of intrusions and stealing sensitive personal information for 44% of the population of the United States.
Seems like there was nothing they could do, right?
Wrong.
The reality is that the vulnerability that led to this breach had been identified and disclosed by US-CERT, part of the U.S. Department of Homeland Security, two months before the breaches began. What’s more, once the attackers gained entrance to the initial system, they should have had very limited options; because such systems are on the front lines, so to speak, they should always follow the principle of “least privilege”. While we can only speculate, of course, that doesn’t seem to have been the case here.
Once the breaches began, it took another two and a half months for the company to notice — and even then they didn’t take the vulnerable system down until the next day, when they saw “further suspicious activity”.
At this point it may sound like we’re piling on Equifax, but that’s not the case. The reality is that the company had a lot of things going against it.
Why keeping large systems safe can be difficult
While it’s tempting to think that Equifax simply ignored the vulnerability, that doesn’t actually seem to have been the case. In fact, according to the company, “The particular vulnerability … was identified and disclosed by U.S. CERT in early March 2017. Equifax’s Security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company’s IT infrastructure.”
So why did it take so long?
Again, we’re only speculating here, but most enterprise systems suffer from the same problem: individuality. While it’s great for people, it’s not so great when you’ve got dozens or hundreds or even thousands of servers, and they all need individual care. Manually configuring and documenting the status of individual servers can quickly become unmanageable. What’s more, the difficulty of keeping up with what needs to be done sometimes leads operators to cut corners by relaxing security or increasing permissions to get around problems, rather than trying to make everything consistent — and correct.
Once these systems are up and running, there are so many different logs and alerts and events that it’s impossible to simply follow them all without some form of dashboard, and even then, the raw data doesn’t necessarily tell you anything. It’s no wonder it took so long for Equifax to realize they’d been breached, and that they had to hire a security firm to tell them the extent of the damage.
How Equifax could have prevented it
Equifax’s problems can be divided into three phases: before, during, and after.
If Equifax’s security team knew about the vulnerability months before the breach, why wasn’t it patched?  The answer has two parts.
First, when you’ve got production systems that large, you can’t just go applying patches without testing; you could easily destroy your entire deployment. Instead, engineers or operators need to isolate the problem and the fix, then test the fix before planning to deploy it.  Those tests need to be in an environment that is as close to the production environment as possible, and when the fix is deployed, that deployment has to match the way it was done in testing in order to duplicate the results.
Once you’ve determined what the fix is and how to deploy it, you need to go ahead and do that — something that can take a significant amount of time and manpower.
Obviously this isn’t something that can be easily achieved manually, even with just a handful of servers.
Infrastructure as Code
When we talk about “manually” configuring servers, the truth is that manual configuration is increasingly rare these days, as configuration management systems such as Puppet, Ansible, and Salt become more common. These systems enable administrators or operators to specify either the end condition they want, in the case of declarative systems, or the actions that should be taken, in the case of imperative systems. Either way, they wind up with a script that can be treated like program code.
This is important for a number of different reasons. The most obvious is that it enables administrators to easily manage multiple machines with a single set of commands, but that’s just the beginning.
When we say that these scripts can be treated like program code, this has a number of different implications:

They can be checked into version control systems such as Git, making it possible to keep track of the “official” version of the various scripts, and any changes that are made to them.
They can be incorporated into a Continuous Integration / Continuous Deployment (CI/CD) system that enables them to be tested and deployed automatically (if appropriate) when changes are made, rather than having an administrator manually address each server individually.
When a fix needs to be made, these scripts can be analyzed to determine what systems are affected, and the fix can be easily integrated, tested, and deployed.
Fixes can be made not only to servers, but also to security policies, web application frameworks, and other pieces of the security puzzle, enabling manageable virtual patching and rollback.
Because everything is scripted, the environment can be strictly controlled, ensuring that when a fix is deployed, it’s deployed in exactly the way it was deployed for testing.

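To make the declarative style concrete, here is a minimal, hypothetical Ansible playbook of the kind described above; the host group and package name are placeholders for the example, not a prescription:

---
# Declarative: describe the desired end state; Ansible converges every host to it.
- hosts: webservers            # assumed inventory group
  become: true
  tasks:
    - name: Ensure the vulnerable package is at the latest patched version
      yum:
        name: httpd            # stand-in for whatever package carries the fix
        state: latest
    - name: Ensure the service is running and enabled
      service:
        name: httpd
        state: started
        enabled: true

Checked into version control and run through a CI/CD pipeline, the same playbook patches one server or a thousand in exactly the same, testable way.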
With these advantages in place, Equifax would have found it much easier to both determine what systems needed to be patched, and to test and implement the fixes.
Scripting these operations can also solve another problem: the tendency to cut corners and loosen security because it’s easier than figuring out how to make everything work without opening the system door wide.
Monitoring and pro-active management
Of course, no system is perfect; even if Equifax had managed to implement every patch as soon as it was made available, there’s no guarantee they wouldn’t get hit with a so-called “zero-day vulnerability” that hadn’t yet been disclosed.
To solve this problem, it’s important to have a meaningful logging, monitoring, and alerting system that makes it possible to spot problems and anomalies as early as possible.
That means more than just having logs; logs can provide information on specific errors, but won’t show you trends. For that you need tools such as Grafana or other time-based reporting tools.  In addition, newer technologies such as Machine Learning can spot anomalies humans might miss.
Finally, the best reporting in the world won’t save you if nobody’s paying attention. The best monitoring is pro-active, and the same skills that make it possible to watch for trends and predict problems such as hardware failure can make it possible to spot issues such as a large data outflow that shouldn’t be happening, and take action immediately, rather than waiting for it to happen again in order to be “sure” of what you’re seeing.
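To make that concrete, here is a minimal, hypothetical alerting rule in Prometheus syntax (an assumed monitoring stack; the metric and threshold are placeholders to be tuned against your own baseline) that flags exactly the kind of large data outflow described above:

groups:
  - name: egress-anomalies
    rules:
      - alert: UnusualDataOutflow
        # Fires when sustained outbound traffic exceeds an assumed ~100 MB/s
        # baseline for 10 minutes straight.
        expr: rate(node_network_transmit_bytes_total[5m]) > 1e8
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "Sustained outbound traffic above baseline on {{ $labels.instance }}"

Paired with someone actually watching the alerts, a rule like this turns raw metrics into the kind of early warning Equifax lacked.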
The takeaway
It’s easy to look at the disaster that was the Equifax breach and think, “That won’t happen to us.”  But the truth is, cyberattacks are becoming commonplace; if you’re not taking steps to protect your infrastructure, it’s not a question of whether you’re going to be attacked, but when — and how bad the damage will be when it happens.
Source: Mirantis

Introducing IBM UrbanCode Deploy on Cloud

Imagine doing application deployments from the cloud.
Today, many companies have strategic initiatives to move as many applications as they can to the cloud.  In his report Six Trends That Will Shape DevOps Adoption in 2017 And Beyond, Robert Stroud, principal analyst serving infrastructure and operations professionals at Forrester Research, writes:
Cloud is no longer just for new applications. It is rapidly becoming the default choice for both new workloads and an emerging number of workloads that are being lifted and shifted to take advantage of dynamic scaling and perceived lower cost of cloud.
This is exactly why we’re introducing IBM UrbanCode Deploy on Cloud.
IBM UrbanCode Deploy on Cloud has IBM UrbanCode Deploy as its foundation, only now it is available as a service. UrbanCode Deploy is an industry-leading application release automation solution that makes deploying applications predictable, repeatable and auditable.
Deploying applications means moving new and updated software across environments: from development to test, to staging, to production, with control and visibility. UrbanCode Deploy has been adopted by leading companies worldwide and is used by hundreds of teams within IBM itself. Why? Reasons for adoption include unique capabilities for complex orchestration, configuration management and drift detection, approval gates, granular security controls and the ability to deploy applications virtually anywhere they are needed.
With UrbanCode Deploy on Cloud, your company pays a monthly subscription—no more license management—and IBM installs, configures, monitors, patches and upgrades the software. You use your IBM cloud credentials and a URL to access a private cloud environment—one that is restricted to your company’s designated employees. Employees spend their time writing and perfecting deployment processes and sharing learnings, templates, and blueprints while deploying applications with automation to whatever environment you desire: IBM Cloud, other cloud providers, on-premises data centers and even the mainframe.
UrbanCode Deploy on Cloud has an added bonus: elastic usage. You purchase a number of agents for your expected usual deployment volume; each endpoint you deploy to requires an agent. However, in peak seasons, when you need to expand your management to more targets, you can temporarily scale up and be billed only for the amount by which you exceed your baseline.
To learn more about UrbanCode Deploy on Cloud, visit the official web page.
Source: Thoughts on Cloud

Beyond the hype: What it really means to have an enterprise-strong cloud

Cloud computing is the foundation upon which enterprises are transforming their businesses by making it easy to gain new insights from their data to drive innovation and competitive advantage.
The cloud makes these new technologies and architectures accessible to all. With that said, digitizing a business can be a complex and unpredictable process.
Enterprises have moved well past the stage of viewing the cloud in isolation or strictly as a cost-savings tool. Higher-value services that manage and analyze data have become the focal point. As cloud computing evolves, there has been a lot of hype around what it can and can’t do. In a blog post earlier this year, I laid out what it means to have a cloud that is tailored for AI, so today I’d like to discuss what it means to have a cloud that is enterprise strong.
First of all, there are several key factors that all enterprises should carefully consider when making a move to the cloud, including existing infrastructure investments and architectures; industry specific business model needs; internal IT skill sets and gaps; security, geographic, environmental and regulatory considerations; and availability of services including Internet of Things (IoT), AI, analytics, serverless and more.
The advantages of an enterprise-strong, AI cloud are achieved when you have the ability to build truly cloud-native solutions that work across public, private and hybrid deployments. While enterprises continue to focus on data and move toward public cloud, the long-term value lies in an elastic infrastructure that enables enterprises to bring data to life no matter where it resides.
Enterprises should blend their environments, creating new, cloud-native solutions that interact with their existing infrastructure and applications. This includes a cloud strong enough to run traditional enterprise applications and architectures (such as SAP) so that they can innovate where they want without having to rewrite or port everything. An enterprise cloud has to provide the ability to connect to the core business (through services like API Connect), provide the ability to deploy components in multiple environments (on premises, dedicated, public), and have a public cloud that supports these traditional architectures.
With a secure, one-architecture approach to its cloud platform, IBM is well positioned to help enterprises protect and get the most from their data while seamlessly managing local data requirements. Our clients’ needs are at the center of our strategy, and IBM has established a larger global footprint because we know our clients want to keep their data in-country for a variety of reasons. IBM’s investment in its cloud data center footprint continues to grow, with nearly 60 data centers across 19 countries, including 16 data centers in Europe alone. IBM is also one of the first global cloud companies to adopt the EU’s Data Protection Code of Conduct for Cloud Service Providers, demonstrating a firm commitment to data privacy and security in its cloud infrastructure services.
Additionally, the IBM Cloud is supported by IBM Security, which serves 10,000 clients in 133 countries, with a global network of 8,000 security professionals monitoring 270 million endpoints. Watson for Cyber Security is already helping pinpoint threats and attacks.
Because the vast majority of companies have invested heavily in their infrastructure, applications and data, the IBM Cloud provides a strong public cloud for maximum flexibility with compelling economics. It offers private cloud capabilities for clients who want to keep sensitive data on-premises for security or regulatory reasons, and expertise in hybrid to connect private/public clouds, as well as data, applications, transactions and workflow. This combination enables the enterprise to invest in new technologies and architectures while preserving investments in traditional architectures, software and tooling, effectively eliminating an all-or-nothing proposition for moving to the cloud.
That’s why many Fortune 500 companies and some of the world’s most notable brands turn to the IBM Cloud.
As you listen to cloud providers talk about intelligence in the cloud and providing services that are enterprise-grade, remember to look beyond the hype and find the solution that is right for you.
Learn more about why IBM Cloud is the right choice for so many businesses.
Source: Thoughts on Cloud

Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part Two

Previously we learned all about the benefits in placing Ceph storage services directly on compute nodes in a co-located fashion. This time, we dive deep into the deployment templates to see how an actual deployment comes together and then test the results!
Enabling Co-Location
This article assumes the director is installed and configured with nodes already registered. The default Heat deployment templates ship with an environment file for enabling Pure HCI. This environment file is:

/usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml

This file does two things:

It redefines the composable service list for the Compute role to include both Compute and Ceph Storage services. This list is stored in the ComputeServices parameter.

It enables a port on the Storage Management network for Compute nodes using the OS::TripleO::Compute::Ports::StorageMgmtPort resource. The default network isolation disables this port for standard Compute nodes. For our scenario we must enable this port and its network so the Ceph services can communicate. If you are not using network isolation, you can leave the resource set to None to keep the port disabled.

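In simplified form, the relevant parts of that environment file look something like this (trimmed for readability; the actual ComputeServices list is much longer):

resource_registry:
  OS::TripleO::Compute::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml

parameter_defaults:
  ComputeServices:
    - OS::TripleO::Services::CephOSD
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    # ... plus the remaining standard Compute services ...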
Updating Network Templates
As mentioned, the Compute nodes need to be attached to the Storage Management network so Red Hat Ceph Storage can access the OSDs on them. This is not usually required in a standard deployment. To ensure the Compute node receives an IP address on the Storage Management network, you need to modify the NIC templates for your Compute node to include it. As a basic example, the following snippet adds the Storage Management network to the compute node via the OVS bridge supporting multiple VLANs:
   - type: ovs_bridge
     name: br-vlans
     use_dhcp: false
     members:
     - type: interface
       name: nic3
       primary: false
     - type: vlan
       vlan_id:
         get_param: InternalApiNetworkVlanID
       addresses:
       - ip_netmask:
           get_param: InternalApiIpSubnet
     - type: vlan
       vlan_id:
         get_param: StorageNetworkVlanID
       addresses:
       - ip_netmask:
           get_param: StorageIpSubnet
     - type: vlan
       vlan_id:
         get_param: StorageMgmtNetworkVlanID
       addresses:
       - ip_netmask:
           get_param: StorageMgmtIpSubnet
     - type: vlan
       vlan_id:
         get_param: TenantNetworkVlanID
       addresses:
       - ip_netmask:
           get_param: TenantIpSubnet
The vlan entry referencing StorageMgmtNetworkVlanID and StorageMgmtIpSubnet is the additional VLAN interface for the Storage Management network we discussed.
Isolating Resources
We calculate the amount of memory to reserve for the host and Red Hat Ceph Storage services using the formula found in “Reserve CPU and Memory Resources for Compute”. Note that we accommodate 2 OSDs so that we can potentially scale to an extra OSD on the node in the future.
Our total instances:
32GB / (2GB per instance + 0.5GB per instance for host overhead) = ~12 instances
Total host memory to reserve:
(12 instances * 0.5GB overhead per instance) + (2 OSDs * 3GB per OSD) = 12GB, or 12000MB
This means our reserved host memory is 12000MB.
We can also define how to isolate the CPU resources in two ways:

CPU Allocation Ratio – Estimate the CPU utilization of each instance and set the ratio of instances per CPU while taking into account Ceph service usage. This ensures a certain amount of CPU resources are available for the host and Ceph services. See the “Reserve CPU and Memory Resources for Compute” documentation for more information on calculating this value.

CPU Pinning – Define which CPU cores are reserved for instances and use the remaining CPU cores for the host and Ceph services.

This example uses CPU pinning. We are reserving cores 1-7 and 9-15 of our Compute node for our instances. This leaves cores 0 and 8 (both on the same physical core) for the host and Ceph services. This provides one core for the current Ceph OSD and a second core in case we scale the OSDs. Note that we also need to isolate the host to these two cores. This is shown after deploying the overcloud. 

Using the configuration shown, we create an additional environment file that contains the resource isolation parameters defined above:
parameter_defaults:
  NovaReservedHostMemory: 12000
  NovaVcpuPinSet: ['1-7,9-15']
Our example does not use NUMA pinning because our test hardware does not support multiple NUMA nodes. However, if you want to pin the Ceph OSDs to a specific NUMA node, you can do so by following “Configure Ceph NUMA Pinning”.
Deploying the configuration …
This example uses the following environment files in the overcloud deployment:

/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml – Enables network isolation for the default roles, including the standard Compute role.

/home/stack/templates/network.yaml – Custom file defining network parameters (see Updating Network Templates). This file also sets the OS::TripleO::Compute::Net::SoftwareConfig resource to use our custom NIC Template containing the additional Storage Management VLAN we added to the Compute nodes above.

/home/stack/templates/storage-environment.yaml – Custom file containing Ceph Storage configuration (see Appendix A. Sample Environment File: Creating a Ceph Cluster for an example).

/usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml – Redefines the service list for Compute nodes to include the Ceph OSD service. Also adds a Storage Management port for this role. This file is provided with the director’s Heat template collection.

/home/stack/templates/hci-resource-isolation.yaml – Custom file with specific settings for resource isolation features such as memory reservation and CPU pinning (see Isolating Resources).

The following command deploys an overcloud with one Controller node and one co-located Compute/Storage node:
$ openstack overcloud deploy \
    --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /home/stack/templates/network.yaml \
    -e /home/stack/templates/storage-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml \
    -e /home/stack/templates/hci-resource-isolation.yaml \
    --ntp-server pool.ntp.org
Configuring Host CPU Isolation
As a final step, this scenario requires isolating the host from using the CPU cores reserved for instances. To do this, log into the Compute node and run the following commands:
$ sudo grubby --update-kernel=ALL --args="isolcpus=1,2,3,4,5,6,7,9,10,11,12,13,14,15"
$ sudo grub2-install /dev/sda

This updates the kernel to use the isolcpus parameter, preventing the kernel scheduler from using the cores reserved for instances. The grub2-install command updates the boot record, which resides on /dev/sda in the default disk layout. If you are using a custom disk layout for your overcloud nodes, this location might be different.
After setting this parameter, we reboot our Compute node:
$ sudo reboot
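Once the node is back up, a quick sanity check confirms that the kernel picked up the isolation settings (the exact output will vary, but the isolcpus argument should be present):

$ cat /proc/cmdline
BOOT_IMAGE=... isolcpus=1,2,3,4,5,6,7,9,10,11,12,13,14,15 ...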
Testing
After the Compute node reboots, we can view the hypervisor details to see the isolated resources from the undercloud:
$ source ~/overcloudrc
$ openstack hypervisor show overcloud-compute-0.localdomain -c vcpus
+-------+-------+
| Field | Value |
+-------+-------+
| vcpus | 14    |
+-------+-------+
$ openstack hypervisor show overcloud-compute-0.localdomain -c free_ram_mb
+-------------+-------+
| Field       | Value |
+-------------+-------+
| free_ram_mb | 20543 |
+-------------+-------+
Two of the 16 CPU cores are reserved for the host and Ceph services, and only about 20GB of the 32GB of RAM (free_ram_mb: 20543) is available for instances, consistent with the 12000MB reservation we configured.
So, let’s see if this really worked. To find out, we will run some Browbeat tests against the overcloud. Browbeat is a performance and analysis tool specifically for OpenStack. It allows you to analyze, tune, and automate the entire process.
For our test we have run a set of Browbeat benchmark tests showing the CPU activity for different cores. The following graph displays the activity for a host/Ceph CPU core (Core 0) during one of the tests:

The green line indicates the system processes and the yellow line indicates the user processes. Notice that the CPU core activity peaks during the beginning and end of the test, which is when the disks for the instances were created and deleted respectively. Also notice the CPU core activity is fairly low as a percentage.
The other available host/Ceph CPU core (Core 8) follows a similar pattern:

The peak activity for this CPU core occurs during instance creation and during three periods of high instance activity (the Browbeat tests). Also notice the activity percentages are significantly higher than the activity on Core 0.
Finally, the following is an unused CPU core (Core 2) during the same test:

As expected, the unused CPU core shows no activity during the test. However, if we create more instances and exceed the ratio of allowable instances on Core 1, then these instances would use another CPU core, such as Core 2.
These graphs indicate that our resource isolation configuration works: the Ceph services do not contend with our Compute services, and vice versa.
Conclusion
Co-locating storage on compute nodes provides a simple method to consolidate storage and compute resources. This can help when you want to maximize the hardware of each node and consolidate your overcloud. By adding tuning and resource isolation you can allocate dedicated resources to both storage and compute services, preventing both from starving each other of CPU and memory. And by doing this via Red Hat OpenStack Platform director and Red Hat Ceph Storage, you have a solution that is easy to deploy and maintain!
Source: RedHat Stack

Translation as a service bolsters customer service via IBM Bluemix platform

Let’s say the owner of a small hotel in France wants to attract international tourists from China. The hotelier must translate the hotel’s website and reservation system, which seems like a difficult and time-consuming project. The hotelier might send content to a translation agency via email. It could take two or three weeks for the agency to do its magic, and the hotelier probably wouldn’t have any idea who translated the material or have any control over the content.
That doesn’t sound too promising, but it doesn’t have to be that way.
Cloud-based services from Text United GmbH have transformed yesterday’s messy, fragmented, and inefficient manual translation process with a translation-as-a-service platform.
The technology behind translation as a service
Text United had a vision to create better, faster and cheaper translations by modernizing and automating the process. The company’s leaders knew the way to do this was to move to the cloud. They looked at different vendors and solutions and ultimately decided to join the IBM Global Entrepreneur Program to test translation software and more fully develop the solution using the IBM Bluemix platform.
The translation-as-a-service solution uses IBM DB2 Enterprise Server Edition software to manage texts. It retains all customer texts in a database, so when a new translation project comes into the system, it can reference previously translated passages, ensuring consistent word use.
Because it’s cloud based, global users such as clients, translators and project managers can collaborate and communicate directly on translation projects.
Since translators are working around the clock, system uptime is absolutely critical. IBM Bluemix ensures the reliability Text United needs.
Translation in hours, not days or weeks
Now, translation is much more efficient and accurate. There is no need to send files back and forth, so version control is more straightforward, too.
The solution reduces costs for translation service requestors: when segments of text are adopted from previous translations, no costs are incurred for those segments. Translators can earn more money by using the platform because there is no intermediary taking a commission.
The French hotelier could sign up with Text United, enter the URL to the website and request a Chinese translation. When the hotelier clicks “submit,” Text United would organize everything. That is all a website owner has to do.
Text United gives e-commerce proprietors a JavaScript snippet that is installed on the web server to act as a language selector. All translations are pulled from the Text United server, and voilà, global business. If a web page is updated or supplemented, the translation is immediately added.
Text United is considering how the IBM Watson Speech to Text and IBM Watson Text to Speech technologies can be incorporated into the translation-as-a-service solution to further support and improve the process.
Read the case study to learn more.
Source: Thoughts on Cloud

The A to Z guide to IBM Cloud migration solutions

The IBM Cloud has been built to help you solve business problems and create competitive advantage in a world flush with data. This platform allows you to focus on enterprise innovation, make sense of your data and move from data insights to data intelligence using cognitive systems. IBM is excited about the recent announcement of IBM Cloud Mass Data Migration, which provides a new option for customers to migrate their data to the IBM Cloud. Together with IBM Aspera’s high-speed cloud migration solutions, IBM now provides a comprehensive set of solutions for customers to migrate their data into the cloud.
The Mass Data Migration offering and network-based Aspera high-speed transfer solutions have each been designed to solve specific migration needs and challenges. Which is the best method to migrate your data to the IBM Cloud?  The answer is a function of data size, available bandwidth, urgency and destination.
Size & bandwidth
To illustrate this point, take a look at the typical Aspera network transfer times as determined by both the size of the data set and available bandwidth. As Aspera network transfer times approach the one-week mark, or if you are in a remote area where bandwidth is limited or unavailable, physical data transfer using Mass Data Migration might be the ideal solution.
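As a rough back-of-the-envelope check (assuming a fully utilized 1 Gbps link with no protocol overhead), moving 100TB over the network already takes more than nine days:

$ echo "100 * 10^12 * 8 / (10^9 * 86400)" | bc -l
9.25925925925925925925

Real-world throughput is usually lower, which is why shipping a physical device starts to win well before the petabyte range.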

Urgency
IBM Mass Data Migration can help companies free up on-premises storage, move SAP or VMWare workloads to the cloud, create a cloud archive or de-commission a data center. The solution is designed to allow customers to typically transfer 120 TB of data in as few as 7 days.
Unlike traditional physical data migration, Aspera’s high-speed data transfer solutions can quickly deliver data in near real-time over high-speed networks, while also enabling use cases such as continuous cloud synchronization. A global media company recently used Aspera to migrate 7PB of data into its cloud archive in 45 days. That’s more than 150TB per day! Netflix also deployed Aspera to migrate its original and mastered content to the cloud, and continues to use the software to receive 30+ terabytes of new content a month from over 200 global partners.
Destination
The final element to assess in your choice between migration solutions is the target destination. IBM Mass Data Migration moves data into IBM Cloud Object Storage, which can serve as a customer’s gateway to IBM Cloud applications and services like Watson Content Enrichment.
The IBM Aspera multi-cloud, hybrid solution can migrate data to multiple destinations, including IBM Cloud Object Storage, the Bluemix Infrastructure, and on-premises storage. Natively integrated into market-leading public clouds, Aspera’s high-speed transfer technology can move large data over any distance to, from or between almost any cloud object storage.
Additional considerations
Data security is critical when it comes to the physical transfer of data. IBM Cloud Mass Data Migration devices are designed to maximize security: all devices are housed in rugged, tamper-evident cases and feature industry-standard AES 256-bit encryption and RAID-6 redundancy to address both data security and data integrity during transport and ingestion.
Security is also often raised as a concern when transferring data over the public Internet. IBM Aspera provides enterprise-grade security to protect valuable data during the entire migration. Its technology platform supports a comprehensive end-to-end security model: authentication, encryption in-transit and at-rest using strong cryptography, data integrity verification to protect against man-in-the-middle attacks, and FIPS 140-2 compliance.
Real-time monitoring is provided throughout an Aspera transfer so you always know the exact status of the migration. Should something go wrong in transit, administrators can pause and resume an interrupted transfer. In the event of a network interruption, Aspera transfers will auto-resume from the point of interruption. When the migration requires automation, Aspera offers Aspera Orchestrator and software APIs to programmatically move data over the wire. Once an initial migration is complete, IBM Aspera allows network transfers to continue, keeping data in the cloud in sync with on-premises storage.
Optimize your migration
IBM Cloud’s newly-introduced Mass Data Migration tool provides customers an additional option to migrate data. Whether transferring 1TB or 1PB of data to the IBM Cloud, customers have the ability to select the migration solution that best matches their needs.
Need to talk to someone about your cloud migration? Contact sales@asperasoft.com.
For more information on IBM Cloud Mass Data Migration, please click here.
Just getting started on planning your migration strategy? IBM Cloud Migration Services can provide expertise and guidance.
Source: Thoughts on Cloud

Three key use cases of private clouds today

What if you could benefit from the cloud without giving up control? Mind blown? It can be done. The answer is a combination of good practices and the right cloud technology.
For example, administrators and developers have the power to support each other. When Dev shares a secure and reliable platform that Ops manages, they can innovate more rapidly, even as business requirements change. In this model, Ops gives Dev the tools, speed and flexibility to build applications, even in sensitive or highly-regulated environments. A private cloud platform that acknowledges the needs of the enterprise can facilitate and strengthen developers’ work, and the enterprise benefits from agile development practices.
After working with many clients’ teams, here are what I would identify as three of the best use cases for private cloud.
1. Optimize / modernize applications on cloud
Some existing enterprise apps aren’t easily extended to the cloud and require specialists. Cultural and organizational silos prevent developers using modern cloud development practices from accessing these heritage apps. This is one advantage of moving to cloud-enabled, componentized and consistently managed apps. If you’ve invested in previous apps, you don’t have to start from scratch.
With a microservices framework, you can construct systems from a collection of small services with their own processes that communicate through lightweight protocols. Refactoring existing applications or their parts into microservices makes sense. A private cloud is the ideal point to modernize and unify tools, developers and software.
2. Open your data center to work with cloud services
Developers want to create cloud-native apps on a private cloud to integrate data and app services from existing apps or new, public cloud services. They want the immense processing capacity available on their mainframes for large analytics jobs. What if they could pull mainframe data into an application on a private cloud that can leverage an external push notification service hosted on a public cloud? This way all their needs are met.
3. Create new cloud-native applications
Cloud-native applications are built with a variety of runtimes. Application portability should be a key feature of any cloud platform. What if developers could build cloud-native applications anywhere and move them anywhere, using their tool chains without compromising security and compliance?
How IBM meets these needs
IBM has a private cloud platform — IBM Cloud Private. Our principles, based on years of working with developers and operators, allow us to meet your needs in these two key ways:
1. We are enterprise-focused
Our platform allows you to take a microservices architecture approach. Many of the advantages of microservices come from resource isolation, scale up and scale down, and lightweight movement of application workloads.
As the number of microservices constituting an application grows, management and overhead become more complex. Developers need to discover existing services to avoid duplication, while administrators need to monitor and secure the environment. The IBM Cloud Private platform allows development and operations teams to build, deploy and manage workloads built as microservices. Because IBM Cloud Private is deployed on-premises, workloads requiring low-latency access to enterprise APIs benefit.
2. Our platform has application services
Application services are runtimes, software, data and other services that can be added to cloud-native applications or connected to existing ones. IBM Cloud Private allows teams to create and stand up elastic runtimes based on workloads. IBM packages both open-source and IBM software, as well as databases with capabilities to build and run enterprise workloads. IBM also delivers enhanced support to run CPU-intensive capabilities such as machine learning or data analytics by taking advantage of Graphics Processing Unit (GPU) clusters.
These application services have been built or re-imagined for cloud-native workloads and influenced by the long history IBM has with enterprise workloads. Developers can use the application services they want while operations can ensure the catalog of services is up to date and available to their teams, whether geographically/network isolated or fully connected. IBM Cloud Private recognizes that enterprises need flexibility, expect IBM to embrace open technologies and desire a strong point of view for managing, developing and delivering their enterprise workloads.
If you’d like to talk more about these ideas and see IBM Cloud Private in action, I’ll be speaking at JavaOne in San Francisco. Here are the details on my talk:
Modernize Your Enterprise Apps for Microservices with IBM Cloud Private [CON7971]
Tuesday, Oct 03, 3:00 p.m. – 3:45 p.m. | Moscone West – Room 2012
Find out more and register for JavaOne, October 1-5, San Francisco, CA.
Want to try IBM Cloud Private?
These are only some of the many reasons why this is such a great cloud solution. You can install IBM Cloud Private Community Edition (CE) at no charge. You can also learn more about IBM Cloud Private on the official website.
Source: Thoughts on Cloud

[Podcast] PodCTL #8 – Managing High Performance Workloads

Go bigger and go faster! That’s the theme of this week’s show with Jeremy Eder (@jeremyeder, Senior Principal Software Engineer at Red Hat), as we discuss the newly formed Kubernetes Resource Management Working Group. The show focuses on how this cross-functional group is working on these core strategies: Support for performance sensitive workloads (exclusive cores, cpu […]
Source: OpenShift

Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part One

An exciting new feature in Red Hat OpenStack Platform 11 is full Red Hat OpenStack Platform director support for deploying Red Hat Ceph storage directly on your overcloud compute nodes. Often called hyperconverged, or HCI (for Hyperconverged Infrastructure), this deployment model places the Red Hat Ceph Storage Object Storage Daemons (OSDs) and storage pools directly on the compute nodes.
Co-locating Red Hat Ceph Storage in this way can significantly reduce both the physical and financial footprint of your deployment without requiring any compromise on storage.

Red Hat OpenStack Platform director is the deployment and lifecycle management tool for Red Hat OpenStack Platform. With director, operators can deploy and manage OpenStack from within the same convenient and powerful lifecycle tool.
There are two primary ways to deploy this type of storage, which we currently refer to as pure HCI and mixed HCI.

Pure HCI: All compute nodes in the overcloud are co-located with Ceph Storage services. You can find a complete deployment scenario in the Hyperconverged Infrastructure Guide available on the Red Hat portal.

Mixed HCI: The overcloud is deployed with both standard compute nodes and co-located compute nodes. This requires the creation of a new custom role in director (a trimmed sketch of such a role follows this list). Customizable roles are part of the composability features provided with Red Hat OpenStack Platform director. You can find further information about this deployment scenario, including accompanying code available from Github, in the Hyperconverged Infrastructure Guide and the Hyperconverged Red Hat OpenStack Platform 10 and Red Hat Ceph Storage 2 Reference Architecture.

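For reference, a heavily trimmed, hypothetical sketch of what such a custom role entry might look like in a roles_data.yaml file (the role name is an assumption, and the real service list is much longer):

- name: ComputeHCI            # assumed role name
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::CephOSD
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    # ... plus the remaining standard Compute services ...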
In this two-part blog series we are going to focus on the Pure HCI scenario demonstrating how to deploy an overcloud with all compute nodes supporting Ceph. We do this using the Red Hat OpenStack Platform director. In this example we also implement resource isolation so that the Compute and Ceph services have their own dedicated resources and do not conflict with each other. We then show the results in action with a set of Browbeat benchmark tests.
But first …
Before we get into the actual deployment, let’s take a look at some of the benefits around co-locating storage and compute resources.

Smaller deployment footprint: When you perform the initial deployment, you co-locate more services together on single nodes, which helps simplify the architecture on fewer physical servers.

Easier to plan, cheaper to start out: co-location provides a decent option when your resources are limited. For example, instead of using six nodes, three for Compute and three for Ceph Storage, you can just co-locate the storage and use only three nodes.

More efficient capacity usage: You can utilize the same hardware resources for both Compute and Ceph services. For example, the Ceph OSDs and the compute services can take advantage of the same CPU, RAM, and solid-state drive (SSD). Many commodity hardware options provide decent resources that can accommodate both services on the same node.

Resource isolation: Red Hat addresses the noisy neighbor effect through resource isolation, which you orchestrate through Red Hat OpenStack Platform director.

However, while co-location realizes many benefits there are some considerations to be aware of with this deployment model. Co-location does not necessarily offer reduced latency in storage I/O. This is due to the distributed nature of Ceph storage: storage data is spread across different OSDs, and OSDs will be spread across several hyper-converged nodes. An instance on one node might need to access storage data from OSDs spread across several other nodes.
The Lab
Now that we fully understand the benefits and considerations for using co-located storage, let’s take a look at a deployment scenario to see it in action. 

We have developed a scenario using Red Hat OpenStack Platform 11 that deploys and demonstrates a simple “Pure HCI” environment. Here are the details.
We are using three nodes for simplicity:

1 director node
1 Controller node
1 Compute node (Compute + Ceph)

Each of these nodes has the same specifications:

Dell PowerEdge R530
Intel Xeon CPU E5-2630 v3 @ 2.40GHz – 8 cores, each with hyper-threading, providing a total of 16 logical cores.
32 GB RAM
278 GB SSD Hard Drive

Of course for production installs you would need a much more detailed architecture; this scenario simply allows us to quickly and easily demonstrate the advantages of co-located storage. 

This scenario follows these resource isolation guidelines:

Reserve enough resources for 1 Ceph OSD on the Compute node
Reserve enough resources to potentially scale an extra OSD on the same Compute node
Plan for instances to use 2GB on average but reserve 0.5GB per instance on the Compute node for overhead.

This scenario uses network isolation with VLANs:

The default Compute node deployment templates shipped with tripleo-heat-templates do not attach Compute nodes to the Storage Management network, so we need to change that. The templates require a simple modification to accommodate the Storage Management network, which is illustrated in Part Two.

Now that we have everything ready, we are set to deploy our hyperconverged solution! But you’ll have to wait for next time for that so check back soon to see the deployment in action in Part Two of the series!

Want to find out how Red Hat can help you plan, implement and run your OpenStack environment? Join Red Hat Architects Dave Costakos and Julio Villarreal Pelegrino in “Don’t fail at scale: How to plan, build, and operate a successful OpenStack cloud” today.
For full details on architecting your own Red Hat OpenStack Platform deployment check out the official Architecture Guide. And for details about Red Hat OpenStack Platform networking see the detailed Networking Guide.
Source: RedHat Stack