How to assess application modernization quality with continuous software testing

Quality is essential to every project. That is the clear message from businesses across every industry.
Today’s enterprise IT environment, however, is more diverse and complicated than ever before. The combination of technologies, including mobile, Internet of Things (IoT), cloud, artificial intelligence (AI) and blockchain, is helping businesses drive competitive advantage. While companies are adjusting to this evolving business landscape on a macro level, delivery teams are also reacting and adjusting on their own modernization and optimization journey. Many delivery teams are finding a need for streamlined, continuous software testing.
Application modernization
While new applications are being deployed, established applications are still necessary for standard business operations. A typical enterprise may have 1,000 applications or more with dependencies across multiple clouds and on-premises ecosystems, plus possible regulatory dependencies.
As a result of this complex application ecosystem, many organizations are looking to Kubernetes to simplify the management of applications, ensuring cloud portability and rapid delivery across the full software lifecycle. This is supported with a microservices architecture, which breaks single, often monolithic, applications down into a collection of smaller, independently deployable services managed by different teams.
Test software quality throughout the delivery lifecycle
Throughout the application modernization and optimization journey, it is essential for delivery teams to assess quality at every opportunity. The combination of automated testing and test service virtualization can help teams assess the quality of their deliverables throughout the delivery lifecycle.
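To make the pairing of automated testing and service virtualization concrete, here is a minimal sketch in Python’s standard unittest framework. All names (CheckoutService, StubPaymentClient) are invented for illustration and are not from any specific testing product: the stub plays the role of a virtualized service, returning canned responses so the component under test can be assessed before its real dependency is available.

```python
import unittest

# Hypothetical component under test: it depends on an external payment
# service that may not be deployed (or even built) yet.
class CheckoutService:
    def __init__(self, payment_client):
        self.payment_client = payment_client

    def place_order(self, amount):
        result = self.payment_client.charge(amount)
        return "confirmed" if result["status"] == "ok" else "failed"

# The virtualized service: a stand-in that returns canned responses, so
# checkout logic can be tested without calling the real dependency.
class StubPaymentClient:
    def charge(self, amount):
        return {"status": "ok" if amount <= 100 else "declined"}

class CheckoutTests(unittest.TestCase):
    def test_small_order_is_confirmed(self):
        service = CheckoutService(StubPaymentClient())
        self.assertEqual(service.place_order(50), "confirmed")

    def test_large_order_is_failed(self):
        service = CheckoutService(StubPaymentClient())
        self.assertEqual(service.place_order(500), "failed")
```

Run with `python -m unittest` in a continuous testing pipeline; because the dependency is virtualized, the check is fast and repeatable at every stage of delivery.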
Quality is essential, and as the need for dynamic, agile quality assessment grows, so does the software test automation market.
How to find the right tools for continuous software testing
The challenge can be selecting the right tools to enable continuous software testing through the DevOps pipeline. Some criteria to consider when choosing a vendor include the following:

Product design, architecture and scalability. Tools should streamline workloads now and in the future. Evaluate product specifics, such as the ability to share data and a common web-based UI across integration testing, functional testing and performance testing.
Ease of deployment and use. The ability to use one solution for testing all types of technologies and environments will enable all teams to remain in communication, ensure a strong feedback loop and improve overall agility.
Vendor support and services. Be sure testing tools can grow with your company.

These criteria align well with those used by Enterprise Management Associates (EMA) to evaluate DevOps continuous testing platform products in a recent report. Based on its assessment with a wide range of users, EMA awarded the DevOps 2020 Top 3 award for Continuous Testing Platforms to IBM Rational Test Workbench.
Learn more about continuous software testing and find the right set of tools for your company.
The post How to assess application modernization quality with continuous software testing appeared first on Cloud computing news.
Source: Thoughts on Cloud

Introducing Red Hat OpenShift 4: Kubernetes for the Enterprise

Today at Red Hat Summit we celebrate the announcement of Red Hat OpenShift 4, which will be available in the next month. A big thank you to our customers from more than 1,000 worldwide organizations, our partners, the Kubernetes community at large, and our Red Hat teams for all of the progress we’ve made together […]
The post Introducing Red Hat OpenShift 4: Kubernetes for the Enterprise appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Splunk Connect for OpenShift – Logging Part

Red Hat OpenShift already provides an aggregated logging solution based on the EFK stack, fully integrated with the platform. But we also provide choice for companies that have settled on a different platform. Some companies have a Splunk logging platform to store and to aggregate the logs for all their environments and they want to […]
The post Splunk Connect for OpenShift – Logging Part appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift Commons Gathering at Red Hat Summit Boston 2019 Recap [with Slides]

It’s a wrap! The OpenShift Commons Gathering at Red Hat Summit showcased 15 OpenShift production case studies and technical updates from project engineers and architects. The gathering featured speakers from NASA, Volkswagen, UPS, RBC, Microsoft Azure, VMware and Red Hat’s CEO Jim Whitehurst. The OpenShift Commons Gathering at Red Hat Summit brought together […]
The post OpenShift Commons Gathering at Red Hat Summit Boston 2019 Recap [with Slides] appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Meet the Red Hat Monitoring Team at KubeCon EU 2019

KubeCon Barcelona is just around the corner, and if you’re looking for a way to enhance the monitoring capabilities of your Red Hat OpenShift clusters, then you’ll want to attend the conference’s Thursday keynote, as well as a number of other talks by the team inside Red Hat that works on Prometheus. Now that Kubernetes […]
The post Meet the Red Hat Monitoring Team at KubeCon EU 2019 appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Quick Tip: Use Apache as a proxy server to access internal IPs from an external machine

Sometimes, when you’re using a cloud server, you find yourself in a situation where you don’t have a GUI, but you still want to access a web server running on a local IP address. For example, if you install MCP using the Model Designer, what you get back will be an instance that includes DriveTrain running on a local IP address. To solve this problem, we can use Apache as a proxy server to access that local IP address via the external IP address of that VM.
Obviously this isn’t something you will do lightly, and you may not do it at all for a production system; make sure to do your security due diligence! But just for testing, this can be a handy tip.
Fortunately it’s a straightforward process with just a few steps:

Install apache2. On Ubuntu, this is just a matter of calling the package manager:
sudo apt-get install apache2

Enable the various modules needed. You can do that with the a2enmod tool:
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_ajp
sudo a2enmod rewrite
sudo a2enmod deflate
sudo a2enmod headers
sudo a2enmod proxy_balancer
sudo a2enmod proxy_connect
sudo a2enmod proxy_html

Configure Apache by editing the /etc/apache2/sites-available/000-default.conf file to read:
<VirtualHost *:*>
   ProxyPreserveHost On

   # Servers to proxy the connection, or;
   # List of application servers:
   # Usage:
   # ProxyPass / http://[IP Addr.]:[port]/
   # ProxyPassReverse / http://[IP Addr.]:[port]/

   # Example:
   ProxyPass / http://10.10.0.15:8081/
   ProxyPassReverse / http://10.10.0.15:8081/

   ServerName localhost
</VirtualHost>
Obviously, make sure to use your own target URLs.
Restart the apache2 service:
sudo service apache2 restart

At this point you can access the internal IP address (or whatever address you chose) from the main URL served by Apache.
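To see what the ProxyPass/ProxyPassReverse pair accomplishes, here is a miniature, self-contained illustration in Python: a front-end server forwards every request to a backend on a local address and relays the response, just as Apache forwards to the internal IP. The addresses and ports are arbitrary examples, and this sketch is only for understanding the mechanism, not a substitute for Apache.

```python
# Miniature illustration of reverse proxying: a "proxy" server forwards
# requests to a backend on a local address and relays the response.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND_ADDR = ("127.0.0.1", 18081)  # stands in for the internal IP:port

class BackendHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the internal service"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Equivalent of: ProxyPass / http://127.0.0.1:18081/
        upstream = "http://%s:%d%s" % (*BACKEND_ADDR, self.path)
        with urllib.request.urlopen(upstream) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

def start(server):
    threading.Thread(target=server.serve_forever, daemon=True).start()

backend = HTTPServer(BACKEND_ADDR, BackendHandler)
proxy = HTTPServer(("127.0.0.1", 18080), ProxyHandler)
start(backend)
start(proxy)

# A request to the "external" proxy address is answered by the backend.
with urllib.request.urlopen("http://127.0.0.1:18080/") as resp:
    proxied_body = resp.read()
print(proxied_body.decode())
```

The client only ever talks to the proxy’s address, which is exactly the effect you get once Apache is configured as above.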
The post Quick Tip: Use Apache as a proxy server to access internal IPs from an external machine appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

How to navigate multicloud management

With most businesses already relying on multiple cloud providers to meet their business objectives, we are now living in a multicloud world. But how can organizations navigate multiple cloud environments while meeting the demands of their most critical business priorities?
Research shows that most organizations haven’t been able to solve that problem yet. According to a study by the IBM Institute for Business Value, more than 60 percent of customers don’t have the tools and procedures to manage and operate in a complex multicloud environment. This can slow the progress of moving high-priority workloads to the cloud and can unintentionally introduce risk to an organization.
There are three major challenges that stand in the way:

Rapid application innovation. As developers discover new ways to develop and deploy applications, the number of software services is growing rapidly within organizations. Often, this growth exceeds the enterprise’s ability to effectively manage and control risk. This can be especially true when applications are spread across a wide range of software environments.
Data overload. Enterprises are embracing new technology around data and artificial intelligence (AI). The problem is that many organizations are still using traditional management methods to handle this data. This can leave many teams without the management capabilities to execute on their own data strategy. It’s also a challenge because improperly managed data can introduce significant risk for an organization.
Difficulty adopting DevOps and SRE best practices. If you work in development or IT, you have probably felt the pressure to embrace DevOps and site reliability engineering (SRE) best practices. This is certainly the right direction. The issue, however, is that these changes go beyond technology alone. Moving to a DevOps model can also require difficult cultural shifts, with teams learning new ways of operating and individuals taking on what were previously multiple roles.

Because of these key challenges, many businesses feel pressured to choose cloud management solutions that provide either speed or control, but not both. To succeed, enterprises will need to balance these two seemingly competing priorities and select a cloud management solution that will help them appropriately achieve both.
Overcoming the challenges: Three things to look for in a multicloud management solution
The good news is that there are strategies and solutions that can help steer your business in the right direction. A lot of the challenges can be mitigated by choosing the right multicloud management solution.
Here are three key features to look for:

Visibility. It’s critical to know where business application components are running. You must monitor the health of resources (such as deployments, pods, Helm releases) across Kubernetes environments, whether they are in public or private clouds, and in the appropriate business context.
Governance. As cloud-native environments proliferate across the enterprise, DevOps teams are tasked with ensuring that these environments are managed according to the enterprise’s governance and security policies. It’s advantageous to have a single dashboard that provides a consistent set of configuration and security policies at service inception time. An increase in the number of cloud-native components, such as Kubernetes clusters, should not mean an increase in risk to the business, nor an increase in management costs.
Automation. Whether an enterprise application is a cloud-native, 12-factor application or a traditional application, enterprises need a consistent and flexible way to deploy and manage that application. The goal is to simplify the IT and application management, while increasing flexibility and cost savings with intelligent data analysis driven by predictive signals.

IBM Multicloud Manager: the award-winning solution for cloud management
IBM offers a multicloud management approach built for the enterprise that nurtures a high-performance, agile culture and embraces modern operational practices.
With built-in security and compliance, IBM Multicloud Manager helps teams organize complex applications running on any cloud, reducing management costs and business risk through compliance standards. All of this provides the much-needed combination of speed and control that today’s progressive enterprises are searching for.
Recently, IBM was recognized with the prestigious Gold Thomas Edison Award for the team’s innovation in creating this product. Companies can adopt this technology knowing that it is respected by experts across the industry.
To learn more about multicloud, and the broader enterprise cloud journey, take a look at the following assets:

Hybrid, multicloud lookbook
IBM hybrid, multicloud management
IBM Multicloud Manager website

The post How to navigate multicloud management appeared first on Cloud computing news.
Source: Thoughts on Cloud

Plan for cloud-first initiatives with a cloud strategy road map

The term “cloud first” has different meanings for different organizations, and no two companies’ journeys to a cloud-first position are exactly alike.
That’s why outlining a cloud strategy road map, a thorough guide to embracing the cloud, can help organizations prepare for their specific needs.
A cloud-first strategy shapes how enterprises handle both technology and business decisions. It considers the nitty-gritty of operations and sets a foundation for larger pursuits, such as artificial intelligence (AI) and the Internet of Things (IoT), to help enterprises pursue business initiatives without hesitation.
Before implementing a cloud strategy, it’s important to set out a road map to ensure your early steps toward a cloud-first initiative are on sound footing. Here are a few things to keep in mind as you embark on that journey.
“Cloud first” doesn’t mean “cloud only”
Many assume “cloud first” means every new technological project must be vetted for the cloud. Others argue that while many tasks are well suited for the cloud, critical applications should stay on premises for the sake of control.
There’s benefit to keeping certain technologies in-house, but that doesn’t mean an enterprise can’t adopt a cloud-first mentality. Being cloud first pushes organizations to always review how the many advantages of cloud technology, including improved scalability, more effective resilience and easier capital cost management, can more quickly deliver products and services. It’s a conversation worth having for every business opportunity.
View cloud strategy as part of both IT and business
The efficiency and flexibility of a business rely a great deal on technology, so business and technology decisions should be intertwined.
That’s why it’s critical not to view cloud first as simply an IT strategy. Treating cloud decisions as purely technological overlooks the value that business leaders provide when they also review cloud initiatives. HR leaders, for example, might know all about the upsides of a particular cloud technology after attending a conference. IT isn’t always directly on the pulse of business functions and could underestimate the disruption a sudden technological change might cause.
Form a “cloud center of excellence”
To ensure all voices are heard, follow the lead of other enterprises and create a committee that oversees the study, implementation, management and evolution of a cloud-first strategy. Some organizations call this body a “cloud center of excellence”, or a CCoE. The committee should align company practices and goals with the proper cloud-based services.
The CCoE should set goals and deadlines, spearhead training programs and look for new opportunities. It should consistently cheerlead the use of cloud technology, but with a full measure of reasoning about why it matters.
The CIO, CMO, development and operations, HR leaders, data scientists, and other key roles within an organization can belong to a CCoE. However, don’t forget to include front-line business managers, too. These roles will have ground-level insight into how the cloud can improve key revenue-generating functions such as sales, marketing and customer experience. Demonstrate how a potential cloud application can enhance their work while accepting their feedback.
Go at your own pace
Even if your organization goes all in on a cloud-first strategy, it doesn’t have to shift everything to the cloud immediately. There’s no need to rush.
As workflows and applications become more mobile-centric, and as AI and IoT reshape how products and services are developed, it’s only natural that organizations will increasingly turn to the cloud. The flexibility of cloud services may prove particularly appealing for enterprises that need to make big architectural changes in order to accommodate new technologies.
These steps can help you avoid some of the most common obstacles that stop a cloud-first strategy in its tracks. That strategy will likely evolve as business pursuits and technology change, but a well-thought-out cloud strategy road map will provide the foundation for a seamless transition to the cloud.
Looking to help your organization create a cloud-first strategy for the future? Register to learn more about finding the next-generation cloud platform that will work best for your business.
The post Plan for cloud-first initiatives with a cloud strategy road map appeared first on Cloud computing news.
Source: Thoughts on Cloud

7 pillars of a strong hybrid cloud security strategy

Hybrid cloud environments give companies the best of both worlds. They offer the elasticity and operational expenditure of public clouds with the data sovereignty, security and control found in a private cloud environment. By combining the two, companies can allocate workloads to the environment that makes the most sense for them.
As organizations build these environments, hybrid cloud security is crucial. According to Cybersecurity Insiders’ “2018 Cloud Security Report”, nine out of 10 cybersecurity professionals say they are concerned about cloud security. This is up 11 points from last year’s survey.
Securing these environments can be time consuming, but luckily, you don’t have to start from scratch. Adhering to these seven key pillars for a hybrid cloud security strategy can help you get strong results with less stress.
1. Approach hybrid cloud security as a shared responsibility.
Companies should approach hybrid cloud security as a joint endeavor with their cloud service provider. Assuming the cloud partner will take care of everything once the data leaves the on-premises systems is a recipe for oversights and errors. Even with the best-equipped hybrid cloud provider out there, maintaining security still requires a proactive mindset.
For example, administrative staff could accidentally expose sensitive records through a simple misconfiguration of a public cloud environment. According to GCN, misconfigured data buckets left the voter information of hundreds of thousands of individuals exposed in 2018.
Without proper security efforts, one misstep can jeopardize a company’s reputation and consumer trust.
2. Standardize processes.
Companies that use different processes for public and private cloud environments, or that fail to implement processes, risk introducing disparities that could lead to manual errors and potential security loopholes. These processes will likely be unique to an organization’s needs, but some general best practices apply.
For example, an organization could ensure that administrators follow the same security procedures in a public cloud environment as they do with on-premises systems, and check that public cloud assets are properly password protected. Developers may, for instance, leave database administrative accounts with default settings in an on-premises development environment, but forget to change the credential settings when they take the databases live in the cloud. This oversight can lead to serious data breaches.
Formalizing processes to manage assets, such as databases, as they pass between on-premises and cloud-based environments will help organizations avoid problems like the large-scale exposure of sensitive customer records in cloud-based systems.
3. Configure secure tools and processes for the cloud.
Companies can reduce the likelihood of human error and inconsistent administrative approaches by codifying these secure processes into automated workflows. In the case of software development and deployment, a common use case in hybrid cloud environments, secure DevOps (DevSecOps) practices can be a game changer.
Secure DevOps enables security professionals to build automatic gating checks into software development, forcing code through a series of tests that it must pass before being deployed. Automated tools can also securely manage the provisioning and teardown of virtual development and deployment infrastructure so that stray virtual machines and storage buckets don’t become a security liability.
4. Verify everything everywhere.
Hybrid cloud computing environments tend to blast through traditional network perimeters, as companies distribute workloads across different infrastructures and locations. This means conventional, perimeter-based protections no longer work. Instead, protect access to each virtual asset and data resource. Adopt a “never trust, always verify” approach to all computing resources across both infrastructures.
5. Manage access across hybrid environments.
A uniform identity and access management (IAM) framework can help protect assets in hybrid environments. Security teams might use various approaches to extend IAM across the entire environment, depending on their public and private infrastructures, including unified directories and SAML-based identity federations.
Ensure that this framework mirrors the concept of least-privilege access across both private and public clouds so that employees, contractors and other users only have access to the resources they absolutely need.
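A least-privilege check of this kind can be sketched as an explicit allow-list lookup. Role names and resources below are invented for illustration; a real deployment would back this with a unified directory or a SAML-based federation rather than an in-memory table.

```python
# Hedged sketch of least-privilege access checking across environments.
# Each role holds only the permissions it explicitly needs.
ROLE_GRANTS = {
    "developer":  {"dev-cluster:deploy", "dev-cluster:logs"},
    "sre":        {"dev-cluster:logs", "prod-cluster:logs", "prod-cluster:deploy"},
    "contractor": {"dev-cluster:logs"},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Grant access only if the role explicitly holds the permission."""
    # "Never trust, always verify": unknown roles get nothing by default.
    return f"{resource}:{action}" in ROLE_GRANTS.get(role, set())

print(is_allowed("contractor", "dev-cluster", "logs"))     # → True
print(is_allowed("contractor", "prod-cluster", "deploy"))  # → False
```

The important property is the default: anything not explicitly granted is denied, across both the private and public sides of the hybrid environment.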
6. Ensure visibility and ownership.
One danger in dealing with two different environments is that it can be difficult to get a comprehensive view of what’s happening across the entire infrastructure. Explore using a management system that can aggregate monitoring and asset management across both private and public clouds.
Ideally, administrators should be able to see both from a single dashboard. Security teams should also ensure that all assets and data across both environments have defined ownership. An individual or team should be responsible for them so that nothing falls through the cracks.
7. Protect data.
Data protection includes not only encryption, which should be standard in any hybrid IT environment, but other techniques as well. These might include pseudonymization, or tokenization, with tokens stored in public cloud databases that refer to sensitive data held in on-premises systems.
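The tokenization pattern described above can be sketched as follows. The vault here is a plain in-memory dictionary standing in for an on-premises secure store; in production this would be a hardened service, and the class and field names are purely illustrative.

```python
# Sketch of tokenization: sensitive values live only in an on-premises
# vault, while the public cloud record stores an opaque token that refers
# back to them. The "vault" is a dict here purely for illustration.
import secrets

class OnPremVault:
    def __init__(self):
        self._store = {}  # token -> sensitive value, kept on premises

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = OnPremVault()

# The public cloud record carries only the token, never the raw value.
cloud_record = {"customer_id": 42, "ssn": vault.tokenize("123-45-6789")}

assert cloud_record["ssn"].startswith("tok_")  # no raw data in the cloud
assert vault.detokenize(cloud_record["ssn"]) == "123-45-6789"
```

If the cloud database is breached, the attacker obtains only meaningless tokens; the mapping back to sensitive values never leaves the on-premises side.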
Before beginning your organization’s hybrid cloud journey, think carefully about your long-term approach and what you will expect from your hybrid cloud environment in the years to come. By considering these seven pillars of hybrid cloud security, you can help your organization transition smoothly between on-premises and cloud environments.
Learn more by signing up to receive The IT leader’s guide to the next generation cloud operating model, where you can learn how to perfect your journey to cloud.
The post 7 pillars of a strong hybrid cloud security strategy appeared first on Cloud computing news.
Source: Thoughts on Cloud