Improved Offline Publishing

The best technology is invisible and reliable. You almost forget it’s there, because things just work. Bad technology never disappears into the background — it’s always visible, and worse, it gets in your way. We rarely stop to think “My, what good Wi-Fi!” But we sure notice when the Wi-Fi is iffy.

Good technology in an app requires solid offline support. A WordPress app should give you a seamless, reliable posting experience, and you shouldn’t have to worry whether you’re online or offline while using WordPress Mobile. And if we’ve done our jobs right, you won’t have to! 

We all need fewer worries in life, so if you haven’t already, head to https://apps.wordpress.com/get/ to download the apps.

Offline Publishing

On the go and without a connection? No worries! The apps will now remember your choices, and once you’re back online, your content will be saved and published as requested. And if you change your mind about publishing a post while you’re still offline, you can still safely cancel it.

The new Offline Publishing flow.

This improved publishing flow comes with a revamped UI for your post status. You’ll be able to see clearly which posts are pending, saving, or publishing.

Smoother Messaging

We removed several alerts that were presented while you were offline. These blocking alerts required you to take action but often provided no insight into what the problem was or how to resolve it.

They have been replaced with contextual, non-blocking messages, both within the UI and in notices appearing right above the toolbar.

As a result, you’ll see fewer disruptive, uninformative alerts and more informative inline messages, such as the one shown above.

Safeguards

We also added some safeguards to ensure there are no surprises!

You can cancel offline publishing.

Modifying posts that are scheduled for publishing will cancel the publishing action. Don’t worry, though – you can always reschedule the post for publishing.

All queued save and publishing operations will be canceled if your device stays offline for more than 48 hours.  We want you to be in complete control of what gets published and when.
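The post doesn’t show how the apps implement this rule, but the idea of expiring stale queued actions is easy to sketch. Below is a minimal, hypothetical Python sketch (the function name and data shapes are illustrative, not the apps’ real internals): queued publish and save actions older than 48 hours are canceled instead of being sent.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of the 48-hour rule; not the apps' actual code.
OFFLINE_EXPIRY = timedelta(hours=48)

def prune_queue(queue, now):
    """Split queued actions into those still safe to run and those to cancel.

    `queue` is a list of (action, queued_at) tuples. Anything queued more
    than 48 hours ago is canceled rather than published, so nothing stale
    goes out without the author's knowledge.
    """
    kept, canceled = [], []
    for action, queued_at in queue:
        if now - queued_at > OFFLINE_EXPIRY:
            canceled.append(action)
        else:
            kept.append(action)
    return kept, canceled

queue = [
    ("publish draft A", datetime(2020, 3, 1, 9, 0)),   # queued 51 hours ago
    ("save draft B", datetime(2020, 3, 3, 9, 0)),      # queued 3 hours ago
]
kept, canceled = prune_queue(queue, now=datetime(2020, 3, 3, 12, 0))
# draft A is past the 48-hour limit and is canceled; draft B is kept
```

On the next successful sync, only the `kept` actions would run; the canceled ones would surface to the author for an explicit decision.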
Source: RedHat Stack

A Crash Course in Remote Management

Remote work is a prominent topic lately, as people around the world are doing their best to live their lives and keep themselves and their families safe and prepared during the COVID-19 outbreak. The impact of this outbreak is felt across societies and cultures as well as in the workplace.  

Automattic, the company behind WordPress.com, is a primarily distributed company with more than 1,000 employees across 76 countries. I’m an engineering lead, currently working on the Developer Experience team. As Automattic has grown, we’ve learned a lot about working remotely and across time zones, and have shared insights on what we see as the future of work on the Distributed podcast, hosted by our CEO, Matt Mullenweg. 

This week, Nicole Sanchez, the founder of Vaya Consulting and an expert on workplace culture, and I had an opportunity to co-present a Crash Course in Remote Management, a free one-hour webinar hosted on Zoom. Nicole has previously held social impact and leadership roles at GitHub and the Kapor Center for Social Impact.

Nicole and I walked an engaged audience through proven practices and what we’ve learned about leading, communicating with, and measuring the success of remote teams. Participants offered insightful questions, leading to lively discussions around:

Collaboration and relationship-building.
The cost, benefit, and ideal frequency of bringing teams together for face-to-face interaction (in general, if not as commonly right now).
Communicating and prioritizing messages across a variety of channels.
Encouraging people to go outside, exercise, spend time with family, or otherwise step away from the computer (also known as being “AFK,” or “Away From Keyboard”) without the fear of being judged or anxiety over being less productive.

Some companies are encouraging employees to experiment with working from home, which can feel very different from in-person and office work. If you’re interested in learning more, please check out the full video recording of the course:

Matt’s latest blog post, “Coronavirus and the Remote Work Experiment No One Asked For,” is also worth a read. For more information and advice on COVID-19, please visit resources from the CDC, World Health Organization, and other health authorities.

Turning a Page with Page Layouts

Need to add a new page to your site but don’t know where to start? Making a brand new site on WordPress.com and want to design a homepage quickly? There’s a new addition to the WordPress experience that’ll help with exactly that.

Let’s take a look at Page Layouts! They’re pre-designed pages you can drop content into, without needing to decide what to put where.

To add a Page Layout to your site, head to My Sites > Site > Pages and click the “Add New Page” button — it’s the pink one:

Next, we’ll show you a selection of layouts you can choose from — there are layouts available for

About pages
Contact pages
Services pages
Portfolio pages
Restaurant Menu, Team, and Blog pages
and even starting points for Home pages

Here’s one of the available Portfolio Page Layouts, for example.

These layouts are all made using blocks in our block editor, which means you can edit the images, content, and layout all in one place. Start by replacing the default images and text, and you’ll be on your way!

You can use Page Layouts to make great-looking pages with only a few clicks. For inspiration, here’s a selection of layouts using a variety of WordPress.com themes.

What other types of pages and designs would be useful for your site? Let us know what you’d like to see — we’d love to hear from you!

Announcing a New Scholarship for LGBTQ+ WordPress Community Members

The Queeromattic Employee Resource Group, Automattic’s LGBTQ+ internal organization, is proud to announce a scholarship for LGBTQ+ WordPress Community members who need financial support to attend a WordCamp flagship event for the first time. 

For those unfamiliar with WordCamps, they are informal, community-organized events put together by WordPress users like you. Attendees ranging from casual users to core developers participate, share ideas, and get to know each other. There are currently four flagship events each year: WordCamp Europe, WordCamp Asia, WordCamp US, and WordCamp Latin America. We’re going to sponsor one member of the LGBTQ+ community to attend each of these events!

Our hope in sponsoring folks to attend an initial WordCamp flagship event is that it will provide a career-enhancing opportunity for folks to connect more deeply with members of the WordPress community and level up their own WordPress skills to take back into their everyday life. Many of us at Automattic found our way here through the wider WordPress community and we’re really excited to share that chance with folks from the LGBTQ+ community who might not have the opportunity otherwise. 

Right now, we’re accepting applications for WordCamp US 2020. If you’re a member of the LGBTQ+ community and a WordPress user, we encourage you to apply at https://automattic.com/scholarships/queeromattic/. To be considered, please apply no later than Sunday, May 31, 2020 at 12 a.m. Pacific Time.

If you know someone who would be perfect for an opportunity like this, please share it with them! We want folks from all over the world to have the chance to benefit from this new scholarship.

Digital Transformation in Italy, Powered by OpenShift

Recently in Milan, Red Hat presented an Italian edition of the OpenShift Commons gathering. The event brings together experts from all over the world to discuss open source projects that support the OpenShift and Kubernetes ecosystem, as well as to explore best practices for cloud-native application development and for getting business value from container technologies at scale. Presenting in Milan were three organizations leading the way: SIA, Poste Italiane and Amadeus.*
Amadeus’ OpenShift infrastructure
Amadeus Software Engineer Salvatore Dario Minonne spoke of the five-year relationship between Red Hat and Amadeus. “In the fall of 2014 we got to know the Red Hat engineering team in Raleigh in the United States. Our teams got their hands on the first versions of OpenShift and started a fruitful collaboration with Red Hat that has become a true engineering partnership. We continue to contribute our use cases to the community, to help drive open source innovation that meets our real-world needs,” said Minonne.
“Not all Amadeus applications are in the cloud,” added Minonne, underlining that their infrastructure is a hybrid of public and private cloud, and there is a careful consideration when migrating workloads to the cloud. 
“At Amadeus,” said Minonne, “We are looking closely into multicloud, not just to avoid vendor lock-in, but also to mitigate the risks of impact if something goes wrong with a provider, and to give us the ability to spin down a particular cluster if it is buggy or there is a security issue.” 
Minonne talked about the change in mindset required with a move to hybrid cloud. “Software development and management practices must also change, to mitigate compatibility issues that might occur with applications not originally designed for the cloud. In fact, many Kubernetes resources have been created precisely to reduce these incompatibilities.”
Poste Italiane
Pierluigi Sforza and Paolo Gigante, Senior Solutions Architects working in Poste Italiane’s IT Technological Architecture Group, spoke to the OpenShift Milan Commons audience about how Poste Italiane has accelerated its digital transformation efforts in the last year.
Sforza emphasised how they are embracing a DevOps philosophy along with their increased use of open source, which has involved building a closer relationship with Red Hat. Gigante added that the rise of open source at Poste Italiane “reflects the current technology landscape, where rapidly evolving competition, increased digitalization and changing customer expectations require faster time to market, which is one area where proprietary technologies from traditional vendors often fall short.” 
Sforza added that, “the need for agility and speed of delivery sometimes necessitates taking a risk in trying less mature technologies, starting by experimenting with the open source community and then relying on trusted vendors, such as Red Hat, to have the levels of security and stability needed to go into production.”
Poste Italiane has been adapting its legacy infrastructures and processes to the new world of DevOps and containerization. This laid the foundation for new projects, such as an adaptation it has made to its financial platform in line with the PSD2 directive. “With OpenShift, we were able to create a reliable, high performance platform perfectly adapted to our needs, in order to meet another of our major business goals: to be at the forefront of innovation,” said Sforza.
The organization’s infrastructure modernization involves migrating some workloads off the mainframe. Sforza explained: “Where it makes sense, we are aiming to move monolithic workloads to a containerized infrastructure using microservices, which is more cost effective and gives us greater scalability. This will help us manage applications more efficiently and provide a smoother end-user experience, especially given the rise in customers using our digital channels.”
SIA
SIA is headquartered in Milan and operates in 50 countries. SIA is a European leader in the design, construction and management of technology infrastructures and services for financial institutions, central banks, public companies and government entities, focusing on payments, e-money, network services and capital markets.
Nicola Nicolotti, a Senior System Administrator at SIA, explained how they are supporting customers with the move to containers: “the traditional waterfall approach is often not compatible with the adoption of new technologies, which require a deeper level of integration. However, many traditionally structured organizations face multiple difficulties when adopting new technologies and putting changes into practice, so we aim to help them understand what those challenges might be as well as the corresponding solutions that can help them meet their business objectives.”
Matteo Combi, SIA Solution Architect, emphasised the importance of collaboration when working with the open source community – not just via software development. “When we participated in Red Hat Summit in Boston, we recognised the value in sharing diverse experiences at an international level. Being able to compare different scenarios enables us to develop new ideas to improve our use of the technology itself as well as how it can be applied to meet our business goals.”
Learn more about our customer stories at Red Hat Summit Virtual Experience.
* Customer insights in this post originally appeared in Italian as part of a special feature in ImpresaCity magazine, issue #33, October 2019, available to read here. 
The post Digital Transformation in Italy, Powered by OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Red Hat and the value of sharing

This article is translated from the original Italian. The speakers were attending the first Italian edition of the OpenShift Commons Gathering event.
Back in September in Milan, more than 350 very attentive people attended the first Italian edition of the OpenShift Commons Gathering. The event, which brings together experts from all over the world to discuss open source projects that support the OpenShift and Kubernetes ecosystem, analyze best practices for cloud-native applications, and take a deeper look at container technologies, drew developers, DevOps experts and system administrators to explore how container technologies are evolving to become effective and safe at large scale. It was a day full of ideas and reflections, thanks in part to the Red Hat experts on hand and above all to the users who shared their stories.
Directing the entire operation, and perfectly at ease in the role of host, was Diane Mueller, Director of Community Development, Cloud Platform at Red Hat, whom ImpresaCity asked to describe the value of the OpenShift Commons event in person. Based in Vancouver, Canada, Mueller focuses in particular on community development, and she immediately declared herself very satisfied with the turnout of over 350 people at the first Italian edition of the OpenShift gathering, also underlining “the excellent mix of people present, all united by the desire to take part in a community event whose purpose is not to ‘sell’ something, as happens at classic corporate events, but to exchange experiences on open source and in particular on the OpenShift ecosystem.”
Concrete experiences

A key aspect of OpenShift Commons, and perhaps the reason for its success, is for Diane Mueller the fact that “a good part of the event is dedicated to getting the people who actually work on our products to talk and share their experiences with their colleagues. Plenty of events are organized in general, but community meetings like this one make the difference, because we bring both our experts and companies that present concrete cases. The presence of users helps build trust in open source and, why not, also creates new connections from a networking perspective.”
And that’s not all: “open source changed many aspects of the technology landscape long ago, and companies too have understood that taking part in open source communities plays an increasingly fundamental role in the drive toward innovation and digital transformation,” continues Mueller, underlining that “this is even more true today, when practically all companies of a certain size are essentially software companies, given that apps now permeate every aspect of corporate life. The more companies realize the value of contributing and sharing experiences, in the spirit of open source development, the more concrete benefits they will see.”
Giuseppe Bonocore, Senior Solution Architect at Red Hat, is also satisfied with the turnout at the event: “this is certainly a very good start, which shows that the participants understood the value of the community. The presence of three important customers also pleases us, even if we need to do more to motivate companies to see the advantages of describing their needs and sharing use cases with others.”
The latter is in fact a distinctly Italian peculiarity: companies are often somewhat reluctant to share their projects, for reasons that are perhaps more cultural than substantive. But according to Bonocore there is room for optimism, judging also by the “presence of customers from three different vertical sectors, with Amadeus for travel and internet, Poste Italiane representing the public administration, and SIA for banking, demonstrating that needs are common regardless of the sector of activity, given that all three companies presented substantially similar experiences on stage. I am confident that other companies will follow this example at the future community events we organize.” Also because, Bonocore concludes, “companies that are further along on these issues have already noticed: appearing in person with testimonials at community events is an excellent employer-branding tool, a way to present yourself at your best in order to attract talented people and resources, which is a well-known problem in the IT field.”
In addition to the Red Hat experts and the three customer testimonials, examined below, Microsoft was present at the OpenShift Commons event as the sole sponsor. ImpresaCity asked Marco D’Angelo, Developer Relations Manager for Microsoft Western Europe, to explain the reasons for this participation. They are multiple, starting from a general one: Microsoft invests heavily in this type of community event in order to “show our new face up close: as is well known, we have long since stopped being just a software vendor and have increasingly become a cloud provider with Azure,” notes D’Angelo, pointing out that another goal is to “establish, or re-establish, relations with communities that did not use Microsoft software.”

In particular, D’Angelo continues, “OpenShift Commons is an event of which we are the sole sponsor, in part because we helped bring it to Italy, and not for purely commercial reasons but to strengthen connections with the technical people from the various organizations taking part.” For Microsoft, the community-event format underlines its willingness to work with the developer community “even when there is no contract, that is, without a short-term advantage. It is an investment in deeds, showing concretely that our focus on the open source world, as at this Red Hat event, is one of close collaboration,” explains D’Angelo.
On that note, D’Angelo points out that “Red Hat is one of the partners with whom we can both speak to the same kind of audience, at community events like this, and do business together, as our market goals are perfectly aligned. To take just one example, we at Microsoft have brought Red Hat applications natively into our cloud, so in essence it’s as if we were resellers of their products.” It is another aspect of a profitable relationship between the two companies that began about four years ago, while “in Italy this is the third year that we have gone to market together, sponsoring joint events or presenting together. The results are there: right now the relationship with Red Hat in Italy is the one showing the greatest growth in joint engagements,” concludes Marco D’Angelo.
But now it’s time to give the floor to the companies themselves, starting with …
Amadeus’ OpenShift infrastructure
The relationship between Red Hat and Amadeus has lasted over five years. “In 2014 we were looking for a partner to support us in bringing workloads to the cloud and we started a selection process; few names remained on the shortlist, and in the end we chose Red Hat,” said Amadeus Software Engineer Salvatore Dario Minonne, who at the Milan event recounted how OpenShift revolutionized the company’s cloud infrastructure.
Amadeus needs little introduction: in addition to being one of the leading providers of advanced technology solutions for the global travel sector, it is among the top ten software companies in the world, with more than 17,000 employees and operations in 190 markets. Amadeus chose Red Hat’s OpenShift Container Platform as the basis for its application infrastructure, called Amadeus Cloud Services, improving the service offered to customers, increasing platform availability, simplifying operations and shortening the time to market of new services.

“In the fall of 2014 we got to know the Red Hat engineering team in Raleigh, in the United States, getting our hands on the first versions of OpenShift and starting a fruitful collaboration that has become a real engineering partnership, to whose development we contribute with our use cases, in the best spirit of open source,” continues Minonne. Today Amadeus uses the Enterprise version of OpenShift, which runs on multiple cloud platforms with many workloads in production. That said, “not all Amadeus applications are in the cloud,” explains Minonne, underlining that it is “a long process that must be carefully evaluated, even if Red Hat’s strategy toward the unified hybrid cloud offers more than one reason for interest, since both public and private clouds are present at Amadeus.”
Towards the multicluster
In Minonne’s words, “Red Hat has always been seen by many as a super partes provider of open source software, and we continue to perceive it as such, in part because events like today’s show its will to remain super partes, and the fact that the Multicluster Federation is pushed firsthand by Red Hat also goes in this direction.” On this point, Minonne underlines that “at Amadeus we look carefully at the multicluster approach, not only to avoid vendor lock-in but also because of the ever-present possibility that something goes wrong with a provider, and to have the ability to quickly dismantle a particular cluster because it is buggy and therefore poses a security problem. These are real needs that would be well served.”
Broadening the discussion, Minonne points out that “even today many obstacles to going ‘all cloud’ remain. The first is cultural: where people once said ‘not invented here,’ the syndrome has survived under the different name of ‘not hosted here,’ and it is more widespread than we think.” And that’s not all: “if you want to bring everything to the cloud, software development and management practices must also change. There is a technical obstacle here too, because applications not originally designed for the cloud sometimes have compatibility problems with the cloud itself. In fact, many Kubernetes resources have been created precisely to reduce these incompatibilities,” continues Minonne. But the message is that the obstacles can be overcome, even if there is still much to do on both the customer side and the cloud side. As Minonne pointed out in his presentation to the OpenShift Commons Gathering audience in Milan, cloud provider APIs are still different and poorly standardized: “it is a fact that customers would like to install the same cluster everywhere, and it would be desirable for providers to do more to make that possible, because it would lead to greater portability and easier migration of workloads.”
Poste Italiane’s new technological path between open source and DevOps
The testimony of a customer like the giant Poste Italiane is also highly relevant. The company is active in numerous fields, starting with the postal service for which it was founded over 150 years ago. Today Poste Italiane is also present in many other areas: to name a few, the banking sector, where the widespread PostePay payment card stands out; insurance, with PosteVita representing a considerable share of the company’s entire business; and mobile telephony, where it operates under the PosteMobile brand as the most successful “virtual” operator in Italy.
At the Red Hat event in Milan “we retraced the ways in which Poste Italiane pressed the accelerator on digital transformation over the last year, focusing above all on open source and on the DevOps philosophy,” explains Pierluigi Sforza, a Senior Solutions Architect at Poste Italiane who, together with Paolo Gigante, who holds the same position in the company, is part of Poste Italiane’s IT Technological Architecture Group. It is, Sforza underlines, “a structure supporting the delivery, experimentation and implementation of new technologies,” framed within the IT function headed by Mirko Mischiatti, Group CIO of Poste Italiane, and it is “in essence centralized and unified: even if the various companies in the group have local IT micro-functions, the overall structure is one,” explains Gigante.
Looking more closely at the experience presented at the Red Hat event, it was “a process of modernizing the infrastructure along DevOps, and therefore open source, lines, with a closer relationship with Red Hat, in light of intensive use of more advanced technologies like OpenShift or Ceph, to give some examples,” continues Sforza. Gigante echoes him, stressing that the emphasis on open source arose as a “reflection of the current technology landscape, where the demands of digital transformation require accelerations that the proprietary technologies of classic vendors often fail to keep up with.” And while it is true that “part of the company has always worked with open source, in both Community and Enterprise versions, at different levels and with different vendors,” as Sforza points out, it is also true that “delivery needs sometimes require taking the risk of trying less mature technologies, starting by experimenting with the open source community and then relying on more structured vendors, such as Red Hat, to achieve the levels of resilience needed to go into production.”
Infrastructure in production
The path, followed for just over a year now, started with a project of “adapting old legacy infrastructures to the new world of DevOps and containerization, which then laid the foundations for a stand-alone project that led to the main achievement: the regulatory adaptation of our financial platform to the PSD2 directive,” explains Sforza, underlining that “with OpenShift we were able to create a reliable platform, perfectly adapted to our needs and above all high-performing, fully meeting another of the goals we had set ourselves, namely to serve as the backbone of subsequent developments.”

In more detail, one of the important guidelines of Poste Italiane’s IT renewal is simplification, where “a relevant chapter is also offloading the mainframe,” emphasizes Sforza, stating that “it is a strategic project to answer the increasing traffic coming from digital channels. The idea is to have the mainframe perform only typical mainframe tasks, those that require ‘real’ transactionality, as with money transfers, for example, relieving it instead of loads that are not properly transactional, such as the account statement or other similar requests. We will move more and more toward lightening the mainframe of whatever makes no sense to keep there in a monolithic, non-scalable and fundamentally expensive way, entrusting these tasks instead to a containerized infrastructure, with microservices and other systems such as MongoDB or Kafka, handling ever more efficiently the traffic driven by the growing number of requests from digital channels.”
Finally, a word on taking part in community events: “we consider them very useful, as they give us a view of the market, thanks also to the customers who concretely describe their implementations, in addition to explaining the challenges their suppliers have been called to meet,” points out Pierluigi Sforza. It is also for this reason that “we have tried to intensify our presence at this type of event: it is always interesting to hear from customers, and to have a direct comparison with other companies, not only those operating in our own field,” concludes Paolo Gigante.
The methodology of change in SIA
SIA, headquartered in Milan and present in 50 countries, likewise needs little introduction, being a European leader in the design, construction and management of technology infrastructures and services for financial institutions, central banks, corporations and public administrations in the areas of payments, e-money, network services and capital markets.

SIA’s testimony at the OpenShift Commons Gathering in Milan aimed “not so much to present our experience using a Red Hat product as to describe how the company and its people moved to ensure that the services built on OpenShift were successful,” explains Nicola Nicolotti, Senior System Administrator at SIA, underlining that “in other words, we wanted to focus attention not entirely on the technology but above all on the organizational considerations, which are equally relevant.” That focus also matches the spirit of this Red Hat event, which, “unlike other events where attention is above all on technology, is more a moment of sharing, where the main aim is to build community, grow the products and share best practices, even though each of us works at a different company,” adds Matteo Combi, SIA Solution Architect.
More in detail, "we presented a working methodology intended to demonstrate that the traditional waterfall approach often does not fit the adoption of new technologies, which instead require more intensive integration," continues Nicolotti, underlining that "in structured companies there are many difficulties that can be encountered when changes are put into practice, and this is why we believe it is useful to describe both the problems that adopting new technologies may raise and the answers we gave while keeping business objectives in sight, sharing the approach adopted to resolve the difficulties."
The value of sharing
The aspect of sharing is perhaps the main value of this type of event: "we also participated in the Red Hat Summit in Boston, noting that at the international level there is a greater propensity to compare different experiences," notes Combi, explaining that "the comparison with different realities often sparks other ideas, not only about technologies, for facing challenges better, and it is also for this reason that we decided to participate in this event as speakers."
In conclusion, Nicolotti summarizes, "we are very satisfied with the opportunity to participate in meetings such as the Red Hat OpenShift Commons Gathering, also for the networking possibilities and the employer-branding opportunities for our company, given that presenting our experiences at events of this type undoubtedly constitutes a showcase for attracting talent interested in the new technological frontiers."
 
The post Red Hat and the value of sharing appeared first on Red Hat OpenShift Blog.
Source: OpenShift

CyFIR helps businesses find and fix cybersecurity threats faster with IBM Cloud

Cybercrime is on the rise, and many companies may have malicious breaches within their network without even knowing it. Breaches can enter and remain dormant for weeks, months, or even years. This is just what happened in a Marriott cybersecurity breach, and it has affected other companies as well.
Criminals are evolving and creating new strategies and models to scale cybercrime globally. According to a Ponemon Institute report on the cost of cybercrime, the average cost to an attacked organization is US$11.7 million. Business endpoint protection is increasingly important in today’s environment where persistent threats—such as cyber offensive activities by Iran and other hostile governments—are on the rise.
CyFIR provides another layer of detection defense against this changing landscape and a first line forensic investigative platform. We specifically look for things that have bypassed company antivirus measures and firewalls without detection and for situations that don’t involve malware. These threats may not be active but are lurking and waiting for an optimal time to strike.
CyFIR is helping companies quickly detect and respond to cyber threats to keep data and intellectual property safe. We do this by providing forensics-level data at enterprise scale and speed. CyFIR offers the CyFIR Enterprise platform, distributed forensics software; a cloud-hosted managed service for monitoring and threat hunting; and digital forensic investigation services to rapidly investigate risks. For companies that want to avoid overhead and infrastructure costs, CyFIR Investigator is available on demand by the hour.
Deploying cloud-based solutions for clients helps CyFIR find and resolve cybersecurity risks faster. Working on the IBM Cloud, CyFIR can spin up a new client environment or scale an existing environment extremely quickly. This speed of deployment and scale is critical in getting services to clients and service team members for many types of investigations.
Destressing organizations under cyber attack
When an organization is under siege from a cybersecurity threat, there can be a lot of panic within the company. It’s very distracting to have to go through an elongated process to identify and eradicate the threat.
Because CyFIR can see deeply and broadly into a client’s network in a simultaneous fashion, we can identify a breach very quickly and then perform the forensic analysis and remediation, often within hours. Finding and eliminating the threat this quickly destresses the organization and is a considerable improvement over a process that can, traditionally, take 60 days or more. By using technology such as IBM Cloud as a backbone to the CyFIR infrastructure, CyFIR is able to move more quickly, which helps us affect the whole company atmosphere.
CyFIR's ability to get in and help an organization understand whether customer data has been compromised (and therefore whether disclosure of the threat to clients and customers is required) is quite helpful. Quickly understanding what data has and hasn't been compromised, and removing the need to disclose a security threat publicly, is a great relief to company CEOs and boards of directors.
Finding and fixing threats faster with the cloud
The power of the cloud enables CyFIR to move quickly and differentiate from our competition. By leveraging the IBM Cloud, CyFIR can generate a client-specific forensic environment within a few minutes, configure and publish installation files for the endpoints, enroll all of the systems, and quickly connect to the production CyFIR environment. The ability to create the forensics support infrastructure for very large enterprises reduces risk in the midst of challenging incidents and reduces the chances that critical security event data is lost over time.
Because of the distributed nature of the CyFIR platform, made possible by VMware on IBM Cloud, investigators have the ability to remotely search across entire network environments and then quickly look deep into systems or data for specific forensic artifacts to determine the how, what, when and why that is so critical to breach and crisis management. This means threats can be found and fixed, breaches discovered, and investigations completed more quickly. This Speed to Resolution capability reduces risk and cost. Legacy forensics platforms that are not powered by the CyFIR Total Dynamic Visibility distributed forensic processing could take months to manually scan through the images of affected endpoints. CyFIR, however, performs the searches across the environment in minutes. By deploying through the IBM Cloud, the creation of forensic capability, investigation, analysis and reporting can be performed entirely remotely, completed often before legacy forensics and incident response teams/capabilities can even get on the plane.
Working on IBM Cloud speeds CyFIR’s ability to provision and manage resources for our digital forensics platform on the fly. From setting up a client environment or an internal demo environment to spinning down the environment as needed, we can get the system up or down in less than 15 minutes.
The CyFIR solution is also strengthened by enhanced resiliency, flexibility, and the native security of IBM Cloud. To achieve this resiliency, CyFIR is using Veeam on IBM Cloud to manage backup and disaster recovery. The CyFIR teams also rely upon native and IBM partner security services, including QRadar and Resilient, that work seamlessly within the IBM Cloud ecosystem. The IBM Cloud offers a full suite of add-on or cloud-native features, software and services right from within the client interface, allowing the teams to quickly execute from proof of concept to production deployment, all within a model that has been proven to work.
“The IBM Cloud solution allowed us to reinvent how we look at the cloud and how we deploy systems, apps and even security,” shares Brian Herr, Chief Security Officer at CyFIR. “And, because IBM Cloud is part of a larger service ecosystem, we’ve been able to deploy additional necessary business functions, which has enabled us to reinvent ourselves in an unprecedented way. We’ve been able to do this in an exceptionally short period of time because all the other services and software, and everything else that IT and security needs, is already baked into the IBM Cloud.”
CyFIR aims to move further toward becoming cloud native in 2020 and is working with the IBM Garage to expedite development.
Ready to explore how the cloud can enhance your business? Learn more about IBM Cloud solutions and schedule a complimentary visit to the IBM Garage to get started.
 
The post CyFIR helps businesses find and fix cybersecurity threats faster with IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift Scale: Running 500 Pods Per Node

The Basics
A common request from OpenShift users has long been to raise the number of pods per node. OpenShift has kept the limit at 250 from the first Kubernetes-based release (3.0) through 4.2, but with very powerful nodes, it can support many more than that.
This blog describes the work we did to achieve 500 pods per node: the initial testing, the bug fixes and other changes we needed to make, the testing we performed to verify function, and what you need to do if you'd like to try this yourself.
Background
Computer systems have continued unabated their relentless progress in computation power, memory, storage capacity, and I/O bandwidth. Systems that not long ago were exotic supercomputers are now dwarfed in capability (if not physical size and power consumption) by very modest servers. Not surprisingly, one of the most frequent questions we've received from customers over the years is "can we run more than 250 pods (until now, the tested limit) per node?" Today we're happy to announce that the answer is yes!
In this blog, I’m going to discuss the changes to OpenShift, the testing process to verify our ability to run much larger numbers of pods, and what you need to do if you want to increase your pod density.
Goals
Our goal with this project was to run 500 pods per node on a cluster with a reasonably large number of nodes. We also considered it important that these pods actually do something; pausepods, while convenient for testing, aren’t a workload that most people are interested in running on their clusters. At the same time, we recognized that the incredible variety of workloads in the rich OpenShift ecosystem would be impractical to model, so we wanted a simple workload that’s easy to understand and measure. We’ll discuss this workload below, which you can clone and experiment with.
Initial Testing
Early experiments on OpenShift 4.2 identified issues with communication between the control plane (in particular, the kube-apiserver) and the kubelet when attempting to run nodes with a large number of pods. Using a client/server builder application replicated to produce the desired number of pods, we observed that the apiserver was not getting timely updates from the kubelet when pods came into existence, resulting in problems such as networking not coming up for pods and the pods (when they required networking) failing as a result.
Our test was to run many replicas of the application to produce the requisite number of pods. We observed that up to about 380 pods per node, the applications would start and run normally. Beyond that, we would see some pods remain in Pending state, and others start but then terminate. A pod that is expected to run terminates only because the code inside it decides to exit. There were no messages in the logs identifying particular problems; the pods appeared to be starting up correctly, but the code within them was failing, causing them to terminate. Studying the application, the most likely reason was that the client pod was unable to connect to the server, indicating that it did not have a network available.
As an aside, we observed that the kubelet declared the pods to be Running very quickly; the delay was in the apiserver realizing this. Again, there were no log messages in either the kubelet or the apiserver logs indicating any issue. The network team requested that we collect logs from the openshift-sdn that manages pod networking; that too showed nothing out of the ordinary. Indeed, even using host networking didn’t help.
To simplify the test, we wrote a much simpler client/server deployment, where the client would simply attempt to connect to the server until it succeeded rather than failing, using only two nodes. The client pods logged the number of connection attempts made and the elapsed time before success. We ran 500 replicas of this deployment, and found that up to about 450 pods total (225 per node), the pods started up and quickly went into Running state. Between 450 and 620, the rate of pods transitioning to Running state slowed down, and actually stalled out for about 10 minutes, after which the backlog cleared at a rate of about 3 pods/minute until eventually (after a few more hours) all of the pods were running. This supported the hypothesis that there was nothing really wrong with the kubelet; the client pods were able to start running, but most likely timed out connecting to the server, and did not retry.
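The simplified client's behavior can be sketched as a connect-until-success loop; the following is a hypothetical Python reconstruction of that logic, not the actual test code:

```python
import socket
import time

def connect_with_retry(host, port, delay=0.5):
    """Keep retrying the connection instead of failing, and report how many
    attempts and how much elapsed time it took. Retrying forever is what
    distinguishes a pod that is broken from one merely waiting for its
    network to come up."""
    attempts = 0
    start = time.monotonic()
    while True:
        attempts += 1
        try:
            conn = socket.create_connection((host, port), timeout=1.0)
            return conn, attempts, time.monotonic() - start
        except OSError:
            time.sleep(delay)  # server unreachable so far; try again
```

Unlike the original application, whose clients timed out and exited, a client like this never gives up, which is what let us see that the pods themselves were healthy and only their networking was late.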
On the hypothesis that the issue was rate of pod creation, we tried adding sleep 30 between creating each client-server pair. This staved off the point at which pod creation slowed down to about 375 pods/node, but eventually the same problem happened. We tried another experiment placing all of the pods within one namespace, which succeeded — all of the pods quickly started and ran correctly. As a final experiment, we used pause pods (which do not use the network) with separate namespaces, and hit the same problem, starting at around 450 pods (225/node). So clearly this was a function of the number of namespaces, not the number of pods; we had established that it was possible to run 500 pods per node, but without being able to use multiple namespaces, we couldn’t declare success.
Fixing the problem
By this point, it was quite clear that the issue was that the kubelet was unable to communicate at a fast enough rate with the apiserver. When that happens, the most obvious issue is the kubelet throttling transmission to the apiserver per the kubeAPIQPS and kubeAPIBurst kubelet parameters. These are enforced by Go rate limiting. The defaults that we inherit from upstream Kubernetes are 5 and 10, respectively. This allows the kubelet to send at most 5 queries to the apiserver per second, with a short-term burst rate of 10. It’s easy to see how under a heavy load that the kubelet may need a greater bandwidth to the apiserver. In particular, each namespace requires a certain number of secrets, which have to be retrieved from the apiserver via queries, eating into those limits. Additional user-defined secrets and configmaps only increase the pressure on this limit.
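The kubelet's limiter is a token bucket (Go's rate.Limiter); here is a minimal Python sketch of the QPS/burst semantics, mirroring the behavior rather than the actual implementation:

```python
import time

class TokenBucket:
    """Sketch of client-side QPS/burst throttling: tokens refill at `qps`
    per second up to a capacity of `burst`; each request spends one token."""

    def __init__(self, qps, burst):
        self.qps = qps              # steady-state refill rate (tokens/second)
        self.burst = burst          # bucket capacity (max short-term burst)
        self.tokens = float(burst)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        """Return True if a request may be sent now, False if throttled."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With the old defaults (qps=5, burst=10), a kubelet that suddenly needs dozens of secret GETs spends its burst on the first 10 and is then limited to 5 requests per second; a namespace-heavy node hits that wall very quickly.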
The throttling is used in order to protect the apiserver from inadvertent overload by the kubelet, but this mechanism is a very broad brush. However, rewriting it would be a major architectural change that we didn’t consider to be warranted. Therefore, the goal was to identify the lowest safe settings for KubeAPIQPS and KubeAPIBurst.
Experimenting with different settings, we found that setting QPS/burst to 25/50 worked fine for 2000 pods on 3 nodes with a reasonable number of secrets and configmaps, but 15/30 didn't.
The difficulty in tracking this down is that there’s nothing in either the logs or Prometheus metrics identifying this. Throttling is reported by the kubelet at verbosity 4 (v=4 in the kubelet arguments), but the default verbosity, both upstream and within OpenShift, is 3. We didn’t want to change this globally. Throttling had been seen as a temporary, harmless condition, hence its being relegated to a low verbosity level. However, with our experiments frequently showing throttling of 30 seconds or more, and this leading to pod failures, it clearly was not harmless. Therefore, I opened https://github.com/kubernetes/kubernetes/pull/80649, which eventually merged, and then pulled it into OpenShift in time for OpenShift 4.3. While this alone would not solve throttling, it greatly simplifies diagnosis. Adding throttling metrics to Prometheus would be desirable, but that is a longer-term project.
The next question was what to set the kubeAPIQPS and kubeAPIBurst values to. It was clear that 5/10 wouldn’t be suitable for larger numbers of pods. We decided that we wanted some safety margin above the tested 25/50, hence settled on 50/100 following node scaling testing on OpenShift 4.2 with these parameters set.
Another piece of the puzzle was the watch-based configmap and secret manager for the kubelet. This allows the kubelet to set watches on secrets and configmaps supplied by the apiserver; for items that don't change very often, watches are much more efficient for the apiserver to handle, as it caches the watched objects locally. This change, which didn't make OpenShift 4.2, would enable the apiserver to handle a heavier load of secrets and configmaps, easing the potential burden of the higher QPS/burst values. If you're interested in the details of the change, they are described in the Go 1.12 documentation, under net/http.
To summarize, we made the following changes between OpenShift 4.2 and 4.3 to set the stage for scaling up the number of pods:

Change the default kubeAPIQPS from 5 to 50.
Change the default kubeAPIBurst from 10 to 100.
Change the default configMapAndSecretChangeDetectionStrategy from Cache to Watch.
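On clusters where these values still need raising, the first two can be overridden the same way maxPods is set later in this post, via a custom KubeletConfig. A sketch follows; the pool-selector label is an assumption, not something the defaults require:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-kube-api-rate-limits
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: large-pods   # assumed label on the worker MachineConfigPool
  kubeletConfig:
    kubeAPIQPS: 50                 # queries per second to the apiserver
    kubeAPIBurst: 100              # short-term burst allowance
```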

Testing 500 pods/node
The stage was now set to actually test 500 pods/node as part of OpenShift 4.3 scaling testing. The questions we had to decide were:

What hardware do we want to use?
What OpenShift configuration changes would be needed?
How many nodes do we want to test?
What kind of workload do we want to run?

Hardware
A lot of pods, particularly with many namespaces, can put considerable stress on the control plane and the monitoring infrastructure. Therefore, we deemed it essential to use large nodes for the control plane and monitoring infrastructure. As we expected the monitoring database to have very large memory requirements, we placed (as is our standard practice) the monitoring stack on a separate set of infrastructure nodes rather than sharing that with the worker nodes. We settled on the following, using AWS as our underlying platform:

Master Nodes
The master nodes were r5.4xlarge instances. r5 instances are memory-optimized, to allow for large apiserver and etcd processes. The instance type consists of:

CPU: 16 cores, Intel Xeon Platinum 8175
Memory: 128 GB
Storage: EBS (no local storage), 4.75 Gbps
Network: up to 10 Gbps.

Infrastructure Nodes
The infrastructure nodes were m5.12xlarge instances. m5 instances are general purpose. The instance type consists of:

CPU: 48 cores, Intel Xeon Platinum 8175
Memory: 192 GB
Storage: EBS (no local storage), up to 9.5 Gbps
Network: 10 Gbps

Worker Nodes
The worker nodes were m5.2xlarge. This allows us to run quite a few reasonably simple pods, but typical application workloads would be heavier (and customers are interested in very big nodes!). The instance type consists of:

CPU: 8 cores, Intel Xeon Platinum 8175
Memory: 32 GB
Storage: EBS (no local storage), 4.75 Gbps
Network: up to 10 Gbps

Configuration Changes
The OpenShift default for maximum pods per node is 250. Worker nodes have to contain parts of the control infrastructure in addition to user pods; there are about 10 such control pods per node. Therefore, to ensure that we could definitely achieve 500 worker pods per node, we elected to set maxPods to 520 using a custom KubeletConfig, following the procedure described in https://docs.openshift.com/container-platform/4.3/scalability_and_performance/recommended-host-practices.html
% oc label --overwrite machineconfigpool worker custom-kubelet=large-pods
% oc apply -f - <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: large-pods
  kubeletConfig:
    maxPods: 520
EOF
This requires an additional configuration change. Every pod on a node requires a distinct IP address allocated out of the host IP range. By default, when creating a cluster, the hostPrefix is set to 23 (i.e., a /23 network per node), allowing for up to 510 addresses, which is not quite enough. So we had to set hostPrefix to 22 for this test in the install-config.yaml used to install the cluster.
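In install-config.yaml this corresponds to a networking stanza along the following lines; the CIDR values and network type shown are the installer defaults and are assumptions here, with only hostPrefix changed:

```yaml
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 22        # a /22 per node: up to 1022 usable pod IPs instead of 510
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
```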
In the end, no other configuration changes from stock 4.3 were needed. Note that if you want to run 500 pods per node, you’ll need to make these two changes yourself, as we did not change the defaults.
How many nodes do we want to test?
This is a function of how many large nodes we believe customers will want to run in a cluster. We settled on 100 for this test.
What kind of workload do we want to run?
Picking the workload to run is a matter of striking a balance between the test doing something interesting and being easy to set up and run. We settled on a simple client-server workload in which the client sends blocks of data to the server which the server returns, all at a pre-defined rate. We elected to start at 25 nodes, then follow up with 50 and 100, and use varying numbers of namespaces and pods per namespace. Large numbers of namespaces typically stress the control plane more than the worker nodes, but with a larger number of pods per worker node, we didn’t want to discount the possibility that that would impact the test results.
Test Results
We used ClusterBuster to generate the necessary namespaces and deployments for this test to run.
ClusterBuster is a simple tool that I wrote to generate a specified number of namespaces, and secrets and deployments within those namespaces. There are two main types of deployments that this tool generates: pausepod, and client-server data exchange. Each namespace can have a specified number of deployments, each of which can have a defined number of replicas. The client-server can additionally specify multiple client containers per pod, but we didn’t use this feature. The tool uses oc create and oc apply to create objects; we created 5 objects per oc apply, running two processes concurrently. This allows the test to proceed more quickly, but we’ve found that it also creates more stress on the cluster. ClusterBuster labels all objects it creates with a known label that makes it easy to clean up everything with
oc delete ns -l clusterbuster
In client-server mode, the clients can be configured to exchange data at a fixed rate for either a fixed number of bytes or for a fixed amount of time. We used both here in different tests.
We ran tests on 25, 50, and 100 nodes, all of which were successful; the "highest" test (i.e., the one with the greatest number of namespaces) in each sequence was:

25 node pausepod: 12500 namespaces each containing one pod.

25 node client-server: 2500 namespaces each containing one client-server deployment consisting of four replica client pods and one server (5 pods/deployment). Data exchange was at 100 KB/sec (in each direction) per client, total 10 MB in each direction per client.

50 node pausepod: 12500 namespaces * 2 pods.

50 node client-server: 5000 namespaces, one deployment with 4 clients + server, 100 KB/sec, 10 MB total.

100 node client-server: 5000 namespaces, one deployment with 9 clients + server, 100 KB/sec for 28800 seconds. In addition, we created and mounted 10 secrets per namespace.

Test Data
I’m going to cover what we found doing the 100 node test here, as we didn’t observe anything during the smaller tests that was markedly different (scaled appropriately).
We collected a variety of data during the test runs, using Prometheus metrics and another utility from my OpenShift 4 tools package (monitor-pod-status), along with Grafana dashboards to monitor cluster activity. Strictly speaking, monitor-pod-status duplicates what we can get from Grafana, but it presents the data in an easy-to-read textual format. Finally, I used yet another tool, clusterbuster-connstat, to retrieve log data left by the client pods in order to analyze the rate of data flow.
Test Timings
The time required to create and tear down the test infrastructure is a measure of how fast the API and nodes can perform operations. This test was run with a relatively low parallelism factor, and operations didn’t lag significantly.

| Operation | Approximate Time (minutes) |
| --- | --- |
| Create namespaces | 4 |
| Create secrets | 43 |
| Create deployments | 34 |
| Exchange data | 480 |
| Delete pods and namespaces | 29 |

One interesting observation during the pod creation time is that pods were being created at about 1600/minute, and at any given time, there were about 270 pods in ContainerCreating state. This indicates that the process of pod creation took about 10 seconds per pod throughout the run.
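That per-pod figure follows from Little's law (time spent in a stage equals the number of items in the stage divided by the throughput); a quick check using the observed numbers:

```python
# Observed during the 100-node run
in_flight = 270       # pods sitting in ContainerCreating at any given time
per_minute = 1600     # pods being created per minute

# Little's law: time in stage = items in stage / throughput
seconds_per_pod = in_flight / (per_minute / 60.0)
print(seconds_per_pod)  # about 10 seconds per pod
```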
Networking
The expected total rate of data exchange is 2 * Nclients * XferRate. In this case, of the 50,000 total pods, 45,000 were clients. At 0.1 MB/sec, this would yield an expected aggregate throughput of 9000 MB/sec (72,000 Mb/sec). The expected aggregate transfer rate per node would therefore be 720 Mb/sec, but since on average about 1% of the clients would be colocated with their server, the actual average network traffic would be slightly less. In addition, we'd expect variation due to the number of server pods that happened to be located on each node; in the configuration we used, each server pod handles 9x the data each client pod handles.
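Working those numbers through (the per-client rate and pod counts come from the test configuration above):

```python
clients = 45_000      # of the 50,000 pods, 9 of every 10 are clients
rate_kb_s = 100       # per-client transfer rate, each direction
nodes = 100

aggregate_mb_s = 2 * clients * rate_kb_s / 1000   # both directions, in MB/sec
aggregate_mbit_s = aggregate_mb_s * 8
per_node_mbit_s = aggregate_mbit_s / nodes

print(aggregate_mb_s, aggregate_mbit_s, per_node_mbit_s)  # 9000.0 72000.0 720.0
```

The measured 650 to 780 Mbit/sec per node sits just under this figure, consistent with roughly 1% of clients being colocated with their server.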
I inspected 5 nodes at random; each node showed a transfer rate during the steady state data transfer of between 650 and 780 Mbit/sec, with no noticeable peaks or valleys, which is as expected. This is nowhere near the 10 Gbps limit of the worker nodes we used, but the goal of this test was not to stress the network.
Quis custodiet ipsos custodes?
With apologies to linguistic purists, the few events we observed were related to Prometheus. During the tests, one of the Prometheus replicas typically used about 130 Gbytes of RAM, but a few times the memory usage spiked toward 300 Gbytes before ramping down over a period of several hours. In two cases, Prometheus crashed; while we don’t have records of why, we believe it likely that it ran out of memory. The high resource consumption of Prometheus reinforces the importance of robust monitoring infrastructure nodes!
Future Work
We have barely scratched the surface of pod density scaling with this investigation. There are many other things we want to look at, over time:

Even more pods: as systems grow even more powerful, we can look at even greater pod densities.

Adding CPU and memory requests: investigate the interaction between CPU/memory requests and large numbers of pods.

Investigate the interaction with other API objects: raw pods per node is only part of what stresses the control plane and worker nodes. Our synthetic test was very simple, and real-world applications will do a lot more. There are a lot of other dimensions we can investigate:

Number of configmaps/secrets: very large numbers of these objects in combination with many pods can stress the QPS to the apiserver, in addition to the runtime and the Linux kernel (as each of these objects must be mounted as a filesystem into the pods).
Many containers per pod: this stresses the container runtime.
Probes: these likewise could stress the container runtime.

More workloads: the synthetic workload we used is easy to analyze, but is hardly representative of every use people make of OpenShift. What would you like to see us focus on? Leave a comment with your suggestions.

More nodes: 100 nodes is a starting point, but we’re surely going to want to go higher. We’d also like to determine whether there’s a curve for maximum number of pods per node vs. number of nodes.

Bare metal: typical users of large nodes are running on bare metal, not virtual instances in the cloud.

Credits
I’d like to thank the many members of the OpenShift Perf/Scale, Node, and QE teams who worked with me on this, including (in alphabetical order) Ashish Kamra, Joe Talerico, Ravi Elluri, Ryan Phillips, Seth Jennings, and Walid Abouhamad.
The post OpenShift Scale: Running 500 Pods Per Node appeared first on Red Hat OpenShift Blog.
Source: OpenShift

French insurer teams with IBM Services to develop fraud detection solution

Auto insurance fraud costs companies billions of dollars every year. Those losses trickle down to policyholders who absorb some of that risk in policy rate increases.
Thélem assurances, a French property and casualty insurer whose motto is “Thélem innovates for you”, has launched an artificial intelligence program, prioritizing a fraud detection use case as its initial project.
Fraud detection lends itself well to machine learning and was a project that would allow us to enter artificial intelligence through the analytical field we had prioritized. A successful fraud detection project would deliver immediate, significant financial gains for the company.
Tapping into IBM Services
We carried out a few preliminary tests and experiments internally with our data scientists and data engineers but encountered problems with tools and with the environment. Therefore, in order to go a step further, we decided two things. First, we needed to find a solution that would make it possible for us to free ourselves from storage and performance constraints. Second, to increase our expertise we realized we needed to engage experts in the field.
During the course of our research, we met with various representatives from IBM who showed us the advanced analytics capabilities of IBM Watson Studio and IBM Cloud. We discovered that their value proposition corresponded exactly to our needs.
At the beginning of the collaboration, an IBM Global Business Services (GBS) team met with different Thélem assurances teams including marketing and claims management to identify use cases for artificial intelligence. Car insurance is the area in which we experienced the majority of our cases of fraud, so we chose to begin there.
In addition to IBM Watson Studio, which is used as the development environment for analytical models and cases, additional solutions we employed include IBM Cloud with Secure Gateway Service to transfer data from Thélem to the IBM core; IBM Cloud Object Storage, which hosts data stored in the cloud; and IBM Watson Machine Learning, used for deploying IT scripts.
We also had to take into account GDPR legislation and regulations in Europe. Because of these regulations, we paid special attention to minimizing the amount of personal data uploaded and to securing the personal data we received throughout the initiatives implemented.
IBM GBS worked with us on the architecture definition and the addition of data in a secure manner to the cloud. Then, we worked together to define the method to implement use cases and followed up with training using data science models.
Discovering five times more potential cases of fraud
We realized an advantage right from the start: flexibility. The flexibility in launching the fraud detection solution, without having to concern ourselves with storage capacity, machine performance or services used, was phenomenal.
The IBM solution also facilitates collaboration among the different data scientists working on initiatives: more than one of them can work on a single initiative, sharing their work and the progress of their algorithms.
More concrete, tangible advantages include the fact that we've increased fivefold the relevance of cases identified as potentially fraudulent. Additionally, IBM GBS helped us develop a methodology that we can use again on our own. We now have a tool, the methodology and data modeling know-how. This makes it possible for us to enrich our models over time, make better progress and ultimately increase the relevance rate, which should allow us to save an additional several hundred thousand euros every year over the next few years.
Going forward, we plan to begin exploring additional Watson tools such as testing aspects of image recognition or of chatbot services.
Read the case study for more details.
The post French insurer teams with IBM Services to develop fraud detection solution appeared first on Cloud computing news.
Source: Thoughts on Cloud