Automate big decisions in a big data world

No longer only the domain of science fiction, artificial intelligence (AI) is poised to impact everything from how steel is produced to how banks recommend financial products and how farmers grow lettuce. It could even change how people move around cities and do business.
At its core, AI can be defined as a set of technologies that enable computing systems to sense, comprehend and act. Within 10 years, AI and robotics are expected to have a creative-disruption impact estimated at between $14 trillion and $33 trillion in cost reductions across manufacturing and healthcare, enabled by the automation of knowledge work, according to a Bank of America Merrill Lynch report.
Rule engines and expert systems have been essential components of symbolic artificial intelligence for decades. How, then, can this AI technology complement machine learning today? How can rules be applied to big data to augment enterprises’ digital capabilities while bringing transparency to decision making?
Business rules provided by IBM Operational Decision Manager (ODM) have proven successful in implementing eligibility, pricing and fraud detection systems. Whether an organization’s systems are running on the cloud or on premises, transactionally or in batch, its rule engine can run decision logic in a few milliseconds and at scale in a cluster. With a governed approach, business users and developers can validate a new version of a policy and inject it dynamically into running systems without stopping or repackaging them.
Traditional business rule applications process records amounting to a few megabytes of data at a time. As solutions move to big data, rules may be applied to terabytes of data. To this end, the integration of ODM with Apache Spark and Hadoop MapReduce can help scale business rules solutions to the world of big data and combine them with machine learning algorithms.
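To make the pattern concrete, here is a minimal PySpark sketch of evaluating a decision function across a large dataset in a cluster. It is an illustration only: the input path, column names and the hand-written Python rule are assumptions for the example, and a real ODM integration would delegate the evaluation to the embedded rule engine rather than a Python function.

```python
# Minimal PySpark sketch: applying a decision function to each record of a
# large dataset. Purely illustrative; a real ODM integration would delegate
# evaluation to the rule engine, and the paths and columns below are assumed.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("rules-on-big-data").getOrCreate()

# Hypothetical input: one row per loan application, stored in a data lake.
applications = spark.read.parquet("hdfs:///data/loan_applications")

def eligibility_rule(age, yearly_income, requested_amount):
    """Stand-in for a rule set that business users would author in ODM."""
    if age is None or age < 18:
        return "REJECTED"
    if yearly_income is not None and requested_amount <= yearly_income * 0.4:
        return "APPROVED"
    return "MANUAL_REVIEW"

decide = udf(eligibility_rule, StringType())

decisions = applications.withColumn(
    "decision",
    decide(col("age"), col("yearly_income"), col("requested_amount")),
)

# Persist the decisions so they can be audited and used in simulations.
decisions.write.mode("overwrite").parquet("hdfs:///data/loan_decisions")
```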
Benefits of a successful big decision program include the following:

Ability for business users to author, test, simulate and deploy corporate decision automation and management services
Collaboration across business users, developers and operations teams in a governance schema
Continuous and fast decision automation and service delivery
Agility in changing automated corporate policies, including emergency fixes to production environments to respond to new situations, for example, fraud detection
Scalability of the decision automation enabled by open source cluster technology including Hadoop and Spark
Possibility of converging machine learning and rule-based algorithms in the same cluster and the same data lake
Auditability to justify automated decisions

Any organization, regardless of size, can employ a rules engine and utilize big data to achieve high performance levels and automate its policies to deliver repeatable, transparent decisions while also reducing costs. The world of big decisions is agile and fast-paced, and it enables massive simulations and production of AI workloads, potentially in combination with Watson services and Data Science Experience.
To learn more, tune in to our “Think big: Scale your business rules solutions up to the world of Big Data” webinar that explores how ODM business rules applied to big data can transform your decision automation strategy.
The post Automate big decisions in a big data world appeared first on Cloud computing news.
Source: Thoughts on Cloud

Flexible Container Images with OpenShift

Create containers that enable customization, fast adoption, and reuse of software components with this introduction to flexible containers, a concept that focuses on building container images consistently for more efficient product delivery.
Source: OpenShift

Sleep easier with a baby-movement monitor built on Watson IoT

The Mayo Clinic confirms what parents already know: sleepless nights are a rite of passage for most new parents.
Their baby might go back to sleep on his or her own, but if it’s a young infant who is hungry or needs a diaper change, it’s not likely. By the time weary parents hear their baby crying, the baby is already agitated. Typical baby monitors require parents to listen or watch for updates, not something that can be done while getting a good night’s sleep.
This dilemma is what sparked an idea in a parent who knows the value of bringing his baby a bottle immediately.
The baby movement app
I work with IBM Business Partner CapGemini and had just attended a Bluemix workshop. Prior to the workshop, I had never worked with Bluemix, and I was fascinated by how easy it was to develop an app in just two hours that gathered sensor data from a phone, produced websites with graphs to display it, and tweeted about it.
I had discussed with my wife how nice it would be to have some kind of alert for when our baby started to wake rather than when he’s already crying. The workshop made clear it was possible.
I created an Internet of Things (IoT) sensor device from a cell phone I wasn’t using, hooked it up to the Watson IoT platform in Bluemix, secured it to my baby’s bed and defined thresholds for movement to be detected by the sensors.
If the device detects movement greater than the threshold (which typically means the baby is awake), a parent gets an alert right away, much sooner than with a typical monitor, which only alerts once the baby is already crying, and by then it’s too late.
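A rough sketch of the threshold logic might look like the following Python snippet, which compares accelerometer readings against a movement threshold and publishes an alert over MQTT, the protocol the Watson IoT Platform accepts. The broker address, topic, threshold and simulated sensor readings are all illustrative assumptions, not the original app.

```python
# Illustrative sketch of the movement-threshold check (not the original app).
# The broker, topic, threshold and simulated sensor readings are made up.
import json
import math
import random
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style client

MOVEMENT_THRESHOLD = 0.3        # change in g-force that counts as "stirring"
BROKER = "broker.example.org"   # placeholder; Watson IoT uses its own broker URL
TOPIC = "babymonitor/alerts"

def read_accelerometer():
    """Placeholder for the phone sensor: returns simulated (x, y, z) in g."""
    return (random.gauss(0, 0.05), random.gauss(0, 0.05), 1.0 + random.gauss(0, 0.05))

def magnitude(x, y, z):
    """Overall acceleration magnitude across the three axes."""
    return math.sqrt(x * x + y * y + z * z)

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

baseline = magnitude(*read_accelerometer())
while True:
    reading = magnitude(*read_accelerometer())
    if abs(reading - baseline) > MOVEMENT_THRESHOLD:
        # Movement above the threshold: the baby is probably starting to wake.
        client.publish(TOPIC, json.dumps({"event": "movement", "ts": time.time()}))
    time.sleep(0.5)
```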

Parents can be alerted by an email message sent to a VIP list, a call, a text or a tweet, and it’s even possible to update a Facebook status with the alert.
When a parent gets the alert from the movement sensor as soon as the baby starts waking up, mom or dad can tend to the baby’s needs, and the baby will fall back asleep immediately. This way, everyone gets more sleep.
Innovating with ease
It’s easy to put the different components in Bluemix together. It’s quick and inexpensive to go from idea to prototype, particularly when the alternative is investing in server and technology infrastructure. Additionally, it’s not necessary to be a hardcore programmer to use it. Bluemix is very intuitive.
There are no plans to commercialize the baby-movement monitor. It was only created to solve the real-world problem of helping a family sleep at night. However, because Bluemix can help to speed innovation across industries, I envisioned how it could work for one of my clients in the health sector that had just opened a new hospital.
The CTO loved the idea of using movement sensors to monitor patients who may not be able to pull their alert cord or press a button for assistance. A movement sensor could indicate to hospital staff that there is a problem needing attention, for example, that the patient is not safely in bed or is having a seizure. In this case, a simple app created in Bluemix could have a major impact.
Read a related post on LinkedIn, connect your IoT device/phone sensor data here and create your own free Bluemix account.
The post Sleep easier with a baby-movement monitor built on Watson IoT appeared first on Cloud computing news.
Source: Thoughts on Cloud

Are You Certifiable? Why Cloud Technology Certifications Matter

There was a time when tech professionals in certain fields could expect to get multiple emails from recruiters — every single day. In most cases, those days are over, so you want to make sure that you build your resume to best advantage. Last month, we talked to Mariela Gagnon of Cre8Hires and consultant Jens Soeldner about one of your greatest advantages: technical certifications. You can view the full webinar here, and we thought we’d bring the Q&As here to the blog.
Q: Do employers negatively view certifications for older technologies? Should they not be added to a resume? What are examples of cloud-related certifications that should not be added?
Mariela Gagnon: There’s no such thing as a “bad” certification. Older certifications aren’t likely to prove your qualification for a particular position, but all certifications show that you have the discipline to work to improve yourself, so it’s all favorable. Of course, you don’t want to focus on certifications that aren’t relevant, but you should certainly include them.
Q: At what points in a career are certifications most useful?
Mariela: Certifications are useful at all stages of your career. Early on, when you don’t have much experience, they show that you have the relevant knowledge. When you do have more experience, they show that you have the drive and the self-discipline to constantly improve yourself, and they can help you to get promoted, or even to survive a downsizing.
Q: What is the difference between Mirantis and Red Hat OpenStack certification?
Nick Chase: The major difference is that the Mirantis OpenStack and Kubernetes certifications are completely vendor neutral, whereas the Red Hat certifications are proprietary and focus on their own products. The Mirantis certifications focus on the technologies themselves, so you can use that knowledge in any environment. The Red Hat certifications are specific to their products and may not be as applicable if you’re not in a Red Hat environment.
Q: What is the best way to prepare for Mirantis Kubernetes certification?
Nick: The easiest, surest way is, of course, to take the Mirantis Kubernetes & Docker bootcamp. You can either take the instructor-led or the self-paced version. You can also get a list of the exam requirements at https://training.mirantis.com/kcm100-exam-requirements/ so you can study up on your own.
Q: If I have a large number of certifications, should I list them all or highlight specific ones?
Mariela: You definitely want to highlight the ones that are closer to the task you’re applying for, but it never hurts to have any certifications on your resume. Just make sure that the most recent, the most active, the most relevant ones are closer to the top.
Q: If I am unemployed and use that time to study and earn certifications, can that time be listed as schooling on my employment history?
Mariela: I’ve come across this a lot, especially when the job market wasn’t too great. We’d have candidates who might have a year gap in their resume. That can be tough, especially when you’re up against candidates who don’t have a gap, because you’re always being asked to explain, “What were you doing for that year?” A lot of candidates, I’ll have them put their education into that time on their resume so it fills in the void. It also shows that even though you’re not working, you’re still keeping up to date on the technologies: if you get me in this role, I’m up to date and ready to go. So it really adds value and connects to whatever prior experience you have.
Q: Are there essential skills I need to have before starting the cloud certification journey?
Jens Soeldner: Most often, you’re dealing with a Linux environment, so you’ll want to have Linux command line skills, and have a basic understanding of networking. In the case of Microsoft-centric environments, you’d be well served to have a familiarity with PowerShell.
Q: What cloud certification is recommended as a starting point?
Jens: Which cloud certification to do first is highly dependent on what vendor you’re working with. There’s no common denominator here, although some knowledge is transferable, since cloud services are similar.
Q: What types of certifications are not worth getting? Are there any that would give you negative value?
Mariela: There is never a certification that is not worth getting. It is really a personal decision. You want to ensure that you are becoming certified in technologies that you are interested in working with. It will help you target and get hired for the roles you want!
Q: Also please tell us how to go through resume parsers?
Mariela: Resume parsers go off of keyword searches, or what are known as Booleans, to gather large numbers of resumes that include those keywords. These parsers highlight the keywords in the resume, so make sure your keywords appear near the top and in your most recent experience. This process will get more automated as time goes on, as well.
Q: There are so many certifications out there. Is there a specific path of certifications that you would recommend for system administrators? It would be good to know must-have vs. nice-to-have.
Jens: I teach VMware’s and other classes, and it’s great if the participants have a working understanding of Windows or Linux. Obviously, if it’s a Windows-centric position, you’re looking at Microsoft certifications such as Microsoft Certified System Administrator for Windows Server 2012 or 2016. In the Linux world, I would recommend going for any certification out there, proprietary or vendor neutral. Just something that certifies that you know your way around the command line, can start and stop services, use the vi editor, and so on, just as a baseline. If you want to go a little bit deeper on networking, I would recommend Cisco Certified Network Associate; it’s very widespread and well recognized. If the job involves VMware, then VCP certainly makes sense.
Q: How could I take the Mirantis exam? I’m not in the USA. Can I take it online?
Nick: Yes, we have proctored virtual exams available at either US- or EMEA-friendly times. Also, the Associate OCM50 written exam can be taken anytime, anywhere.
Q: Are there any Mirantis bootcamps in or around Reading or London in the UK?
Nick: The best thing to do is to check the schedule. We have classes in Europe, and we add new classes on a monthly basis. You can also send a request through http://training.mirantis.com to request a new location.
The post Are You Certifiable? Why Cloud Technology Certifications Matter appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

OpenShift Commons Briefing #80: Workspaces for Dev Teams with Eclipse Che

Eclipse Che is a next generation cloud IDE and developer workspace server that allows anyone to contribute to a project without having to install software. Che uses a server to start and snapshot containerized developer workspaces attached to a cloud IDE. In this briefing we’ll demonstrate how Che can be used by a development team with a multi-container application to speed project bootstrapping.
Source: OpenShift

What can NFV do for a business?

[NOTE:  This article is an excerpt from Understanding OPNFV, by Amar Kapadia. You can also download an ebook version of the book here.]
Telcos, multiple-system operators (MSOs, i.e., cable and satellite providers), and network providers are under pressure on several fronts, including:
OTT/Web 2.0: Over-the-top and Web services are exploding, requiring differentiated services and not just a ‘pipe’.
ARPU under pressure: Average revenue per user is under pressure due to rising acquisition costs, churn and competition.
Increased Agility: Pressure to evolve existing services and introduce new services faster is increasing.
Enterprises with extensive branch connectivity or IOT deployments also face similar challenges. If telecom operators or enterprises were to build their networks from scratch today, they would likely build them as software-defined resources, similar to Google or Facebook’s infrastructure. That is the premise of Network Functions Virtualization.
What is NFV?
In the beginning, there was proprietary hardware.
We’ve come a long way since the days of hundreds of wires connected to a single tower, but even when communications services were first computerized, it was usually with the help of purpose-built hardware such as switches, routers, firewalls, load balancers, mobile networking nodes and policy platforms. Advances in communications technology moved in tandem with hardware improvements, which was slow enough that there was time for new equipment to be developed and implemented, and for old equipment to be either removed or relegated to lesser roles. This situation applied to phone companies and internet service providers, of course, but it also applied to large enterprises that controlled their own IT infrastructure.
Today, due largely to the advent of mobile networking and cloud computing, heightened user demands in both consumer and enterprise networks have led to unpredictable (“anytime, anywhere”) traffic patterns and a need for new services such as voice and video over portable devices. What’s more, constant improvements in consumer devices and transmission technology continue to evolve these themes.
This need for agility led to the development of Software Defined Networking (SDN). SDN enables administrators to easily configure, provision, and control networks, subnets, and other networking architectures on demand and in a repeatable way over commodity hardware, rather than having to manually configure proprietary hardware. SDN also made it possible to provide “infrastructure as code,” where configuration information and DevOps scripts can be subject to the same oversight and version control as other applications.
Of course, there was still the matter of those proprietary hardware boxes.
Getting rid of them wasn’t as simple as deploying an SDN; they were there for a reason, and that reason usually had to do with performance or specialized functionality. But with advances in semiconductor performance and the ability of conventional compute hardware to perform sophisticated packet processing functions came the ability to virtualize and consolidate these specialized networking functions.
And so, Network Functions Virtualization (NFV) was born. NFV enables complex network functions to be performed on compute nodes in data centers. A network function performed on a compute node is called a Virtualized Network Function (VNF). So that VNFs can behave as a network, NFV also adds the mechanisms to determine how they can be chained together to provide control over traffic within a network.
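As a purely conceptual illustration of chaining, with no relation to any real VNF manager or data plane, the following Python sketch models a service chain as an ordered list of packet-processing functions applied in sequence; the packet fields and the toy firewall, NAT and load-balancer functions are invented for the example.

```python
# Toy model of a service function chain: each "VNF" here is a function that
# takes a packet (a dict) and returns a transformed packet, or None to drop it.
# Purely conceptual; real VNFs process network traffic, not Python dicts.
from typing import Callable, List, Optional

Packet = dict
VNF = Callable[[Packet], Optional[Packet]]

def firewall(packet: Packet) -> Optional[Packet]:
    """Drop traffic to a blocked port."""
    return None if packet.get("dst_port") == 23 else packet

def nat(packet: Packet) -> Optional[Packet]:
    """Rewrite the private source address to a public one."""
    return dict(packet, src_ip="203.0.113.10")

def load_balancer(packet: Packet) -> Optional[Packet]:
    """Steer the packet to one of two backends based on a simple hash."""
    backends = ["10.0.0.11", "10.0.0.12"]
    return dict(packet, dst_ip=backends[hash(packet["src_ip"]) % len(backends)])

# The service chain: traffic is passed through these functions in order.
chain: List[VNF] = [firewall, nat, load_balancer]

def process(packet: Packet) -> Optional[Packet]:
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:  # a VNF in the chain dropped the packet
            return None
    return packet

print(process({"src_ip": "192.168.1.5", "dst_ip": "198.51.100.7", "dst_port": 443}))
```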
Simplified Network Architecture Before NFV

Simplified Network Architecture After NFV

Although most people think of it in terms of telecommunications, NFV encompasses a broad set of use cases, from Role Based Access Control (RBAC) based on application or traffic type, to Content Delivery Networks (CDN) that manage content at the edges of the network (where it is often needed), to the more obvious telecom-related use cases such as Evolved Packet Core (EPC) and IP Multimedia System (IMS).
Benefits of NFV
NFV is based on the “Google infrastructure for everyone else” trend, in which large companies attempt to copy best practices from the web giants to increase revenue and customer satisfaction while also slashing operational and capital costs. This explains the strong interest in NFV from both telcos and enterprises, which see numerous benefits:
Increased Revenue
New services can be rolled out faster (since we are writing and trying out code versus designing ASICs or new hardware systems), and existing services can be provisioned faster (again, software deployment versus hardware purchases). For example, Telstra’s PEN product was able to reduce the provisioning time for WAN-on-demand from three weeks to seconds, eliminate purchase orders and man-hours of work, and reduce customer commitment times for the WAN link from one year to one hour.
Telstra’s PEN Offering

Improved Customer Satisfaction
With an agile infrastructure, no one service runs out of resources as each service is dynamically provisioned with the exact amount of infrastructure required based on the utilization at that specific point in time. (Of course, there’s still a limit on the aggregate amount of infrastructure.) For example, no longer will mobile end users experience reduced speed or service degradation. Customer satisfaction also improves due to rapid self-service deployment of services, a richer catalog of services and the ability, if offered by the operator, to try-before-you-buy.
Reduced Operational Expenditure (Opex)
NFV obviates numerous manual tasks. Provisioning of underlying infrastructure, network functions and services can all be automated, and even offered as self-service. This removes a whole range of truck rolls, program meetings, IT tickets, architecture discussions, and so on. At one non-telco user, cloud technologies have been able to reduce operations team sizes by up to 4x, freeing up individuals to focus on other higher-value tasks.
The standardization of hardware also slashes operational costs. Instead of managing thousands of unique inventory items, your team can now standardize on a few dozen. A bonus to reduced opex is reduced time-to-break-even. This occurs because, in addition to just virtualizing individual functions, NFV also allows complex services consisting of a collection of functions to be deployed rapidly, in an automated fashion. By instantly deploying services and shrinking the time and expense from customer request to revenue, operators can significantly reduce their time-to-break-even.
Reduced Capital Expenditure (Capex)
NFV dramatically improves hardware utilization. No longer do you waste unused cycles on proprietary fixed function boxes provisioned for peak load. Instead you can deploy services with the click of a button, and have them automatically scale-out or scale-in depending on utilization. In another non-telco industry example, a gaming IT company, G-Core, was able to double their hardware utilization by switching to a private cloud.
Using industry standard servers and open source software further reduces capex. Industry standard servers are manufactured in high volumes by multiple vendors, resulting in attractive pricing. Open source software is also typically available from multiple vendors, and the competition drives down pricing. This is a win-win where reduced or eliminated vendor lock-in comes with reduced pricing.
Additionally, operators can reduce capex by utilizing different procurement models. Before NFV, the traditional model was to issue an RFP to Network Equipment Manufacturers (NEMs) and purchase a complete solution from one of them. With NFV, operators can now pick and choose different best-in-class vendors for different components of the stack. In fact, in some areas an operator could also choose to skip vendors entirely via the use of 100% open source software. (These last two options are not for the faint of heart, and we will explore the pros and cons of different procurement models in the next chapter.)
TIA Network’s “The Virtualization Revolution: NFV Unleashed – Network of the Future Documentary, Part 6” states that the total opex plus capex benefit of an NFV-based architecture could be a cost reduction of up to 70%.
Freed up Resources for New Initiatives
If every operator resource is busy keeping current services up and running, there aren’t enough staff resources to work on new upcoming initiatives such as 5G and IoT. The side effect of reduced opex is that the organization will now have resources freed up to look at these important new initiatives, and so contribute to increased overall competitiveness. Put another way, unless you fully automate the lower layers, there won’t be enough time and focus on the OSS/BSS layer, which is the layer that improves competitiveness and generates revenue.
Example Total-cost-of-ownership (TCO) Analysis
 

Intel and the PA Consulting Group have created a comprehensive TCO analysis tool for the vCPE use case (see below). In one representative study conducted with British Telecom, the tool was populated with assumptions for an enterprise customer where physical network functions from the customer’s premises were moved to the operator’s cloud. In this study, the tool shows that the operator can reduce its total cost by 32% to 39%. This figure encompassed all costs, including hardware, software, data center, staff and communication costs. The TCO analysis was conducted over a five-year period and included a range of functions such as firewall, router, CGNAT, SBC, VPN and WAN optimization. These results are representative and will obviously change if another study uses different assumptions. Also, as mentioned earlier, cost is only one of the many benefits of NFV.

NFV Use Cases
Since the initial group of companies that popularized NFV was made up primarily of telecommunications carriers, it is perhaps no surprise that most of the original use cases are related to that field. As we’ve discussed, NFV use cases span a broader set of industries. Instead of covering all use cases comprehensively, we are going to touch upon the three most common:
vCPE (Virtual Customer Premise Equipment)
vCPE virtualizes the set of boxes (such as firewall, router, VPN, NAT, DHCP, IPS/IDS, PBX, transcoders, WAN optimization and so on) used to connect a business or consumer to the internet, or branch offices to the main office. By virtualizing these functions, operators and enterprises can deploy services rapidly to increase revenue and cut cost by eliminating truck rolls and lengthy manual processes. vCPE also provides an early glimpse into distributed computing where functionality in a centralized cloud can be supplemented with edge compute.
vEPC (Virtual Evolved Packet Core)
Both the sheer amount of traffic and the number of subscribers using data services have continued to grow as we have moved from 2G to 4G/LTE, with 5G around the corner. vEPC enables mobile virtual network operators (MVNOs) and enablers (MVNEs) to use a virtual infrastructure to host voice and data services rather than using an infrastructure built with physical functions. Providing multiple services simultaneously requires “network slicing,” or network multi-tenancy, a capability also enabled by vEPC. In summary, vEPC can cut opex and capex while speeding up delivery and enabling on-demand scalability.
vIMS (Virtual IP Multimedia System)
OTT competitors are driving traditional telco, cable and satellite providers towards offering voice, video, and messaging over IP as a response. A virtualized system can offer the agility and scalability required to make IMS an economically viable offering to effectively compete with startups.
This list is by no means comprehensive, even in the short term. Numerous other use cases exist today and new ones are likely to emerge. The most obvious one is 5G. With 50x higher speeds, 10x lower latencies, machine-to-machine communication, connected cars, smart cities, e-health, IOT and the emergence of mobile edge computing and network slicing, it is hard to imagine telecom providers or enterprises being successful with physical network functions.
[NOTE:  This article is an excerpt from Understanding OPNFV, by Amar Kapadia. You can also download an ebook version of the book here.]
The post What can NFV do for a business? appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Fox Sports ensures uninterrupted content delivery with Aspera

When people down under want to watch AFL, rugby, cricket, rugby league, car racing or even darts, they turn to Fox Sports Australia.
The network provides round-the-clock coverage of worldwide sports, including exclusive coverage of everything from UFC to MotoGP and F1 racing. Fans can find scores, commentary and videos on the Fox website, as well as view highly sought-after insights in and around their sports from former and current players. For many, Fox Sports is a permanent fixture in their daily television watching.
Fox Sports Australia Pty Limited is Australia’s leading producer of sports television coverage and is home to Australia’s favorite subscription television sports channels, as well as Australia’s number one general sports website.
How does Fox Sports keep fans engrossed? It provides a constant stream of interesting and relevant content with more than 13,000 hours of live sports programming every year across the network’s seven channels, coupled with quality programs sourced from all over the world.
This is no easy feat: as with most broadcasters, the shift from physical to file-based delivery of content is still not complete. Content arrives in multiple formats (including tapes and hard drives), and people are tasked with assimilating it in a haphazard way, holding their collective breath that everything goes off without a hitch. Fox Sports, being in the unique geographical location of Australia, is further impacted by the tyranny of distance, so logistics and data transfer are even more of a challenge.
As the network grows, more providers are being added to the mix, and the process of sourcing programming becomes increasingly convoluted and time-consuming while ensuring viewers aren’t looking at stale content. In the world of premium sports, speed to the customer is paramount.
Fox Sports selected high-speed file transfer solutions from Aspera to streamline the ingestion of content from global providers. Aspera, an IBM company, enables suppliers to upload programs quickly and reliably, saving time and ensuring content is received, processed and ready to go in time for broadcast.
Fox Sports uses Aspera Shares and Aspera Point-to-Point to simplify and accelerate the process. Providers initiate a secure, high-speed transfer via Aspera Point-to-Point. Administrators at Fox are immediately notified when the transfer completes and the content is ready. The files are then passed through a series of steps, including transcoding and quality control (QC) checks via Telestream Vantage. Once editing and processing is completed, the finished content is uploaded to Aspera Shares for distribution, where it can be browsed and downloaded at high speed by users with appropriate access rights.
The network chose Aspera because it has established itself as the international market leader for efficient, file-based transfer. Fox Sports wants granular control over all the content coming in and out of its building from third-party vendors, internal vendors and syndicated vendors. Additionally, Aspera’s strong local presence in Australia was appealing.
The platform also gives Fox Sports the capability to conduct ad hoc business. For instance, if the network hears of an upcoming sporting event it wants to rapidly deliver and that a niche sport content aggregator has, the network can quickly arrange to feature it without changing infrastructure or negatively impacting workflows. With Australia on the other side of the world, and its low bandwidth and poor infrastructure, Aspera’s ability to move data at maximum speed regardless of file size, transfer distance or network conditions was and continues to be an essential feature.
Sports is an enormous commodity in Australia, and fans want the best-quality premium content on their screens as quickly as possible. Whether they’ve invited all their friends over to watch the big game or it’s the kids who want to watch soccer, when people click their remotes, they expect to be entertained, as do their family and friends. There can be no hiccups in the broadcast, or providers run the risk of rowdy sports fans tossing beer cans at the TV set or switching channels.
Learn more about Aspera.
The post Fox Sports ensures uninterrupted content delivery with Aspera appeared first on Cloud computing news.
Source: Thoughts on Cloud

Improving the RDO Trunk infrastructure, take 2

One year ago, we discussed the improvements made to the RDO Trunk infrastructure in this post. As expected, our needs have changed over the past year, and our infrastructure has had to change with them. So here we are, ready to describe what’s new in RDO Trunk.

New needs

We have some new needs to cover:

A new DLRN API has been introduced, meant to be used by our CI jobs. The main goal behind this API is to break up the current long, hardcoded Jenkins pipelines we use to promote repositories, and have individual jobs “vote” on each repository instead, with some additional logic to decide which repository needs to be promoted. The API is a simple REST one, defined here; a rough sketch of how a job could report its result appears at the end of this section.

This new API needs to be accessible for jobs running inside and outside the ci.centos.org infrastructure, which means we can no longer use a local SQLite3 database for each builder.

We now have an RDO Cloud available to use, so we can consolidate our systems there.

Additionally, hosting our CI-passed repositories in the CentOS CDN was not working as we expected, because we needed some additional flexibility that was just not possible there. For example, we could not remove a repository if it had been promoted by mistake.
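To make the voting idea concrete, here is a rough Python sketch of how a CI job could report its result through such an API. The base URL, endpoint path, payload fields and credentials are placeholders invented for illustration; the actual contract is the API definition linked above.

```python
# Hypothetical sketch of a CI job "voting" on a repository through the DLRN API.
# The base URL, endpoint, field names and credentials are placeholders; the real
# contract is the one in the published API definition.
import requests

DLRN_API = "https://trunk.rdoproject.org/api-centos-master-uc"  # assumed base URL
AUTH = ("ci-user", "secret")                                     # placeholder

def report_result(commit_hash, distro_hash, job_id, success, notes=""):
    """Tell the API whether this CI job succeeded against a given repository."""
    payload = {
        "commit_hash": commit_hash,
        "distro_hash": distro_hash,
        "job_id": job_id,
        "success": success,
        "notes": notes,
    }
    resp = requests.post(DLRN_API + "/api/report_result", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()

# Example (placeholder hashes): a tempest job votes on the repository it tested.
# report_result("abc123", "def456", "periodic-tempest-job", success=True)
```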

Our new setup

This is the current design for the RDO Trunk infrastructure:

We still have the build server inside the ci.centos.org infrastructure, and it is not available from the outside. This has proven to be a good solution, since we are separating content generation from content delivery.

https://trunk.rdoproject.org is now the URL to be used for all RDO Trunk users. It has worked very well so far, providing enough bandwidth for our needs.

The database has been moved out to an external MariaDB server, running on the RDO Cloud (dlrn-db.rdoproject.org). This database is set up as master-slave, with the slave running on an offsite cloud instance that also serves as a backup machine for other services. This required a patch to DLRN to add MariaDB support.

Future steps

Experience tells us that this setup will not stay like this forever, so we already have some plans for future improvements:

The build server will migrate to the RDO Cloud soon. Since we are no longer mirroring our CI-passed repositories on the CentOS CDN, it makes more sense to manage it inside the RDO infrastructure.

Our next step will be to make RDO Trunk scale horizontally, as described here. We want to use our nodepool VMs in review.rdoproject.org to build packages after each upstream commit is merged, then use the builder instance as an aggregator. That way, the hardware needs for this instance become much lower, since it just has to fetch the generated RPMs and create new repositories. Support for this feature is already in DLRN, so we just need to figure out how to do the rest.

Source: RDO

Recent blog posts, July 3

Here’s what the community is blogging about lately.

OVS-DPDK Parameters: Dealing with multi-NUMA by Kevin Traynor

In Network Function Virtualization, there is a need to scale functions (VNFs) and infrastructure (NFVi) across multiple NUMA nodes in order to maximize resource usage.

Read more at https://developers.redhat.com/blog/2017/06/28/ovs-dpdk-parameters-dealing-with-multi-numa/

OpenStack Down Under – OpenStack Days Australia 2017 by August Simonelli, Technical Marketing Manager, Cloud

As OpenStack continues to grow and thrive around the world the OpenStack Foundation continues to bring OpenStack events to all corners of the globe. From community run meetups to more high-profile events like the larger Summits there is probably an OpenStack event going on somewhere near you.

Read more at http://redhatstackblog.redhat.com/2017/06/26/openstack-down-under-openstack-days-australia-2017/

OpenStack versions – Upstream/Downstream by Carlos Camacho

I’m adding this note as I’m prone to forget how upstream and downstream versions are matching.

Read more at http://anstack.github.io/blog/2017/06/27/openstack-versions-upstream-downstream.html

Tom Barron – OpenStack Manila – OpenStack PTG by Rich Bowen

Tom Barron talks about the work on Manila in the Ocata release, at the OpenStack PTG in Atlanta.

Read more at http://rdoproject.org/blog/2017/07/tom-barron-openstack-manila-openstack-ptg/

Victoria Martinez de la Cruz: OpenStack Manila by Rich Bowen

Victoria Martinez de la Cruz talks Manila, Outreachy, at the OpenStack PTG in Atlanta

Read more at http://rdoproject.org/blog/2017/06/victoria-martinez-de-la-cruz-openstack-manila/

Ihar Hrachyshka – What’s new in OpenStack Neutron for Ocata by Rich Bowen

Ihar Hrachyshka talks about his work on Neutron in Ocata, and what’s coming in Pike.

Read more at http://rdoproject.org/blog/2017/06/ihar-hrachyshka-whats-new-in-openstack-neutron-for-ocata/

Introducing Software Factory – part 1 by Software Factory Team

Software Factory is an open source software development forge with an emphasis on collaboration and ensuring code quality through Continuous Integration (CI). It is inspired by OpenStack’s development workflow that has proven to be reliable for fast-changing, interdependent projects driven by large communities.

Read more at http://rdoproject.org/blog/2017/06/introducing-Software-Factory-part-1/

Back to Boston! A recap of the 2017 OpenStack Summit by August Simonelli, Technical Marketing Manager, Cloud

This year the OpenStack® Summit returned to Boston, Massachusetts. The Summit was held the week after the annual Red Hat® Summit, which was also held in Boston. The combination of the two events, back to back, made for an intense, exciting and extremely busy few weeks.

Read more at http://redhatstackblog.redhat.com/2017/06/19/back-to-boston-a-recap-of-the-2017-openstack-summit/
Source: RDO