CKS Exam Series #5 ImagePolicyWebhook

itnext.io – The idea is to create an ImagePolicyWebhook admission controller plugin that prevents all Pod creation, because the external service that should allow or deny requests is not reachable. This will be enough to…
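For context, the exercise hinges on the admission configuration's `defaultAllow` flag: with `defaultAllow: false`, an unreachable webhook backend means every Pod is denied. A hedged sketch of such an AdmissionConfiguration (the kubeconfig path and TTL values are illustrative, not from the article):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      # Kubeconfig pointing at the external allow/deny service (placeholder path)
      kubeConfigFile: /etc/kubernetes/policywebhook/kubeconf
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      # false => deny all Pods when the webhook backend is unreachable
      defaultAllow: false
```

The plugin is then activated on the kube-apiserver via `--enable-admission-plugins=...,ImagePolicyWebhook` and `--admission-control-config-file` pointing at this file.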
Source: news.kubernauts.io

Using Nginx-Ingress as a Static Cache

medium.com – Kubernetes clusters often use the NGINX Ingress Controller. This provides a nice solution when hosting many websites. Yet all content is retrieved from the back-end pods on every request. Would it…
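The caching idea the article explores can be sketched with ingress-nginx's real extension points (a hedged sketch; the cache zone name, sizes, TTLs, and host are illustrative, not from the article): a `proxy_cache_path` is declared controller-wide via the ConfigMap's `http-snippet`, and an individual Ingress opts in through the `configuration-snippet` annotation.

```yaml
# Controller ConfigMap: declare a cache zone (names and sizes are placeholders).
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  http-snippet: |
    proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=static-cache:10m
                     max_size=1g inactive=60m;
---
# Per-site Ingress: serve cached 200 responses for 10 minutes.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-site
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 200 10m;
spec:
  ingressClassName: nginx
  rules:
  - host: example.com        # placeholder
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-site    # placeholder back-end Service
            port:
              number: 80
```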
Source: news.kubernauts.io

Introducing Ruby on Google Cloud Functions

Cloud Functions, Google Cloud’s Function as a Service (FaaS) offering, is a lightweight compute platform for creating single-purpose, stand-alone functions that respond to events, without having to manage a server or runtime environment. Cloud Functions are a great fit for serverless application, mobile, or IoT backends; real-time data processing systems; video, image, and sentiment analysis; and even things like chatbots or virtual assistants.

Today we’re bringing support for Ruby, a popular, general-purpose programming language, to Cloud Functions. With the Functions Framework for Ruby, you can write idiomatic Ruby functions to build business-critical applications and integration layers. And with Cloud Functions for Ruby, now in Preview, you can deploy functions in a fully managed Ruby 2.6 or Ruby 2.7 environment, complete with access to resources in a private VPC network. Ruby functions scale automatically based on your load. You can write HTTP functions to respond to HTTP events, and CloudEvent functions to process events sourced from various cloud and Google Cloud services, including Pub/Sub, Cloud Storage, and Firestore.

You can develop functions using the Functions Framework for Ruby, an open source functions-as-a-service framework for writing portable Ruby functions. With the Functions Framework you develop, test, and run your functions locally, then deploy them to Cloud Functions, or to another Ruby environment.

Writing Ruby functions

The Functions Framework for Ruby supports HTTP functions and CloudEvent functions. An HTTP cloud function is very easy to write in idiomatic Ruby. Below, you’ll find a simple HTTP function for webhook/HTTP use cases. CloudEvent functions on the Ruby runtime can also respond to industry-standard CNCF CloudEvents.
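The simple HTTP function mentioned above was stripped from this capture. The following is a hedged sketch based on the documented `functions_framework` gem API; the function name `hello_http`, the helper `greeting_for`, and the greeting logic are illustrative, not from the original post. The response logic is kept in a plain method so it can be exercised without a web server:

```ruby
begin
  require "functions_framework"
  HAVE_FUNCTIONS_FRAMEWORK = true
rescue LoadError
  # Lets this sketch run stand-alone where the gem is not installed.
  HAVE_FUNCTIONS_FRAMEWORK = false
end

# Plain helper holding the response logic, testable in isolation.
def greeting_for(name)
  "Hello, #{name || "World"}!"
end

if HAVE_FUNCTIONS_FRAMEWORK
  # Registers an HTTP function; `request` is a Rack request object.
  FunctionsFramework.http "hello_http" do |request|
    greeting_for(request.params["name"])
  end
end
```

Deployed to Cloud Functions, a request to the function's URL with `?name=Ruby` would then return the corresponding greeting.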
These events can come from various Google Cloud services, such as Pub/Sub, Cloud Storage, and Firestore. Here is a simple CloudEvent function working with Pub/Sub.

The Ruby Functions Framework fits comfortably with popular Ruby development processes and tools. In addition to writing functions, you can test functions in isolation using Ruby test frameworks such as Minitest and RSpec, without needing to spin up or mock a web server.

Try Cloud Functions for Ruby today

Cloud Functions for Ruby is ready for you to try today. Read the Quickstart guide, learn how to write your first functions, and try it out with a Google Cloud free trial. If you want to dive a little deeper into the technical aspects, you can also read our Ruby Functions Framework documentation. If you’re interested in the open-source Functions Framework for Ruby, please don’t hesitate to have a look at the project and potentially even contribute. We’re looking forward to seeing all the Ruby functions you write!
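The Pub/Sub CloudEvent sample referenced above was also stripped from this capture. Here is a hedged sketch assuming the documented `functions_framework` API; the function name and the helper `decode_pubsub_message` are ours, not from the original post. Keeping the payload handling in a plain helper is what makes the "test in isolation, no web server" workflow possible:

```ruby
require "base64"

begin
  require "functions_framework"
  HAVE_FUNCTIONS_FRAMEWORK = true
rescue LoadError
  # Lets this sketch run stand-alone where the gem is not installed.
  HAVE_FUNCTIONS_FRAMEWORK = false
end

# Pub/Sub delivers the message payload base64-encoded under
# event.data["message"]["data"]; keep the decoding in a plain helper.
def decode_pubsub_message(event_data)
  Base64.decode64(event_data["message"]["data"])
end

if HAVE_FUNCTIONS_FRAMEWORK
  FunctionsFramework.cloud_event "hello_pubsub" do |event|
    name = decode_pubsub_message(event.data)
    FunctionsFramework.logger.info "Hello, #{name}!"
  end
end
```

A Minitest or RSpec example, as the post suggests, would simply feed a hand-built event payload to this helper and assert on the decoded result.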
Source: Google Cloud Platform

Compute Engine explained: Scheduling the OS patch management service

Last year, we introduced the OS patch management service to protect your running Compute Engine VMs against defects and vulnerabilities. The service makes patching Linux and Windows VMs with the latest OS upgrades simple, scalable, and effective. In this blog, we share a step-by-step guide on how to set up a project with a schedule to automatically patch filtered VM instances, resolve issues if an agent is not detected, and view an overview of patch compliance across your VM fleet.

Getting started

Imagine an example project with several VM instances hosting a mythical web service. You want to automatically keep the instances updated with the latest critical fixes and security updates against malicious software. You have a production fleet and a development fleet of machines for which you want to apply updates using different schedules.

First, enable the service by navigating to GCE > OS Patch Management in the Google Cloud Console. Alternatively, you can enable the Cloud OS Config API and Container Analysis API through the Google Cloud Marketplace, or via gcloud. Note that the OS Config agent is most likely already installed on the VM instances and just needs to be enabled via project metadata keys.

After the agent collects data across the VM fleet, this data is displayed on the patch compliance dashboard, which shows the state across all your VMs and operating systems and gives a bird’s-eye view of your patch compliance. You can now see some VM instances that you might like to patch more frequently, for example the CentOS and Red Hat Enterprise Linux (RHEL) fleet.

Creating a patch deployment

You can then click New Patch Deployment at the top of the screen and walk through the steps to create a patch deployment for the target VMs, each with specific patch configurations and scheduling options. In the Target VMs section, you can use VM instance name prefixes and labels to target only the VM instances whose names start with a certain prefix.
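The gcloud commands referenced above were stripped from this capture. As a hedged sketch (the project ID is a placeholder, and the `command -v` guard simply lets the script no-op where the gcloud CLI is absent), enabling the two APIs and the OS Config agent metadata keys might look like:

```shell
# Placeholder project ID; metadata keys per the OS patch management docs.
PROJECT_ID="my-project-id"
OSCONFIG_METADATA="enable-osconfig=TRUE,enable-guest-attributes=TRUE"

if command -v gcloud >/dev/null 2>&1; then
  # Enable the APIs backing the OS patch management service.
  gcloud services enable osconfig.googleapis.com containeranalysis.googleapis.com \
      --project "$PROJECT_ID"
  # Turn on the (usually pre-installed) OS Config agent via project metadata.
  gcloud compute project-info add-metadata \
      --project "$PROJECT_ID" \
      --metadata="$OSCONFIG_METADATA"
fi
```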
More instance filtering options are available, including zonal filters and combinations of label groups.

In the Patch config section, you can choose to patch RHEL, CentOS, and Windows with critical and security patches, or specify exact Microsoft Knowledge Base (KB) numbers and packages to install. You can also exclude specific packages from being installed in the ‘Exclude’ fields.

Finally, you can schedule the patch job. For example, you can run the job every second Tuesday of the month within a three-hour maintenance window, from 11 AM to 2 PM. After the patch job runs, you can see the result of the installed patches. This information is reported on the compliance dashboard and the VM instances tab.

Patch your Compute Engine VMs today

To learn more about the OS patch management service on Compute Engine, including automating patch deployment, visit our documentation page.
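The "second Tuesday, 11 AM, three-hour window" schedule above can also be expressed as a patch deployment resource and supplied to `gcloud compute os-config patch-deployments create NAME --file=FILE`. The following is a hedged sketch only; field names follow the public PatchDeployment API resource as we understand it, and the instance filter, reboot setting, and time zone are placeholders:

```yaml
# patch-deployment.yaml (hedged sketch of a PatchDeployment resource)
instanceFilter:
  all: true                    # placeholder: could filter by prefixes/labels/zones
patchConfig:
  rebootConfig: DEFAULT
duration: 10800s               # three-hour maintenance window
recurringSchedule:
  timeZone:
    id: America/Los_Angeles    # placeholder time zone
  timeOfDay:
    hours: 11                  # start at 11 AM
  frequency: MONTHLY
  monthly:
    weekDayOfMonth:
      weekOrdinal: 2           # second...
      dayOfWeek: TUESDAY       # ...Tuesday of the month
```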
Source: Google Cloud Platform

Migrating data, technology and people to Google Cloud

Editor’s note: Bukalapak, an ecommerce company based in Jakarta, is one of Indonesia’s largest businesses. As their platform grew to serve over 100 million customers and 12 million merchants, they needed a solution that would reliably and securely scale to handle millions of transactions a day. Here, they discuss their migration to Google Cloud and the value added by its managed services.

Similar to many other enterprises, Bukalapak’s ecommerce platform did not originate in the cloud. It was initially built on on-premises technologies that worked quite well at the beginning. However, as our business grew, processing over 2 million transactions per day and supporting 100 million customers, it became challenging to keep up with the necessary scale and availability needs. It wasn’t uncommon to see traffic spikes following promotional events, which were frequent. Our infrastructure and overall architecture, however, just weren’t designed to handle this scale of demand. It was clear we needed a new way to support the success of the business, one that would allow us to scale to meet fast-growing demand while providing the best experience to our customers, all without overburdening our team. This led us to implement significant architectural changes and consider a migration to the cloud.

Choosing Google Cloud

Given that this migration would be a large and complex endeavor, we wanted a partner in this journey, not just a vendor. We started by evaluating the product and services portfolio of potential providers, along with their ability to innovate and solve cutting-edge problems. With our very limited experience in the cloud, it was critical to have an experienced professional services team that could effectively guide and support us throughout the migration journey. We also evaluated the overall cost and the availability of data centers in Indonesia that would allow us to comply with government requirements for financial products.
Finally, we needed to plan for how we would attract and retain talent, so we looked at the degree of adoption across the providers in Southeast Asia, and specifically Indonesia. After careful consideration across these areas, Google Cloud was the right choice for us.

Embarking on the cloud migration

Our on-premises deployment included over 160 relational and NoSQL databases. We also maintained a Kubernetes cluster of over 1,000 nodes and over 30,000 cores, running 550 production microservices and one large monolith application. To address the large amount of technical debt our platform had accumulated, we decided against a lift-and-shift approach. Instead, we spent a good deal of time refactoring our services, particularly our monolith application (a.k.a. the mothership), and partitioning our databases. Enhancing our monitoring and alerting, deployment tooling, and testing frameworks was critical to improving the quality of our software, our development and release processes, and our performance and incident management. We also invested heavily in automation, moving away from manual testing to integration testing, API testing, and front-end testing. Adopting the tooling and best practices of DevOps, MLOps, and ChatOps increased our engineering velocity and improved the quality of our products and services.

For a team that had very limited cloud experience, it was clear early on that this was not just a technology migration. It involved a cultural migration as well, and we wanted to ensure our team could perform the migration while gaining the skill set and experience needed to maintain and develop cloud-based applications. We started by training a smaller team, which took on the task of migrating our first services. Incrementally, we expanded the training, looping in more and more engineers in the migration effort. As more engineering teams got involved, we paired them with one of the engineers who had joined the migration early on and acted as a coach.
This approach allowed us to transfer knowledge and roll out best practices, incrementally but surely, across the entire organization.

We took a multi-step approach to the migration. We started by focusing on the cloud foundation work, introducing automation and new technologies like Ansible and Terraform. We also invested heavily in establishing a strong security foundation, onboarding WAF and anti-DDoS, domain threat detection, network scanning, and image hardening tools, to name a few. From there, we started to migrate the smaller, simpler services and worked our way up to the more complex ones. That helped the team gain experience over time while managing risk appropriately. In the end, we successfully completed the migration in just 18 months, with very minimal downtime.

Managed services for greater peace of mind

Our team selected Cloud SQL early on as the fully managed service for most of our MySQL and PostgreSQL databases. We appreciated how easy Cloud SQL made it to manage and maintain our databases. With just a few simple API calls, we could quickly set up a new instance or read replica. Auto-failover and auto-increasing disk size ensured we could run reliably without a heavy operational burden. In addition to Cloud SQL, we’ve now been able to integrate with other Google Cloud data services, including BigQuery, Data Studio, Pub/Sub, and Dataflow. These services have been instrumental in helping us process, store, and gain insights from a massive amount of data. That in turn allowed us to better understand our customers and consistently find new opportunities to make improvements on their behalf.

Google Cloud’s managed services give us greater peace of mind. Our team spends less time on maintenance and operations. Instead, we have more time and resources to focus on building products and solving problems related to our core business.
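The "few simple API calls" for a new Cloud SQL instance and read replica can be illustrated with gcloud. This is a hedged sketch: the instance names, tier, and region are placeholders (asia-southeast2 is the Jakarta region), and the `command -v` guard just lets the script no-op where the gcloud CLI is absent:

```shell
INSTANCE="orders-db"              # placeholder instance names
REPLICA="orders-db-replica"
REGION="asia-southeast2"          # Jakarta region

if command -v gcloud >/dev/null 2>&1; then
  # Create a managed MySQL instance...
  gcloud sql instances create "$INSTANCE" \
      --database-version=MYSQL_8_0 \
      --tier=db-n1-standard-2 \
      --region="$REGION"
  # ...and a read replica that follows it.
  gcloud sql instances create "$REPLICA" \
      --master-instance-name="$INSTANCE"
fi
```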
Our engineering velocity has increased, and our team has access to Google’s cutting-edge technology, enabling us to solve problems more efficiently. In addition, our platform now has higher uptime and can scale with ease to keep up with unpredictable and growing demand. We were also able to improve the overall security of our platform, and we now have a standardized security model that can easily be implemented for new applications. The larger impact has been on what our lean infrastructure team is now able to accomplish. Migrating to Google Cloud gave us the strategic and competitive advantages we were looking for.

Both throughout the migration and now that we’re running in production, Google Cloud has been a great partner to us. The Google Cloud team put a lot of effort into understanding what we needed to be successful and advocating for our needs, often connecting us to product teams or experts from elsewhere in the organization. Their desire to go the extra mile on behalf of their customers made our experience positive and ultimately made our cloud migration successful.

Learn more about Bukalapak and how you can migrate to Google Cloud managed databases.
Source: Google Cloud Platform