OpenStack Case Study: CloudVPS

Mirantis customer CloudVPS is using OpenStack to deliver a public cloud in the Netherlands. Cross-posted from Superuser.
Guest post by Sunny Cai
CloudVPS is one of the largest independent Dutch OpenStack providers, delivering advanced cloud solutions. With a team of 15 people, CloudVPS was one of the first companies in Europe to get started with OpenStack, and it is a leader in the development of the scalable open-source platform.
At the Open Infrastructure Summit Shanghai in November 2019, Superuser got a chance to talk with OpenStack engineers from CloudVPS about why they chose OpenStack for their organization and how they use it.
What are some of the open source projects you are using?
Currently, we are using OpenStack, Oxwall, Salt, Tungsten Fabric, GitLab and a few more. We have not yet started to use the other open source projects hosted by the OpenStack Foundation, but we are planning to.
Why do you choose to use OpenStack?
We have used OpenStack for a long time. At the very beginning, we added Hyper-V hypervisors for Windows VMs before we built our own orchestration layer. About three to four years later, when OpenStack came out, we launched our first OpenStack platform to run a public cloud. The main reason we started using OpenStack is the high growth potential we saw in it; OpenStack's features and the size of its community were a big part of the reason as well. In addition, OpenStack's stability and maturity are particularly important to us right now. Upgradability is also a key factor for our team, and in terms of our partnership with Mirantis, upgradability is the biggest reason we chose to partner with them instead of doing it ourselves.
What workloads are you running on OpenStack?
We don't know the exact workloads, but basically all kinds run on it. What we do know is that we see web services on there, as well as platforms for large newspapers in the Netherlands, Belgium, Germany, and other countries around the world. It really varies. For the newspapers, we have image-conversion workloads. We also host office automation environments, such as Windows machines, and some customers run containers on top of it. Overall, there are definitely more workloads, but we don't know all of them.
How large is your OpenStack deployment?
We have two deployments. In total, we have over 10,000 instances running on 400-500 nodes.
Source: Mirantis

New Application Manager brings GitOps to Google Kubernetes Engine

Kubernetes is the de facto standard for managing containerized applications, but developers and app operators often struggle with end-to-end Kubernetes lifecycle management: things like authoring, releasing and managing Kubernetes applications. To simplify the management of application lifecycle and configuration, today we are launching Application Manager, an application delivery solution delivered as an add-on to Google Kubernetes Engine (GKE). Now available in beta, Application Manager allows developers to easily create a dev-to-production application delivery flow, while incorporating Google's best practices for managing release configurations. Application Manager lets you get your applications running in GKE efficiently, securely and in line with company policy, so you can succeed with your application modernization goals.

Addressing the Kubernetes application lifecycle

The Kubernetes application lifecycle consists of three main stages: authoring, releasing and managing. Authoring includes writing the application source code and app-specific Kubernetes configuration. Releasing includes making changes to code and/or config, then safely deploying those changes to different release environments. The managing phase includes operationalizing applications at scale and in production. Currently, there are no well-defined standards for these stages, and users often ask us for best practices and recommendations to help them get started.

[Figure: lifecycle of a Kubernetes application]

In addition, Kubernetes application configurations can be too long and complex to manage at scale. In particular, an application that is deployed across test, staging and production release environments might have duplicate configurations stored in multiple Git repositories. Any change to one config needs to be replicated to the others, creating the potential for human error. Application Manager embraces GitOps principles, leveraging Git repositories to enable declarative configuration management. It allows you to audit and review changes before they are deployed to environments. It also automatically scaffolds and enforces recommended Git repository structures, and allows you to perform template-free customization of configurations with Kustomize, a Kubernetes-native configuration management tool.

Application Manager runs inside your GKE cluster as a cluster add-on and performs the following tasks: it pulls Kubernetes manifests from a Git repository (from a Git branch, tag or commit) and deploys the manifests as an application in the cluster, and it reports metadata about deployed applications (e.g., version, revision history and health) and visualizes the applications in the Google Cloud Console.

Releasing an application with Application Manager

Now, let's dive into more detail on how to use Application Manager to release or deploy an application, from scaffolding Git repositories and defining application release environments to deploying it in clusters. You can do all of these tasks by executing simple commands in appctl, Application Manager's command line interface. Here's an example workflow for releasing a "bookstore" app to both staging and production environments. First, initialize it by running appctl init bookstore --app-config-repo=github.com/$USER_OR_ORG/bookstore.
This creates two remote Git repositories: 1) an application repository, for storing application configuration files in kustomize format (for easier configuration management), and 2) a deployment repository, for storing auto-generated, fully rendered configuration files as the source of truth for what's deployed in the cluster.

After the Git repositories are initialized, you can add a staging environment to the bookstore app by running appctl env add staging --cluster=$MY_STAGING_CLUSTER, and do the same for the prod environment. At this point, the application repository looks like this:

[Figure: application repository structure]

Here, we are using kustomize to manage environment-specific differences in the configuration. With kustomize, you can declaratively manage distinctly customized Kubernetes configurations for different environments using only Kubernetes API resource files, by patching overlays on top of the base configuration.

When you're ready to release the application to the staging environment, simply create an application version with git tag in the application repository, and then run appctl prepare staging. This automatically generates hydrated configurations from the tagged version in the application repository and pushes them to the staging branch of the deployment repository for an administrator to review. With this Google-recommended repository structure, Application Manager provides a clean separation between the easy-to-maintain kustomize configurations in the application repository and the auto-generated deployment repository, an easy-to-review single source of truth; it also prevents these two repositories from diverging.

Once the commits to the hydrated configurations are reviewed and merged into the deployment repository, run appctl apply staging to deploy the application to the staging cluster. Promotion from staging to prod is as easy as appctl apply prod --from-env staging. To roll back in case of failure, simply run appctl apply staging --from-tag=OLD_VERSION_TAG. What's more, this appctl workflow can be automated and streamlined by executing it in scripts or pipelines; a consolidated sketch appears at the end of this post.

Application Manager for all your Kubernetes apps

Now, with Application Manager, it's easy to create a dev-to-production application delivery flow with a simple and declarative approach that's recommended by Google. We are also working with our partners on the Google Cloud Marketplace to enable seamless updates of the Kubernetes applications you procure there, so you get automated updates and rollbacks of your partner applications. You can find more information here. For a detailed overview of Application Manager, please see this demo video. When you're ready to get started, follow the steps in this tutorial.
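To tie the walkthrough together, here is a minimal sketch of the end-to-end appctl flow described above. The commands come straight from this post; the release tags, the prod cluster variable and the git push step are illustrative assumptions:

  # Scaffold the application and deployment repositories
  appctl init bookstore --app-config-repo=github.com/$USER_OR_ORG/bookstore

  # Register one release environment per target GKE cluster
  appctl env add staging --cluster=$MY_STAGING_CLUSTER
  appctl env add prod --cluster=$MY_PROD_CLUSTER

  # Cut a release in the application repository (tag name is illustrative),
  # then generate hydrated manifests on the deployment repo's staging branch
  git tag v1.0.0 && git push origin v1.0.0
  appctl prepare staging

  # Once the hydrated configs are reviewed and merged, deploy to staging
  appctl apply staging

  # Promote the reviewed staging configuration to production
  appctl apply prod --from-env staging

  # Roll back staging to a previously released tag if something breaks
  appctl apply staging --from-tag=v0.9.0

Because each step is a plain command, the same sequence can be dropped into a CI/CD pipeline script to automate the whole flow.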
Source: Google Cloud Platform

Womply: Helping small businesses compete through API management

Editor's note: Today we hear from Brad Plothow and Mihir Sambhus from Womply, a software-as-a-service company that makes CRM, email marketing, and reputation management software for small businesses. The company recently developed APIs to help small businesses use data to gain a clearer picture of their markets, and of how to compete in them.

Small businesses today have a lot of opportunities to expand services and improve everyday operations. With access to the right data and resources, they can take advantage of digital advertising, reviews sites, and customer insights. These can help them attract customers and foster long-term customer relationships, so they can better compete with large corporations and native e-commerce businesses.

Our mission at Womply is to help small businesses thrive in a digital world. Since 2011, we've offered software-as-a-service solutions for small businesses, serving more than 450,000 businesses every day with our software. This system includes a treasure trove of data about the relationship between local businesses and their customers. We started wondering if there was a way that we could open up our data platform to help small businesses gain new insights through their other applications.

To do that, we decided to create APIs, which would give developers and businesses a secure and controlled way to access our data platform. While we have a microservice architecture that we use for internal systems, we had never before created a scalable API program. When we looked into API solutions, we found that the Apigee API Management Platform had everything that we needed to bring security, speed, and self-service to our new API program.

Cultivating relationships with developers

If we want our data platform to improve local commerce for businesses and consumers, the first step is to win over developers. They're the ones who will be building the apps, services, and integrations with our APIs, after all. It was very important to us that we create a developer portal that was scalable and extremely easy to use.

The Apigee developer portal provides simple and secure access to all of the information that developers need, from signing up for the portal, to reading documentation about how to use the APIs, to a sandbox environment for experimentation. The portal encourages self-service, so we don't need a support team to walk developers through the development process.

We also wanted to boost the profile of our API program by announcing our developer portal at Money20/20 USA, an annual conference for fintech and finance companies. It would have taken us at least three months to build a developer portal on our own, and we would have missed this important deadline. Taking advantage of the built-in Apigee developer portal, we were ready to go live during the conference. In addition, since our team didn't need to worry about building the portal, we could spend more time creating a bigger library of APIs and proxies for developers.

Fast time-to-market with Apigee

Apigee makes it very easy for our team to develop and release APIs. It only takes one person to launch each new API, which means that we can release more features quickly. We've released four APIs so far, and plan to make quite a few more available through our developer portal this year.

One of the first APIs we released is a benchmarking API, which lets businesses compare their own financial performance to the average performance of comparable businesses in their geography.
For example, someone running a restaurant in downtown San Francisco could compare their nightly revenue with similar restaurants in the same area. Are they struggling compared to their competition, or are they leading the pack? Are they stagnating, or improving relative to the market? Using this API, any developer can easily add benchmarking capabilities to their apps or services (a hypothetical request sketch appears at the end of this post).

Going forward, developers could also use APIs to connect our sales transaction data to online marketing data. This would enable small businesses to attribute offline sales to online marketing and determine whether their Google ads or Facebook banners are paying off. Offline attribution is challenging, but APIs make it possible.

While we have planned a roadmap for APIs and proxies, the reality is that developers will probably surprise us with use cases that no one has even thought of yet. We have a waitlist of developers eager to sign up for our developer portal, but we're onboarding developers slowly so that we can make sure that we're releasing the right APIs for people's needs.

Ready for any challenges

Monetization is another important part of our API roadmap, to increase our revenue streams. We are considering both an à la carte model, where developers can subscribe to one or two APIs, and a tiered package model, where developers get access to multiple APIs but pay different amounts depending on the number of API calls made. Because Apigee offers flexible monetization, we can implement any type of monetization system and adjust our strategy based on the analytics reports provided by the platform.

We hope that by sharing access to our data through APIs, local businesses will gain new insights and efficiencies that will help them thrive. Along the way, we're confident that Apigee will handle any future API use cases that we want to explore.
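To make the benchmarking example above concrete, here is a purely hypothetical sketch of what a request to such an API, fronted by an Apigee proxy, might look like. The host, path, query parameters, header name and response fields are all invented for illustration; the real details live in Womply's developer portal:

  # Hypothetical call to a benchmarking endpoint behind an Apigee proxy.
  # Every identifier below is made up for illustration purposes.
  curl -s "https://api.womply.example.com/v1/benchmarks?vertical=restaurant&city=san-francisco&metric=nightly_revenue" \
    -H "apikey: $APIGEE_API_KEY"

  # An imagined response comparing a merchant against local peers:
  # {"merchant_nightly_revenue": 4200, "market_median": 3900, "trend": "improving"}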
Source: Google Cloud Platform

Amazon Forecast now uses holiday data from more than 30 countries to improve forecast accuracy

Amazon Forecast is a fully managed service that uses machine learning (ML) to deliver accurate forecasts, with no prior ML experience required. Amazon Forecast applies to a wide range of use cases, including energy demand forecasting, workforce planning, cloud infrastructure usage forecasting, inventory planning, traffic forecasting, and financial planning.
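The holiday calendars are attached when training a predictor. Here is a minimal sketch using the AWS CLI; the predictor name, dataset group ARN, horizon and region are placeholders, and the parameters should be checked against the current Amazon Forecast documentation:

  # Sketch: train a predictor with the built-in holiday calendar for
  # Germany ("DE"). Names, ARNs and region below are placeholders.
  aws forecast create-predictor \
    --predictor-name demand_with_holidays \
    --algorithm-arn arn:aws:forecast:::algorithm/Deep_AR_Plus \
    --forecast-horizon 14 \
    --featurization-config '{"ForecastFrequency": "D"}' \
    --input-data-config '{
      "DatasetGroupArn": "arn:aws:forecast:eu-central-1:123456789012:dataset-group/demo",
      "SupplementaryFeatures": [{"Name": "holiday", "Value": "DE"}]
    }'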
Source: aws.amazon.com

Amazon VPC endpoints and endpoint services now support tag-on-create

You can now add tags, simple labels consisting of a customer-defined key and an optional value, to your Amazon Virtual Private Cloud (VPC) gateway endpoints, interface endpoints (AWS PrivateLink), and endpoint services (AWS PrivateLink) at the time you create the resource. Tagging resources at creation time removes the need to run custom tagging scripts afterwards.
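For example, with the AWS CLI, tags can be passed at creation time via --tag-specifications; in this minimal sketch the VPC ID, subnet ID, service name and tag values are placeholders:

  # Sketch: create an interface endpoint (AWS PrivateLink) with tags
  # applied at creation time. IDs and tag values are placeholders.
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.eu-central-1.sqs \
    --subnet-ids subnet-0def5678 \
    --tag-specifications 'ResourceType=vpc-endpoint,Tags=[{Key=team,Value=platform},{Key=env,Value=prod}]'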
Source: aws.amazon.com