Playstation: Sony buys Insomniac Games

The development studio Insomniac Games has also produced games for the Xbox One, but that now appears to be over: Sony has acquired the team behind Spider-Man and is folding it into its international studio group. (Playstation, Sony)
Source: Golem

KaaS vs PaaS: Mirantis Kubernetes-as-a-Service vs OpenShift

Many companies that use Kubernetes today do so using Red Hat’s OpenShift distribution, so one question we often hear from users asking about the Mirantis Kubernetes as a Service beta is: “How is KaaS different from OpenShift?”
The short answer is that OpenShift is a Platform as a Service (PaaS) and Mirantis KaaS is…well…a KaaS. These two concepts are different. Let me explain.
OpenShift is a Platform as a Service, or PaaS, that just happens to use Kubernetes as its underlying substrate. But just because a PaaS uses K8s, that doesn’t automatically make it a KaaS.
PaaS enables developers to easily create applications without having to worry about setting up the underlying platform components. So if a developer wants a database, they just check a box, or make an API call, or use whatever mechanism that the PaaS provides to get one. They don’t have to install the database, they can just use it. OpenShift, Cloud Foundry, and Heroku are examples of a PaaS. 
KaaS systems, on the other hand, assume that the K8s API is the highest level of abstraction exposed to the developer. Their focus is therefore on making it easy to create, scale, and manage many distributed Kubernetes clusters, whether on premises or across multiple cloud environments.
OpenShift has implemented some KaaS functionality in its new version, OpenShift 4, but most folks using it are still on the more PaaS-like OpenShift 3. So to get specific about how KaaS differs from PaaS, let’s compare KaaS with the most commonly used OpenShift release (version 3.x) with respect to key use cases and implementation approaches.
K8s Cluster Upgrades
Because the emphasis of a PaaS is on application development rather than Kubernetes cluster lifecycle management, the process of upgrading an OpenShift PaaS instance (and its embedded K8s version) is not necessarily straightforward. It is generally assumed that a PaaS is used by developers but operated by a trained operations team. Because OpenShift consists of multiple frameworks on top of Kubernetes, upgrading an OpenShift cluster is a bespoke procedure consisting of a series of manual Ansible script runs.
Conversely, under the hood Mirantis KaaS partially relies on Kubespray, a set of validated Ansible scripts maintained by the K8s community, to perform the work required for a cluster upgrade. From the end-user standpoint, however, all of that complexity is hidden behind Mirantis’ implementation of ClusterAPI, a Kubernetes-native API standard for cluster lifecycle management, so any procedure on a cluster (such as an upgrade) boils down to a single ClusterAPI-compliant API call. The process is similar for other KaaS implementations, such as Rancher and Platform9.
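To give a feel for what that single call can look like, here is an illustrative sketch using the official Kubernetes Python client. This is not Mirantis' actual API: the group/version, object names, and field paths are assumptions that vary by ClusterAPI release and by vendor.

```python
# Illustrative only: request a Kubernetes version bump on a ClusterAPI-style
# MachineDeployment by patching its spec. Group/version, resource, and field
# names are assumptions and vary by ClusterAPI release and KaaS implementation.
from kubernetes import client, config

config.load_kube_config()                  # credentials for the management cluster
api = client.CustomObjectsApi()

api.patch_namespaced_custom_object(
    group="cluster.x-k8s.io",
    version="v1alpha2",                    # assumed API version
    namespace="default",
    plural="machinedeployments",
    name="demo-cluster-md-0",              # hypothetical worker pool object
    body={"spec": {"template": {"spec": {"version": "v1.15.3"}}}},
)
```

The controllers watching that object then roll the machines to the requested version; the caller never touches Ansible directly.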
Ability to Scale Kubernetes Clusters
Continuing on the cluster lifecycle management thread, another area in which OpenShift and Kubernetes as a Service differ is when it comes to scaling Kubernetes clusters.
For OpenShift, the process consists of four basic steps, which are similar to those used for installing OpenShift in the first place:

Create VMs with Red Hat Enterprise Linux installed on them
Add access to Red Hat subscription manager repositories on those nodes
Add these nodes to an OpenShift deployment Ansible playbook that lists IP addresses / DNS entries and tweak any other playbook settings as necessary
Run an Ansible playbook and wait for execution to complete

Note that these are steps only an operator can take.
If developers need the ability to scale their own clusters, they will need a KaaS environment, which lets them scale a cluster with an API call or through the UI. OpenShift is fine as long as developers never need to scale the cluster on which they’re doing their work.
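The same mechanism shown for upgrades covers scaling. As a hedged sketch, a developer (or a UI acting on their behalf) could grow a cluster's worker pool by patching the replica count on a ClusterAPI-style MachineDeployment; again, the object and field names below are assumptions, not a specific vendor's API.

```python
# Illustrative only: scale a cluster's worker pool by patching the replica count
# on its ClusterAPI MachineDeployment.
from kubernetes import client, config

config.load_kube_config()                  # credentials for the management cluster
client.CustomObjectsApi().patch_namespaced_custom_object(
    group="cluster.x-k8s.io",
    version="v1alpha2",                    # assumed API version
    namespace="default",
    plural="machinedeployments",
    name="demo-cluster-md-0",              # hypothetical worker pool object
    body={"spec": {"replicas": 5}},        # grow the worker pool to 5 nodes
)
```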
Multi-tenancy
KaaS and OpenShift achieve multi-tenancy in two different ways, and much of that difference has to do with the inherent logic and resulting architecture behind the two solutions. 
An OpenShift PaaS instance is a single instance of K8s running in a single location. 
A KaaS instance is many K8s instances running across many locations, but centrally controlled.  
Because in OpenShift all developers use a single Kubernetes instance, OpenShift has created the additional concept of a “project”, which uses Kubernetes namespaces to isolate users from each other’s resources within a single K8s cluster. Keep in mind, however, that when K8s was initially built, it wasn’t designed to be inherently multi-tenant, and its architecture mostly assumes that a single cluster represents a single tenant. While there are many efforts in the community to implement multi-tenancy, there is no agreement on a single “correct” approach, and upstream is far from an ideal solution here.
KaaS, on the other hand, doesn’t add any additional hacks to implement multi-tenancy, because resources can be isolated by cluster, with multiple clusters per user or per project. (Note that this is also the approach currently implemented in public cloud implementations such as Google’s GKE.)
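To make the contrast concrete, here is a minimal sketch using the official Kubernetes Python client; the kubeconfig context and namespace names are illustrative.

```python
# Namespace-per-tenant (the OpenShift "project" model): one shared cluster,
# isolation via namespaces. Cluster-per-tenant (the KaaS model): separate
# clusters, selected here by kubeconfig context.
from kubernetes import client, config

# OpenShift-style: carve a namespace out of the single shared cluster.
config.load_kube_config(context="shared-openshift-cluster")
core = client.CoreV1Api()
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="team-a"))
)

# KaaS-style: each team simply talks to its own dedicated cluster.
config.load_kube_config(context="team-a-dedicated-cluster")
client.CoreV1Api().list_namespace()  # same API, but scoped to a whole cluster
```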
IAM Authentication
While both KaaS and OpenShift can integrate with authentication systems such as LDAP and Active Directory, the key difference is in just what you’re controlling access TO. 
OpenShift PaaS (as with most other PaaSes) is a self-contained, all-inclusive, and opinionated implementation of everything developers need to build an app. As such, it comes with its own pre-integrated artifact repository, continuous delivery engine, and even an SDN. Once a user is authenticated into an OpenShift instance, there is therefore limited need to interact with any outside services. An OpenShift end user is not going to be deploying the K8s embedded in OpenShift onto more nodes in the cloud and, therefore, doesn’t need access to do so. Similarly, they won’t be using an external artifact repository (such as JFrog) or CD engine (such as Spinnaker or Argo). Because of this, OpenShift users authenticate via LDAP or Active Directory to get access to their own “projects” mapped to Kubernetes namespaces; if they also need access to other external resources, such as artifact repositories or external storage devices, operators can configure that access via a bespoke manual procedure.
Conversely, KaaS generally assumes that an enterprise will have a diverse ecosystem of external “best-of-breed” systems that a Kubernetes cluster (and consequently its end users) needs to interface with, so the Mirantis KaaS beta is implemented as a Single Sign-On system using the community Keycloak project. This approach enables a single user to have access to multiple clusters, artifact repositories, physical machines, and so on, through a single configuration.
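As a rough illustration of the SSO flow, the sketch below obtains a single OIDC token from Keycloak that can then be presented to any cluster configured to trust that realm. The URL, realm, and client names are placeholders, and the token endpoint path differs between older Keycloak releases (which serve under /auth) and newer ones.

```python
# Obtain one OIDC token from Keycloak and reuse it against multiple clusters.
# All names below are placeholders for illustration.
import requests

KEYCLOAK = "https://keycloak.example.com/auth"   # older releases use the /auth prefix
REALM = "kaas"
TOKEN_URL = f"{KEYCLOAK}/realms/{REALM}/protocol/openid-connect/token"

resp = requests.post(TOKEN_URL, data={
    "grant_type": "password",
    "client_id": "kubernetes",        # hypothetical public client
    "username": "dev-user",
    "password": "********",
    "scope": "openid",                # needed to receive an id_token
})
resp.raise_for_status()
id_token = resp.json()["id_token"]

# The same token can be presented to any cluster whose API server trusts this
# Keycloak realm as its OIDC provider, e.g. kubectl --token=<id_token>.
print(id_token[:40], "...")
```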
Plug-Ins: CNI, Artifact Repository, CI/CD 
As outlined in the section above, PaaS and KaaS differ greatly in their philosophy as to what’s in and out of scope. While it’s not entirely black and white, OpenShift mostly follows the “all-inclusive, full-stack” approach in which everything is defined, whereas KaaS mostly follows the “batteries included, but optional” approach, in which almost everything can be defined, but can also be changed if necessary. Let’s take a closer look to make things more concrete.
For artifact repositories, OpenShift comes with its own, fairly sophisticated system, which is an augmented implementation of Docker Registry. It gets installed automatically via the same Ansible playbook as the rest of OpenShift, and is designed to interact with a single instance / K8s cluster. This option is best when development is centralized to a single (OpenShift-based) Kubernetes cluster. With some minimal work, it is also possible to bridge multiple OpenShift instances to a third party artifact repository such as JFrog.
The Mirantis KaaS beta does not implement its own registry, but assumes that a user already has a registry in mind. If not, KaaS offers the option to co-deploy a Harbor registry with the Kubernetes cluster. Note that in KaaS, this registry is not typically tied to a single cluster, so you can use a single instance of Harbor to store artifacts from across multiple, geo-distributed K8s clusters. This method is a good choice when development spans multiple Kubernetes clusters.
For CI/CD, OpenShift implements its own system based on Jenkins, enabling developers to build applications with a proper CI/CD workflow. Mirantis KaaS doesn’t specify a CI/CD engine, and is designed to integrate with whatever the customer is already using. 
Application Catalog
Perhaps the most significant thing OpenShift has that KaaS doesn’t is the Application Catalog.  After all, that’s what makes it a PaaS in the first place! This catalog comes populated with over a hundred services, such as MySQL, MongoDB, JBoss, RabbitMQ, and so on, but there’s one very important caveat to keep in mind: these pre-packaged services are there just to get your developers started. They don’t come with support, most won’t be upgraded, and most important to keep in mind, they’re not meant to be used in production.  
That doesn’t mean the catalog can’t be useful, but most large enterprises end up turning off most of these services and re-populating the catalog with their own approved, validated, and tested set of services.
Multi-Cloud K8s Management
Just as the application catalog is key to OpenShift’s identity as a PaaS, the ability to deploy multiple Kubernetes clusters is primary to KaaS. 
While OpenShift focuses on making life easier for developers, KaaS focuses on making it simple for operators (and by extension, the developers they support) by exposing a self-service, developer-facing interface for deploying, scaling and upgrading multiple K8s clusters, distributed across public and private clouds. 
Summary
Bringing it all together side-by-side, we get the following picture: 

|                            | Typical KaaS           | OpenShift 3.x                                                  |
| K8s Cluster Upgrade        | Automated              | Manual                                                         |
| K8s Cluster Scaling        | ClusterAPI or UI       | Bespoke Ansible playbooks                                      |
| Multi-tenancy              | Per K8s cluster        | Per K8s namespace                                              |
| IAM Authentication         | Keycloak               | Proprietary                                                    |
| Artifact Repository        | Harbor plug-in option  | Implementation of Docker Registry with external plug-in option |
| Access to CNI              | Calico                 | OpenShift SDN                                                  |
| Built-in CI/CD             | No                     | Jenkins                                                        |
| Application catalog        | No                     | Yes, 100+ apps                                                 |
| Multi-Cloud K8s Management | OpenStack, AWS         | No                                                             |

Ultimately, the choice between PaaS and KaaS is going to depend on where you want to draw the line between devs and ops in your organization. If you (like most enterprises today) need substantial, strict guardrails and training wheels for your dev teams to adhere to, with an experienced third-party vendor providing and managing the complete catalog of application building blocks, you should opt for OpenShift and the PaaS route. On the other hand, if you believe that the Kubernetes API is that line of separation, you should go with KaaS, particularly if you are engaged in multi-cluster or multi-cloud development.
Is Kubernetes as a Service the right solution for you? Request access to the private beta of Mirantis KaaS and find out.
Source: Mirantis

Announcing the general availability of Python support in Azure Functions

Python support for Azure Functions is now generally available and ready to host your production workloads across data science and machine learning, automated resource management, and more. You can now develop Python 3.6 apps to run on the cross-platform, open-source Functions 2.0 runtime. These can be published as code or Docker containers to a Linux-based serverless hosting platform in Azure. This stack powers the solution innovations of our early adopters, with customers such as General Electric Aviation and TCF Bank already using Azure Functions written in Python for their serverless production workloads. Our thanks to them for their continued partnership!

In the words of David Havera, blockchain Chief Technology Officer of the GE Aviation Digital Group, "GE Aviation Digital Group's hope is to have a common language that can be used for backend Data Engineering to front end Analytics and Machine Learning. Microsoft have been instrumental in supporting this vision by bringing Python support in Azure Functions from preview to life, enabling a real world data science and Blockchain implementation in our TRUEngine project."

Throughout the Python preview for Azure Functions we gathered feedback from the community to build easier authoring experiences, introduce an idiomatic programming model, and create a more performant and robust hosting platform on Linux. This post is a one-stop summary for everything you need to know about Python support in Azure Functions and includes resources to help you get started using the tools of your choice.

Bring your Python workloads to Azure Functions

Many Python workloads align very nicely with the serverless model, allowing you to focus on your unique business logic while letting Azure take care of how your code is run. We’ve been delighted by the interest from the Python community and by the productive solutions built using Python on Functions.

Workloads and design patterns

While this is by no means an exhaustive list, here are some examples of workloads and design patterns that translate well to Azure Functions written in Python.

Simplified data science pipelines

Python is a great language for data science and machine learning (ML). You can leverage the Python support in Azure Functions to provide serverless hosting for your intelligent applications. Consider a few ideas:

Use Azure Functions to deploy a trained ML model along with a scoring script to create an inferencing application (see the sketch after this list).

Leverage triggers and data bindings to ingest, move, prepare, transform, and process data using Functions.
Use Functions to introduce event-driven triggers to re-training and model update pipelines when new datasets become available.
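To make the first idea above concrete, here is a minimal sketch of an HTTP-triggered scoring function. The model file, its serialization format, and the request shape are assumptions for illustration; the HTTP trigger itself is declared in the function's function.json.

```python
# __init__.py for an HTTP-triggered scoring function (trigger declared in function.json).
import json
import logging
import pathlib

import azure.functions as func
import joblib  # hypothetical: whatever serialization your model uses

# Load the model once per worker process rather than on every invocation.
MODEL = joblib.load(pathlib.Path(__file__).parent / "model.pkl")  # hypothetical artifact


def main(req: func.HttpRequest) -> func.HttpResponse:
    try:
        features = req.get_json()["features"]
    except (ValueError, KeyError):
        return func.HttpResponse("Expected a JSON body with a 'features' field",
                                 status_code=400)

    prediction = MODEL.predict([features]).tolist()
    logging.info("Scored one request")
    return func.HttpResponse(json.dumps({"prediction": prediction}),
                             mimetype="application/json")
```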

Automated resource management

As an increasing number of assets and workloads move to the cloud, there's a clear need to provide more powerful ways to manage, govern, and automate the corresponding cloud resources. Such automation scenarios require custom logic that can be easily expressed using Python. Here are some common scenarios:

Process Azure Monitor alerts generated by Azure services.
React to Azure events captured by Azure Event Grid and apply operational requirements on resources.

Leverage Azure Logic Apps to connect to external systems like IT service management, DevOps, or monitoring systems while processing the payload with a Python function.
Perform scheduled operational tasks on virtual machines, SQL Server, web apps, and other Azure resources.
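As an illustration of the last scenario, here is a hedged sketch of a timer-triggered function that deallocates VMs carrying an "auto-shutdown" tag. The tag convention is hypothetical, the azure-mgmt-compute method names vary slightly across SDK versions, and the timer schedule itself lives in function.json.

```python
# __init__.py for a timer-triggered function; the CRON schedule is declared in
# function.json. Deallocates VMs carrying a hypothetical "auto-shutdown" tag.
import logging
import os

import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient


def main(mytimer: func.TimerRequest) -> None:
    compute = ComputeManagementClient(DefaultAzureCredential(),
                                      os.environ["AZURE_SUBSCRIPTION_ID"])
    for vm in compute.virtual_machines.list_all():
        if (vm.tags or {}).get("auto-shutdown") == "true":
            rg = vm.id.split("/")[4]  # resource group segment of the ARM ID
            logging.info("Deallocating %s in %s", vm.name, rg)
            # Recent azure-mgmt-compute releases expose begin_deallocate();
            # older releases use deallocate().
            compute.virtual_machines.begin_deallocate(rg, vm.name)
```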

Powerful programming model

To power accelerated Python development, Azure Functions provides a productive programming model based on event triggers and data bindings. The programming model is supported by a world class end-to-end developer experience that spans from building and debugging locally to deploying and monitoring in the cloud.

The programming model is designed to provide a seamless experience for Python developers so you can quickly start writing functions using code constructs that you're already familiar with, or import existing .py scripts and modules to build the function. For example, you can implement your functions as asynchronous coroutines using the async def qualifier or send monitoring traces to the host using the standard logging module. Additional dependencies to pip install can be configured using the requirements.txt file.
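For example, a minimal asynchronous, HTTP-triggered function that writes traces with the standard logging module might look like this (the binding itself is declared in function.json):

```python
# __init__.py: an asynchronous, HTTP-triggered function.
import asyncio
import logging

import azure.functions as func


async def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    # Traces written with the standard logging module surface in the Functions
    # host output and in Application Insights when it is enabled.
    logging.info("Handling request for %s", name)
    await asyncio.sleep(0.1)  # stand-in for an awaitable I/O call
    return func.HttpResponse(f"Hello, {name}!")
```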

With the event-driven programming model in Functions, based on triggers and bindings, you can easily configure the events that will trigger the function execution and any data sources the function needs to orchestrate with. This model helps increase productivity when developing apps that interact with multiple data sources by reducing the amount of boilerplate code, SDKs, and dependencies that you need to manage and support. Once configured, you can quickly retrieve data from the bindings or write back using the method attributes of your entry-point function. The Python SDK for Azure Functions provides a rich API layer for binding to HTTP requests, timer events, and other Azure services, such as Azure Storage, Azure Cosmos DB, Service Bus, Event Hubs, or Event Grid, so you can use productivity enhancements like autocomplete and Intellisense when writing your code. By leveraging the Azure Functions extensibility model, you can also bring your own bindings to use with your function, so you can also connect to other streams of data like Kafka or SignalR.
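As a sketch of how bindings reduce boilerplate, here is a queue-triggered function that writes its result through a blob output binding. The binding names must match whatever you declare in function.json; the ones below are illustrative.

```python
# __init__.py: queue-triggered function with a blob output binding. The queue
# trigger and blob output binding are declared in this function's function.json;
# the parameter names "msg" and "outputblob" must match those declarations.
import json
import logging

import azure.functions as func


def main(msg: func.QueueMessage, outputblob: func.Out[str]) -> None:
    payload = json.loads(msg.get_body().decode("utf-8"))
    logging.info("Processing queue message %s", msg.id)
    # Write the transformed payload to the bound blob; no storage SDK code needed.
    outputblob.set(json.dumps({"processed": payload}))
```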

Easier development

As a Python developer, you can use your preferred tools to develop your functions. The Azure Functions Core Tools will enable you to get started using trigger-based templates, run locally to test against real-time events coming from the actual cloud sources, and publish directly to Azure, while automatically invoking a server-side dependency build on deployment. The Core Tools can be used in conjunction with the IDE or text editor of your choice for an enhanced authoring experience.

You can also choose to take advantage of the Azure Functions extension for Visual Studio Code for a tightly integrated editing experience to help you create a new app, add functions, and deploy, all within a matter of minutes. The one-click debugging experience enables you to test your functions locally, set breakpoints in your code, and evaluate the call stack, simply with the press of F5. Combine this with the Python extension for Visual Studio Code, and you have an enhanced Python development experience with auto-complete, Intellisense, linting, and debugging.

For a complete continuous delivery experience, you can now leverage the integration with Azure Pipelines, one of the services in Azure DevOps, via an Azure Functions-optimized task to build the dependencies for your app and publish them to the cloud. The pipeline can be configured using an Azure DevOps template or through the Azure CLI.

Advanced observability and monitoring through Azure Application Insights is also available for functions written in Python, so you can monitor your apps using the live metrics stream, collect data, query execution logs, and view the distributed traces across a variety of services in Azure.

Host your Python apps with Azure Functions

Host your Python apps with the Azure Functions Consumption plan or the Azure Functions Premium plan on Linux.

The Consumption plan is now generally available for Linux-based hosting and ready for production workloads. This serverless plan provides event-driven dynamic scale and you are charged for compute resources only when your functions are running. Our Linux plan also now has support for managed identities, allowing your app to seamlessly work with Azure resources such as Azure Key Vault, without requiring additional secrets.

The Consumption plan for Linux hosting also includes a preview of integrated remote builds to simplify dependency management. This new capability is available as an option when publishing via the Azure Functions Core Tools and enables you to build in the cloud on the same environment used to host your apps as opposed to configuring your local build environment in alignment with Azure Functions hosting.

Workloads that require advanced features such as more powerful hardware, the ability to keep instances warm indefinitely, and virtual network connectivity can benefit from the Premium plan with Linux-based hosting now available in preview.

With the Premium plan for Linux hosting you can choose between bringing only your app code or bringing a custom Docker image to encapsulate all your dependencies, including the Azure Functions runtime as described in the documentation “Create a function on Linux using a custom image.” Both options benefit from avoiding cold start and from scaling dynamically based on events.

Next steps

Here are a few resources you can leverage to start building your Python apps in Azure Functions today:

Build your first Azure Functions in Python using the command line tools or Visual Studio Code.
Learn more about the programming model using the developer guide.
Explore the Serverless Library samples to find a suitable example for your data science, automation, or web workload.
Sign up for an Azure free account, if you don’t have one yet.

On the Azure Functions team, we are committed to providing a seamless and productive serverless experience for developing and hosting Python applications. With so much being released now and coming soon, we’d love to hear your feedback and learn more about your scenarios. You can reach the team on Twitter and on GitHub. We actively monitor StackOverflow and UserVoice as well, so feel free to ask questions or leave your suggestions. We look forward to hearing from you!
Source: Azure

Azure Archive Storage expanded capabilities: faster, simpler, better

Since launching Azure Archive Storage, we have seen unprecedented interest and innovative usage from a variety of industries. Archive Storage is built as a scalable service for cost-effectively storing rarely accessed data for long periods of time. Cold data such as application backups, healthcare records, autonomous driving recordings, etc. that might have been previously deleted could be stored in Azure Storage’s Archive tier in an offline state, then rehydrated to an online tier when needed. Earlier this month, we made Azure Archive Storage even more affordable by reducing prices by up to 50 percent in some regions, as part of our commitment to provide the most cost-effective data storage offering.

We’ve gathered your feedback regarding Azure Archive Storage, and today, we’re happy to share three archive improvements in public preview that make our service even better.

1. Priority retrieval from Azure Archive

To read data stored in Azure Archive Storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and takes a matter of hours to complete. Today we’re sharing the public preview release of priority retrieval from archive allowing for much faster offline data access. Priority retrieval allows you to flag the rehydration of your data from the offline archive tier back into an online hot or cool tier as a high priority action. By paying a little bit more for the priority rehydration operation, your archive retrieval request is placed in front of other requests and your offline data is expected to be returned in less than one hour.

Priority retrieval is recommended to be used for emergency requests for a subset of an archive dataset. For the majority of use cases, our customers plan for and utilize standard archive retrievals which complete in less than 15 hours. But on rare occasions, a retrieval time of an hour or less is required. Priority retrieval requests can deliver archive data in a fraction of the time of a standard retrieval operation, allowing our customers to quickly resume business as usual. For more information, please see Blob Storage Rehydration.

The archive retrieval options now provided under the optional x-ms-rehydrate-priority parameter are:

Standard rehydrate-priority is the new name for what Archive has provided over the past two years and is the default option for archive SetBlobTier and CopyBlob requests, with retrievals taking up to 15 hours.
High rehydrate-priority fulfills the need for urgent data access from archive, with retrievals for blobs under 10 GB typically taking less than one hour.

Regional priority retrieval demand at the time of request can affect the speed at which your data rehydration is completed. In most scenarios, a high rehydrate-priority request may return your Archive data in under one hour. In the rare scenario where archive receives an exceptionally large amount of concurrent high rehydrate-priority requests, your request will still be prioritized over standard rehydrate-priority but may take one to five hours to return your archive data. In the extremely rare case that any high rehydrate-priority requests take over five hours to return archive blobs under a few GB, you will not be charged the priority retrieval rates.
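For reference, here is a minimal sketch of a priority rehydration request using the Python client library. The keyword names assume a recent azure-storage-blob (v12) release, and the connection string, container, and blob names are placeholders.

```python
# Rehydrate an archived blob to the hot tier as a high-priority request.
# Assumes a recent azure-storage-blob v12 SDK; check your version's reference docs.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="backups", blob="2019-08/archive.tar.gz")

# Maps to Set Blob Tier with x-ms-rehydrate-priority: High.
blob.set_standard_blob_tier(StandardBlobTier.Hot, rehydrate_priority="High")
```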

2. Upload blob direct to access tier of choice (hot, cool, or archive)

Blob-level tiering for general-purpose v2 and blob storage accounts allows you to easily store blobs in the hot, cool, or archive access tiers all within the same container. Previously when you uploaded an object to your container, it would inherit the access tier of your account and the blob’s access tier would show as hot (inferred) or cool (inferred) depending on your account configuration settings. As data usage patterns change, you would change the access tier of the blob manually with the SetBlobTier API or automate the process with blob lifecycle management rules.

Today we’re sharing the public preview release of Upload Blob Direct to Access tier, which allows you to upload your blob using PutBlob or PutBlockList directly to the access tier of your choice using the optional parameter x-ms-access-tier. This allows you to upload your object directly into the hot, cool, or archive tier regardless of your account’s default access tier setting. This new capability makes it simple for customers to upload objects directly to Azure Archive in a single transaction. For more information, please see Blob Storage Access Tiers.
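A minimal sketch of a direct-to-archive upload with the Python client library, under the same SDK assumptions as above, might look like this:

```python
# Upload a blob directly into the archive tier; names are placeholders.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="backups", blob="2019-08/logs.tar.gz")

with open("logs.tar.gz", "rb") as data:
    # standard_blob_tier maps to the x-ms-access-tier header on the Put Blob request.
    blob.upload_blob(data, overwrite=True,
                     standard_blob_tier=StandardBlobTier.Archive)
```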

3. CopyBlob enhanced capabilities

In certain scenarios, you may want to keep your original data untouched but work on a temporary copy of the data. This holds especially true for data in Archive that needs to be read but still kept in Archive. The public preview release of CopyBlob enhanced capabilities builds upon our existing CopyBlob API with added support for the archive access tier, priority retrieval from archive, and direct to access tier of choice.

The CopyBlob API is now able to support the archive access tier; allowing you to copy data into and out of the archive access tier within the same storage account. With our access tier of choice enhancement, you are now able to set the optional parameter x-ms-access-tier to specify which destination access tier you would like your data copy to inherit. If you are copying a blob from the archive tier, you will also be able to specify the x-ms-rehydrate-priority of how quickly you want the copy created in the destination hot or cool tier. Please see Blob Storage Rehydration and the following table for information on the new CopyBlob access tier capabilities.

 

|                          | Hot tier source | Cool tier source | Archive tier source                                   |
| Hot tier destination     | Supported       | Supported        | Supported within the same account; pending rehydrate  |
| Cool tier destination    | Supported       | Supported        | Supported within the same account; pending rehydrate  |
| Archive tier destination | Supported       | Supported        | Unsupported                                           |
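As a hedged sketch, the following copies an archived blob into a hot-tier working copy within the same account using the Python client library. Keyword support for the destination tier and rehydrate priority depends on your azure-storage-blob version, and all names are placeholders.

```python
# Copy an archived blob to a hot-tier working copy in the same storage account,
# leaving the original in the archive tier.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
source = service.get_blob_client(container="backups", blob="2019-08/archive.tar.gz")
dest = service.get_blob_client(container="scratch", blob="archive-working-copy.tar.gz")

# The destination tier maps to x-ms-access-tier and the priority to
# x-ms-rehydrate-priority on the Copy Blob request.
dest.start_copy_from_url(source.url,
                         standard_blob_tier=StandardBlobTier.Hot,
                         rehydrate_priority="High")
```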

Getting Started

All of the features discussed today (upload blob direct to access tier, priority retrieval from archive, and CopyBlob enhancements) are supported by the most recent releases of the Azure Portal, .NET Client Library, Java Client Library, and Python Client Library. As always, you can also use the Storage Services REST API directly (version 2019-02-02 and greater). In general, we recommend using the latest version regardless of whether you are using these new features.

Build it, use it, and tell us about it!

We will continue to improve our Archive and Blob Storage services and are looking forward to hearing your feedback about these features through email at ArchiveFeedback@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at Azure Storage feedback forum.

Thanks, from the entire Azure Storage Team!
Source: Azure

Skip the heavy lifting: Moving Redshift to BigQuery easily

Enterprise data warehouses are getting more expensive to maintain. Traditional data warehouses are hard to scale and often involve lots of data silos. Business teams need data insights quickly, but technology teams have to grapple with managing and providing that data using old tools that aren’t keeping up with demand. Increasingly, enterprises are migrating their data warehouses to the cloud to take advantage of the speed, scalability, and access to advanced analytics it offers.

With this in mind, we introduced the BigQuery Data Transfer Service to automate data movement to BigQuery, so you can lay the foundation for a cloud data warehouse without writing a single line of code. Earlier this year, we added the capability to move data and schema from Teradata and S3 to BigQuery via the BigQuery Data Transfer Service. To help you take advantage of the scalability of BigQuery, we’ve now added a service to transfer data from Amazon Redshift, in beta, to that list.

Data and schema migration from Redshift to BigQuery is provided by a combination of the BigQuery Data Transfer Service and a special migration agent running on Google Kubernetes Engine (GKE), and can be performed via the UI, CLI, or API. In the UI, a Redshift to BigQuery migration can be initiated from the BigQuery Data Transfer Service by choosing Redshift as a source. The migration process has three steps:

UNLOAD from Redshift to S3: the GKE agent initiates an UNLOAD operation from Redshift to S3. The agent extracts Redshift data as a compressed file, which helps customers minimize egress costs.
Transfer from S3 to Cloud Storage: the agent then moves data from Amazon S3 to a Cloud Storage bucket using Cloud Storage Transfer Service.
Load from Cloud Storage to BigQuery: Cloud Storage data is loaded into BigQuery (up to 10 million files).

[Image: The BigQuery Data Transfer Service, showing Redshift as a source.]

You can see more here about how customers are using the BigQuery Data Transfer Service to move database instances easily.

To get started, follow our step-by-step guide, or read our article on migrating data to BigQuery using Informatica Intelligent Cloud Services. Qualifying customers can also take advantage of our data warehouse migration offer, which provides architecture and design guidance from Google Cloud engineers, proof-of-concept funding, free training, and usage credits to help speed up your modernization process. Learn more here.
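For teams that prefer the API route, here is a hedged sketch of creating a Redshift transfer configuration with the BigQuery Data Transfer Service Python client. The data source ID and parameter keys below are assumptions for illustration; check the data sources and parameters the service reports for your project before relying on them.

```python
# Hedged sketch: create a Redshift migration config with the BigQuery Data
# Transfer Service client (pip install google-cloud-bigquery-datatransfer).
# The data source ID and the params keys are assumed, not authoritative.
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()
parent = "projects/my-gcp-project"  # hypothetical project ID

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id="redshift_migration",   # target BigQuery dataset
    display_name="Redshift to BigQuery migration",
    data_source_id="redshift",                      # assumed data source ID
    params={
        # All keys below are illustrative placeholders for the Redshift
        # connection, the S3 staging bucket, and the tables to migrate.
        "jdbc_url": "jdbc:redshift://example.redshift.amazonaws.com:5439/db",
        "database_username": "migration_user",
        "database_password": "********",
        "access_key_id": "AKIA...",
        "secret_access_key": "********",
        "s3_bucket": "s3://my-staging-bucket",
        "redshift_schema": "public",
        "table_name_patterns": "orders;customers",
    },
)

config = client.create_transfer_config(parent=parent, transfer_config=transfer_config)
print(f"Created transfer config: {config.name}")
```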
Source: Google Cloud Platform