Kubernetes Backup and Restore with Velero

akomljen.com – Recently I migrated some Kubernetes clusters, managed by Amazon EKS. The clusters were running in public subnets, so I wanted to make them more secure by utilizing private and public subnets where ne…
Source: news.kubernauts.io

Choosing between BigQuery on-demand and flat rate pricing

Editor’s note: This is one installment in a series about managing BigQuery costs. Check out the other posts on using Reservations effectively and how to use Flex Slots to save on costs.

When you use data to guide your business decision-making process, you need to continually optimize your data analytics usage to get more out of that data. Here, we’ll share some ways to be more efficient with your BigQuery usage through ups and downs and changing demands.

Like a lot of things in the data realm, there are simple answers that address simple situations. For the more complex situations, the answers get less simple, but the solutions are much more powerful. In this post, we’ll walk through a few scenarios that illustrate the ways you can deploy BigQuery to fit the particular needs of your business.

First, a quick intro: BigQuery is Google Cloud’s fully managed enterprise data warehouse. We decouple storage and compute, so the costs for storage and compute are decoupled as well. We’ll only address compute costs in this post.

So let’s talk about how compute is billed in BigQuery. You can use BigQuery entirely for free via the sandbox. You can use a pure pay-as-you-go model, where you pay for only the compute you use for querying data. In this pay-as-you-go model, also known as on-demand pricing, you are billed based on the number of bytes your queries scan. In the flat-rate model, you pay a fixed amount each month for dedicated resources in the BigQuery service, and you can scan as much data as you want. Let’s describe each of these in a little more detail.

BigQuery sandbox

The BigQuery sandbox can be used by anyone with a Google account, even if they haven’t set up Google Cloud billing. This means the usage, while subject to some limits, is entirely free.

BigQuery on-demand pricing model

BigQuery’s on-demand model gets every Google Cloud project up to 2,000 slots, with the ability to burst beyond that when capacity is available. Slots are BigQuery’s unit of computational capacity, and they get scheduled dynamically as your queries execute. As above, when your queries execute, they’ll scan data. You get billed based on how many bytes you scan in the on-demand billing model.

BigQuery flat-rate pricing model

In the flat-rate model, you decide how many slots you’d like to reserve, and you pay a fixed cost each month for those resources. You can choose whether to reserve slots for as little as one minute, or on a month-to-month basis, or commit to a year. In this model, you’re no longer billed based on your bytes scanned. Think of this like an all-you-can-query plan.

How to choose the best plan for your situation? Let’s look at a few scenarios that will illuminate some of the decision points. The scenarios build on each other, with each representing an increasingly complex environment.

You’re just getting started with BigQuery. You don’t know how much querying you’re going to do, and you need to be efficient with your spend.
You’ve been using BigQuery for a while. Your data is growing, and more and more people are using the warehouse as the business seeks greater access to data. You want to support this while keeping costs in check.
You’re looking to consolidate data silos into one source for analytics workloads, and you’re looking to support advanced analytics using Spark or Python. This is in addition to serving multiple lines of business, with a mix of different workloads, from ad-hoc analytics to business intelligence. Some of these workloads will have tight service-level objectives, while others can tolerate best-effort service levels.

Here’s how to tackle each of these scenarios.

1. You’re just getting started.

BigQuery’s on-demand model is perfect for anyone who’s looking for cost efficiency and to pay only for what they consume. If you follow our best practices for cost optimization and make use of custom cost controls, you will be billed for only what you use, while guarding against unexpected spikes in consumption. Since you optimize for cost and performance in BigQuery in almost exactly the same ways (by limiting the data you scan), you’ll get better performance while consuming the least resources possible—the best of all worlds!

On-demand slots scale to zero when you’re not querying, and it happens instantly. You don’t need to wait for an inactivity timeout that may never come in order to shut down some nodes. BigQuery only ever schedules as many resources as are necessary to complete your queries, and when the queries complete, the resources are released immediately.

One of the most important things to do early on is set up monitoring of your BigQuery usage. Your job metadata is stored for the past 180 days in INFORMATION_SCHEMA tables that you can query and report against. You should also make use of the BigQuery metrics stored in Cloud Monitoring to understand your slot utilization and more.
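For example, here is a minimal sketch (not from the original post) that uses the BigQuery Python client to report slot consumption and bytes scanned per user over the last seven days from the INFORMATION_SCHEMA jobs view. The region qualifier and project are assumptions you would adapt to your own environment.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses your default project and credentials

# Summarize slot usage and scanned bytes per user over the last 7 days.
# JOBS_BY_PROJECT retains ~180 days of job metadata; adjust the region
# qualifier ("region-us" here) to match where your jobs run.
sql = """
SELECT
  user_email,
  COUNT(*) AS job_count,
  SUM(total_slot_ms) / 1000 / 3600 AS slot_hours,
  SUM(total_bytes_processed) / POW(1024, 4) AS tib_scanned
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
GROUP BY user_email
ORDER BY slot_hours DESC
"""

for row in client.query(sql).result():
    print(f"{row.user_email}: {row.slot_hours:.1f} slot-hours, "
          f"{row.tib_scanned:.2f} TiB scanned over {row.job_count} jobs")
```

A report like this makes it easy to spot which users or workloads drive bytes scanned (your on-demand bill) versus slot consumption (the dimension that matters under flat rate).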
2. You’ve been using BigQuery for a while.

As your use of BigQuery grows, you’ll scan more data, so your costs will increase correspondingly. If you’re using the on-demand model, you might look for opportunities to save on cost. One option is to consider BigQuery Reservations.

The first thing to know is that the BigQuery Reservations and on-demand pricing models are not mutually exclusive. You can use one or the other, you can combine them as you see fit, or you can try out a reservation with a short-term allocation of Flex Slots. What are Flex Slots? Flex Slots let you scale your data warehouse up and down very quickly—for as little as 60 seconds at a time. Flex Slots let you quickly respond to increased demand for analytics and prepare for business events such as retail holidays and app launches. In addition, Flex Slots are a great way to test a dedicated reservation for a short period of time to help determine whether a longer slot commitment is right for your workloads. Since many businesses have analytics needs that vary seasonally, monthly, or even on an hourly basis, you can reserve Flex Slots to add capacity to your slot pool when you need it.

Consider also that you can address different workloads with a combination of cost models. Let’s imagine you have several workloads that revolve around BigQuery: You ingest data, you transform it in an ELT style, and serve both reporting and ad-hoc query usage.

Ad-hoc workloads are less predictable, almost by definition. If you’re looking to keep costs in check without hampering your users’ ability to explore data, it can be a good idea to use the flat-rate model to provide an all-you-can-query experience. Reporting workloads are the yin to ad-hoc workloads’ yang. In contrast to the unpredictable load ad-hoc queries can bring, reporting workloads can be much more predictable. Ad-hoc workloads are usually assigned best-effort resources, while reporting workloads tend to have strict SLAs. For workloads with SLAs, it’s helpful to earmark resources for them and ensure that other workloads don’t get in the way.
This is where BigQuery’s workload management through reservations comes in. You can configure a project to consume slots from the slot pool on a best-effort basis, while reserving slots for high-SLA workloads. When the high-SLA workloads are not consuming their reservation, the slots can be seamlessly shared with other workloads under the reservation. And when the workloads with strict SLAs run, BigQuery will automatically and non-disruptively pre-empt the slots that had been shared with other, less critical workloads.

Finally, maybe the amount of data you transform on a daily basis is fairly predictable. In other words, you know that your ELT jobs will be processing about the same amount of data each day. Since the number of bytes you process is predictable, this workload may be a good match for on-demand pricing. So you might decide to run your ELT workloads in a project that is not assigned to a reservation, thus using on-demand resources. In addition to paying only for the bytes you scan, you also can burst beyond the usual 2,000 slots per project when conditions allow.

3. You’re consolidating data silos and more.

So you’re consolidating from multiple data silos, and you’ve got lots of workloads. In addition to the kinds of workloads described in the second scenario above, there are power users and data scientists consuming data from your data lake using Spark or Jupyter, and they’d like to continue to do the same thing with BigQuery. They plan to use BigQuery ML to create and get batch inferences from ML models. You might choose to mix and match models as above, but consider that flat rate also includes all BigQuery ML usage, and 300 TB per month of Storage API usage. So for data science and advanced analytics involving Python (Jupyter, Pandas, etc.) or Spark, there may be savings to be had by running those workloads in a Google Cloud project that is assigned a slot reservation.

Putting it all together

By the time your infrastructure has matured to have a situation like that in the third scenario, you may be mixing and matching multiple billing constructs in order to achieve your cost and efficiency goals:

BigQuery Reservations for cost predictability and to provide guaranteed capacity for workloads with SLAs;
BigQuery Flex Slots for cyclical workloads that require extra capacity, or for workloads that need to process a lot of data in a short time, and so would be less expensive to run using reserved slots for a short time;
On-demand for workloads where the volume of data to be processed is predictable. The per-byte-scanned billing model can be advantageous in that you pay precisely for what you use, with the amount of scanned data as a proxy for compute consumption.

Provided you can place your workloads in Google Cloud projects aligned to reservations, or to projects that are opted out of reservations, you can choose the resource that’s right for you on a workload-by-workload basis. Learn more about BigQuery pricing models.
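To make the on-demand versus flat-rate decision above more concrete, here is a small, self-contained break-even sketch (not from the original post). The prices are illustrative placeholders, not current list prices; substitute the rates for your region before relying on the numbers.

```python
# Rough break-even estimate between on-demand and flat-rate BigQuery pricing.
# Both prices below are assumptions for illustration only.
ON_DEMAND_PRICE_PER_TIB = 5.00             # assumed $/TiB scanned (on-demand)
FLAT_RATE_PER_500_SLOTS_MONTH = 10_000.00  # assumed $/month for a 500-slot reservation


def break_even_tib_per_month(slots: int = 500) -> float:
    """TiB you must scan per month before flat rate becomes cheaper than on-demand."""
    monthly_flat_cost = FLAT_RATE_PER_500_SLOTS_MONTH * (slots / 500)
    return monthly_flat_cost / ON_DEMAND_PRICE_PER_TIB


if __name__ == "__main__":
    for slots in (500, 1000, 2000):
        print(f"{slots} slots: flat rate wins above "
              f"~{break_even_tib_per_month(slots):,.0f} TiB scanned per month")
```

Paired with the INFORMATION_SCHEMA report shown earlier, this kind of arithmetic tells you quickly which of your workloads belong in a reservation and which should stay on demand.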
Source: Google Cloud Platform

Optimize BigQuery costs with Flex Slots

Editor’s note: This is one installment in a series about managing BigQuery costs. Check out the other posts on choosing between BigQuery pricing models and using Reservations effectively.

Google Cloud’s enterprise data warehouse BigQuery offers some flexible pricing options so you can get the most out of your resources. Our recently added Flex Slots can save you money by switching your billing to flat-rate pricing for defined time windows to add maximum efficiency. Flex Slots lets you take advantage of flat-rate pricing when it’s most advantageous, rather than only using on-demand pricing.

This is particularly useful for those of you querying large tables—those above 1 terabyte. Flex Slots lets you switch to flat-rate pricing to save money on these larger queries. We often hear, for example, that running data science or ELT jobs over large tables can benefit from using Flex Slots. And companies with teams of AI Notebook users running analytics jobs for several hours or more a day can benefit as well. In this blog post, you’ll see how you can incorporate Flex Slots programmatically into your BigQuery jobs to meet querying spikes or scale on demand to meet data science needs, without going over budget or using a lot of management overhead.

Users on Flat Rate commitments no longer pay for queries by bytes scanned and instead pay for reserved compute resources; using Flex Slots commitments, you can cancel anytime after 60 seconds. At the time of this writing, an organization can run an hour’s worth of queries in BigQuery’s U.S. multi-region using Flex Slots for the same price as a single 4TiB on-demand query.

Setting up for Flex Slots

The recommended best practice for BigQuery Reservations is to maintain a dedicated project for administering the reservations. In order to create reservations, the user account will need the bigquery.resourceAdmin role on the project and Reservations API slots quota.

Understanding the concepts

Flex Slot commitments are purchases charged in increments of 500 slot hours for $20, or ~$0.33/minute. You can increase your slot commitments if you need faster queries or more concurrency.
Reservations create a named allocation of slots, and are necessary to assign purchased slots to a project. Find details on reservations in this documentation.
Assignments assign reservations to Organizations, Folders, or Projects. All queries in a project will switch from on-demand billing to purchased slots after the assignment is made.

You can manage your Flex Slots commitments from the Reservations UI in the Google Cloud Console. In this post, though, we’ll show how you can use the Python client library to apply Flex Slots reservations to your jobs programmatically, so that you can schedule slots when you need them and reduce any unnecessary idle time. This means you can run jobs at any hour, without an admin needing to click a button, and automatically remove that slot commitment when it’s no longer needed (no admin needed). Check out the BigQuery Quickstart documentation for details on how to authenticate your client session.

Here’s a look at a simple script that purchases Flex Slots for the duration of an ELT job:
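The original post’s script is not reproduced here; the following is a minimal sketch of the same commit-reserve-assign-cleanup flow using the google-cloud-bigquery-reservation client library. The project, location, and reservation names are placeholders, and exact method and field names may differ slightly between library versions, so treat this as an outline rather than the post’s own code.

```python
from google.cloud import bigquery_reservation_v1 as reservation  # pip install google-cloud-bigquery-reservation

ADMIN_PROJECT = "my-reservation-admin-project"   # placeholder admin project
QUERY_PROJECT = "my-elt-project"                 # placeholder project that runs the ELT job
LOCATION = "US"

client = reservation.ReservationServiceClient()
parent = f"projects/{ADMIN_PROJECT}/locations/{LOCATION}"

# 1. Purchase a Flex Slots commitment (cancellable after 60 seconds).
commitment = client.create_capacity_commitment(
    parent=parent,
    capacity_commitment=reservation.CapacityCommitment(
        plan=reservation.CapacityCommitment.CommitmentPlan.FLEX,
        slot_count=500,
    ),
)

# 2. Create a reservation that carves the purchased slots into a named pool.
elt_reservation = client.create_reservation(
    parent=parent,
    reservation_id="elt",
    reservation=reservation.Reservation(slot_capacity=500, ignore_idle_slots=False),
)

# 3. Assign the query project to the reservation; its queries now use the
#    reserved slots instead of on-demand, per-byte billing.
assignment = client.create_assignment(
    parent=elt_reservation.name,
    assignment=reservation.Assignment(
        job_type=reservation.Assignment.JobType.QUERY,
        assignee=f"projects/{QUERY_PROJECT}",
    ),
)

# ... run the ELT job here, for example with google-cloud-bigquery ...

# 4. Tear everything down so you stop paying for the Flex Slots.
client.delete_assignment(name=assignment.name)
client.delete_reservation(name=elt_reservation.name)
client.delete_capacity_commitment(name=commitment.name)
```

Wrapping the ELT step in a try/finally block around step 4 is a sensible refinement, so the commitment is always released even if the job fails.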
Confirming query reservations

You can see your query statistics nicely formatted in the BigQuery query history tab within the BigQuery console. The Reservation name will be indicated with a property for queries that used the reserved slots, as shown here:

Interpreting the run times and costs

The charts compare the query times and costs of on-demand runs, soft-capped at 2,000 slots, with runs at increments of 500 slots up to 2,000 for a single 3.15 TB on-demand query. It’s important to remember that Flex Slot customers will also pay for idle time, and those costs can add up for larger reservations. Even padded with three minutes of idle time, Flex Slots cost 60% to 80% less than the cost of on-demand pricing for large queries. There’s a near-linear performance increase as slots are added, with 60% to 80% cost savings using Flex Slots.

Using Flex Slots and the Reservation APIs together lets you fine-tune your organization’s cost and performance profile with flexibility that is unprecedented among data warehouse solutions. For more details on how to get started with BigQuery or developing with the Reservations APIs, check out these resources:

Get an introduction to BigQuery Reservations
Learn more about BigQuery slots
Check out the Python Client for Cloud BigQuery Reservation docs
See the details on Flex Slots pricing
Source: Google Cloud Platform

Effectively using BigQuery Reservations

Editor’s note: This is one installment in a series about effectively managing BigQuery costs. Check out the other posts on choosing between BigQuery pricing models and how to properly size your slots.

BigQuery has several built-in features and capabilities to help you save on costs, manage spend, and get the most out of your data warehouse resources. In this blog, we’ll dive into Reservations, BigQuery’s platform for cost and workload management. In short, BigQuery Reservations enables you to:

Quickly purchase and deploy BigQuery slots
Assign slots to various parts of your organization
Switch your organization from bytes processed to a flat-rate pricing model

Customers on the flat-rate pricing model purchase compute capacity, measured in slots, and can run any number of queries using this capacity. The flat-rate pricing model is a great alternative to the bytes processed pricing model, as it gives you more cost predictability and control. Think of slots as compute nodes—the more slots you have, the more horsepower you have for your queries.

Getting started with Reservations

Getting going with BigQuery Reservations is very easy and low-risk. We introduced Flex Slots, which are charged per second and can be canceled after only 60 seconds, so you can run an experiment for the price of a cup of coffee! Here’s how to get started:

1. Simply go into the BigQuery UI and click on “Reservations.” From there choose “Buy Slots.”
2. In the purchase flow, choose “Flex Slots” as your commitment type and “500” as your size. If you’ve never bought slots before, you’ll be prompted to default your organization to flat-rate. Opt in if you want all your projects to start using your purchased slots automatically.
3. Confirm your purchase. In a few seconds, your capacity should be confirmed and deployed.
4. Go into the “Assignments” tab and assign any of your projects, or even your entire organization, to the “default” reservation. This tells BigQuery that those projects are on the slots pricing model, rather than bytes processed. Voila!

Once you’re done with your test, simply delete all assignments and commitments. A 15-minute test will cost you just $5.

Using BigQuery Reservations

Once you set up Reservations, BigQuery automatically makes sure that your usage is efficient. Any provisioned slot that’s idle across your organization is available elsewhere in your organization to be used. That’s right, any idle or slack capacity is always available for you to use. This means that no matter how big or small your organization is, you get economy of scale benefits, without the penalty of creating wasteful compute silos.

To increase capacity, all you need to do is buy more slots. Once your purchase is confirmed and slots are deployed, BigQuery automatically starts using this additional capacity for all your queries in flight—there’s no pausing work or waiting for new queries to start. It all happens quickly and seamlessly. Likewise, to decrease capacity, simply cancel an existing slot commitment. If you were using that capacity, BigQuery will simply pause those bits of work—your queries won’t fail, and at worst they’ll just slow down.

Head over to documentation on slots to learn more about what BigQuery slots are and how they are distributed to do work.

Using Reservations for workload management

BigQuery Reservations is built for simplicity, first and foremost.
That said, it’s a highly configurable platform that helps complex organizations manage their entire BigQuery operations in one place.

It’s typical for an organization administrator to want to isolate and compartmentalize their departments or workloads. For example, you may have a “business” department, an “IT” department, and a “marketing” department, and you’d like each department to have their own set of BigQuery resources, like this:

In the above example, you could set up your Reservations as follows:

You purchase a 1000-slot commitment. This is your organization’s total processing capacity.
You earmark 500 slots for “business,” 300 slots for “IT,” and 200 slots for “marketing” by creating a reservation for each.
You assign Google Cloud folder “business_folder” to “business” reservation, and any other Google Cloud project that the business department is using.
You assign Google Cloud folder “IT” to “IT” reservation, and project “it_project.”
You assign the Google Cloud project “dashboard_proj,” used by the marketing team for Looker dashboards, to the “marketing” reservation.

We mentioned earlier that idle capacity is seamlessly shared across your organization. In the above example, if at this moment “business” reservation has 20 idle slots, they are automatically available to “IT” and “marketing.” As soon as “business” reservation wants them back, they’re pre-empted from “IT” and “marketing.” Pre-emption is graceful—queries slow down and accelerate seamlessly, rather than error out.

Reservations also enables you to centrally manage your entire organization, mitigating the risk of “shadow IT” and unbounded spend. Only folks with bigquery.resourceAdmin, bigquery.admin, or owner roles set at the org level can dictate which projects and folders are assigned to which reservations.

Cost attribution back to each department may be important to you. Simply query INFORMATION_SCHEMA jobs tables for the reservation_id field and aggregate over slots consumed to report on what portion of the total bill is attributable to each team. To make this even easier, in the coming weeks you’ll see project-level cost attribution in the Google Cloud billing console.

When to use Reservations

Let’s unpack some examples of how you could set up Reservations for specific use cases.

If you have a dev, test, or QA workload, you may only want it to have access to a small amount of resources, and you may not want it to leverage any idle capacity. In this instance, you could create a reservation “dev” with 50 slots and set ignore_idle_slots to true. This way this reservation will not use any idle capacity in the system beyond the 50 slots it requires.
If you have a batch processing workload, and you’d like it to only run when there’s slack in the overall system, you can create a reservation “batch” with 0 slots. Any query in this reservation will sit queued up waiting for slack capacity, and will only make forward progress if there’s slack capacity.
Suppose you have a reservation that is used to generate Looker dashboards, and you know that every Monday between 9 and 11 in the morning this dashboard experiences higher than normal demand. You may set up a scheduled job (via cron or any other scheduling tool) to increase the size of this reservation at 9am, and reduce it back at 11am.
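Here is a minimal sketch (not from the original post) of what such a scheduled resize could look like with the google-cloud-bigquery-reservation Python client. The project, location, and reservation names are placeholders, and method and field names may vary by library version.

```python
import sys

from google.cloud import bigquery_reservation_v1 as reservation  # pip install google-cloud-bigquery-reservation
from google.protobuf import field_mask_pb2

ADMIN_PROJECT = "my-reservation-admin-project"  # placeholder
LOCATION = "US"
RESERVATION_ID = "dashboards"                   # placeholder reservation name


def resize_reservation(slot_capacity: int) -> None:
    """Set the reservation's slot capacity, e.g. from a 9am/11am cron job."""
    client = reservation.ReservationServiceClient()
    name = (f"projects/{ADMIN_PROJECT}/locations/{LOCATION}"
            f"/reservations/{RESERVATION_ID}")
    client.update_reservation(
        reservation=reservation.Reservation(name=name, slot_capacity=slot_capacity),
        update_mask=field_mask_pb2.FieldMask(paths=["slot_capacity"]),
    )


if __name__ == "__main__":
    # Example: cron calls this with 1000 at 9am and 500 at 11am on Mondays.
    resize_reservation(int(sys.argv[1]))
```

Note that a reservation can only grow within your purchased commitments, so a job like this is often paired with a Flex Slots purchase for the same window.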
Using Google Cloud folders for advanced configuration

Google Cloud supports organizations and folders, a powerful way to map your organization to Google Cloud Identity and Access Management (Cloud IAM). Child folders acquire properties of their parent folders, unless explicitly specified otherwise, and users with access to parent folders automatically acquire access to all child folders and their resources. BigQuery Reservations can be used in conjunction with folders to manage complex organizations.

Consider the above scenario:

Folder C is set up for a specific department in the organization.
Org admin has IAM credentials to the entire organization.
Folder admin has IAM credentials to Folder C (and hence Folder E as well).
Folder admin wants to control her department’s BigQuery costs and resources autonomously.
Org admin is the central IT department that oversees security and budget conformism.
Folder D represents another department in the organization, managed by org admin.

To configure BigQuery for this organization, do the following:

Folder admin sets up BigQuery Reservations in Folder C.
Folder admin assigns Folder C and any projects she owns to her reservations.
Org admin sets up BigQuery Reservations in a project in Folder D, and in a project tied to the organization.
Org admin assigns Folder D and any projects he owns to his reservations in Folder D.
Org admin assigns the entire organization to the reservations at org level.

With the above setup, folder admin is able to self-manage BigQuery for Folder C and Folder E, and org admin is able to manage BigQuery for every folder in their organization, including Folder C and Folder D. The only caveat is that in this configuration, idle slots are not shared between reservations in Folder C, Folder D, and the organization node.

With BigQuery Reservations, managing your BigQuery costs and your workloads is easy. And BigQuery Reservations offers the power and flexibility to meet the goals of the most complex organizations out there while maximizing efficiency and minimizing waste. To learn more about BigQuery Reservations, head over to the documentation.
Source: Google Cloud Platform

9 ways to back up your SAP systems in Google Cloud

At the heart of every modern business is data. Use it right, and you open the door to emerging technologies that’ll help you compete. But as you continue to innovate and invest in your technology, the data that’s created and produced becomes even more critical to protect from loss and outages. For SAP customers using new systems like S/4HANA, including backup and storage design as part of your overall business continuity planning rings particularly true. Reasons for data loss and outages can be physical or logical. In this blog post, we’ll focus on protecting against physical outages, like those caused by data center failures or environmental disasters, so your business is ready for anything.

Technology 101: How backups work in the SAP ecosystem

Each of your SAP deployments has unique Recovery Point Objective (RPO) and Recovery Time Objective (RTO) requirements, which influence your entire backup strategy and toolset. You can think of RPO as your backup operations: The more capabilities you have here, the further back your recovery point goes. RTO refers to the time it takes for your systems to recover and get back online. Most of the time, a trade-off is made between the overall cost of backup operations and the cost of time due to lost data.

A typical SAP workload consists of virtual machines (VMs) running databases and application servers on disks. There is a dedicated boot disk for the operating system (OS), and most of the remaining disks are used for applications. Because of this, we recommend that all of our SAP customers allocate a separate disk, like Persistent Disk, for all files and data that aren’t part of your OS. This makes systems easily replaceable and moveable and simplifies data capture and storage processes.

Backup strategies for SAP customers leveraging the cloud

The core principle for backup solutions is to segregate backup data copies from the primary storage location. But, in an on-premises setting, data has only one place to go: the in-house storage unit. The good news is that, as more SAP workloads have moved to the cloud on HANA, you now have multiple cloud-based backup solutions that are flexible, scalable, and self-manageable.

Persistent disk snapshots

Persistent disk snapshots are fast and cost-effective. You can specify the storage location for snapshots as regional or multi-regional. In an SAP HANA database running on Google Cloud, you can store backup folders on separate persistent disks to capture and replicate the database server independently.

Machine images (Beta)

A Google Compute Engine resource, machine images store all the configuration, metadata, permissions, and data needed from disks to create a VM instance. Machine images are ideal resources for disk backups as well as instance cloning and replication.

Shared file storage

SAP systems can use shared file storage (for example, Google Cloud Filestore or Elastifile) to fulfill any high availability and disaster recovery requirements. Shared file systems can be combined with appropriately chosen Cloud Storage buckets (multi-region, dual region) to ensure availability of data backups across zones and regions.

HANA Backint agent for Cloud Storage (Beta)

For SAP HANA database backup, Google Cloud offers customers a free, SAP-certified, and application-aware Cloud Storage Backint agent for SAP which would eliminate the need for backing up with persistent disks.
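As an illustration of the persistent disk snapshot approach described above, here is a minimal sketch (not from the original post) that snapshots the dedicated backup disk through the Compute Engine API using the google-api-python-client. The project, zone, and disk names are placeholders; in practice you would drive this from Cloud Scheduler, cron, or a snapshot schedule policy.

```python
import time

import googleapiclient.discovery  # pip install google-api-python-client

PROJECT = "my-sap-project"        # placeholder
ZONE = "europe-west4-a"           # placeholder
BACKUP_DISK = "hana-backup-disk"  # placeholder: the disk holding the HANA backup folders

compute = googleapiclient.discovery.build("compute", "v1")

# Create a snapshot of the persistent disk that holds the HANA backup files.
snapshot_name = f"{BACKUP_DISK}-{time.strftime('%Y%m%d-%H%M%S')}"
operation = compute.disks().createSnapshot(
    project=PROJECT,
    zone=ZONE,
    disk=BACKUP_DISK,
    body={
        "name": snapshot_name,
        # Store the snapshot in a multi-region location for disaster recovery.
        "storageLocations": ["eu"],
    },
).execute()

print(f"Snapshot {snapshot_name} requested: operation {operation['name']}")
```

Keeping the backup volume separate from the OS and data disks, as recommended above, is what makes a targeted snapshot like this possible.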
Third-party network storage

Third-party network file system (NFS) solutions offer a backup of all relevant file system volumes of an SAP instance for both the application and database layers with scheduled snapshots, which are stored in Cloud Storage. For SAP HANA, this solution is only suitable for hosting backup and share volumes.

Third-party backup agents and managed services

These solutions offer advanced technical features that enable rapid backup and recovery times, because third-party providers do not rely on database-level incremental backups. For enterprise-scale SAP landscapes, this reduces storage sizes. A word of advice, though: Stick to SAP HANA certified backup solutions.

SAP HANA data snapshot

SAP HANA databases can also create data snapshots independently, using native SQL. This doesn’t require certification, but it is a highly complex technique since some systems need to be deactivated before snapshots can be taken.

SAP HANA stop/start snapshot of secondary HANA instance

This solution is suitable for non-production cases where cost considerations supersede RPO requirements. Creating snapshots involves using a smaller standby instance in an SAP HANA system replication setup. You can also take this instance offline and make a complete VM snapshot for point-in-time recoverability.

Snapshot and disk deallocation

If cost is a high priority, Google Cloud offers services that enable you to allocate a persistent disk in time for a snapshot and deallocate it once the backup is complete. A cloud-based infrastructure will allow you to create disks for backup on an as-needed and pay-as-you-use basis.

While we wish we could say data loss and disasters will never happen, the reality is that the next outage or triggering event is just around the corner. For businesses rapidly modernizing and transforming in a digital landscape, like SAP customers migrating to HANA, protecting your data will determine whether you are able to compete in an unpredictable, complex, and dynamic business environment. From persistent disk snapshots to machine images, Google and SAP’s cloud solutions work seamlessly together to provide an ecosystem of customizable solutions.

Explore your HA options

We’ve only scratched the surface when it comes to understanding the many ways Google Cloud supports and extends backup and recovery for your SAP instances. For an even deeper dive, read our white paper, “SAP on Google Cloud: Backup strategies and solutions.”
Source: Google Cloud Platform

Introducing Java 11 on Google Cloud Functions

The Java programming language recently turned 25 years old, and it’s still one of the top-used languages powering today’s enterprise application customers. On Google Cloud, you can already run serverless Java microservices in App Engine and Cloud Run. Today we’re bringing Java 11 to Google Cloud Functions, an event-driven serverless compute platform that lets you run locally or in the cloud without having to provision servers. That means you can now write Cloud Functions using your favorite JVM languages (Java, Kotlin, Groovy, Scala, etc.) with our Functions Framework for Java, and also with Spring Cloud Functions and Micronaut!

With Cloud Functions for Java 11, now in beta, you can use Java to build business-critical applications and integration layers, and deploy the function in a fully managed environment, complete with access to resources in a private VPC network. Java functions will scale automatically based on your load. You can write HTTP functions to respond to HTTP events, and background functions to process events sourced from various cloud and GCP services, such as Pub/Sub, Cloud Storage, Firestore, and more.

Functions are a great fit for serverless application backends for integrating with third-party services and APIs, or for mobile or IoT backends. You can also use functions for real-time data processing systems, like processing files as they are uploaded to Cloud Storage, or to handle real-time streams of events from Pub/Sub. Last but not least, functions can serve intelligent applications like virtual assistants and chat bots, or video, image and sentiment analysis.

Cloud Functions for Java 11 example

You can develop functions using the Functions Framework for Java, an open source functions-as-a-service framework for writing portable Java functions. You can develop and run your functions locally, deploy them to Cloud Functions, or to another Java environment.

An HTTP function simply implements the HttpFunction interface:

Add the Functions Framework API dependency to the Maven pom.xml:

Then add the Function Maven plugin so you can run the function locally:

Run the function locally:

You can also use your IDE to launch this Maven target in Debugger mode to debug the function locally.

To deploy the function, you can use the gcloud command line:

Alternatively, you can also deploy with the Function Maven plugin:

You can find the full example on GitHub. In addition to running this function in the fully managed Cloud Functions environment, you can also bring the Functions Framework runtime with you to other environments, such as Cloud Run, Google Kubernetes Engine, or a virtual machine.

Third-party framework support

In addition to our Functions Framework for Java, both the Micronaut framework and the Spring Cloud Function project now have out-of-the-box support for Google Cloud Functions. You can create both an HTTP function and background function using the respective framework’s programming model, including capabilities like dependency injection.

Micronaut

The Micronaut team implemented dedicated support for the Cloud Functions Java 11 runtime.
Instead of implementing the Functions Framework’s HttpFunction interface directly, you can use Micronaut’s programming model, such that a Helloworld HTTP function can simply be a Micronaut controller:

You can find a full example of Micronaut with Cloud Functions and its documentation on GitHub.

Spring Cloud Functions

The Google Cloud Java Frameworks team worked with the Spring team to bring the Spring Cloud GCP project to help Spring Boot users easily leverage Google Cloud services. More recently, the team worked with the Spring Cloud Function team to bring you the Spring Cloud Function GCP Adapter. A function can just be a vanilla Java function, so you can run a Spring Cloud Function application on Cloud Functions without having to modify your code to run on Google Cloud.

You can find a full example of a Spring Cloud Function with Cloud Functions on GitHub.

JVM Languages

In addition to using the latest Java 11 language features with Cloud Functions, you can also use your favorite JVM languages, such as Kotlin, Groovy, Scala, and more. For example, here’s a function written with Kotlin:

Here’s the same function with Groovy:

You can take a deeper dive into a Groovy example, and otherwise, find all the examples on GitHub (Kotlin, Groovy, Scala).

Try Cloud Functions for Java 11 today

Cloud Functions for Java 11 is now in beta, so you can try it today with your favorite JVM language and frameworks. Read the Quick Start guide, learn how to write your first functions, and try it out with a Google Cloud Platform free trial. If you want to dive a little bit deeper into the technical aspects, you can also read this article on the Google Developers blog. If you’re interested in the open-source Functions Framework for Java, please don’t hesitate to have a look at the project and potentially even contribute to it. We’re looking forward to seeing all the Java functions you write!

Special thanks to Googlers Éamonn McManus, Magda Zakrzewska‎, Sławek Walkowski, Ludovic Champenois, Katie McCormick, Grant Timmerman, Ace Nassri, Averi Kitsch, Les Vogel, Kurtis Van Gent, Ronald Laeremans, Mike Eltsufin, Dmitry Solomakha, Daniel Zou, Jason Polites, Stewart Reichling, and Vinod Ramachandran. We also want to thank the Micronaut and Spring Cloud Function teams for working on the Cloud Functions support!
Source: Google Cloud Platform

Azure Lighthouse—managing cloud, hybrid, and edge environments at-scale through a single control plane

Thousands of partners and enterprises use Azure Lighthouse to manage services across Azure tenants, representing tens of thousands of subscriptions and more than one million Azure resources from Azure Resource Manager—a unified control plane. With Azure Lighthouse, service providers, as well as self-managing enterprises, can achieve higher operational efficiency using Azure’s comprehensive and robust management tools. You can now view and manage resources, with higher automation, scale, and enhanced governance, across hybrid estates and on-premises environments.

It is common for Managed Service Providers (MSPs) to service customer resources across hybrid estates and on-premises environments. Many MSP partners rely on Azure Lighthouse, and now Azure Arc, to achieve a unified management solution in these advanced scenarios. MSPs can extend their service offerings to manage their customers’ on-premises environments through Azure Resource Manager, managing resources at scale and governing compliance using Azure Policy.

ClearDATA—delivering robust governance across hybrid environments for healthcare customers

Using Azure Lighthouse, Azure Policy, and Azure Arc, ClearDATA—an Azure Expert MSP—provides compliance insights to enterprise customers in regulated industries, such as healthcare. Azure Arc enables ClearDATA to easily perform virtual machine inventories in hybrid environments, while Azure Policy used with Azure Lighthouse helps them to achieve consistency, security, and compliance across all of their customers in all of the clouds and private datacenters or branch offices the customers use.

ClearDATA provides compliance state insights across hybrid environments to enterprise customers.

“ClearDATA’s HIPAA compliant and HITRUST 9.1 certified solutions on Azure help enterprise organizations easily transition and accelerate their move to the cloud with greater confidence. A rich library of compliance reference architecture for Azure services, coupled with our unique Automated Safeguards and Remediation technology, unlocks the true potential of Azure Lighthouse and Azure cloud. Our visual and easy-to-use compliance dashboard and flexible reports provide transparency and visibility needed to demonstrate compliance.”—Suhas Kelkar, Chief Product Officer, ClearDATA.

Yorktel—monitoring customer edge devices

Yorktel manages health states of Microsoft collaboration devices (Surface Hub 1 and 2 and Microsoft Teams Rooms), including displays, microphones, cameras, speakers, and Microsoft Teams’ real-time features, on behalf of its end customers. By pivoting to Azure Monitor as their primary monitoring tool, and Azure Lighthouse as their secure access mechanism, Yorktel is shaking up edge device management. Consolidated views across all its customers provide Yorktel with comprehensive oversight, enabling timely alerts that trigger response workflows for speedy problem resolution. Azure Lighthouse has created smoother user experiences and higher customer satisfaction.

Yorktel’s Azure-based monitoring workflow for edge devices.

“Yorktel’s Azure Lighthouse enabled monitoring and management solution couldn’t have come at a better time. As the post-COVID-19 world prepares to return to work, this proactive problem and resolution technology presents the potential for dramatic impact, both for managed services providers and their customers. The efficiencies generated by faster, large-scale problem resolution will allow companies to focus on the strategic and transformational initiatives that will help them grow and acclimate to the post-COVID-19 world, rather than the tactical, day-to-day ‘keeping the lights on’.” —Jeremy Short, SVP of Microsoft Solutions, Yorktel

Vandis—delivering managed network services

Azure Lighthouse has also enabled multiple service providers, such as Azure Networking MSPs, to build and operate optimized hybrid connectivity from customer premises to customer subscriptions in Azure. Vandis, for example, uses Azure Lighthouse to plan, build, and operate a hybrid network for customers based on Azure Virtual WAN and Azure Express Route.

“Azure Lighthouse has enabled us to expand our Network-as-a-Service Platform to our customers as well as drive work-from-home solutions such as Windows Virtual Desktop on Azure.” —Ryan Young, CTO, Vandis

Azure Lighthouse—continuing to innovate for management-at-scale scenarios in Azure

Congratulations to all our partners who continue to add value to our joint customers with enhanced services for managing Azure and hybrid estates. Our team is as motivated as ever to innovate for our partner ecosystem, and we’ve been constantly adding new Azure Lighthouse capabilities as a result.

Here are a few highlights:

Service providers can now trigger notification and onboarding workflows for their teams, in their own Azure control plane, through activity logs that monitor customers’ resource delegation actions.
Customers can now upgrade their managed services offers from within their own Azure portal experience, in the service providers view, rather than visiting other portals or marketplaces.
Automation tools of choice, including the command-line interface (CLI), APIs (the subscription function), and PowerShell, can now display the managed and managing tenant context of an Azure subscription.
Service providers can opt out of managing customer-delegated Azure scopes on their own, to accelerate compliance and offboarding needs.
Azure Backup Explorer and Backup reports now offer cross-customer consolidated views for service providers, driving operator efficiency.
Azure Lighthouse is now a FEDRAMP High certified service available in Microsoft Azure Government.
Partners can now draft and publish managed services offers to the Azure Marketplace directly from the Partner Center, streamlining offer and lead management into a single portal.
Azure Lighthouse Help and Support experiences have been enhanced, including recommended solutions for common issues, empowering managing tenants with more insights to solve issues themselves.

And that’s a wrap for Build 2020 with Azure Lighthouse. I cannot wait to share more with you at Inspire 2020 in July. In the meantime, check out our new Azure Lighthouse learning content.
Source: Azure

Virtual Build spotlights IoT updates and rollouts

As people around the globe adapt to new ways of working, the Microsoft Build 2020 conference took a new approach as well. Rather than gathering the developer community in person as planned, Microsoft shifted gears and put together 48 hours of streaming content for a virtual event.

Despite the new format, Microsoft Build’s goals remained the same: Connect our developers with the best of Microsoft so they can bring their ideas to life. For IoT, that included a lot of new innovations and training for developers, all geared toward simplifying IoT and empowering developers to build new breakthrough solutions.

On the training side, we’re especially excited to launch a new IoT certification to help build skills in the community and unlock the creativity of developers. We’ve also added some industry-leading capabilities with an all-new Azure Digital Twins release that can model just about any scenario.

Below is a roundup of the key news. I encourage you to click down into the individual announcements for more detail, and if you weren’t able to virtually attend the Microsoft Build conference, access the sessions online.

New IoT certification for developers

One of the biggest challenges for developers building IoT applications is acquiring the skills to do so. Microsoft offers multiple training options that empower developers to increase technical skills and prepare for Microsoft Certifications.

At Microsoft Build 2020, we announced the general availability of a recent addition to the Microsoft Certification portfolio: The Azure IoT Developer Specialty certification. Earning this certification can help developers become recognized as experts and advance their careers by validating technical knowledge and ability.

Developers can start the IoT learning and certification journey at Microsoft Learn, with free online, self-paced courses covering all the essentials like provisioning and managing devices, processing data, deploying cloud workloads to the edge, securing the solution, and more. Check out the Microsoft Learning Blog to explore all the resources available to skill up and get certified.

Azure Digital Twins: New preview features

A “digital twin” is a digital replica of real-world things—assets, environments, business systems—designed to understand, control, simulate, analyze, and improve how those things work in the real world.

At Microsoft Build 2020, we announced the next iteration of Azure Digital Twins, making it even easier for developers to build these dynamic virtual replicas. New capabilities include rich and flexible modeling that supports full graph topologies, a live execution environment, easy integration with other Azure services, and broad query APIs.

To drive openness in building IoT applications, the new Azure Digital Twins also uses an open modeling language called the Digital Twins Definition Language, based on the JSON-LD standard. This will provide great flexibility, ease of use, and easy integration into other Azure platform offerings such as IoT Hub and Time Series Insights.

It also allows for expanded integration outside Azure, so partners can use Digital Twins as part of their existing modeling frameworks and third-party systems. The new features are expected to be out in the coming months.

We also highlighted two partners using new capabilities in exciting ways. Pennsylvania-based ANSYS is building physics-based simulations that can aid in designing large physical assets. Another partner, Bentley Systems, is creating a digital representation of major infrastructure including road and rail networks, public works and utilities, industrial plants, and commercial and institutional facilities to help customers better design, build, and operate.

Finally, as part of our commitment to openness and interoperability, we announced that Microsoft has joined Dell, Ansys, and LendLease in founding the Digital Twin Consortium, where we will work to build an open community that promotes best practices and standard digital twin models for all businesses and industry domains.

IoT Plug and Play: New preview features

IoT Plug and Play is an open approach that dramatically accelerates IoT by making it much easier to develop software on devices, connect them quickly to IoT solutions, and update each independently. Since our initial preview last year, we have been busy responding to customer feedback, and at Build we announced a set of new preview features that will be available soon:

Alignment with Digital Twins: IoT Plug and Play and Azure Digital Twins now share the same modeling language: the Digital Twins Definition Language (DTDL). This makes it simple to connect an IoT Plug and Play device to Azure Digital Twins and have the device appear instantly as a Digital Twin. 
Support for existing devices: We have made it easy to update existing devices to be IoT Plug and Play compatible. Developers can simply author a DTDL document that describes the interaction model of their device, make targeted code changes, and then send the model when the device connects.

We will also be enabling our device providers to start their final certifications ahead of our IoT Plug and Play general availability.

Azure Time Series Insights: New features general availability

Traditionally comparing historical trends with time series data has meant spending days normalizing the data before analyzing it. With Azure Time Series Insights, developers can process, analyze, and get data insights in just minutes.

This year at Microsoft Build, we announced that new features for Azure Time Series Insights will be generally available in the coming months.

Several months ago we announced a preview of Azure Time Series Insights features, including an enhanced analytics user experience through Time Series explorer, seamless integration with advanced machine learning platforms and analytics tools, a native connector to Power BI, semantic model support for metadata, and more.

This version builds on our commitment to deliver a truly flexible analytics platform with the introduction of Azure Data Lake Storage Gen2 support. By combining customer-owned Azure Data Lake Storage with our native support for the open source, highly-efficient Apache Parquet, customers can gain insights over decades of IoT data. They can also integrate with other analytics tools of their choice to unlock significant business value and operational intelligence.

When our customers use Azure Time Series Insights together with Azure Digital Twins, they gain highly contextualized representations of their connected environments to better understand how assets, customers, and processes interact.

We also announced improvements in scale, security, and user experience that will be available in the next few months. Learn more about Azure Time Series Insights and start getting insights from your IoT data today.

Azure Maps: Creator feature in preview

Azure Maps is an enterprise location platform that enables developers to add spatial analytics and mobility to their IoT applications.

At Microsoft Build, we announced Azure Maps Creator in preview, which offers a fundamental shift in building and managing private map data and moves geographic information systems (GIS) data management into the Azure cloud.

With Azure Maps Creator, developers can upload private map information such as indoor floorplans, spaces, and physical assets into a customer-controlled, highly-secure, and fully-compliant geospatial storage system within Azure Maps.

Azure Maps Creator also helps Azure Digital Twins customers by handling private map data associated with Digital Twins for private spaces like building interiors, campuses, factories, and more. The combination of Azure Maps Creator and Azure Digital Twins helps customers manage, monitor, and track IoT assets within their environments through the Azure Maps interface. Learn more about Azure Maps Creator.

Azure IoT Central: First-class support for Azure Sphere and Azure IoT Edge

IoT Central is a fully managed software as a service (SaaS) IoT app platform that allows developers to easily create IoT applications without managing the underlying infrastructure. Developers can either use existing IoT Central industry templates or create customized solutions of their own design. Of particular note during our current public health crisis is IoT Central’s continuous patient monitoring health template designed to accelerate the assembly and deployment of healthcare wearables and patient monitoring solutions.

At Microsoft Build, IoT Central announced several new features, including first-class support for both Azure Sphere and Azure IoT Edge.

Integrating IoT Edge with IoT Central allows developers to deploy cloud workloads such as artificial intelligence and machine learning on edge devices. It dramatically increases the possibilities for IoT applications by allowing developers to deploy Edge software modules, find insights from them, and take actions—all from within IoT Central.

Pairing IoT Central with Azure Sphere’s integrated security solution provides the foundation needed to build, monitor, and safely manage IoT devices and products. It allows application builders to ensure device-to-cloud security through simplified security management from a single pane of glass. Developers can also model Azure Sphere devices in IoT Central using device templates integrated with Azure Sphere cloud services to facilitate secure error and device status reporting.

For more information on how IoT Central and Azure Sphere can help in the design and management of a robust IoT strategy, read the blog to learn more.

Follow the latest IoT Central innovations by subscribing to our monthly service updates.

Azure IoT Hub and Azure IoT Edge: New breakthrough capabilities for enterprise-grade IoT

At Microsoft Build, we announced another industry first: Azure IoT Hub now supports Azure Private Link for device connectivity as well as Managed Identity for securely connecting to locked-down Azure resources. As a result, customers can now bring IoT Hub into their Azure Virtual Network (VNET) and secure their IoT solution by eliminating exposure to the public internet. To learn more, see the full blog.

We also announced new industry-leading features that elevate Azure IoT Edge to the most sophisticated, production-grade edge platform in the industry:

IoT Edge added X.509 certificate attestation for IoT Hub Device Provisioning Service (DPS). This takes advantage of X.509 certificate chains to automate device provisioning, allowing for greater scale.
Additional features will make supportability and debugging quick and easy. A new feature called Support Bundle reduces the work required to debug issues across IoT Edge components. This feature allows collection of module, IoT Edge security manager, and container engine logs, along with iotedge check output and other useful debug information, in a single compressed file with a single command.
IoT Edge, together with IoT Hub Automatic Device Management, allows layered deployments that enable reuse of the same module in different combinations, reducing the number of unique deployments that need to be created.
Azure IoT Edge also works on Kubernetes, and we recently added new features for this support. These include an integrated, production-grade security architecture, a built-in lightweight proxy to deploy IoT Edge modules on Kubernetes with no code changes, integration of IoT Edge features like automatic provisioning using IoT Hub Device Provisioning Service, and application model extensions that allow the use of select Kubernetes primitives in an edge deployment manifest.

And we are not done—based on our customers’ needs, we are working on the following new features that will be released soon as part of IoT Edge release 1.0.10 in the coming months:

Priority messages and Time-to-Live (TTL) support, which will allow greater control over network usage in constrained and expensive networking environments by letting our customers choose which data they want to receive first from an IoT Edge device.
IoT Edge runtime will be enhanced to emit rich operational metrics in an industry-standard Prometheus format, enabling powerful monitoring and alerting features both locally and remotely.

Azure RTOS

Getting intelligent, reliable hardware products to market can be time-consuming and complex. Azure RTOS is an embedded IoT development suite that includes a lightweight real-time operating system for microcontrollers (MCUs) and microprocessors (MPUs) to streamline the process of building high-performing devices.

At Microsoft Build we announced the general availability of Azure RTOS, the fastest, smallest, industry-grade RTOS on the planet. We also announced that Microsoft now supports Azure RTOS on development kits from ST, Renesas, NXP, and Microchip. This turnkey integration helps simplify many steps in the development cycle.

Full source code for all Azure RTOS components is now available on GitHub for developers to freely test and explore. Azure RTOS includes a preview integration of an Azure Security Center module. Later this year we will offer an add-on industrial certification package to help developers get to market even faster. For more details, read the full announcement.

Azure Sphere

Azure Sphere is a device security solution purpose-built with Azure Sphere-certified hardware, a highly secured OS, and a cloud security service, with more than a decade of ongoing, on-chip security improvements.

Since we announced its general availability in February 2020, Microsoft has relied on Azure Sphere in our own datacenters to securely connect the critical infrastructure that delivers cloud services at scale. 
 
At Microsoft Build, we demonstrated Azure Sphere and Azure RTOS’s collective capability to address critical needs across the full spectrum of MCU and embedded-class IoT devices, enabling developers to build highly secure devices with real-time processing capabilities.

Windows for IoT: A broad range of updates, including something for every developer

At Microsoft Build, we also laid out the road map for the continued integration of IoT capabilities into Windows.

Customers love the security and manageability of Windows for IoT, and we are making it even easier to integrate with Azure and to access Linux modules by enabling the Linux version of Azure IoT Edge on Windows 10 IoT Enterprise. We are also creating new market opportunities for device builders by shrinking the footprint of Windows 10 IoT Enterprise, enabling NXP’s i.MX8 silicon, and adding new features for appliance scenarios and business models.

Our partners continue to build innovative solutions with Windows IoT. Democracy Live and Dover Fueling Solutions are examples of partners enabling secure, accessible, and empowered solutions with Windows 10 IoT Enterprise. It is also exciting to see Clearpath Robotics adding support for Robot Operating System (ROS) on Windows, and HIWIN enabling speech and vision cognition capabilities for robots running ROS on Windows.

For more detail on all the IoT updates happening around our upcoming releases of Windows IoT, check out the announcement blog.

Get more from Microsoft IoT

All of us at Microsoft IoT want to thank the developers who participated in our first virtual Microsoft Build. Shifting gears to put on this event in an accessible, inclusive way involved groups across Microsoft, and we hope the content helps the community stay connected to the platform and advance their own offerings.

Watch the virtual sessions and check out the detailed announcement blog posts linked above. We’ll be adding more in the coming months, so stay tuned—and stay safe.
Source: Azure