State of the Word 2020

State of the Word is an annual keynote address delivered by WordPress project co-founder, Matt Mullenweg. This year’s keynote will be streamed on the WordPress.org blog, Facebook, YouTube, and Twitter on Thursday, Dec 17th, at 1600 UTC. You can view a replay of the event at any time after it airs on any of these platforms. 

Sign up here to receive an email reminder, so you don’t miss the State of the Word broadcast!

Join the email list

We will only use this list to share State of the Word updates. Your personal information will not be used for anything else.


New to State of the Word?

If this is your first time hearing of this talk and you want to learn more, you’re in luck! Check out previous recordings below.

State of the Word 2019 – WordCamp US, St. Louis
State of the Word 2018 – WordCamp US, Nashville
All recordings
Source: RedHat Stack

New from WordPress.com Courses: Podcasting for Beginners

Would you like to learn how to create your own podcast or improve your existing podcast? WordPress.com Courses is excited to offer our new on-demand course, Podcasting for Beginners. We’ll help you get started, learn how to publish, and even use your podcast to make a living.

Our courses are flexible. You can join and learn at your own pace. But that’s just the start. Podcasting for Beginners is more than just a course: it’s a community that gives you access to weekly Office Hours hosted by WordPress experts. A place where you can ask questions, share your progress, and pick up a few tips along the way.

Lessons include step-by-step videos covering:

The Foundations (Curating your content and an editorial calendar.)
Interviews (Recording, editing, and outreach.)
Configuring Your Site (Integrating your podcast into your site and distributing it.)
Growing Your Community (Engaging with listeners.)
Making Money (Monetization basics and preparing for the future.)

Let us take you from “What is podcasting?” to launching a podcast of your own.

Cost: A $99 annual subscription gives you unlimited access to course content, our online community, and virtual sessions.

Join now: our first 100 customers enjoy 50% off the subscription fee with the code PODCAST50.

Register here!

Source: RedHat Stack

Better together: Expanding the Confidential Computing ecosystem

Core to our goal of delivering security innovation is the ability to offer powerful features as part of our cloud infrastructure that are easy for customers to implement and use. Confidential computing can provide a flexible, isolated, hardware-based trusted execution environment, allowing adopters to protect their data and sensitive code against malicious access and memory snooping while data is in use. Today, we are happy to announce that we have completed the rollout of Confidential VMs to general availability in nine regions. Our partners have played a huge part in this journey. They have been critical in establishing an ecosystem that aims to make Confidential Computing ubiquitous across mobile, edge, and cloud. We spoke to Raghu Nambiar from AMD, Mark Shuttleworth from Canonical, Burzin Patel from HashiCorp, Mike Bursell from Red Hat, Dr. Thomas Di Giacomo from SUSE, and Solomon Cates from Thales. Here are the excerpts.

Raghu Nambiar, Corporate Vice President, Data Center Ecosystems, AMD

Confidential Computing is a relatively new concept with a goal to encrypt data in use in the main memory of the system, while still offering high performance. It addresses key security concerns many organizations have today in migrating their sensitive applications to the cloud and safeguarding their most valuable information while in use by their applications. It wouldn’t be a surprise if, in a few years, all virtual machines (VMs) in the cloud are Confidential VMs.

How did you approach confidential computing?

The 2nd Gen AMD EPYC processors used by Google for its Confidential VMs use an advanced security feature called Secure Encrypted Virtualization (SEV). SEV is available on all AMD EPYC processors and, when enabled by an OEM or cloud provider, it encrypts the data-in-use on a virtual machine, helping to keep it isolated from other guests, the hypervisor, and even the system administrators.
The SEV feature works by providing each virtual machine with an encryption key that isolates guests and the hypervisor from one another; these keys are created, distributed, and managed by the AMD Secure Processor. The benefit of SEV is that customers don’t have to rewrite or recompile applications to access these security features. With SEV-enabled Confidential VMs, customers have better control of their data, enabling them to better secure their workloads and collaborate in the cloud with confidence.

What kind of performance can we expect?

What’s really impressive about Google Confidential VMs powered by AMD EPYC processors with SEV enabled is that they offer performance close to that of non-confidential VMs. AMD and Google’s engineering teams ran a set of well-known application benchmarks for relational databases, graph databases, and webservers, as well as Computational Fluid Dynamics and popular FSI simulation workloads, on Google Confidential VMs and Google’s N2D VMs, on which Confidential VMs are based. The difference in using SEV versus not using SEV on the applications listed was measured to be just a small overhead in application performance.

Any final thoughts?

Confidential Computing is a game-changer for computing in the public cloud, as it addresses important security concerns many organizations have about migrating their sensitive applications to the cloud. Google Confidential VMs, with AMD EPYC processors and SEV, strengthen VM isolation and data-in-use protection, helping customers safeguard their most valuable information while in use by applications in the public cloud. This is a paradigm shift and we’re excited to work with Google to make this possible.

Mark Shuttleworth, CEO, Canonical

Confidential Computing directly addresses the question of trust between cloud providers and their customers, with guarantees of data security for guest machines enforced by the underlying hardware of the cloud.
With Google’s addition of Confidential Computing to multiple regions, customers gain a secure substrate for large-scale computation with sensitive data and a path to regulatory compliance for new classes of workload on the cloud.

What value does the partnership between GCP and Canonical create?

Close technical collaboration between Google and Canonical ensures that Ubuntu is optimized for GCP operations at scale. Confidential Computing requires multiple pieces to align, and we are delighted to offer full Ubuntu support for this crucial capability at the outset with Google.

How will this benefit organizations?

Organizations gain peace of mind that large classes of attack on cloud guests are mitigated by Confidential Computing. Memory encryption with hardware key management and attestation prevents a compromise of the hypervisor from becoming a compromise of guest data or integrity. Customers can now consider GCP as secure as private infrastructure for a much wider class of workloads. Canonical Ubuntu fully supports Confidential Computing on Google Cloud, providing a new level of trust in public cloud infrastructure.

Burzin Patel, Vice President of Global Alliances, HashiCorp

HashiCorp Vault enables teams to securely store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting machines and applications. When combined with GCP’s Confidential Computing capabilities, confidentiality can be extended to the HashiCorp Vault server’s system memory, ensuring that malware, malicious privileged users, or zero days on the host cannot compromise data.

Why did you choose Google Cloud as a partner for Confidential Computing?

Google Cloud’s Confidential Computing nodes operate exactly like regular compute nodes, making the offering very easy to use. We were able to take our existing Vault binary and host it on the Confidential Computing node to leverage the confidential computing benefits.
No code or configuration changes were needed.

What is the gap confidential computing solves specifically for your customers?

Vault stores all of its sensitive data in memory as plaintext. In the past there were no easy solutions to keep this runtime memory protected. With the availability of confidential computing nodes, however, the data in memory is protected via encryption by utilizing the security features of modern CPUs together with confidential computing services.

Any use cases that are top of mind for you when it comes to confidential computing?

HashiCorp Vault allows organizations to eliminate system complexity where any mistake or misconfiguration could lead to a breach or data leakage that in turn can halt operations and erode trust across customers. Together, HashiCorp Vault and Google Cloud’s Confidential Computing help organizations manage their most critical secrets and assets. This includes the entire secret lifecycle, from initial creation, to sharing and distribution, to the revocation or expiration of credentials and secrets.

Any final thoughts?

Security is the most critical element for enterprise customers looking to adopt the cloud. Customers are looking for a flexible solution that is robust and highly secure. The combination of HashiCorp Vault and Google Cloud Confidential Computing provides users a critical solution for their enterprise-wide cloud security needs.

Mike Bursell, Chief Security Architect, Red Hat

As more businesses and organizations move to the cloud, security remains a top priority. Maintaining the same levels of confidentiality that their partners, customers, regulators and shareholders expect across private and public clouds is vital.
Red Hat believes that Confidential Computing is one key approach to extend security from on-premises deployments into the cloud, and Google’s announcement of Confidential VMs is an example of how customers can further secure their applications and workloads.

What has Red Hat’s approach been to Confidential Computing?

Red Hat Enterprise Linux is an enterprise operating system designed to handle the needs of customers across on-premises and hybrid cloud environments. Customers need stability, predictability and management solutions that scale with their workloads, which is why we enable Confidential Computing solutions in our product portfolio. That way customers don’t have to worry about migration costs.

How will confidential computing impact cloud adoption?

Often, customers with regulatory concerns have greater concerns about shifting into a truly open hybrid cloud environment, as they cannot expose their more sensitive data and applications outside their own data centers. Red Hat believes Confidential Computing can help them make this shift, expanding their opportunities for digital transformation and allowing them to provide quicker, more scalable and more competitive solutions, while maintaining the data privacy and protection assurances that their customers expect and require. As organizations balance the need for security with the opportunities presented by the cloud, Confidential Computing provides new ways to safely and securely embrace those opportunities.

Dr. Thomas Di Giacomo, Chief Technology & Product Officer, SUSE

Confidential VMs are a cloud industry security game-changer. This offering for our joint cloud customers expands sensitive data protection and compliance requirements, especially for regulated industries.
The best part is you can run legacy and cloud-native workloads securely without any refactoring of the underlying application code, simplifying the transition to the cloud, all with little to no performance penalty.

How has SUSE been working with Google Cloud and AMD?

Working closely with AMD, SUSE added upstream support for AMD EPYC SEV processors to the Linux kernel and was the first to announce Confidential VM support in SUSE Linux Enterprise Server 15 SP1, available in the Google Cloud Marketplace. These innovations allow our customers to take advantage of the scale and cost savings of Google Cloud Platform and the mission-critical manageability, compliance, and support from the #1 rated Linux support team, SUSE.

How do you foresee this benefiting organizations?

Confidential VMs will tremendously accelerate our customers’ migrations to the cloud on their hybrid cloud digital transformation journey. This technology opens up new migration opportunities for legacy on-premises workloads, custom applications, and private and government workloads with the utmost security and compliance requirements that were once considered not cloud-ready.

Solomon Cates, Principal Technologist, CTO Office, Thales

Confidential computing is a fundamental step in providing users control of their data as it goes “off premise” into cloud environments and all the way to the edge. Customers can essentially transition their workloads to the cloud with high assurance that includes auditable “proof” of control. And, architecturally, it opens up so many possibilities for customers. Many enterprises have significant trepidation when it comes to security in the cloud. Confidential computing helps alleviate that.
For example, security professionals no longer have to worry about a cloud provider seeing or using their data.

How does Confidential Computing help your customers?

Confidential computing solves an issue that enterprises specifically have around trust in memory: namely, that memory cannot be seen or used by a cloud provider. Three key use cases that can immediately benefit from this technology include edge computing, external key management, and in-memory secrets.

What made you partner with Google Cloud?

Thales and Google Cloud have collaborated across a number of areas including cloud, security, Kubernetes containers, and new technologies such as Continuous Access Evaluation Protocol (CAEP). At the core, we both strive to offer customers the best option for strong security and privacy protection.

Any final thoughts?

From both a strategic and technical standpoint, Thales and Google Cloud have a shared vision that focuses on customer control and security of their data in the cloud. Through our work around confidential computing, we will bring new possibilities for securing workloads at the edge. Together, we are making it possible for enterprises to put their trust in the cloud with more sovereign control over their data security.

We thank our hardware and software partners for their continuous innovation in this space. Confidential Computing can help organizations ensure the confidentiality of sensitive, business-critical information and workloads, and we are excited to see the possibilities this technology will open up for your organization.
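The Confidential VMs discussed above run on N2D machine types with SEV enabled. As a minimal, hedged sketch of what enabling this looks like with the gcloud CLI (the instance name, zone, and image family are placeholders, and flags may vary by release):

```shell
# Illustrative sketch only: create a Confidential VM on an AMD EPYC (N2D)
# machine type. Names, zone, and image family are placeholders.
gcloud compute instances create demo-confidential-vm \
    --zone=us-central1-a \
    --machine-type=n2d-standard-4 \
    --confidential-compute \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud
```

Note that at launch, Confidential VMs required the host maintenance policy to be set to TERMINATE, since live migration of SEV-encrypted guests was not supported.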
Source: Google Cloud Platform

Migrating apps to containers? Why Migrate for Anthos is your best bet

Most of us know that there is real value to be had in modernizing workloads, and there are plenty of customer success stories to showcase that. But even though the value of modernizing workloads to Kubernetes has been well documented, there are still plenty of businesses that haven’t been able to make the jump. Reluctant businesses say that manually modernizing traditional workloads running on VMs into containers is a complex, challenging project that involves significant time and cost. For instance, some proposals to refactor a single small-to-medium application can run $100,000 or more. Multiply that by 500 applications, and that’s a $50,000,000 project, to say nothing of how long it might take. Moreover, for some workloads (e.g., from third parties or ISVs) there is no access to the source code, precluding manual containerization altogether. As a result, these become blockers for many enterprises in their data center migration, especially for customers that don’t just want to lift and shift their important workloads. However, there’s an alternative. By leveraging automated containerization technologies and the right solution partners, you can cut the time and cost of a modernization project by as much as 90%, while enjoying most of the benefits that come with manual refactoring. That makes tools like Migrate for Anthos a uniquely smart, efficient way to modernize traditional applications away from virtual machines and into native containers. Our unique automation approach extracts critical application elements from a VM so you can easily insert those elements into containers running on Google Kubernetes Engine (GKE), without artifacts like guest OS layers that VMs need but that are unnecessary for containers.
For example, Migrate for Anthos automatically generates a container image, a Dockerfile for day-2 image updates and application revisions, Kubernetes deployment YAMLs, and (where relevant) a persistent data volume onto which the application data files and persistent state are copied. This automated, intelligent extraction is significantly faster and easier than manually modernizing the app, especially when source code or deep application rebuild knowledge is unavailable. That’s why using Migrate for Anthos is one of the most scalable approaches to modernizing applications with Kubernetes orchestration, image-based container management, and DevOps automation.

One of our customers, British newspaper The Telegraph, used Migrate for Anthos to accelerate its modernization and avoid the blockers we mentioned above. Here’s what Andrew Gregory, Systems Engineer Manager, and Amit Lalani, Sr. Systems Engineer, had to say about the effort: “The Telegraph was running a legacy content management system (CMS) in another public cloud on several instances. Upgrading the actual system or migrating the content to our main Website CMS was problematic, but we wanted to migrate it from the public cloud it was on. With the help of our partners at Claranet and Google engineers, Migrate for Anthos delivered results quickly and efficiently. This legacy (but very important) system is now safely in GKE and joins its more modern counterparts, and is already seeing significant savings on infrastructure and reduced day-to-day operational costs.”

As at The Telegraph, any means of accelerating and enabling modernization of enterprise workloads is of high business value to our customers. Migrate for Anthos can accelerate and simplify the transition from VMs to GKE and Anthos by automating the containerization and “kubernetization” of the workloads. While manual refactoring typically takes many weeks or months, Migrate for Anthos can deliver containerization in hours or days.
And once you’ve done so, you’ll start seeing immediate benefits in infrastructure efficiency, operational productivity, and developer experience. To showcase that, Forrester’s New Technology Projection: The Total Economic Impact™ of Anthos (2019) report states: “When you are ready to migrate existing applications to the cloud, Migrate for Anthos makes that process simple and fast. The composite organization is projected to have a 58% to 75% faster app migration and modernization process when using Anthos. After you containerize your existing applications you can take advantage of Anthos GKE, both on-prem and in the cloud, and consistently manage your Kubernetes deployments.”

Let’s take a deeper look at some of the benefits you can expect from modernizing your VM-based workloads into containers on Kubernetes with Migrate for Anthos.

Infrastructure efficiency

Internal Google studies have shown that converting VMs to containers in Kubernetes can yield between 30% and 65% savings on what you’re currently paying for your infrastructure, by means of:

Higher utilization and density – Leveraging automatic bin-packing and auto-scaling capabilities, Kubernetes places containers optimally in nodes based on required resources while scaling as needed, without impairing availability. In addition, unlike VMs, all containers on a single node share one copy of the operating system and don’t each require their own OS image and vCPU, resulting in a much smaller memory footprint and lower CPU needs. This means more workloads running on fewer compute resources.
Shortened provisioning – You’re paying less to run the same workloads because they are ready sooner and with less effort.
Operational productivity

Empowering your IT team to do more in less time also yields about 20% to 55% cost savings through reduced overall IT management and administration, for example:

Simplified OS management – In Anthos, the node and its operating system are managed by the system, so you don’t need to manage or be responsible for kernel security patches and upgrades.
Configuration encapsulation – By leveraging declarative specification (infrastructure as code) you can simplify and automate your deployment and more easily perform maintenance tasks like rollbacks and upgrades. This all leads to a faster, more agile IT lifecycle.
Reduced downtime – By leveraging Kubernetes features like self-healing and dynamic scaling, you’ll reduce incidents and have easier desired-state management.
Unified management – By modernizing legacy workloads into containers, DevOps engineers can use the same method to manage all their workloads, both cloud-native and cloud-“naturalized,” making it faster and easier for IT to manage your hybrid IT landscape.
Environment parity – Improved visibility and monitoring make finding and fixing problems less toilsome.

Developer productivity

When you’ve got a better and more agile IT environment, your developers can do more with less, usually resulting in cost savings from developer efficiency and reduced infrastructure. Apps that have been converted into containers benefit from:

Layering efficiency – The ability to use Docker images and layers (which Migrate for Anthos extracts as part of the container artifacts).
Developer velocity – You can finally “write once, run everywhere,” and combine automated CI/CD pipelines with on-demand, repeatable test deployments using declarative models and Kubernetes orchestration.
Faster lifecycle – Get products to market quicker, yielding additional revenue and competitive market advantages, on top of savings.
In short, modernizing your VMs into containers running on Kubernetes has benefits across infrastructure, operations, and development. Although modernization may seem intimidating at first, Migrate for Anthos helps make this process fast and painless. You can read more about it here, watch a quick video on using Migrate for Anthos on Linux or Windows workloads, or try it yourself using Qwiklabs.

And if you’re interested in talking to someone about using Migrate for Anthos, please fill out this form (mention “Migrate for Anthos” in the ‘Your Project’ field) and someone will contact you directly.
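To make the automated flow above concrete, here is a hedged sketch of a Migrate for Anthos run with its migctl CLI. The source, migration, and VM names are placeholders, and exact flags can vary across releases, so treat this as an outline rather than a definitive recipe:

```shell
# Illustrative sketch of a Migrate for Anthos flow (all names are placeholders).
# 1. Create a migration plan for a source VM, targeting a container image.
migctl migration create my-migration \
    --source my-vm-source \
    --vm-id my-legacy-vm \
    --intent Image

# 2. Generate the artifacts described in the article: a container image,
#    a Dockerfile for day-2 updates, and Kubernetes deployment YAMLs.
migctl migration generate-artifacts my-migration

# 3. Retrieve the generated deployment spec and apply it to a GKE cluster.
migctl migration get-artifacts my-migration
kubectl apply -f deployment_spec.yaml
```

The generated YAML is the starting point for day-2 operations: from here, image updates and rollouts follow the same Kubernetes and CI/CD practices as any cloud-native workload.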
Source: Google Cloud Platform

Dataproc Metastore: Fully managed Hive metastore now in public preview

The Apache Hive metastore service has become a building block for data lakes that utilize the diverse world of open-source software, such as Apache Spark and Presto. We’re launching the Dataproc Metastore into public preview today, so these powerful tools are now easy to use by any Google Cloud customer with fewer distractions and delays. The Dataproc Metastore is a fully managed, highly available, auto-healing, open-source Apache Hive metastore service that simplifies technical metadata management for customers building data lakes on Google Cloud. And for a limited time only, it’s free! This launch exemplifies our commitment to fast-paced innovation and delivery, combining cloud technology with open source, and closely follows our announcement of the private preview in June of this year. Before we go into more detail, we would also like to thank our private preview users for testing and providing rich feedback—the launch today has been made better with your valuable input.

What does this mean for my data lake?

If you are familiar with the Hive Metastore, you likely already know it is a critical component of many data lakes because it acts as a central repository of metadata. In fact, a whole ecosystem of tools, open-source and otherwise, is built around the Hive Metastore, some of which this diagram illustrates. The Dataproc Metastore is a serverless Hive Metastore that unlocks several key data lake use cases in Google Cloud, including:

Many ephemeral Dataproc clusters can utilize a Dataproc Metastore at the same time, allowing many users of open-source tools, such as Spark, Hive, and Presto, to access consistent metadata at the same time.
Unifying metadata between open-source tables and Data Fusion, so ETL and ELT on those tables is easier and code-free.
Tying together metadata into a central store so cloud-native services like Dataproc can seamlessly interoperate with other open-source tools or partner technologies.

The Dataproc Metastore now means your data lake is easier to manage, more unified, and increasingly serverless for fewer distractions.

New features in Dataproc Metastore

Throughout the private preview period, and since our initial announcement in June, we have added many new features to the Dataproc Metastore. Several of these new features are launching with this release today.

IAM and Kerberos—Fine-grained Cloud Identity and Access Management (Cloud IAM) support, along with out-of-the-box support for Kerberos and other security tools such as Apache Ranger.
Import/export—Metadata can be imported and exported to enable bidirectional integration with, and migration from, other Hive Metastores, such as those on-premises.
VPC-SC—Support for Google Cloud VPC Service Controls to mitigate data exfiltration risks.
ACID transactions—Dataproc Metastore supports ACID transactions using Hive’s ACID transaction capabilities.
Cloud Monitoring integration—Logging and monitoring of Dataproc Metastore instances seamlessly inside of Cloud Monitoring and Logging.
Broad Dataproc compatibility—Compatible with a broad range of Dataproc releases, including the Dataproc 2.0 preview release with Spark, Hadoop, and Hive 3.x.
Service updates—You can transactionally update elements of the Hive Metastore service, including configurations, tiers, ports, maintenance windows, and more.
Cloud Console and Cloud SDK—Dataproc Metastore supports both the Cloud Console and the Cloud SDK command line (gcloud beta metastore).

We will continue to move quickly to get the Dataproc Metastore into general availability while also adding highly requested features such as customer-managed encryption keys.

Dataproc Metastore public preview pricing

During the public preview period, which starts today and lasts until GA, the Dataproc Metastore will be offered at a 100% discount. This discount is intended to let you use and test the technology without incurring costs. The Dataproc Metastore is offered in two service tiers, developer and enterprise, each of which offers different features, service levels, and pricing because they are intended for different use cases. This pricing allows you to create developer instances for quick testing and prototyping without needing to test against your production environment or create multiple copies of your production database. The enterprise tier is intended for production deployments that require high availability, performance, and stability. Future releases will also incorporate features targeted at specific tiers, such as Data Catalog integration. You can find more information in the pricing documentation for Dataproc Metastore.

Serverless open source

The Dataproc Metastore is a good example of how the best of Google Cloud infrastructure can be used to run managed open source. As a result of innovations in how we run, secure, and scale the Hive Metastore, we have been able to make the Dataproc Metastore serverless. This launch is the beginning of how we’re reshaping managed open source for data analytics in the cloud.
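Since the service is scriptable through the Cloud SDK (gcloud beta metastore), a hedged sketch of creating a developer-tier service and attaching a Dataproc cluster to it might look like the following. The project, region, and service names are placeholders, and flags may shift as the preview evolves:

```shell
# Illustrative sketch only (names, project, and region are placeholders).
# 1. Create a developer-tier Dataproc Metastore service.
gcloud beta metastore services create demo-metastore \
    --location=us-central1 \
    --tier=developer

# 2. Create a Dataproc cluster that points at the managed metastore,
#    so ephemeral clusters share consistent Hive metadata.
gcloud beta dataproc clusters create demo-cluster \
    --region=us-central1 \
    --dataproc-metastore=projects/my-project/locations/us-central1/services/demo-metastore
```

Once the cluster is attached, Spark, Hive, and Presto jobs on any cluster using the same service resolve the same tables, which is what enables the multi-cluster, ephemeral use case described above.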
As a team passionate about both cloud and open source, it is our goal to bring the qualities that make our managed Hive Metastore uniquely great, including no infrastructure to manage, automated scalability, enhanced hands-off high availability, and easier pricing, to other popular open-source components in the future.

Get started

Any Google Cloud customer can use the Dataproc Metastore, free during the preview, starting today. You can follow the quickstart guide or review the full documentation for more information on how to get started.

Related Article: Dataproc Hub makes notebooks easier to use for machine learning. Dataproc Hub, now generally available, makes it easy to use open source, notebook-based machine learning on Google Cloud, powered by Spark.
Source: Google Cloud Platform