Introducing a new design for the WordPress apps

The WordPress mobile apps are the best way to manage your site from anywhere. If you’re already using the app, you might have noticed a new visual design that’s been rolling out. That rollout is complete in WordPress 17.1, which is available today for both Android and iOS. If you’re not already using the app, this is the perfect time to give it a try!

Our new visual design as seen on iPhone and iPad.

We add new features and improve the WordPress apps in every release, but our visual design hasn’t changed much in the last few years. Over the last few months, we’ve been thinking about how to modernize the design of the apps. As we’ve implemented features like Dark Mode, we’ve taken advantage of new components made available in the latest versions of iOS and Android.

Bigger, bolder headers call out key product areas and create a distinction between the top-level tabs and deeper levels of the apps. A new color palette pairs a more neutral background, which lets your content shine, with brighter blues that make interactive elements even more noticeable. A new serif typeface is a nod to WordPress’s roots in writing and publishing.

Dark Mode is available on both iOS and Android devices.

We hope you enjoy these updates as you use the apps, and we’d love to hear your feedback. Reach out to us from within the app by going to My Site, tapping your photo on the top right, tapping Help & Support, and then selecting Contact Support.
Source: RedHat Stack

Feedback in Mirantis Kubernetes Engine 3.4.0

Mirantis Kubernetes Engine 3.4.0 lets users send us feedback. Please use this to let us know what works, what doesn’t, and how we can support you better. Collecting general feedback directly within Mirantis Kubernetes Engine is one more way we’re empowering customers to help us understand how they use our products. The information is sent to … Continued
Source: Mirantis

Mirantis Cloud Native Platform April Update

VMware support for Mirantis Container Cloud, 2FA for Kubernetes Engine, Helm chart support for Secure Registry, and more. Mirantis no longer does “waterfall” releases. Instead, all the interoperating parts of Mirantis Cloud Native Platform (and the open source projects we sponsor, as well) release continuously — as frequently as every few weeks. That’s good, in … Continued
Source: Mirantis

Colossus under the hood: a peek into Google’s scalable storage system

You trust Google Cloud with your critical data, but did you know that Google relies on the same underlying storage infrastructure for its other businesses as well? That’s right: the same storage system that powers Google Cloud also underpins Google’s most popular products, supporting globally available services like YouTube, Drive, and Gmail. That foundational storage system is Colossus, which backs Google’s extensive ecosystem of storage services, such as Cloud Storage and Firestore, supporting a diverse range of workloads, including transaction processing, data serving, analytics and archiving, boot disks, and home directories. In this post, we take a deeper look at the storage infrastructure behind your VMs, specifically the Colossus file system, and how it helps enable massive scalability and data durability for Google services as well as your applications.

Google Cloud scales because Google scales

Before we dive into how storage services operate, it’s important to understand the single infrastructure that supports both Cloud and Google products. Like any well-designed software system, all of Google is layered with a common set of scalable services. There are three main building blocks used by each of our storage services:

- Colossus is our cluster-level file system, successor to the Google File System (GFS).
- Spanner is our globally consistent, scalable relational database.
- Borg is a scalable job scheduler that launches everything from compute to storage services. It was, and continues to be, a big influence on the design and development of Kubernetes.

These three core building blocks provide the underlying infrastructure for all Google Cloud storage services, from Firestore to Cloud SQL to Filestore and Cloud Storage. Whenever you access your favorite storage services, the same three building blocks are working together to provide everything you need.
Borg provisions the needed resources, Spanner stores all the metadata about access permissions and data location, and then Colossus manages, stores, and provides access to all your data. Google Cloud takes these same building blocks and layers on everything needed to provide the level of availability, performance, and durability you need from your storage services. In other words, your own applications will scale the same way Google products do, because they rely on the same core infrastructure, with these three services scaling to meet your needs.

Colossus in a nutshell

Now, let’s take a closer look at how Colossus works. But first, a little background on Colossus:

- It’s the next generation of GFS.
- Its design enhances storage scalability and improves availability to handle the massive growth in data needs of an ever-growing number of applications.
- Colossus introduced a distributed metadata model that delivered a more scalable and highly available metadata subsystem.

But how does it all work? And how can one file system underpin such a wide range of workloads? The key components of the Colossus control plane are described below.

Client library

The client library is how an application or service interacts with Colossus. The client is probably the most complex part of the entire file system. A lot of functionality, such as software RAID, goes into the client based on an application’s requirements. Applications built on top of Colossus use a variety of encodings to fine-tune performance and cost trade-offs for different workloads.

Colossus Control Plane

The foundation of Colossus is its scalable metadata service, which consists of many Curators. Clients talk directly to curators for control operations, such as file creation, and the curators can scale horizontally.

Metadata database

Curators store file system metadata in Google’s high-performance NoSQL database, BigTable.
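To make the division of labor concrete, here is a minimal sketch of the control-plane split described in this post: clients consult a curator only for metadata, then exchange data directly with the “D” file servers. All class and method names below are hypothetical illustrations, not Google’s actual APIs.

```python
# A highly simplified sketch of the Colossus read path: curators serve
# metadata (backed by a key-value store standing in for BigTable), and
# data then flows directly between clients and "D" file servers.
# Every name here is an illustrative assumption.

class DFileServer:
    """Network-attached disk holding raw chunk data."""
    def __init__(self):
        self.chunks = {}

    def write(self, chunk_id, data):
        self.chunks[chunk_id] = data

    def read(self, chunk_id):
        return self.chunks[chunk_id]

class Curator:
    """Scalable metadata service; stores file metadata in a database."""
    def __init__(self, metadata_db):
        self.metadata_db = metadata_db  # stand-in for the BigTable backend

    def create_file(self, path, d_servers):
        # One chunk per server, purely for illustration.
        self.metadata_db[path] = [(srv, f"{path}#chunk{i}")
                                  for i, srv in enumerate(d_servers)]

    def lookup(self, path):
        return self.metadata_db[path]

class Client:
    """Client library: asks a curator where the chunks live, then talks
    to D file servers directly -- metadata and data paths are separate."""
    def __init__(self, curator):
        self.curator = curator

    def read_file(self, path):
        parts = []
        for server, chunk_id in self.curator.lookup(path):
            parts.append(server.read(chunk_id))  # direct data hop, no curator
        return b"".join(parts)

# Usage sketch
servers = [DFileServer(), DFileServer()]
curator = Curator(metadata_db={})
curator.create_file("/logs/day1", servers)
servers[0].write("/logs/day1#chunk0", b"hello ")
servers[1].write("/logs/day1#chunk1", b"world")
print(Client(curator).read_file("/logs/day1"))  # b'hello world'
```

The point of the separation is that the metadata service never sits on the data path, which is one reason curators can scale horizontally while bulk data moves at disk and network speed.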
The original motivation for building Colossus was to solve the scaling limits we experienced with GFS when trying to accommodate metadata related to Search. Storing file metadata in BigTable allowed Colossus to scale up by over 100x compared to the largest GFS clusters.

D File Servers

Colossus also minimizes the number of network hops for data. Data flows directly between clients and “D” file servers (our network-attached disks).

Custodians

Colossus also includes background storage managers called Custodians. They play a key role in maintaining the durability and availability of data, as well as overall efficiency, handling tasks like disk space balancing and RAID reconstruction.

How Colossus provides rock-solid, scalable storage

To see how this all works in action, let’s consider how Cloud Storage uses Colossus. You’ve probably heard us talk a lot about how Cloud Storage can support a wide range of use cases, from archival storage to high-throughput analytics, but we don’t often talk about the system that lies beneath.

With Colossus, a single cluster is scalable to exabytes of storage and tens of thousands of machines. For example, we might have instances accessing Cloud Storage from Compute Engine VMs, YouTube serving nodes, and Ads MapReduce nodes—all of which are able to share the same underlying file system to complete requests. The key ingredient is a shared storage pool, managed by the Colossus control plane, that provides the illusion that each has its own isolated file system. This disaggregation of resources drives more efficient use of valuable resources and lowers costs across all workloads. For instance, it’s possible to provision for the peak demand of low-latency workloads, like YouTube video serving, and then run batch analytic workloads more cheaply by having them fill in the gaps of otherwise idle time.

Let’s take a look at a few other benefits Colossus brings to the table.
Simplify hardware complexity

As you might imagine, any file system supporting Google services has fairly daunting throughput and scaling requirements: it must handle multi-TB files and massive datasets. Colossus abstracts away a lot of the physical hardware complexity that would otherwise plague storage-intensive applications. Google data centers have a tremendous variety of underlying storage hardware, offering a mix of spinning disk and flash storage in many sizes and types. On top of this, applications have extremely diverse requirements around durability, availability, and latency. To ensure each application has the storage it requires, Colossus provides a range of service tiers. Applications use these different tiers by specifying I/O, availability, and durability requirements, and then provisioning resources (bytes and I/O) as abstract, undifferentiated units.

In addition, at Google scale, hardware is failing virtually all the time—not because it’s unreliable, but because there’s a lot of it. Failures are a natural part of operating at such an enormous scale, and it’s imperative that the file system provide fault tolerance and transparent recovery. Colossus steers I/O around these failures and performs fast background recovery to provide highly durable and available storage. The end result is that the complexity headaches of dealing with hardware resources are significantly reduced, making it easy for any application to get and use the storage it requires.

Maximize storage efficiency

Now, as you might imagine, it takes some management magic to ensure that storage resources are available when applications need them without overprovisioning. Colossus takes advantage of the fact that data has a wide variety of access patterns and frequencies (i.e., hot data that is accessed frequently) and uses a mix of flash and disk storage to meet any need. The hottest data is put on flash for more efficient serving and lower latency.
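The hot/cold placement idea just described can be sketched roughly as follows. The tier names, the use of I/O density (accesses per gigabyte) as the deciding metric, and the threshold value are all illustrative assumptions, not Colossus internals.

```python
# A toy sketch of flash/disk data placement: hot data (high I/O density)
# goes to flash; data migrates to high-capacity disk as it cools.
# The threshold below is a made-up number, purely for illustration.

FLASH_IO_DENSITY_THRESHOLD = 10.0  # reads/sec per GB; hypothetical cutoff

def choose_tier(reads_per_sec, size_gb):
    """Place data by I/O density (accesses per gigabyte of capacity)."""
    io_density = reads_per_sec / size_gb
    return "flash" if io_density >= FLASH_IO_DENSITY_THRESHOLD else "disk"

def rebalance(files):
    """Re-evaluate placement as data ages and its access rate falls."""
    return {name: choose_tier(rps, gb) for name, (rps, gb) in files.items()}

files = {
    "today_logs":      (500.0, 10.0),   # 50 reads/sec/GB -> hot -> flash
    "last_month_logs": (2.0, 100.0),    # 0.02 reads/sec/GB -> cold -> disk
}
print(rebalance(files))  # {'today_logs': 'flash', 'last_month_logs': 'disk'}
```

The key design choice this models is buying flash for I/O density rather than capacity: flash absorbs the hottest accesses so that the disk fleet only needs to serve an I/O rate that spinning media can actually sustain.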
We buy just enough flash to push the I/O density per gigabyte into what disks can typically provide, and just enough disks to ensure we have enough capacity. With the right mix, we can maximize storage efficiency and avoid wasteful overprovisioning.

For disk-based storage, we want to keep disks full and busy to avoid excess inventory and wasted disk IOPS. To do this, Colossus uses intelligent disk management to get as much value as possible from the available disk IOPS. Newly written (i.e., hotter) data is evenly distributed across all the drives in a cluster. Data is then rebalanced and moved to larger-capacity drives as it ages and becomes colder. This works great for analytics workloads, for example, where data typically cools off as it ages.

Battle-tested to deliver massive scale

So, there you have it—Colossus is the secret scaling superpower behind Google’s storage infrastructure. Colossus not only handles the storage needs of Google Cloud services, but also meets Google’s internal storage needs, helping to deliver content to the billions of people using Search, Maps, YouTube, and more every single day. When you build your business on Google Cloud, you get access to the same super-charged infrastructure that keeps Google running. We’ll keep making our infrastructure better, so you don’t have to.

To learn more about Google Cloud’s storage architecture, check out the Next ’20 session from which this post was developed, “A peek at the Google Storage infrastructure behind the VM.” And check out the Cloud Storage website to learn more about all our storage offerings.
Source: Google Cloud Platform

How ShareChat built scalable data-driven social media with Google Cloud

Editor’s note: Today’s guest post comes from Indian social media platform ShareChat. Here’s the story of how they improved performance, app development, and analytics for serving regional content to millions of users using Google Cloud.

How do you create a social network when your country has 22 major official languages and countless active regional dialects? At ShareChat, we serve more than 160 million monthly active users who share and view videos, images, GIFs, songs, and more in 15 different Indian languages. We also launched a short video platform in 2020, Moj, which already supports over 80 million monthly active users.

Connecting with people in the language they understand

As mobile data and smartphones have become more affordable in India, we noticed a large new segment of people, many in rural areas, being welcomed onto the internet. However, many of them didn’t speak English, and when it comes to accessing content and information, language plays a significant role. Instead of joining other social media sites where English reigned supreme, these new internet users chose to join language- or dialect-specific WhatsApp groups where they felt more comfortable.

So, we set out to build a platform where people can share their opinions, document their lives, and make new friends, all in their native language. ShareChat simplifies content and people discovery by using a personalized content newsfeed to deliver language-specific content to the right audience. Given this data-intensive workload and the high volume of content and traffic, we rely heavily on IT infrastructure. On top of that, a large number of our users rely on 2G networks to post, like, view, or follow each other.
Our platform needs to deliver great experiences to people who are spread out across the country and across different networks, without any reduction in performance.

The right cloud partner to support future growth

ShareChat was born in the cloud—we already knew how to scale systems to serve a large customer base with our existing cloud provider. But like many companies, we struggled with over-provisioning compute and storage to accommodate unpredictable traffic and avoid running out of storage. With demand rising for local-language content and an increase in online interactions in response to the COVID-19 crisis, we realized that we would need a more efficient way to scale dynamically and allocate resources as needed.

Google Cloud was a natural choice for us. We wanted to partner with a technology-first company that would make it easy (and cost-effective) to manage a strong technology portfolio and let us build whatever we wanted. Google is at the forefront of technology innovation and provided everything we needed to build, run, and manage our applications, including creating an efficient DevOps pipeline to fix and release new features quickly. We had a few issues in mind at the start of discussions with the Google Cloud team, but over time, as we got information and support from them, we realized that these were the partners we wanted in our corner when it came time to tackle our most challenging problems. In the end, we decided to take our entire infrastructure to Google Cloud.

To support millions of users, we deploy and scale using Google Kubernetes Engine, and we analyze our data using a combination of managed data cloud services: Pub/Sub for data pipelines, BigQuery for analytics, Cloud Spanner for real-time app-serving workloads, and Cloud Bigtable for less-indexed databases. We also rely on Cloud CDN to deliver high-quality, reliable content at low latency to our users.
We now use just half the total core consumption of our legacy environment to run ShareChat’s existing workloads.

Google Cloud delivers better outcomes at every level

By moving to Google Cloud, we saw major benefits in several key areas.

Zero-downtime migration for users

At the time of migration, we had over 70 terabytes of data, consisting of 220 tables—some of which were up to 14 terabytes with nearly 50 billion rows. Due to our data’s interdependencies, moving services over one at a time wasn’t an option for us. Even though we were migrating such large volumes of data, we didn’t want to impact any of our customers. Latency spikes for out-of-sync data might affect message delivery; for instance, if a message or notification was delayed, we didn’t want to risk a bad user experience causing someone to abandon ShareChat.

To prepare for the move, we ran a proof-of-concept cluster for over four months to test database performance in a real-world scenario handling more than a million queries per second. Using an open-source API gateway, we replicated our legacy data environment into Google Cloud for performance testing and capacity analysis. As soon as we were confident Google Cloud could handle the same traffic as our previous cloud environment, we were ready to execute. Using wrappers, we were able to migrate without having to change anything in our existing application code. The entire migration of 60 million users to Google Cloud took five hours—without any data loss or downtime. Today, ShareChat has grown to 160 million users, and Google Cloud continues to give us the support we need.

Scaling globally to meet unexpected demand

We rely on real-time data to drive everything on ShareChat by tracking everything that goes on in our app—from messages and new groups to content people like or who they follow. Our users create more than a million posts per day, so it’s critical that our systems can process massive amounts of data efficiently.
We chose to migrate to Spanner for its global consistency and secondary indexes. Unlike with our legacy NoSQL database, we could scale without having to rethink existing tables or schema definitions, and we could keep our data systems in sync across multiple locations. It’s also cost-effective for us—moving over 120 tables with 17 indexes into Cloud Spanner reduced our costs by 30%. Spanner also replicates data seamlessly across multiple locations in real time, enabling us to retrieve documents if one region fails. For instance, when our traffic unexpectedly grew by 500% over just a few days, we were able to scale horizontally with zero lines of code changed. We were also launching our Moj video app simultaneously, and we were able to move it to another region without a single issue.

Simplifying development and deployment

On average, we handle about 80,000 requests per second (RPS)—nearly 7 billion requests per day. Daily push notifications sent out to the entire user base about trending topics can result in a spike to 130,000 RPS in just a few seconds. Instead of over-provisioning, Google Kubernetes Engine (GKE) enables us to pre-scale for traffic spikes around scheduled events, such as holidays like Diwali, when millions of Indians send each other greetings.

Migrating to GKE has also enabled us to adopt more agile ways of working, such as automating deployments and reducing the time spent writing scripts. Even though we were already using container-based solutions, they lacked transparency and coverage across the entire deployment funnel. Kubernetes features, such as the sidecar proxy, allow us to attach peripheral tasks like logging to the application without requiring us to make code changes. Kubernetes upgrades are managed by default, so we don’t have to worry about maintenance and can stay focused on more valuable work.
Clusters and nodes automatically upgrade to run the latest version, minimizing security risks and ensuring we always have access to the latest features.

Low latency and real-time ML predictions

Even though many of our users access ShareChat outside of metropolitan areas, that doesn’t mean they’re more patient if the app loads slowly or their messages are delayed. We strive to deliver a high-performance experience regardless of where our users are. We use Cloud CDN to cache data in five Google Cloud Point of Presence (PoP) locations at the edge in India, allowing us to bring content as close as possible to people and speed up load times. Since moving to Cloud CDN, our cache hit ratio has improved from 90% to 98.5%—meaning our cache can now serve 98.5% of content requests.

As we expand globally, we’d like to use machine learning to reach new people with content in different languages. We want to build new algorithms to process real-time datasets in regional languages and accurately predict what people want to see. Google Cloud gives us an infrastructure optimized to handle compute-intensive workloads that will be useful to us both now and in the future.

The confidence to build the best platform

Our current system now performs better than it did before we migrated, and we are continuously building new features on top of it. Google’s data cloud has provided us with an elegant ecosystem of services that allows us to build whatever we want, more easily and faster than ever before. Perhaps the biggest advantage of partnering with Google Cloud has been the connection we have with the engineers at Google. If we’re working to solve a specific problem and need a specific solution in a library or a piece of code, we can immediately connect with the team responsible for it. As a result, we have experienced a massive boost in our confidence.
We know that we can build a really good system because we not only have a good process in place to solve problems—we have the right support behind us.
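As a back-of-the-envelope footnote to the cache-hit numbers mentioned above: raising the hit ratio from 90% to 98.5% cuts origin traffic far more than the 8.5-point difference suggests, because origin load is driven by the miss rate. The arithmetic, using the average request rate cited in the post:

```python
# Origin-server load is proportional to the cache MISS rate (1 - hit ratio).
# Going from a 90% to a 98.5% hit ratio shrinks misses from 10% to 1.5%,
# a roughly 6.7x reduction in requests reaching origin servers.

def origin_load(requests_per_sec, hit_ratio):
    """Requests per second that miss the cache and hit origin servers."""
    return requests_per_sec * (1.0 - hit_ratio)

rps = 80_000  # average request rate cited in the post
before = origin_load(rps, 0.90)   # ~8000 misses/sec
after = origin_load(rps, 0.985)   # ~1200 misses/sec
print(before / after)  # ~6.67x fewer origin requests
```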
Source: Google Cloud Platform

The 5 benefits of Cloud SQL [infographic]

Tired of spending too much time on database maintenance? You’re not alone. Lots of businesses are turning to fully managed database services to help build and scale infrastructure quickly, freeing up time and resources to spend on innovation instead. At Google Cloud, our managed database services include Cloud SQL, which makes it easy to move MySQL, PostgreSQL, and SQL Server workloads to the cloud. Here are the top five benefits of using Google’s Cloud SQL.

The 5 benefits of Cloud SQL infographic.

Check out more details on how managed services like Cloud SQL work and why they matter.
Source: Google Cloud Platform