Checking Your Current Docker Pull Rate Limits and Status

Continuing with our move towards consumption-based limits, customers will see the new rate limits for Docker pulls of container images at each tier of Docker subscriptions starting from November 2, 2020. 

Anonymous free users will be limited to 100 pulls per six hours, and authenticated free users will be limited to 200 pulls per six hours. Docker Pro and Team subscribers can pull container images from Docker Hub without restriction as long as the quantities are not excessive or abusive.

In this article, we’ll take a look at determining where you currently fall within the rate limiting policy using some command line tools.

Determining your current rate limit

Requests to Docker Hub now include rate limit information in the response headers for requests that count towards the limit. These are named as follows:

RateLimit-Limit
RateLimit-Remaining

The RateLimit-Limit header contains the total number of pulls that can be performed within a six-hour window. The RateLimit-Remaining header contains the number of pulls remaining in the six-hour rolling window.

Let’s take a look at these headers using the terminal. But before we can make a request to Docker Hub, we need to obtain a bearer token. We will then use this bearer token when we make requests to a specific image using curl.

Anonymous Requests

Let’s first take a look at finding our limit for anonymous requests. 

The following command makes a request to auth.docker.io for an authentication token for the ratelimitpreview/test image and saves that token in an environment variable named TOKEN. You’ll notice that we do not pass a username and password as we will for authenticated requests.

$ TOKEN=$(curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

Now that we have a TOKEN, we can decode it and take a look at what’s inside. We’ll use the jwt tool to do this. You can also paste your TOKEN into the online tool at jwt.io.

$ jwt decode $TOKEN
Token header
------------
{
  "typ": "JWT",
  "alg": "RS256"
}

Token claims
------------
{
  "access": [
    {
      "actions": [
        "pull"
      ],
      "name": "ratelimitpreview/test",
      "parameters": {
        "pull_limit": "100",
        "pull_limit_interval": "21600"
      },
      "type": "repository"
    }
  ],
  ...
}

Under the Token claims section, you see a pull_limit and a pull_limit_interval. These values are relative to you as an anonymous user and the image being requested. In the above example, we can see that the pull_limit is set to 100 and the pull_limit_interval is set to 21600, which is the length of the limit window in seconds (six hours).
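If the jwt tool isn’t installed, the claims segment can also be inspected with standard tools. The following is a sketch, assuming base64 is available; the jwt_claims function name is our own, not part of any Docker tooling. JWT segments are unpadded base64url, so the URL-safe alphabet has to be mapped back to standard base64 and the "=" padding restored before base64 -d will accept it:

```shell
#!/bin/sh
# Decode the claims (the second dot-separated segment) of a JWT without
# the jwt CLI. Segments are unpadded base64url, so we map the URL-safe
# alphabet back to standard base64 and restore "=" padding first.
# The function name is our own.
jwt_claims() {
    payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
    pad=$(( (4 - ${#payload} % 4) % 4 ))
    i=0
    while [ "$i" -lt "$pad" ]; do payload="${payload}="; i=$((i + 1)); done
    printf '%s\n' "$payload" | base64 -d
}

# Demonstrate with a throwaway header.claims.signature token:
CLAIMS='{"pull_limit":"100","pull_limit_interval":"21600"}'
TOKEN="x.$(printf '%s' "$CLAIMS" | base64 | tr -d '=\n' | tr '/+' '_-').y"
jwt_claims "$TOKEN"   # prints the CLAIMS JSON back
```

To pretty-print the decoded claims, pipe the result through jq, as the article does elsewhere.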

Now make a request for the test image, ratelimitpreview/test, passing the TOKEN from above.

NOTE: The following curl command emulates a real pull and therefore will count as a request. Please run this command with caution.

$ curl -v -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1 | grep RateLimit

< RateLimit-Limit: 100;w=21600
< RateLimit-Remaining: 96;w=21600

The output shows that our RateLimit-Limit is set to 100 pulls every six hours – as we saw in the output of the JWT. We can also see that the RateLimit-Remaining value tells us that we now have 96 remaining pulls for the six hour rolling window. If you were to perform this same curl command multiple times, you would observe the RateLimit-Remaining value decrease.
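The header values share the form `<count>;w=<window-seconds>`. If you want to use them in a script, a small POSIX shell helper can split a captured header line into the pull count and the window; this is a sketch, and the function name is our own:

```shell
#!/bin/sh
# Split a RateLimit header line such as "RateLimit-Remaining: 96;w=21600"
# into its count and window-in-seconds. Works with or without curl's
# leading "< " marker. The function name is our own.
parse_ratelimit() {
    value=${1#*: }        # drop the header name  -> "96;w=21600"
    count=${value%%;*}    # text before the ";"   -> "96"
    window=${value##*w=}  # text after "w="       -> "21600"
    echo "$count $window"
}

parse_ratelimit "RateLimit-Limit: 100;w=21600"       # prints: 100 21600
parse_ratelimit "< RateLimit-Remaining: 96;w=21600"  # prints: 96 21600
```

Dividing the window by 3600 confirms the six-hour interval (21600 / 3600 = 6).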

Authenticated requests

For authenticated requests, we need to update our token to be one that is authenticated. Make sure you replace username:password with your Docker ID and password in the command below.

$ TOKEN=$(curl --user 'username:password' "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

Below is the decoded token we just retrieved.

$ jwt decode $TOKEN
Token header
------------
{
  "typ": "JWT",
  "alg": "RS256"
}

Token claims
------------
{
  "access": [
    {
      "actions": [
        "pull"
      ],
      "name": "ratelimitpreview/test",
      "parameters": {
        "pull_limit": "200",
        "pull_limit_interval": "21600"
      },
      "type": "repository"
    }
  ],
  ...
}

The authenticated JWT contains the same fields as the anonymous JWT, but now the pull_limit value is set to 200, which is the limit for authenticated free users.

Let’s make a request for the ratelimitpreview/test image using our authenticated token.

NOTE: The following curl command emulates a real pull and therefore will count as a request. Please run this command with caution.

$ curl -v -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1 | grep RateLimit

< RateLimit-Limit: 200;w=21600
< RateLimit-Remaining: 176;w=21600

You can see that our RateLimit-Limit value has risen to 200 per six hours and our remaining pulls are at 176 for the next six hours. Just like with an anonymous request, if you were to perform this same curl command multiple times, you would observe the RateLimit-Remaining value decrease.
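The same two-step flow works for any public Docker Hub repository; only the repository name in the two URLs changes. The following sketch derives both URLs from an image name so the check can be pointed at your own repositories (the helper names are our own):

```shell
#!/bin/sh
# Build the two URLs used in the article for an arbitrary public image.
# The function names are our own, not part of any Docker tooling.
auth_url() {
    printf 'https://auth.docker.io/token?service=registry.docker.io&scope=repository:%s:pull\n' "$1"
}
manifest_url() {
    printf 'https://registry-1.docker.io/v2/%s/manifests/latest\n' "$1"
}

# Putting it together (this performs a real pull request and therefore
# counts against your limit -- run sparingly):
#   TOKEN=$(curl -s "$(auth_url ratelimitpreview/test)" | jq -r .token)
#   curl -sv -H "Authorization: Bearer $TOKEN" \
#        "$(manifest_url ratelimitpreview/test)" 2>&1 | grep RateLimit
auth_url "ratelimitpreview/test"
manifest_url "ratelimitpreview/test"
```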

Error messages

When you have reached your Docker pull rate limit, the response will have an HTTP status code of 429 and include the message below.

HTTP/1.1 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: application/json
Retry-After: 21600

{
  "errors": [{
    "code": "DENIED",
    "message": "You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
  }]
}
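For scripts and CI pipelines, the Retry-After header indicates how many seconds to back off before retrying. A minimal sketch for extracting it from a captured response (the function name is our own):

```shell
#!/bin/sh
# Pull the Retry-After value (in seconds) out of a captured HTTP
# response read on stdin, defaulting to 0 if the header is absent.
# The function name is our own.
retry_after_seconds() {
    awk -F': ' 'tolower($1) == "retry-after" { s = $2 } END { print s + 0 }'
}

response='HTTP/1.1 429 Too Many Requests
Retry-After: 21600'

printf '%s\n' "$response" | retry_after_seconds   # prints: 21600
```

A pipeline could then sleep for that many seconds before retrying the pull, instead of hammering the registry and staying rate limited.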

Conclusion

In this article we took a look at determining the number of image pulls allowed based on whether we are an authenticated or anonymous user. Anonymous free users will be limited to 100 pulls per six hours, and authenticated free users will be limited to 200 pulls per six hours. If you would like to avoid rate limits completely, you can purchase or upgrade to a Pro or Team subscription; subscription details and upgrade information are available at https://docker.com/pricing.

For more information and common questions, please read our docs page and FAQ. And as always, please feel free to reach out to us on Twitter (@docker) or to me directly (@pmckee).

To get started using Docker, sign up for a free Docker account and take a look at our getting started guide.
The post Checking Your Current Docker Pull Rate Limits and Status appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

What you need to know about upcoming Docker Hub rate limiting

On August 13th, we announced the implementation of rate limiting for Docker container pulls for some users. Beginning November 2, Docker will phase in limits on container pull requests for anonymous and free authenticated users. The limits will be gradually reduced over a number of weeks until the final levels are reached: anonymous users limited to 100 container pulls per six hours and free users limited to 200 container pulls per six hours. All paid Docker accounts (Pro, Team, or Legacy subscribers) are exempt from rate limiting.

The rationale behind the phased implementation periods is to allow our anonymous and free tier users and integrators to see the places where anonymous CI/CD processes are pulling container images. This will allow Docker users to address the limitations in one of two ways: upgrade to an unlimited Docker Pro or Docker Team subscription, or adjust application pipelines to accommodate the container image request limits. After a lot of thought and discussion, we’ve decided on this gradual, phased rollout over the upcoming weeks instead of an abrupt implementation of the policy. An up-to-date status update on rate limitations is available at https://www.docker.com/increase-rate-limits.

Docker users can get an up-to-date view of their usage limits and updated status messages in the CLI, in terms of querying for current pulls used as well as header messages returned from Docker Hub. This blog post walks developers through how they can access their current account usage as well as understanding the header messages. And finally, Docker users can avoid rate limits completely by upgrading to a Pro or Team subscription: subscription details and upgrade information is available at https://docker.com/pricing. And open source projects can apply for a sponsored no-cost Docker account by filling out this application.

These inspiring small and medium businesses are helping the world navigate COVID-19 together

No matter where you are in the world, we’ve all had to adapt our routines to navigate the “new normal” brought on by COVID-19. We’ve been inspired by the ways businesses and organizations have responded, from accelerating medical research using the latest AI technology to enabling remote healthcare to protect doctors and their patients. As Google Cloud small and medium business (SMB) customers keep innovating to support governments and organizations around the world, we share some of their stories to inspire you in the fight against this virus.

Giving millions the information they urgently need

COVID-19 triggered an overwhelming need for information, flooding healthcare support centers with calls. At the same time, social distancing measures disrupted legacy customer support systems and forced call center staff to find new, safer ways to provide continuous support to those in need. Solutions were needed to keep the public informed, and Landbot.io and Clustaar promptly partnered with entrepreneurs, businesses, and governmental organizations to help.

Spanish startup Landbot helps people find what they need, quickly and easily, when browsing a website. A no-code solution, Landbot is easy to implement on both the web and popular messaging platforms, and gives users an interactive, adaptive messaging thread that uses conversational prompts to understand and deliver the information they need. Since the start of the pandemic, organizations all over the world—from online pharmacies in Ireland to hospitals in the US to official COVID-19 communication channels in Syria—have used Landbot at cost price or for free to supply the public with urgent information. So far, Landbot has automated 280 million messages to respond to urgent requests from more than 20 million unique users. To make this possible, Google Cloud for Startups supports Landbot.io with free credits to grow and continue developing more resources using the same tools and infrastructure used to build Google.

At the same time, in France, Clustaar is reducing the pressure on customer support lines using front-line chatbots, designed to handle the high volumes of medical and political queries on local government and media websites. With support from the Google Cloud for Startups program, Clustaar has scaled its architecture to handle more than 8 million queries from more than 1.5 million people.

Making remote healthcare and medical research possible

For our most vulnerable, who need to monitor their health more often due to COVID-19 risks, lockdown has posed a new challenge: regular visits to the doctor were no longer recommended. When diabetes patients were advised to self-isolate to avoid exposure to the virus, SocialDiabetes made it possible for them to continue being monitored 24/7 from home. The app, which is free to download, monitors and diagnoses diabetes in real time by integrating with glucometers to automatically collect blood glucose data from patients. With their permission, the app can also send this information to doctors so they can continue to provide recommendations and support their patients remotely. SocialDiabetes includes a video conference and chat system to make it easier for doctors and patients to connect remotely. With support from Google Cloud for Startups, SocialDiabetes has made this important platform free for healthcare professionals to use during the COVID-19 pandemic.

Another point of concern for the medical community is that innovation across biomedicine and other disease areas aside from COVID-19 risked falling behind due to the closure of research labs, delayed clinical trials, and reallocated scientific funding. To help, UK-based AI startup Biorelate has opened up free access to its lead product, Galactic AI, a browser-ready deep search tool that helps researchers to make better use of existing biomedical research data from their desks. Google Kubernetes Engine helps to scale the solution, which auto-ingests more than 30 million biomedical research texts from a wide range of sources to reveal hidden insights by curating and connecting data from global scientific output, quickly. The company is now rolling out a long-term plan to register more researchers focused on drug discovery and to keep Galactic AI open for academic researchers for as long as possible.

Keeping the public informed on the facts

Meanwhile, being exposed to an overload of information about COVID-19 brings its own kinds of challenges to the general public. Between January and April 2020, 100 million social media posts and more than 25 million long-form documents, such as news articles and blog posts, were published about COVID-19. Amongst them were not only important recommendations published by health experts, but also conspiracy theories and misinformation with potential to put lives at risk. This surge in media and public interest presented Logically, an AI-powered platform fighting online misinformation, with a workload 10 times greater than it processes in a typical month. Faced with the challenge of exponential demand, Google Cloud connected Logically with Searce, a Global Google Cloud Premier Partner, to quickly scale up using Google Kubernetes Engine (GKE). Searce helped implement a combination of preemptible and on-demand node pools on Google Cloud to scale cost-effectively. In the new architecture, Logically can offer a high degree of availability to users and public sector partners looking to fact-check online information about COVID-19. To help tackle the spread of false information about the pandemic and improve the information landscape for more than a billion people around the world, the platform has identified half a million false posts and articles, and alerted partners to 13 million instances of bot involvement on social media.

Responding to the pandemic without coding

Faces Advendurance is a South African company that organizes adventure and endurance sporting events. Being able to participate in such events represents a normalcy that many have sought throughout the pandemic, and as the virus spread, the company focused on making participation safe. They turned to AppSheet, Google Cloud’s no-code application development platform, in order to quickly build new registration and logistics processes. Hennie Scheepers, Information Systems Manager at Faces Advendurance, does not consider himself a programmer. But with AppSheet, he quickly built an app that imports data into Google Sheets from race participants who have pre-registered online through their entry platform, and that volunteers can use on mobile phones to manage participants as they arrive at the event—all things that made it that much easier to comply with COVID regulations limiting the number of people at events or in race batches. Faces Advendurance uses an RFID (Radio-Frequency Identification) timing system in which race participants get a tag with a tracking code that is automatically scanned by RFID readers as participants cross the finish line. Participants’ results automatically update in Sheets, which makes results available in the app in real time for race organizers. “It could hardly have gone better!” Scheepers said of a September 2020 event. “We received so many compliments from participants about the new registration system.”

The AppSheet team has been active in pandemic response in other ways, including partnering with USMEDIC, a provider of comprehensive equipment maintenance solutions in the healthcare industry, and other companies to build and deploy a medical equipment tracking and management solution to support healthcare organizations in their COVID-19 response.

Working collaboratively to stop the spread of the virus

Under the coordination of the Comité Stratégique de Filière Mode & Luxe, French textile industries gathered to tackle a shortage of masks and surgical gowns by converting their production lines, even while maintaining revenue streams and employment in the face of an emergent economic downturn. Supporting them is Savoir Faire Ensemble, a web platform where professionals able to produce masks and surgical gowns could register for free, and those in need of these products could easily order them directly from suppliers during the peak of the pandemic. Nearly 1,500 firms joined Savoir Faire Ensemble, and more than 1,500 Google Accounts were set up for free to support their work during the pandemic. Using Google Workspace collaboration and communication tools, the consortium coordinates the work of suppliers and answers questions from buyers and from potential new members. Sheets is consulted by hundreds of members every day, who use it to view and keep track of all orders coming in from the consortium’s website. Savoir Faire Ensemble is now the biggest textile and clothing consortium in France, and it has produced 90 million masks and 12 million surgical coats since the start of the pandemic.

In Italy, COVID-19 also highlighted the need for new protocols, procedures, and tools to enable specialists across the country to collaborate in real time. To help emergency care units achieve this goal, a national project was set in motion by SIAARTI, the Italian Society for Anesthesia, Analgesia, Resuscitation, and Intensive Care, which develops guidelines and clinical protocols in these fields. With support from Biotest Italia, the Society was able to establish a multifunctional communication platform capable of grouping hospitals in the same area, regionally or nationally, to share COVID-19 knowledge, starting with adoptable therapeutic protocols. Using Google Currents to exchange information, Google Forms to collect data analyzed with Google Data Studio, and Google Meet to provide training and host meetings, SIAARTI created a repository for COVID-19 guidelines and articles and enabled the safe sharing of protocols and other sensitive data amongst Italian health practitioners.

Supporting the scientific community in the search for a cure

COVID-19 may have brought the world to a virtual halt, but the scientific community is in a race against time to publish scientific results related to the virus and unlock as much knowledge as possible to treat those in need. Supporting these efforts is a biomedical research discovery tool launched by UK-based company Causaly. It unlocks key evidence in research papers, faster, by applying machine learning to global scientific literature, supporting speedy new predictions in biomedical science. Causaly’s AI platform enables rapid identification of all previously reported drugs for the betacoronavirus genus, and uncovers relationships that wouldn’t be obvious in a traditional literature review search. Using Google Compute Engine to run its natural language processing pipeline and host the graph database containing all the analysis results, the platform uses artificial intelligence to rapidly read, interpret, and surface evidence from 30 million biomedical publications in seconds, enabling researchers to not only rapidly map epidemiology data, biomarkers, genes, and molecular targets, but also identify potential treatment options. As of April 2020, Causaly’s AI platform had analyzed 40,000 COVID-19 papers made public as part of CORD-19, the COVID-19 Open Research Dataset, and 30 million existing biomedical publications. From this, it has identified 250 compounds with the highest promise for further COVID-19 research treatments, and made the information available for immediate download to aid researchers. Causaly’s dataset has also been provided to the Global Health Drug Discovery Institute (GHDDI) to support COVID-19 research in the future.

Meanwhile, here at Google Cloud, we continue working to ensure that these inspiring companies and many others have the tools they need to bring their innovative solutions to the world. We are humbled and inspired to join forces with our customers and to support their efforts throughout COVID-19 and beyond.
Quelle: Google Cloud Platform

Amazon FSx is now available in the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD

Amazon FSx, a fully managed service that makes it easy to launch and run feature-rich, high-performance file systems, is now available in the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD. With Amazon FSx, customers can take advantage of the rich feature sets and fast performance of widely used open-source and commercially licensed file systems, while avoiding time-consuming administrative tasks such as hardware provisioning, software configuration, patching, and backups.
Quelle: aws.amazon.com