AWS Systems Manager Now Supports Use of Parameter Store at Higher API Throughput

AWS Systems Manager Parameter Store now supports up to 1,000 requests per second. This lets you run applications that require high concurrent access to a large number of parameters. You can enable the higher throughput limit from the Parameter Store Settings tab. Once higher throughput is enabled for your account, you incur charges per API interaction; see the pricing page for details.
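For teams that prefer automation over the console, the same switch can be flipped programmatically. Below is a minimal sketch using boto3, assuming the documented Parameter Store setting ID /ssm/parameter-store/high-throughput-enabled; verify the ID against the current Systems Manager documentation for your region:

    import boto3

    ssm = boto3.client("ssm")

    # Opt this account and region in to the higher throughput limit.
    # Note: once enabled, Parameter Store API interactions are billed.
    ssm.update_service_setting(
        SettingId="/ssm/parameter-store/high-throughput-enabled",
        SettingValue="true",
    )

    # Read the setting back to confirm it took effect.
    setting = ssm.get_service_setting(
        SettingId="/ssm/parameter-store/high-throughput-enabled"
    )
    print(setting["ServiceSetting"]["SettingValue"])  # expect "true"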
Source: aws.amazon.com

RDO Stein Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Stein for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Stein is the 19th release from the OpenStack project, which is the work of more than 1200 contributors from around the world.
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/7/cloud/x86_64/openstack-stein/.
The RDO community project curates, packages, builds, tests, and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premises, public, or hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.
New and Improved
Interesting things in the Stein release include:

Ceph Nautilus is now the default version of Ceph within RDO. Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block-, and file-level storage. Within Nautilus, the Ceph Dashboard has gained a lot of new functionality, including support for multiple users and roles, SSO (SAMLv2) for user authentication, auditing support, a new landing page showing more metrics and health info, I18N support, and REST API documentation with Swagger API.

The extracted Placement service, used to track cloud resource inventories and usage to help other services effectively manage and allocate their resources, is now packaged as part of RDO. Placement has added the ability to target a candidate resource provider, making it easier to specify a host for workload migration; increased API performance by 50% for common scheduling operations; and simplified the code by removing unneeded complexity, easing future maintenance.

Other improvements include:

The TripleO deployment service, used to develop and maintain tooling and infrastructure to deploy OpenStack in production using OpenStack itself wherever possible, added support for Podman and Buildah for containers and container images. Open Virtual Network (OVN) is now the default network configuration, and TripleO now has improved composable network support, including L3 routed networks and IPv6 support.

Contributors
During the Stein cycle, we saw the following new RDO contributors:

Sławek Kapłoński
Tobias Urdin
Lee Yarwood
Quique Llorente
Arx Cruz
Natal Ngétal
Sorin Sbarnea
Aditya Vaja
Panda
Spyros Trigazis
Cyril Roelandt
Pranali Deore
Grzegorz Grasza
Adam Kimball
Brian Rosmaita
Miguel Duarte Barroso
Gauvain Pocentek
Akhila Kishore
Martin Mágr
Michele Baldessari
Chuck Short
Gorka Eguileor

Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all 74 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

yatin
Sagi Shnaidman
Wes Hayutin
Rlandy
Javier Peña
Alfredo Moralejo
Bogdan Dobrelya
Sławek Kapłoński
Alex Schultz
Emilien Macchi
Lon
Jon Schlueter
Luigi Toscano
Eric Harney
Tobias Urdin
Chandan Kumar
Nate Johnston
Lee Yarwood
rabi
Quique Llorente
Chandan Kumar
Luka Peschke
Carlos Goncalves
Arx Cruz
Kashyap Chamarthy
Cédric Jeanneret
Victoria Martinez de la Cruz
Bernard Cafarelli
Natal Ngétal
hjensas
Tristan de Cacqueray
Marc Dequènes (Duck)
Juan Antonio Osorio Robles
Sorin Sbarnea
Rafael Folco
Nicolas Hicher
Michael Turek
Matthias Runge
Giulio Fidente
Juan Badia Payno
Zoltan Caplovic
agopi
marios
Ilya Etingof
Steve Baker
Aditya Vaja
Panda
Florian Fuchs
Martin André
Dmitry Tantsur
Sylvain Baubeau
Jakub Ružička
Dan Radez
Honza Pokorny
Spyros Trigazis
Cyril Roelandt
Pranali Deore
Grzegorz Grasza
Bnemec
Adam Kimball
Haikel Guemar
Daniel Mellado
Bob Fournier
Nmagnezi
Brian Rosmaita
Ade Lee
Miguel Duarte Barroso
Alan Bishop
Gauvain Pocentek
Akhila Kishore
Martin Mágr
Michele Baldessari
Chuck Short
Gorka Eguileor

The Next Release Cycle
At the end of one release, focus shifts immediately to the next, Train, which has an estimated GA the week of 14-18 October 2019. The full schedule is available at https://releases.openstack.org/train/schedule.html.
Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 13-14 June 2019 for Milestone One and 16-20 September 2019 for Milestone Three.
Get Started
There are three ways to get started with RDO.
To quickly spin up a proof-of-concept cloud on limited hardware, try an All-In-One Packstack installation (see the sketch after this list). You can run RDO on a single node to get a feel for how it works.
For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.
Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.
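For the Packstack route mentioned above, here is a hedged sketch of an all-in-one install on a fresh CentOS 7 node, following the RDO Quickstart (repository and package names per the Stein release; consult the Quickstart for the current steps):

    # Enable the RDO Stein repository, update, and install Packstack.
    sudo yum install -y centos-release-openstack-stein
    sudo yum update -y
    sudo yum install -y openstack-packstack

    # Deploy all OpenStack services on this single node.
    sudo packstack --allinone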
Get Help
The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content, we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.
The #rdo channel on Freenode IRC is also an excellent place to find and give help.
We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net); however, we have a more focused audience within the RDO venues.
Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.
Join us in #rdo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.
Source: RDO

Azure Tips and Tricks – Become more productive with Azure

Today, we’re pleased to re-introduce a web resource called “Azure Tips and Tricks” that helps developers already using Azure learn something new within a couple of minutes. Since its inception in 2017, the collection has grown to more than 200 tips, as well as videos, conference talks, and several eBooks spanning the entire universe of the Azure platform, from App Services to containers and more. Featuring a new weekly tip and video, the series is designed to boost your productivity with Azure, and all tips are based on practical, real-world scenarios.

Figure 1: The Azure Tips and Tricks homepage.

With the new site, we’ve included the much-needed ability to navigate between Azure services, so that you can quickly browse your favorite categories.

Figure 2: The new Azure Tips and Tricks navigation capabilities.

There is also search functionality to help you quickly find what you are looking for.

Figure 3: The new Azure Tips and Tricks search function.

The site is also open source on GitHub, so anyone can help contribute to the site, ask questions, and jump in wherever they want. While you are on the page, go ahead and star us to keep up to date.

Figure 4: The Azure Tips and Tricks GitHub repo.

What are you waiting for? Visit the site and star the repo so that you don’t miss future updates, and make the most of the Azure platform’s constantly evolving services and features. I’ll also be presenting a session on the series at Microsoft Build on Monday, May 6th from 2:30-2:50pm. I hope to meet you in person.

Thanks for reading and keep in mind that you can learn more about Azure by following our official blog and Twitter account. You can also reach the author of this post on Twitter.
Source: Azure

Train and deploy state-of-the-art mobile image classification models via Cloud TPU

As organizations use machine learning (ML) more frequently in mobile and embedded devices, training and deploying small, fast, and accurate machine learning models becomes increasingly important. To help accelerate this process, we’ve published open-source Cloud TPU models to enable you and your data science team to train state-of-the-art mobile image classification models faster and at a lower cost.

For many IoT-focused businesses, it’s also essential to optimize both latency and accuracy, especially on low-power, resource-constrained devices. By leveraging a novel, platform-aware neural architecture search framework (MnasNet), we identified a model architecture that can outperform the previous state-of-the-art MobileNetV1 and MobileNetV2 models that were carefully built by hand. The new MnasNet model delivers nearly 1.8x faster inference (or 55% less latency) than the corresponding MobileNetV2 model while maintaining the same ImageNet top-1 classification accuracy.

How to train MnasNet on Cloud TPU

We specifically designed and optimized MnasNet to train as fast as we could make it on Cloud TPUs. The MnasNet model training source code is now available in the TensorFlow TPU GitHub repository. Using this code, you can benefit from both low training cost and fast inference speed when you train MnasNet on Cloud TPUs and export the trained model for deployment.

If you have not yet experimented with training models on Cloud TPUs, you might want to begin by following the QuickStart guide. Once you are up and running with Cloud TPUs, you can begin training an MnasNet model by executing a command of the form sketched at the end of this section. The model processes training data in TFRecord format, which can be created from input image collections via TensorFlow’s Apache Beam pipeline tool. You can find more details on how to use Cloud TPUs to train MnasNet in our tutorial.

To help you further tune your MnasNet model, we have published additional notes about our implementation along with a variety of suggested tuning parameters to accommodate different classification latency requirements.

How you can deploy via SavedModel or TensorFlow Lite

You can easily deploy models trained on Cloud TPUs to a variety of different platforms and devices. We have published pre-trained SavedModel files (mnasnet-a1 and mnasnet-b1) from ImageNet training runs to help you get started: you can use this MnasNet Colab to experiment with these pre-trained models interactively.

You can deploy your newly trained model by exporting it to TensorFlow Lite, converting the exported SavedModel into a *.tflite file. Next, you can optionally apply post-training quantization, a common technique that reduces model size while also providing up to 3x lower latency. These improvements are a result of smaller word sizes that enable faster computation and more efficient memory usage; quantization converts 32-bit floating-point numbers into more efficient 8-bit integers. Both steps are sketched at the end of this section. The open-source implementation provided in the Cloud TPU repository implements SavedModel export, TensorFlow Lite export, and TensorFlow Lite’s post-training quantization by default.
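For reference, here are hedged reconstructions of the snippets referenced in this section. First, a representative training invocation: the script name (mnasnet_main.py) and flag names are assumptions based on the MnasNet code in the tensorflow/tpu repository, so check the tutorial for the exact form.

    # Launch MnasNet training against a Cloud TPU. Script and flag names
    # are assumptions; see the Cloud TPU tutorial for the exact invocation.
    python mnasnet_main.py \
      --tpu=${TPU_NAME} \
      --data_dir=${DATA_DIR} \
      --model_dir=${MODEL_DIR} \
      --train_batch_size=1024 \
      --train_steps=109474

Converting the exported SavedModel into a *.tflite file uses the standard TensorFlow Lite converter; the paths below are placeholders:

    import tensorflow as tf

    # Convert a SavedModel export to a TensorFlow Lite flatbuffer.
    saved_model_dir = "/tmp/mnasnet/export"  # placeholder path
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    tflite_model = converter.convert()
    with open("mnasnet.tflite", "wb") as f:
        f.write(tflite_model)

Post-training quantization is then a converter option set before convert() (the Optimize.DEFAULT flag assumes a recent TensorFlow release):

    # Quantize 32-bit float weights to 8-bit integers, shrinking the model
    # and typically lowering latency on resource-constrained devices.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_quant_model = converter.convert()
    with open("mnasnet_quant.tflite", "wb") as f:
        f.write(tflite_quant_model)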
The Cloud TPU repository code also includes a default serving input function that decodes and classifies JPEG images; if your application requires custom input preprocessing, you should consider modifying this example to perform your own input preprocessing (for serving or for on-device deployment via TensorFlow Lite).

With this new open-source MnasNet implementation for Cloud TPU, it is easier and faster than ever before to train a state-of-the-art image classification model and deploy it on mobile and embedded devices. Check out our tutorial and Colab to get started.

Acknowledgements

Many thanks to the Googlers who contributed to this post, including Zak Stone, Xiaodan Song, David Shevitz, Barrett Williams, Russell Power, Adam Kerin, and Quoc Le.
Source: Google Cloud Platform