Performance and cost optimization best practices for machine learning

Cloud computing provides the power and speed needed for machine learning (ML) and makes it easy to scale up and down. However, it also means that costs can spin out of control if you don't plan ahead, a risk that is especially acute now that businesses are particularly cost conscious. To use Google Cloud effectively for ML, it's important to follow best practices that optimize for both performance and cost. To help you do just that, we published a new set of best practices, based on our experience working with advanced ML customers, on how you can improve the performance and reduce the costs of your ML workloads on Google Cloud, from experimentation to production.

The guide covers various Smart Analytics and Cloud AI services across the phases of the ML process, as illustrated in the diagram below:

Experimentation with AI Platform Notebooks
Data preparation with BigQuery and Dataflow
Training with AI Platform Training
Serving with AI Platform Prediction
Orchestration with AI Platform Pipelines

We also provide best practices for monitoring performance and managing the cost of ML projects with Google Cloud tools. Are you ready to optimize your ML workloads? Check out the Machine Learning Performance and Cost Optimization Best Practices to get started.

Acknowledgements: We'd like to thank Andrew Stein (Product Manager, Cloud Accelerators), Chad Jennings (Product Manager, BigQuery), Henry Tappen (Product Manager, Cloud AI), Karthik Ramachandran (Product Manager, Cloud AI), Lak Lakshmanan (Head of Data Analytics and AI Solutions), Mark Mirchandani (Developer Advocacy, Cost Management), Kannappan Sirchabesan (Strategic Cloud Engineer, Data and Analytics), Mehran Nazir (Product Manager, Dataflow), and Shan Kulandaivel (Product Manager, Dataflow) for their contributions to the best practice documentation.
Source: Google Cloud Platform

Prioritize datacenter discovery and readiness assessments to accelerate cloud migration

Cloud migrations are an effective way to drive operational efficiencies and to shift capital expenses to operational expenses. Successful cloud migrations are rooted in a bias towards action and are executed with urgency against the triggers that need immediate attention. In our experience, migration projects that start with a deep understanding of the IT landscape are best positioned to mitigate complexities. Leaders who set actionable project goals and timelines, bring teams together and encourage solution thinking, and lean in to track progress against well-defined objectives are the most effective at helping their organizations realize their cloud migration targets.

In the kick-off blog of this series, we listed prioritizing assessments as one of our top three recommendations to accelerate your cloud migration journey. Comprehensive cloud migration assessments should cover the entire fleet and help you arrive at key decisions related to candidate applications, optimal resource allocation, and cost projections. You'll want to understand your applications and their on-premises performance, uncover dependencies and interrelated systems, and estimate cloud readiness and run cost. This analysis is critical to fully understand what you are working with and to plan proactively for how to best manage these resources in the cloud. Further, in our experience with customers, inadequately planned migrations, especially those that don't focus on optimizing infrastructure resources and cost levers such as compute, storage, licensing, and benefits including Azure Hybrid Benefit and Software Assurance, often result in long-term sticker shock.

Prioritizing assessments is also important to keep your IT and financial organizations aligned around how to transform your business with Azure while keeping the cost structure lean to weather changing market conditions. We shared our guidance in this cloud migration blog to help you understand the financial considerations for cloud migrations and best practice guidance for managing cloud costs.

Comprehensive discovery with Azure Migrate and Movere

The discovery process can be slow and daunting, especially for enterprises that host hundreds of applications and resources across multiple datacenters. Arriving at an accurate baseline of your IT infrastructure is tedious and often requires you to connect disparate sets of information across various tools, sub-systems, and business teams. Leverage Azure Migrate or Movere to automate this process and quickly perform discovery and assessments of your on-premises infrastructure, databases, and applications. Movere is available via the United States and Worldwide Solutions Assessment program. Azure Migrate is available with your Azure subscription at no additional cost.

Azure Migrate discovery and assessment capabilities are agentless and offer the following key features:

Comprehensive, at-scale discovery features for Linux and Windows servers, running on hypervisor platforms such as VMware vSphere or Microsoft Hyper-V, public clouds such as AWS or GCP, or bare metal servers.
Discovery of infrastructure configuration and actual resource utilization in terms of cores, memory, disks, IOPS, and more so that you can right-size and optimize your infrastructure based on what you actually need to meet the desired application performance in Azure. Discovery of IOPS characteristics over a period of time results in an accurate prediction of resources that your applications need in the cloud.
Azure assessment reports that help you understand the various Azure offers and SKUs and the associated cost of running your applications in Azure. You can customize assessments for different scenarios and compare the results to make decisions related to target regions, EA pricing, reserved instances, SKU consolidation, and more.
Features that help you inventory your applications and software components installed on your servers – this capability is crucial in understanding your application vendor estate and evaluating compatibility, end-of-support, and more.
Agentless dependency mapping so that you can visualize dependencies across different tiers of an application or across applications – this feature helps you design high-confidence migration waves and to mitigate any complexities upfront.

CMDBs, ITAM, and management tools enrich discovery data

Discovery and assessment results are important, but intersecting them with your existing on-premises data sources unlocks powerful insights and drives better decision-making. Great data sources to start with include your configuration management database (CMDB), IT asset management (ITAM) systems, Active Directory, management tools, and monitoring systems. Merging these rich IT data repositories with discovery and assessment reports broadens understanding across different dimensions and renders a more complete and accurate view of your business units, IT assets, and business applications.

Use Azure cost estimations from the assessment output and allocate the projections to various business teams to better recognize their future budgetary requirements. Compare Azure cost against the current spend to estimate potential cloud savings your teams can accrue by moving to Azure.
Identify machines that have reached their OS end-of-support and reference your CMDB to identify associated application owners and teams to prioritize migrations to Azure.
Filter out machines with high CPU and memory utilization, and correlate with performance events in your monitoring systems to identify applications with capacity constraints. These applications are ideal candidates to benefit from Azure's autoscaling and virtual machine scale set capabilities.
Identify related systems using the Azure Migrate dependency mapping feature, and map associated owners from your CMDBs and Azure AD to identify move group owners.
Identify servers with zero to low usage and work with owning business units on decommissioning options.
Understand the recommended migration window by mapping RTO/RPO information from your private data sources.
Understand your storage IOPS and projected application growth to select the appropriate Azure storage and disk SKUs.

These are just a few samples of the many insights that can be surfaced by unifying discovery and assessment results with IT data sources.
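As a hedged illustration of this kind of join, here is a minimal Node.js sketch that flags end-of-support machines in an assessment export and maps them to owners from a CMDB export. The file names and column names (MachineName, OperatingSystem, Application, Owner) are assumptions made for this example; adapt them to the shape of your actual exports.

const fs = require('fs');

// Naive CSV reader: fine for a sketch, not for exports with quoted commas.
function readCsv(path) {
  const [header, ...rows] = fs.readFileSync(path, 'utf8').trim().split('\n');
  const cols = header.split(',');
  return rows.map(r => Object.fromEntries(r.split(',').map((value, i) => [cols[i], value])));
}

// Hypothetical export files from Azure Migrate and your CMDB.
const assessment = readCsv('azure-migrate-assessment.csv'); // MachineName, OperatingSystem, ...
const cmdb = readCsv('cmdb-export.csv');                    // MachineName, Application, Owner, ...

// Index CMDB rows by machine name for a simple lookup join.
const cmdbByMachine = new Map(cmdb.map(row => [row.MachineName, row]));

// Example trigger: operating systems that have reached end of support.
const endOfSupport = assessment
  .filter(row => /Windows Server 2008/.test(row.OperatingSystem))
  .map(row => ({ ...row, ...(cmdbByMachine.get(row.MachineName) || {}) }));

console.table(endOfSupport.map(({ MachineName, OperatingSystem, Application, Owner }) =>
  ({ MachineName, OperatingSystem, Application, Owner })));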

Data-driven progress tracking

CIOs and leaders who are on point for driving cloud migration initiatives should periodically track progress, identify and communicate migration priorities, and bring together stakeholders to ensure that teams on the ground are making progress. Dashboards that track progress on the projects and the quality of insights and actions being generated are effective tools to stay focused.

Some important dimensions that dashboards should include are datacenter cost trends, fleet size in terms of physical hosts, virtual server count, provisioned storage, OS distribution, VM density by host, and resource utilization in terms of cores, memory, and storage. Additionally, views should help quickly identify important cloud migration triggers such as hardware that is coming up for refresh, OS versions that are reaching end-of-support, and business units that are constrained by capacity.

Here is a sample Power BI dashboard that an Azure customer is using to track the progress of their cloud assessment and migration project:


Next steps

Investigate the Microsoft Cloud Adoption Framework for Azure to align your cloud migration priorities and objectives before you start planning and ensure a more successful migration.
Make sure you start your journey right by understanding how to build your migration plan with Azure Migrate and reviewing the best practices for creating assessments.
For expert assistance from Microsoft or our qualified partners, check out our Cloud Solution Assessment offerings or join the Azure Migration Program (AMP).
To learn more and to get started, visit the Azure Migration Center.

Coming up next, we’ll explore a big topic that’s key to succeeding in your migrations: anticipating and mitigating complexities. We’ll talk about the organizational challenges and decisions you’ll need to make as you start planning and executing your cloud migrations.

Share your feedback

Please share your experiences or thoughts as this series comes together in the comments below—we appreciate your feedback.
Source: Azure

How To Setup Your Local Node.js Development Environment Using Docker – Part 2

In part I of this series, we took a look at creating Docker images and running containers for Node.js applications. We also took a look at setting up a database in a container and how volumes and networks play a part in setting up your local development environment.

In this article we’ll take a look at creating and running a development image where we can compile, add modules and debug our application all inside of a container. This helps speed up the developer setup time when moving to a new application or project. 

We’ll also take a quick look at using Docker Compose to help streamline the processes of setting up and running a full microservices application locally on your development machine.

Fork the Code Repository

The first thing we want to do is download the code to our local development machine. Let’s do this using the following git command:

git clone git@github.com:pmckeetx/memphis.git

Now that we have the code local, let’s take a look at the project structure. Open the code in your favorite IDE and expand the root level directories. You’ll see the following file structure.

├── docker-compose.yml
├── notes-service
│ ├── config
│ ├── node_modules
│ ├── nodemon.json
│ ├── package-lock.json
│ ├── package.json
│ └── server.js
├── reading-list-service
│ ├── config
│ ├── node_modules
│ ├── nodemon.json
│ ├── package-lock.json
│ ├── package.json
│ └── server.js
├── users-service
│ ├── Dockerfile
│ ├── config
│ ├── node_modules
│ ├── nodemon.json
│ ├── package-lock.json
│ ├── package.json
│ └── server.js
└── yoda-ui
├── README.md
├── node_modules
├── package.json
├── public
├── src
└── yarn.lock

The application is made up of a couple of simple microservices and a front-end written in React.js. It uses MongoDB as its data store.

In part I of this series, we created a couple of Dockerfiles for our services and also took a look at running them in containers and connecting them to an instance of MongoDB running in a container.

Local development in Containers

There are many ways to use Docker and containers for local development, and a lot depends on your application structure. We'll start with the very basics and then progress to more complicated setups.

Using a Development Image

One of the easiest ways to start using containers in your development workflow is to use a development image. A development image is an image that has all the tools that you need to develop and compile your application with.

In this article we are using Node.js, so our image should have Node.js installed as well as npm or yarn. Let's create a development image that we can use to run our Node.js application.

Development Dockerfile

Create a local directory on your development machine that we can use as a working directory to save our Dockerfile and any other files that we’ll need for our development image.

$ mkdir -p ~/projects/dev-image

Create a Dockerfile in this folder and add the following commands.

FROM node:12.18.3

RUN apt-get update && apt-get install -y \
    nano \
    vim

We start off by using the node:12.18.3 official image. I’ve found that this image is fine for creating a development image. I like to add a couple of text editors to the image in case I want to quickly edit a file while inside the container.

We did not add an ENTRYPOINT or CMD to the Dockerfile because we will rely on the base image’s ENTRYPOINT and we will override the CMD when we start the image.
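One optional variation, which this walkthrough does not require, is to bake a default shell command into the Dockerfile so that you don't have to name a command when you run the image. This is just a sketch of that alternative:

FROM node:12.18.3

# Text editors for quick edits inside the container.
RUN apt-get update && apt-get install -y \
    nano \
    vim

# Optional: default to a bash shell so `docker run -it node-dev-image`
# drops you into a shell. The base image's ENTRYPOINT still applies.
CMD ["bash"]

With this variation, the docker run command below still behaves the same way, because the trailing bash argument simply overrides the default CMD.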

Let’s build our image.

$ docker build -t node-dev-image .

And now we can run it.

$ docker run -it --rm --name dev -v $(pwd):/code node-dev-image bash

You will be presented with a bash command prompt. Now, inside the container we can create a JavaScript file and run it with Node.js.

Run the following commands to test our image.

$ cat <<EOF > index.js
console.log( 'Hello from inside our container' )
EOF
$ node index.js

Nice. It appears that we have a working development image. We can now do everything that we would do in our normal bash terminal.

If you ran the above Docker command inside of the notes-service directory, then you will have access to the code inside of the container. 

You can start the notes-service by simply navigating to the /code directory and running npm run start.
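For example, if you started the container from the notes-service directory as described above, the steps inside the container look something like this (npm install is only needed if the node_modules folder isn't already populated):

$ cd /code
$ npm install
$ npm run start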

Using Compose to Develop locally

The notes-service project uses MongoDB as its data store. If you remember from Part I of this series, we had to start the Mongo container manually and connect it to the same network that our notes-service is running on. We also had to create a couple of volumes so we could persist our data across restarts of our application and MongoDB.

In this section, we'll create a Compose file to start our notes-service and MongoDB with one command. We'll also set up the Compose file to start the notes-service in debug mode so that we can connect a debugger to the running node process.

Open the notes-service in your favorite IDE or text editor and create a new file named docker-compose.dev.yml. Copy and paste the below contents into the file.

version: '3.8'
services:
  notes:
    build:
      context: .
    ports:
      - 8080:8080
      - 9229:9229
    environment:
      - SERVER_PORT=8080
      - DATABASE_CONNECTIONSTRING=mongodb://mongo:27017/notes
    volumes:
      - ./:/code
    command: npm run debug

  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb

volumes:
  mongodb:
  mongodb_config:

This compose file is super convenient because now we do not have to type all the parameters to pass to the docker run command. We can declaratively do that in the compose file.

We are exposing port 9229 so that we can attach a debugger. We are also mapping our local source code into the running container so that we can make changes in our text editor and have those changes picked up in the container.
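For the command: npm run debug line in the compose file to work, the notes-service needs a matching "debug" script in its package.json. As a rough sketch (the project's actual scripts and nodemon.json configuration may differ), such a script typically looks something like this:

"scripts": {
  "start": "node server.js",
  "debug": "nodemon --inspect=0.0.0.0:9229 server.js"
}

The --inspect=0.0.0.0:9229 flag tells Node.js to listen for debugger connections on all interfaces inside the container, which is what makes the 9229 port mapping above useful.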

Another really cool feature of using a compose file is that we get service resolution set up automatically, so we can refer to other services by name. That lets us use "mongo" in our connection string, because that is what we named the MongoDB service in the compose file.
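For illustration, here is a minimal sketch of how a service might read that connection string and connect with the official MongoDB Node.js driver. The actual notes-service code may look different, so treat the helper name and options here as assumptions.

const { MongoClient } = require('mongodb');

// "mongo" in the URI resolves to the mongo service name defined in the compose file.
// DATABASE_CONNECTIONSTRING is the environment variable set in docker-compose.dev.yml.
const uri = process.env.DATABASE_CONNECTIONSTRING || 'mongodb://mongo:27017/notes';

// Hypothetical helper: connect once and return the database handle.
async function connectToDatabase() {
  const client = new MongoClient(uri, { useUnifiedTopology: true });
  await client.connect();
  // With no argument, db() uses the database name from the connection string ("notes").
  return client.db();
}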

Let’s start our application and confirm that it is running properly.

$ docker-compose -f docker-compose.dev.yml up --build

We pass the "--build" flag so Docker will build our image and then start it.

If all goes well, you should see something similar to this:

Now let’s test our API endpoint. Run the following curl command:

$ curl --request GET --url http://localhost:8080/services/m/notes

You should receive the following response:

{"code":"success","meta":{"total":0,"count":0},"payload":[]}

Connecting a Debugger

We’ll use the debugger that comes with the Chrome browser. Open Chrome on your machine and then type the following into the address bar.

about:inspect

The following screen will open.

Click the “Open dedicated DevTools for Node” link. This will open the DevTools that are connected to the running node.js process inside our container.

Let’s change the source code and then set a breakpoint. 

Add the following code to the server.js file on line 19 and save the file. 

server.use( '/foo', (req, res) => {
  return res.json({ "foo": "bar" })
})

If you take a look at the terminal where our compose application is running, you’ll see that nodemon noticed the changes and reloaded our application.

Navigate back to the Chrome DevTools and set a breakpoint on line 20 and then run the following curl command to trigger the breakpoint.

$ curl --request GET --url http://localhost:8080/foo

Boom! You should see the code break on line 20, and now you can use the debugger just as you normally would. You can inspect and watch variables, set conditional breakpoints, view stack traces, and a bunch of other stuff.

Conclusion

In this article we took a look at creating a general development image that we can use pretty much like our normal command line. We also set up our compose file to map our source code into the running container and exposed the debugging port.

Resources

Getting Started with Docker: https://www.docker.com/get-started
Best practices for writing Dockerfiles: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ and https://www.docker.com/blog/speed-up-your-development-flow-with-these-dockerfile-best-practices/
Docker Desktop: https://docs.docker.com/desktop/
Docker Compose: https://docs.docker.com/compose/
Project skeleton samples: https://github.com/docker/awesome-compose

Source: https://blog.docker.com/feed/