How To Set Up Your Local Node.js Development Environment Using Docker – Part 2

In part I of this series, we took a look at creating Docker images and running containers for Node.js applications. We also took a look at setting up a database in a container and how volumes and networks play a part in setting up your local development environment.

In this article we’ll take a look at creating and running a development image where we can compile, add modules and debug our application all inside of a container. This helps speed up the developer setup time when moving to a new application or project. 

We’ll also take a quick look at using Docker Compose to help streamline the processes of setting up and running a full microservices application locally on your development machine.

Fork the Code Repository

The first thing we want to do is download the code to our local development machine. Let’s do this using the following git command:

git clone git@github.com:pmckeetx/memphis.git

Now that we have the code local, let’s take a look at the project structure. Open the code in your favorite IDE and expand the root level directories. You’ll see the following file structure.

├── docker-compose.yml
├── notes-service
│   ├── config
│   ├── node_modules
│   ├── nodemon.json
│   ├── package-lock.json
│   ├── package.json
│   └── server.js
├── reading-list-service
│   ├── config
│   ├── node_modules
│   ├── nodemon.json
│   ├── package-lock.json
│   ├── package.json
│   └── server.js
├── users-service
│   ├── Dockerfile
│   ├── config
│   ├── node_modules
│   ├── nodemon.json
│   ├── package-lock.json
│   ├── package.json
│   └── server.js
└── yoda-ui
    ├── README.md
    ├── node_modules
    ├── package.json
    ├── public
    ├── src
    └── yarn.lock

The application is made up of a couple of simple microservices and a front-end written in React.js. It uses MongoDB as its datastore.

In part I of this series, we created a couple of Dockerfiles for our services and took a look at running them in containers and connecting them to an instance of MongoDB running in a container.

Local Development in Containers

There are many ways to use Docker and containers for local development, and a lot depends on your application structure. We’ll start with the very basics and then progress to more complicated setups.

Using a Development Image

One of the easiest ways to start using containers in your development workflow is to use a development image. A development image is an image that contains all the tools you need to develop and compile your application.

In this article we are using Node.js, so our image should have Node.js installed, as well as npm or yarn. Let’s create a development image that we can use to run our Node.js application.

Development Dockerfile

Create a local directory on your development machine that we can use as a working directory to save our Dockerfile and any other files that we’ll need for our development image.

$ mkdir -p ~/projects/dev-image

Create a Dockerfile in this folder and add the following commands.

FROM node:12.18.3
RUN apt-get update && apt-get install -y \
    nano \
    vim

We start off by using the node:12.18.3 official image. I’ve found that this image is fine for creating a development image. I like to add a couple of text editors to the image in case I want to quickly edit a file while inside the container.

We did not add an ENTRYPOINT or CMD to the Dockerfile because we will rely on the base image’s ENTRYPOINT and we will override the CMD when we start the image.

Let’s build our image.

$ docker build -t node-dev-image .

And now we can run it.

$ docker run -it --rm --name dev -v $(pwd):/code node-dev-image bash

You will be presented with a bash command prompt. Now, inside the container we can create a JavaScript file and run it with Node.js.

Run the following commands to test our image.

$ cat <<EOF > index.js
console.log( 'Hello from inside our container' )
EOF
$ node index.js

Nice. It appears that we have a working development image. We can now do everything that we would do in our normal bash terminal.

If you ran the above docker run command from inside the notes-service directory, you will have access to the service’s code inside the container.

You can start the notes-service by simply navigating to the /code directory and running npm run start.
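For example, from the bash prompt inside the container, something like the following should work (a quick sketch; re-running npm install ensures any native modules are built for the container’s platform rather than your host’s):

$ cd /code
$ npm install
$ npm run start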

Using Compose to Develop Locally

The notes-service project uses MongoDB as its data store. If you remember from Part I of this series, we had to start the Mongo container manually and connect it to the same network that our notes-service is running on. We also had to create a couple of volumes so we could persist our data across restarts of our application and MongoDB.
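To recap, that manual setup looked something like the following (a sketch; the exact network and volume names used in Part I may have differed):

$ docker network create notes-net
$ docker volume create mongodb
$ docker volume create mongodb_config
$ docker run -d --name mongo --network notes-net -v mongodb:/data/db -v mongodb_config:/data/configdb mongo:4.2.8

Compose lets us replace all of that, plus the docker run command for the service itself, with a single declarative file.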

In this section, we’ll create a Compose file to start our notes-service and MongoDB with one command. We’ll also set up the Compose file to start the notes-service in debug mode so that we can connect a debugger to the running Node process.

Open the notes-service in your favorite IDE or text editor and create a new file named docker-compose.dev.yml. Copy and paste the following into the file.

version: '3.8'
services:
  notes:
    build:
      context: .
    ports:
      - 8080:8080
      - 9229:9229
    environment:
      - SERVER_PORT=8080
      - DATABASE_CONNECTIONSTRING=mongodb://mongo:27017/notes
    volumes:
      - ./:/code
    command: npm run debug

  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb

volumes:
  mongodb:
  mongodb_config:

This compose file is super convenient because now we do not have to type all the parameters to pass to the docker run command. We can declaratively do that in the compose file.

We are exposing port 9229 so that we can attach a debugger. We are also mapping our local source code into the running container so that we can make changes in our text editor and have those changes picked up in the container.
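The npm run debug command in the Compose file assumes a debug script in the service’s package.json. It likely looks something like this sketch (the exact script in the repo may differ; the key part is binding the inspector to 0.0.0.0 so port 9229 is reachable from outside the container):

"scripts": {
  "start": "node server.js",
  "debug": "nodemon --inspect=0.0.0.0:9229 server.js"
}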

One other really cool feature of using a Compose file is that we get service resolution based on service names. We are now able to use “mongo” in our connection string because that is what we named our Mongo service in the Compose file.
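Inside the service, nothing special is needed to take advantage of this; the code just reads the connection string from the environment. A minimal sketch (the actual notes-service may structure its database code differently):

const { MongoClient } = require( 'mongodb' )

// DATABASE_CONNECTIONSTRING is set in docker-compose.dev.yml, e.g. mongodb://mongo:27017/notes
const connectionString = process.env.DATABASE_CONNECTIONSTRING

MongoClient.connect( connectionString, { useUnifiedTopology: true } )
  .then( () => console.log( 'Connected to MongoDB at', connectionString ) )
  .catch( ( err ) => console.error( err ) )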

Let’s start our application and confirm that it is running properly.

$ docker-compose -f docker-compose.dev.yml up --build

We pass the --build flag so Docker will build our image and then start a container from it.

If all goes well, you should see both services start and begin logging output in your terminal.

Now let’s test our API endpoint. Run the following curl command:

$ curl --request GET --url http://localhost:8080/services/m/notes

You should receive the following response:

{"code":"success","meta":{"total":0,"count":0},"payload":[]}

Connecting a Debugger

We’ll use the debugger that comes with the Chrome browser. Open Chrome on your machine and then type the following into the address bar.

chrome://inspect

Chrome’s inspect page will open, listing the available debug targets.

Click the “Open dedicated DevTools for Node” link. This will open DevTools connected to the running Node.js process inside our container.

Let’s change the source code and then set a breakpoint. 

Add the following code to the server.js file on line 19 and save the file. 

server.use( '/foo', (req, res) => {
  return res.json({ "foo": "bar" })
})

If you take a look at the terminal where our compose application is running, you’ll see that nodemon noticed the changes and reloaded our application.

Navigate back to the Chrome DevTools and set a breakpoint on line 20 and then run the following curl command to trigger the breakpoint.

$ curl --request GET --url http://localhost:8080/foo

Boom! You should see the code break on line 20, and now you can use the debugger just like you would normally: inspect and watch variables, set conditional breakpoints, view stack traces, and a bunch of other stuff.

Conclusion

In this article we took a look at creating a general development image that we can use pretty much like our normal command line. We also set up our compose file to map our source code into the running container and exposed the debugging port.

Resources

Getting Started with Docker: https://www.docker.com/get-started
Best practices for writing Dockerfiles: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Speed up your development flow with these Dockerfile best practices: https://www.docker.com/blog/speed-up-your-development-flow-with-these-dockerfile-best-practices/
Docker Desktop: https://docs.docker.com/desktop/
Docker Compose: https://docs.docker.com/compose/
Project skeleton samples: https://github.com/docker/awesome-compose


Hosting Live (Virtual!) Events: Lessons from Planning the WordPress.com Growth Summit

Back in January, my team at WordPress.com was busy planning another year of exciting in-person events — community meetups, conference keynotes, booths, and in-person demos — at large exhibit halls and hotels around the world.

Then the world changed overnight, and because of a global pandemic, our Events team — just like many of you running your own businesses — had to rethink everything about how we connect with people. 

So we went back to work. We’ve learned so much in just five months, and it culminates in the upcoming WordPress.com Growth Summit — our first-ever multi-day virtual conference. It’s going to be a jam-packed program full of expert advice from business leaders and entrepreneurs. We’ll also have breakout sessions with our own WordPress experts, the Automattic Happiness Engineers, taking you through everything you need to know about building a powerful, fast website that works great for SEO, eCommerce, and growing your business. 

In the lead-up to the Summit, we wanted to share everything we’ve learned so far about running virtual events, from YouTube to webinars to Facebook Live and ticket sales. There are dozens of great solutions for staying connected to and supporting your audience — here’s what’s been working for us: 

Live webinars 

In April, we launched a series of daily webinars, 30-minute live demos and Q&As direct from our Happiness Engineers, five days a week. These webinars walk people through the basics of getting started with WordPress.com. We also launched a few topical webinars — deeper dives into specific topics: eCommerce 101, growing an audience, using the WordPress app, and podcasting, to name a few.

We chose Zoom to host these because it’s a popular platform that allows for key webinar elements like pre-registration/signups, screen sharing, and Q&A. We pulled these together quickly, so going with a familiar solution was key for us and our audience. 

To expand our reach, we also streamed to our Facebook, Instagram, and YouTube channels. This was a simple integration that Zoom offers already, and we saw our viewership grow exponentially. 

Pre-recorded vs. live instruction 

At virtual events, one question always comes up: pre-recorded or live? We prefer a combination! Live is great when possible; it gives attendees an opportunity to interact with speakers, speakers can personalize the content based on questions being asked, and attendees can interact with one another, forming connections with like-minded content creators and entrepreneurs. 

Live content also has challenges: internet connections can cut out, computers can shut down unexpectedly, and there are more opportunities for interruption (does anyone else’s dog bark the minute you get on a live video?). It also requires all participants to be online at the same time, which can be logistically challenging.

Our advice: Test, test, test! If a speaker isn’t comfortable presenting live, there is the option to do a combination — a pre-recorded session with a live Q&A in the chat. We’ve found it to work really well, and it gives attendees the same access to the presenter.

The Growth Summit 

We helped folks to get online quickly with our daily webinars and dove into deeper topics each month, and now we want to help you grow your site. Enter The Official WordPress.com Growth Summit, happening next week, August 11-13.

We gathered frequently asked questions over the past few months, listened to your requests for live sessions across more time zones, and found inspiration from users whose stories we felt needed to be shared widely.

The Growth Summit takes a deeper dive into topics and offers hands-on WordPress training for anyone looking to get online. You’ll have the opportunity to ask questions live, connect with speakers, visit our virtual Happiness Bar for 1:1 support, and connect with other attendees during the networking breaks. 

Some key highlights from the agenda: 

Using the block editor
Customizing your site
Growing your audience
Improving your content ranking (SEO)
Creating a marketing plan
Expanding from blogging to podcasting
Making money with your site
And so much more…

We wanted a platform that would create an immersive event experience. There are many great platforms for this, including Accelevents and Hopin. We’ll be experimenting with many of them in the coming months (Remember: test!). A few key features we looked for:

Ease of self-production
Ability for simultaneous sessions
Overall user experience
Flow of the event — central location for agenda, speaker bios, networking, and more
Networking features
Audience engagement — polling, live chat, and more
Analytics
Registration within the platform
Accessibility
Customization
Speaker (virtual) green rooms

The best part? This event is being offered twice a day so that we cover all time zones. And if you can’t join us live, you’ll still have access to all content from all time zones after the event.

Register for the WordPress.com Growth Summit Today!


Compute Engine explained: Picking the right licensing model for your VMs

We recently posted a guide to choosing the right Compute Engine machine family and type for your workloads. Once you’ve decided on the right technical specs, you need to decide how to license it appropriately, what kind of image to use—and evaluate tradeoffs between your options. This is an important process as licensing decisions can add a layer of complexity to future operational or architectural decisions. To support your cloud journey, Compute Engine provides licensing flexibility and compliance support through various licensing options.

Let’s take a closer look at the four available licensing options, and their respective benefits and considerations. For specific questions around your licenses or software rights, please work with your legal team, license reseller, or license provider.

Option 1: Google Cloud-provided image and license

Arguably, the most straightforward option is to purchase new licenses from Google Cloud. For your convenience, Compute Engine provides prebuilt images that have pay-as-you-go licenses attached. This approach can help minimize licensing agreements and obligations, and enable you to take advantage of pay-as-you-go billing for elastic workloads. In addition, using Google Cloud’s premium Compute Engine images and licensing relieves you of a lot of operational burden, as Google Cloud:

Provides ongoing updates and patches to base images
Manages license reporting and compliance
Simplifies your support model by leveraging Google for your software support needs

Premium images and licenses for Compute Engine are available for both Linux and Windows workloads. To ensure proper license reporting and compliance, you are automatically billed based on usage for all VMs created with these images.

For customers that don’t already have a Microsoft Enterprise Agreement or Red Hat Cloud Access licenses, this model allows you to take advantage of Google Cloud’s relationships with third-party software vendors for pay-as-you-go licenses that scale with your workload, and offer premium support. This allows you to pay for exactly what you need when your workloads spike, rather than paying a predetermined amount through fixed third-party contracts.

For pay-as-you-go licenses, Compute Engine offers the following premium images with built-in licensing:

Red Hat Enterprise Linux (RHEL and RHEL for SAP)
SUSE Linux Enterprise Server (SLES and SLES for SAP)
Microsoft Windows Server
Microsoft SQL Server

With the exception of Microsoft SQL Server, all licenses associated with these images are charged in one-second increments, with a one-minute minimum. SQL Server images are also charged in one-second increments, but have a 10-minute minimum. To see additional pricing details, visit the premium images pricing documentation.

Option 2: Bring your own image with Google Cloud-provided licenses

If you want to import your own images, but still wish to use pay-as-you-go licenses provided by Google Cloud, Compute Engine lets you import your virtual disks or virtual appliances and specify a license provided by Compute Engine to attach to the image. This model lets you bring a custom image to Compute Engine, ensure the proper Compute Engine drivers are installed, and use a Compute Engine pay-as-you-go license. Similar to Compute Engine premium images, all images created through this import process will have the appropriate license(s) attached.
VMs created using these images are billed automatically to help ensure correct license reporting and compliance. Some of the benefits of using your own image with Google Cloud licensing include:

You can use your own custom image
Google Cloud manages license reporting and compliance
Simplified support by leveraging Google for your vendor software support needs

This option, available for both Linux (RHEL) and Windows workloads, helps reduce licensing agreements and complexity, and lets you take advantage of pay-as-you-go billing for elastic workloads.

Option 3: Google Cloud-provided image with bring your own license or subscription

If you want to use an image from Google Cloud but want to bring your own licenses or subscriptions, that’s an option too. You can choose SUSE Linux Enterprise Server with your own subscription (BYOS) support from the Google Cloud Marketplace, allowing you to spin up your own images while taking advantage of any licensing agreements or subscriptions you may have with your Linux operating system vendor. To use BYOS, sign up for a license on the vendor’s website when you deploy the solution. Under this model, the vendor bills you directly for licensing, and Google Cloud bills you separately for infrastructure costs.

This option is not available for Windows Server or SQL Server, as both require you to bring your own image when you bring your own licenses. Additional details on bringing your own Windows licenses are covered below.

In short, with a Google Cloud-provided image plus your own license or subscription, you can:

Use a Google Cloud Marketplace solution with pre-installed software packages
Reuse existing licensing agreements
Pay Google Cloud only for infrastructure costs

Option 4: Bring your own image and your own license

Lastly, you can bring eligible licenses to Google Cloud to use with your own imported images. With this option, you can import your virtual disks or virtual appliances, specifying a ‘Bring Your Own License’ (BYOL) option. Like other BYOL or BYOS options, images created using this option are only billed for infrastructure. This option supports customers with eligible Red Hat Enterprise Linux or Windows Server and other Microsoft application (e.g., SQL Server) licenses.

For Red Hat Enterprise Linux, you can import your RHEL images using the image import tool and specify your own licenses. You can run these workloads on either multi-tenant VMs or single-tenant VMs on Compute Engine sole-tenant nodes.

For Windows licenses, you can import your own image using the same image import tooling. For customers running Microsoft application servers with Software Assurance (which includes SQL Server, but not the underlying OS), you can bring your licenses using License Mobility. However, for Windows OS licenses, regardless of whether or not you have Software Assurance, you are restricted to running your BYOL Windows Server or BYOL Desktop OS on dedicated hardware, available on Compute Engine sole-tenant nodes or Google Cloud VMware Engine.

Sole-tenant nodes allow you to launch your instances onto physical Compute Engine servers that are dedicated exclusively to your workloads while providing visibility into the underlying hardware to support your license usage and reporting needs. When running on sole-tenant nodes, there are different host maintenance configurations to help support your physical server affinity licensing restrictions, while still ensuring you receive the latest host updates.
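To illustrate the import workflow, a BYOL image import might look something like this (a sketch, not a definitive command; the bucket path and image name are hypothetical, and flag availability can vary by gcloud version):

$ gcloud compute images import my-rhel-byol-image \
    --source-file=gs://my-bucket/rhel8-disk.vmdk \
    --os=rhel-8-byol

Omitting the -byol suffix (for example, --os=rhel-8) would instead attach a Compute Engine pay-as-you-go license, which corresponds to Option 2 above.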
Additional details on these options can be found in the sole-tenant node documentation. There are several benefits to using your own image with your own license or subscription:

Save on licensing costs by reusing existing investments in licenses
Take advantage of unlimited virtualization rights for your per-physical-core licenses by using CPU overcommit on sole-tenant nodes
Leverage Compute Engine tooling for licensing reporting and compliance

However, before going down this path, consider the following:

Although Compute Engine provides support and tooling on sole-tenant infrastructures, you’re responsible for license activation, reporting and compliance.
Windows Server BYOL requires the use of dedicated hardware, in the form of Compute Engine sole-tenant nodes.
Sole-tenant nodes provide maintenance policy configurations for you to adjust the maintenance behavior to best comply with your licensing requirements.

The licensing low-down

Choosing the right image and licensing options for a given workload depends on a variety of factors, including the operating system or application that you’re running, and whether or not you have existing licenses, images or vendor relationships that you want to take advantage of. We hope this blog post helps you make sense of all your options. For more on licensing pricing, check out these resources:

See the estimated costs of your instances and Compute Engine resources when you create them in the Google Cloud Console.
Estimate your total project costs with the Google Cloud Pricing Calculator.
View and download prices from the Pricing Table in the Cloud Console.
Use the Cloud Billing Catalog API for programmatic access to SKU information.
Gauge your costs with Premium Image Pricing.

For detailed information on licensing Microsoft workloads on Google Cloud, please reference this guide authored by SoftwareOne.

The best of Google Cloud Next ’20: OnAir's Security Week for technical practitioners

Hello security aficionados! This is your week for Google Cloud Next ’20: OnAir. There is a ton of content related to security coming out this week across a wide range of topics and audiences. With that in mind, here are some sessions that I think are particularly useful for security professionals and technical practitioners:

Take Control of Security with Cloud Security Command Center: Guillaume Blaquiere from Veolia, along with Kyle Olive and Andy Chang from Google Cloud, demonstrate how to prevent, detect, and respond to threats in virtual machines, containers, and more using Cloud Security Command Center.
Authentication for Anthos Clusters: Google’s Richard Liu and Pradeep Sawlani show you how to authenticate to Anthos clusters, including how to integrate with your identity providers using protocols such as OIDC and LDAP.
Minimizing Permissions Using IAM Recommender: Find out how Uber developed automation to minimize permissions org-wide from Uber’s Senior Cloud Security Engineer Sonal Desai, along with Cloud IAM Product Manager Abhi Yadav.

Check out our full Security Session Guide for a look at everything going on this week. In addition to sessions, our weekly Talks by DevRel series is a great companion to the conference. Join our host Kaslin Fields for our security technical practitioner-focused recap, Q&A, and deep dive sessions by Sandro Badame and Stephanie Wong on Friday, August 7th at 9 AM PST. For folks in Asia-Pacific, I will be hosting the APAC edition with our APAC team on Friday at 11 AM SGT.

If you want hands-on technical experience, we also have security-focused Study Jams available this week:

HTTP Load Balancer with Cloud Armor
Hands-on Lab: User Authentication: Identity-Aware Proxy

Security Week has something for everyone, so be sure to take a look at the full security session catalog for sessions that cover more, including security, compliance, and handling sensitive data.

Beyond this week, we also have a lot of exciting security learning opportunities coming over the rest of the summer. Application Modernization Week (starting on August 24th), in particular, has some interesting security-related sessions:

Secrets in Serverless – 2.0: Discover all the secrets about how to store secrets for Serverless workloads from my DevRel colleague and Cloud Secrets Product Manager Seth Vargo.
Evolve to Zero Trust Security Model with Anthos Security: Find out how you can protect your software supply chain with Binary Authorization and Anthos.
Anthos Security: Modernize Your Security Posture for Cloud-Native Applications: Learn all the Cloud Native security tools that GKE and Anthos make available from GKE Security Engineer Greg Castle and Senior Product Manager Samrat Ray.

Security touches many different areas and practitioners need to be constantly learning, so check back on the blog every Monday from now until the first week of September for session guides, and be on the lookout for sessions, demos, and Study Jams in other weeks as well!

Google Cloud named a Leader in the 2020 Forrester Wave for API Management Solutions

APIs are a critical component of any enterprise’s digital transformation strategy. They can drive customer engagement, accelerate time to market of new services, power innovation and unlock new business opportunities. Therefore, choosing the right API management platform is critical to running a successful API program, and research from industry analyst firms like Forrester Research can help enterprises evaluate and choose the right solution.

Today, we’re proud to share that Google Cloud has been recognized by Forrester as a leader in The Forrester Wave™: API Management Solutions, Q3 2020. We believe this recognition is a testament to our continued strategic investments in product innovation and laser-sharp focus on the success of our customers. Today, six of the top 10 healthcare companies, seven of the top 10 retailers, five of the top 10 financial services companies and six of the top 10 telecom providers trust Apigee to drive their digital transformation efforts. Moreover, many global organizations that were impacted by the COVID-19 pandemic continued to invest in Apigee as they doubled down on their digital strategy.

In this report, Forrester assessed 15 API management solutions against a set of pre-defined criteria. In addition to being named a leader, Google Cloud received the highest possible score in the market presence category; in the strategy category criteria of product vision and planned enhancements; and in current offering criteria such as API user engagement, REST API documentation, formal lifecycle management, data validation and attack protection, API product management, and analytics and reporting.

“As a long-standing player in the market, Google Cloud has rich resources for educating customers and prospects on API business potential, including using them to create new ecosystem models. This shows in the product’s rich set of API product and pricing definition features,” according to the Forrester report.

Forrester also noted that Google Cloud has deepened its integration of Apigee with other Google Cloud capabilities such as reCAPTCHA, machine learning and Istio-based service mesh, and that Google Cloud’s reference customers expressed high satisfaction with Apigee API management. We believe this reflects why large global brands like Nationwide Insurance, Philips, Pizza Hut, ABN Amro, Ticketmaster and Change Healthcare partner with Google Cloud to drive their digital transformation programs.

“What I’ve really appreciated about Apigee isn’t just the functionality in the developer portal, but the guidance they provided on how we should roll out our API strategy and how we can think strategically about digital transformation using APIs,” said Rick Schnierer, AVP, One IT Applications Business Solutions, Nationwide Insurance.

You can download the full The Forrester Wave™: API Management Solutions, Q3 2020 report here (requires an email address). To learn more about Apigee, visit the website here.