Share WordPress.com Blog Posts to Telegram With WordPressDotCom Bot

Ever wished you could share new blog posts to a Telegram channel automatically? If so, then great news! Starting today, you can follow any WordPress.com blog within Telegram by using WordPressDotCom Bot. Now you can easily get real-time updates from your favorite bloggers in specific channels, group chats, or DMs, right in the app, all with just a few clicks.

Sharing WordPress.com Blog Posts Made Easy

Whether you want to follow your favorite blogs or share your own posts, the WordPress.com Bot has you covered. Start a new channel and invite readers to follow you there. Or automate sharing with an existing channel and add value for your friends and followers. Here are some ideas for inspiration:

Keep up-to-date with writers you don’t want to miss.
Share the latest shots from your family photo blog with friends and loved ones.
Spark conversation about your hobbies and interests in a private channel.

How to Use the WordPress.com Bot for Telegram

There are two ways to set up WordPressDotCom Bot: 

Visit https://t.me/WordPressDotComBot and click on Send Message.
Or invite the bot to a channel or group by adding a new member and searching for WordPress.com.

From there, here are a handful of commands you can use:

/follow [url] – Follow the blog at the specified URL.
/unfollow [url] – Unfollow the blog at the specified URL. No more notifications of new posts will be sent.
/following – Show a list of all blogs you follow.
/reset – Stop following all blogs via the Telegram bot.

Best of all, WordPress.com Bot shows off your post thumbnails. Take a look at this preview example:

Try WordPress.com Bot for Telegram Today

WordPress.com Bot is free to use. Try it now.  
Source: RedHat Stack

AI Booster: how Vodafone is supercharging AI & ML at scale

One of the largest telecommunications companies in the world, Vodafone is at the forefront of building next-generation connectivity and a sustainable digital future. Creating this digital future requires going beyond what’s possible today and unlocking significant investment in new technology and change. For Vodafone, a key driver is the use of artificial intelligence (AI) and machine learning (ML), enabling predictive capabilities that enhance the customer experience, improve network performance, accelerate advances in research, and much more.

Following 18 months of hard work, Vodafone has made a huge leap forward in advancing its AI capabilities at scale with the launch of its “AI Booster” AI/ML platform. Led by the Global Big Data & AI organization under Vodafone Commercial, the platform will use the latest Google technology to enable the next generation of AI use cases, such as optimizing customer experiences, customer loyalty, and product recommendations.

Vodafone’s Commercial team has long focused on advancing its AI and ML capabilities to drive business results. Yet as demand grows, it is easier said than done to embed AI and ML into the fabric of the organization and rapidly build and deploy ML use cases at scale in a highly regulated industry. Accomplishing this task means not only having the right platform infrastructure, but also developing new skills, ways of working, and processes.

Having made meaningful strides in extracting value from data by moving it into a single source of truth on Google Cloud, Vodafone had already significantly increased efficiency, reduced data costs, and improved data quality. This enabled a plethora of use cases that generate business value using analytics and data science. The next step was building industrial-scale ML capability, able to handle thousands of ML models a day across 18+ countries, while streamlining data science processes and keeping up with technological growth. Knowing they had to do something drastically different to scale successfully, the team arrived at the idea for AI Booster.

“To maximize business value at pace and scale, our vision was to enable fast creation and horizontal/vertical scaling of use cases in an automated, standardized manner. To do this, 18 months ago we set out to build a next-generation AI/ML platform based on new Google technology, some of which hadn’t even been announced yet.

“We knew it wouldn’t be easy. People said, ‘Shoot for the stars and you might get off the ground…’ Today, we’re really proud that AI Booster is truly taking off, and went live in almost double the markets we had originally planned. Together, we’ve used the best possible MLOps tools and created Vodafone’s ‘AI Booster Platform’ to make data scientists’ lives easier, maximise value and take co-creation and scaling of use cases globally to another level,” says Cornelia Schaurecker, Global Group Director for Big Data & AI at Vodafone.

AI Booster: a scalable, unified ML platform built entirely on Google Cloud

Google’s Vertex AI lets customers build, deploy, and scale ML models faster, with pre-trained and custom tooling within a unified platform. Built upon Vertex AI, Vodafone’s AI Booster is a fully managed cloud-native platform that integrates seamlessly with Vodafone’s Neuron platform, a data ocean built on Google Cloud.

“As a technology platform, we’re incredibly proud of building a cutting-edge MLOps platform based on best-in-class Google Cloud architecture with in-built automation, scalability and security. The result is we’re delivering more value from data science, while embedding reliability engineering principles throughout,” comments Ashish Vijayvargia, Analytics Product Lead at Vodafone.

Indeed, while Vertex AI is at the core of the platform, it’s much more than that. With tools like Cloud Build and Artifact Registry for CI/CD, and Cloud Functions for automatically triggering Vertex Pipelines, automation is at the heart of driving efficiency and reducing operational overhead and deployment times. Today, users simply complete an online form and, within minutes, receive a fully functional AI Booster environment with all the right guardrails, controls, and approvals. Not long ago it could take months to move a model from a proof of concept (PoC) to launching live in production. By focusing on ML operations (MLOps), the entire ML journey is now more cost-effective, faster, and flexible, all without compromising security. PoC-to-production can now take as little as four weeks, an 80% reduction.

Diving a bit deeper, Vodafone’s AI Booster Product Manager, Sebastian Mathalikunnel, summarizes key features of the platform: “Our overarching vision was a single ML platform-as-a-service that scales horizontally (business use cases across markets) and vertically (from PoC to production). For this, we needed innovative solutions to make it both technically and commercially feasible. Selecting a few highlights, we:

completely automated ML lifecycle compliance activities (drift/skew detection, explainability, auditability, etc.) via reusable pipelines, containers, and managed services;
embedded security by design into the heart of the platform;
capitalized on Google-native ML tooling using BQML, AutoML, Vertex AI and others;
accelerated adoption through standardized and embedded ML templates.”

For the last point, Datatonic, a Google Cloud data and AI partner, was instrumental in building reusable MLOps Turbo Templates, a reference implementation of Vertex Pipelines, to accelerate building a production-ready MLOps solution on Google Cloud.

“Our team is devoted to solving complex challenges with data and AI, in a scalable way. From the start, we knew the extent of change Vodafone was embarking on with AI Booster. Through this open-source codebase, we’ve created a common standard for deploying ML models at scale on Google Cloud. The benefit to one data scientist alone is significant, so scaling this across hundreds of data scientists can really change the business,” says Jamie Curtis, Datatonic’s Practice Lead for MLOps.

Reimagining the data scientist & machine learning engineer experience

With the new technology platform in place, driving adoption across geographies and markets is the next challenge. The technology and process changes have a considerable impact on people’s roles, learning, and ways of working. For data scientists, non-core work is now handled by machines in the background, literally at the click of a button. They can spend time doing what they do best and discovering new tools to help them do the job. With AI Booster, data scientists and ML engineers have already started to drive greater value and collaborate on innovative solutions. Supported by instructor-led and on-demand learning paths with Google Cloud, AI Booster is also shaping a culture of experimentation and learning.

Together We Can

Eighteen months in the making, AI Booster would not have happened without the dedication of teams across Vodafone, Datatonic, and Google Cloud. Googlers from across the globe were engaged in supporting Vodafone’s journey and continue to help build the next evolution of the platform.

Cornelia highlights that “all of this was only possible due to the incredible technology and teams at Vodafone and Google Cloud, who were flexible in listening to our requirements and even tweaking their products as a result. Alongside our ‘Spirit of Vodafone,’ which encourages experimenting and adapting fast, we’re able to optimize value for our customers and business. A huge thank you also to Datatonic, who were a critical partner throughout this journey, and to Intel for their valuable funding contribution.”

The Google and Vodafone partnership continues to go from strength to strength, and together, we are accelerating the digital future and finding new ways to keep people connected.

“Vodafone’s flourishing relationship with Google Cloud is a vital aspect of our evolution toward becoming a world-leading tech communications company. It accelerates our ability to create faster, more scalable solutions to business challenges like improving customer loyalty and enhancing customer experience, whilst keeping Vodafone at the forefront of AI and data science,” says Cengiz Ucbenli, Global Head of Big Data and AI, Innovation, Governance at Vodafone.

Find out more about the work Google Cloud is doing to help Vodafone, and to learn more about how Vertex AI capabilities continue to evolve, read about our recent Applied ML Summit.
Source: Google Cloud Platform

Choose the right size for your workload with NVads A10 v5 virtual machines, now generally available

Visualization workloads entail a wide range of use cases: from computer-aided design (CAD), to virtual desktops, to high-end simulations. Traditionally, when running these graphics-heavy visualization workloads in the cloud, customers have been limited to purchasing virtual machines (VMs) with full GPUs, which increased costs and limited flexibility. So, in 2019, we introduced the first GPU-partitioned (GPU-P) virtual machine offering in the cloud. And today, your options just got wider. Introducing the general availability of NVads A10 v5 GPU accelerated virtual machines, now available in US South Central, US West2, US West3, Europe West, and Europe North regions. Azure is the first public cloud to offer GPU partitioning (GPU-P) on NVIDIA GPUs.

NVads A10 v5 virtual machines feature NVIDIA A10 Tensor Core GPUs, up to 72 AMD EPYC™ 74F3 vCPUs with clock frequencies up to 4.0 GHz, 880 GB of RAM, 256 MB of L3 cache, and simultaneous multithreading (SMT).

Pay-as-you-go, one-year and three-year Azure Reserved Instances, and Spot virtual machines pricing for Windows and Linux deployments are now available.

Flexible and affordable NVIDIA GPU-powered workstations in the cloud

Many enterprises today use NVIDIA vGPU technology on-premises to create virtual GPUs that can be shared across multiple virtual machines. We are always innovating to provide cloud infrastructure that makes it easy for customers to migrate to the cloud. By working with NVIDIA, we have implemented SR-IOV-based GPU partitioning that provides customers cost-effective options, similar to the vGPU profiles configured on-premises to pick the right-sized GPU-powered virtual machine for the workload. The SR-IOV-based GPU partitioning provides a strong, hardware-backed security boundary with predictable performance for each virtual machine.

With support for NVIDIA vGPU, customers can select from virtual machines with one-sixth of an A10 GPU and scale all the way up to two full A10 GPU configurations. This provides a cost-effective option for entry-level and low-intensity GPU workloads on NVIDIA GPUs, while still giving customers the option to scale up to powerful full-GPU and multi-GPU processing power. Each GPU partition in the NVads A10 v5 series virtual machines includes the full NVIDIA RTX (GRID) license, and customers can either deploy a single virtual workstation per user or offer multiple sessions using the Windows Enterprise multi-session operating system. Our customers love the integrated license validation feature, as it simplifies the user experience by eliminating the need to deploy dedicated license server infrastructure and provides customers with a unified pricing model.

"The NVIDIA A10 GPU-accelerated instances in Azure with support for GPU partitioning are transformational for customers seeking cost-effective cloud options for graphics- and compute-intensive workloads. Now, enterprises can access powerful RTX Virtual Workstation instances accelerated by NVIDIA Ampere architecture-based A10 GPUs—sized to meet the performance requirements of creative and technical professionals working across industries such as manufacturing, architecture, and media and entertainment."— Anne Hecht, Senior Director, Product Marketing, NVIDIA.

NVIDIA RTX Virtual Workstations include the latest enhancements in AI, ray tracing, and simulation to enable incredible 3D designs, photorealistic simulations, and stunning visual effects—at faster speeds than ever.

Pick the right-sized GPU virtual machine for any workload

The NVads A10 v5 virtual machine series is designed to offer the right choice for any workload and provide the optimum configurations for both single-user and multi-session environments. The flexible GPU-partitioned virtual machine sizes enable a wide variety of graphics, video, and AI workloads—some of which weren’t previously possible. These include virtual production and visual effects, engineering design and simulation, game development and streaming, virtual desktops/workstations, and many more.

“In the world of CAD design, cost performance and flexibility are of prime importance for our users. Microsoft has completed extensive testing with Siemens NX and we found significant benefits in performance for multiple user scenarios. With GPU partitioning, Microsoft Azure can now enable multiple users to use Siemens NX and efficiently utilize GPU resources offering customers great performance at a reasonable hardware price point.”—George Rendell, Vice President Product Management, Siemens NX.

High performance for GPU-accelerated graphics applications

The NVIDIA A10 Tensor core GPUs in the NVads A10 v5 virtual machines offer great performance for graphics applications. The AMD EPYC™ 74F3 vCPUs with clock frequencies up to 4.0 GHz offer impressive performance for single-threaded applications.

Next steps

For more information on topics covered here, see the following documentation:

NVads A10 v5 virtual machine documentation
Virtual machine pricing
Learn more about Azure HPC + AI
Read about visualization workloads on Azure

Source: Azure

How to choose the right Azure services for your applications—It’s not A or B

This post was co-authored by Ajai Peddapanga, Principal Cloud Solution Architect.

If you have been working with Azure for any period, you might have grappled with the question—which Azure service is best to run my apps on? This is an important decision because the services you choose will dictate your resource planning, budget, timelines, and, ultimately, the time to market for your business. It impacts the cost of not only the initial delivery, but also the ongoing maintenance of your applications.

Traditionally, organizations have thought that they must choose between two platforms, technologies, or competing solutions to build and run their software applications. For example, they ask questions like: Do we use WebLogic or WebSphere for hosting our Java Enterprise applications? Should Docker Swarm or Kubernetes be the enterprise-wide container platform? Do we adopt containers or just stick with virtual machines (VMs)? They try to fit all their applications on platform A or B. This A or B mindset stems from outdated practices that were based on the constraints of the on-premises world, such as packaged software delivery models, significant upfront investments in infrastructure and software licensing, and long lead times required to build and deploy any application platform. With that history, it’s easy to bring the same mindset to Azure and spend a lot of time building a single platform based on a single Azure service that can host as many of their applications as possible, if not all. Then companies try to force-fit all their applications into this single platform, introducing delays and roadblocks that could have been avoided.

There's a better approach possible in Azure that yields higher returns on investment (ROI). As you transition to Azure, where you provision and deprovision resources on an as-needed basis, you don't have to choose between A or B. Azure makes it easy and cost-effective to take a different—and better—approach: the A+B approach. An A+B mindset simply means instead of limiting yourself to a predetermined service, you choose the service(s) that best meet your application needs; you choose the right tool for the right job.

Figure 1: Azure enables you to shift your thinking from an A or B to an A+B mindset, which has many benefits.

With A+B thinking, you can:

Select the right tool for the right job instead of force-fitting use cases to a predetermined solution.
Innovate and go to market faster with the greater agility afforded by the A+B approach.
Accelerate your app modernizations and build new cloud-native apps by taking a modular approach to picking the right Azure services for running your applications.
Achieve greater process and cost efficiencies, and operational excellence.
Build best-in-class applications tailored to fit your business.

As organizations expand their decision-making process and technical strategy from an A or B mindset to encompass the possibilities and new opportunities offered with an A+B mindset, there are many new considerations. In our new book, we introduce the principles of the A+B mindset that you can use to choose the right Azure services for your applications. We have illustrated the A+B approach using two Azure services as examples in our book; however, you can apply these principles to evaluate any number of Azure services for hosting your applications: Azure Spring Apps, Azure App Service, Azure Container Apps, Azure Kubernetes Service, and Virtual Machines are commonly used Azure services for application hosting. The A+B mindset applies to any application, written in any language.

Learn more

Asir and Ajai are the authors of a new Microsoft e-book that helps you transition to an A+B mindset and answer the question: “What is the right service for my applications?” Get the Microsoft e-book to learn more about how to transition to an A+B mindset to choose the right Azure services for your applications.
Source: Azure

How to Build and Deploy a URL Shortener Using TypeScript and Nest.js

At Docker, we’re incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. Are you working on something awesome with Docker? Send your contributions to Ajeet Singh Raina (@ajeetraina) on the Docker Community Slack and we might feature your work!
 
Over the last five years, TypeScript’s popularity has surged among enterprise developers. In Stack Overflow’s 2022 Developer Survey, TypeScript ranked third in the “most wanted” category. Stack Overflow reserves this distinction for technologies that developers aren’t currently working with but have expressed interest in adopting.
 
Data courtesy of Stack Overflow.
 
TypeScript’s incremental adoption is attributable to enhancements in developer code quality and comprehensibility. Overall, TypeScript encourages developers to thoroughly document their code and inspires greater confidence through ease of use. TypeScript offers every modern JavaScript feature while introducing powerful concepts like interfaces, unions, and intersection types. It improves developer productivity by surfacing syntax and type errors at compile time, rather than letting things fail at runtime.
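To make those concepts concrete, here is a small illustrative snippet (not taken from this tutorial’s code) showing an interface, a union type, and an intersection type:

// An interface describes the shape of an object.
interface User {
  id: number;
  name: string;
}

// A union type accepts any one of several types.
type Identifier = number | string;

// An intersection type combines multiple shapes into one.
type Audited = { createdAt: Date };
type AuditedUser = User & Audited;

function findUser(id: Identifier): AuditedUser {
  // The compiler narrows the union via the typeof check.
  const numericId = typeof id === 'number' ? id : Number(id);
  return { id: numericId, name: 'Ada', createdAt: new Date() };
}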
However, remember that every programming language comes with certain drawbacks, and TypeScript is no exception. Long compilation times and a steeper learning curve for new JavaScript users are most noteworthy.
Building Your Application
In this tutorial, you’ll learn how to build a basic URL shortener from scratch using TypeScript and Nest.
First, you’ll create a basic application in Nest without using Docker. You’ll see how the application lets you build a simple URL shortening service in Nest and TypeScript, with a Redis backend. Next, you’ll learn how Docker Compose can help you jointly run a Nest.js, TypeScript, and Redis backend to power microservices. Let’s jump in.
Getting Started
The following key components are essential to completing this walkthrough:

Node.js
NPM
VS Code
Docker Desktop 

 
Before starting, make sure you have Node installed on your system. Then, follow these steps to build a simple web application with TypeScript.
Creating a Nest Project
Nest is currently the fastest growing server-side development framework in the JavaScript ecosystem. It’s ideal for writing scalable, testable, and loosely-coupled applications. Nest provides a level of abstraction above common Node.js frameworks and exposes their APIs to the developer. Under the hood, Nest makes use of robust HTTP server frameworks like Express (the default) and can optionally use Fastify as well! It supports databases like PostgreSQL, MongoDB, and MySQL. NestJS is heavily influenced by Angular, React, and Vue — while offering dependency injection right out of the box.
For first-time users, we recommend creating a new project with the Nest CLI. First, enter the following command to install the Nest CLI.

npm install -g @nestjs/cli

 
Next, let’s create a new Nest.js project directory called backend.

mkdir backend

 
It’s time to populate the directory with the initial core Nest files and supporting modules. From your new backend directory, run Nest’s bootstrapping command. We’ll call our new application link-shortener:

nest new link-shortener

⚡ We will scaffold your app in a few seconds..

CREATE link-shortener/.eslintrc.js (665 bytes)
CREATE link-shortener/.prettierrc (51 bytes)
CREATE link-shortener/README.md (3340 bytes)
CREATE link-shortener/nest-cli.json (118 bytes)
CREATE link-shortener/package.json (1999 bytes)
CREATE link-shortener/tsconfig.build.json (97 bytes)
CREATE link-shortener/tsconfig.json (546 bytes)
CREATE link-shortener/src/app.controller.spec.ts (617 bytes)
CREATE link-shortener/src/app.controller.ts (274 bytes)
CREATE link-shortener/src/app.module.ts (249 bytes)
CREATE link-shortener/src/app.service.ts (142 bytes)
CREATE link-shortener/src/main.ts (208 bytes)
CREATE link-shortener/test/app.e2e-spec.ts (630 bytes)
CREATE link-shortener/test/jest-e2e.json (183 bytes)

? Which package manager would you ❤️ to use? (Use arrow keys)
❯ npm
yarn
pnpm

 
All three package managers are usable, but we’ll choose npm for the purposes of this walkthrough.

Which package manager would you ❤️ to use? npm
✔ Installation in progress… ☕

🚀 Successfully created project link-shortener
👉 Get started with the following commands:

$ cd link-shortener
$ npm run start

Thanks for installing Nest 🙏
Please consider donating to our open collective
to help us maintain this package.

🍷 Donate: https://opencollective.com/nest</pre>

 
Once the command is executed successfully, it creates a new link-shortener project directory with node modules and a few other boilerplate files. It also creates a new src/ directory populated with several core files as shown in the following directory structure:
tree -L 2 -a
.
└── link-shortener
    ├── dist
    ├── .eslintrc.js
    ├── .gitignore
    ├── nest-cli.json
    ├── node_modules
    ├── package.json
    ├── package-lock.json
    ├── .prettierrc
    ├── README.md
    ├── src
    ├── test
    ├── tsconfig.build.json
    └── tsconfig.json

5 directories, 9 files
 
Let’s look at the core files ending with .ts (TypeScript) under /src directory:
src % tree
.
├── app.controller.spec.ts
├── app.controller.ts
├── app.module.ts
├── app.service.ts
└── main.ts

0 directories, 5 files
 
Nest embraces modularity. Accordingly, two of the most important Nest app components are controllers and providers. Controllers determine how you handle incoming requests: they’re responsible for accepting requests, performing some kind of operation, and returning the response. Meanwhile, providers are extra classes that you can inject into controllers or into other providers, granting them various supplemental functionality. We always recommend reading up on providers and controllers to better understand how they work.
The app.module.ts file is the root module of the application; it bundles together the controllers and providers that the application uses.

cat app.module.ts
import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';

@Module({
  imports: [],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}

 
As shown in the above file, AppModule is just an empty class. Nest’s @Module decorator is responsible for providing the config that lets Nest build a functional application from it.
First, app.controller.ts exports a basic controller with a single route, and app.controller.spec.ts contains the unit test for that controller. Second, app.service.ts is a basic service with a single method. Third, main.ts is the entry file of the application: it bootstraps the application by calling NestFactory.create, then starts it listening for inbound HTTP requests on port 3000.

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();

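For reference, the app.controller.ts that the CLI generates looks roughly like the standard Nest scaffold: a single GET route that delegates to AppService.

import { Controller, Get } from '@nestjs/common';
import { AppService } from './app.service';

@Controller()
export class AppController {
  // Nest injects the AppService provider declared in app.module.ts.
  constructor(private readonly appService: AppService) {}

  @Get()
  getHello(): string {
    return this.appService.getHello();
  }
}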
 
Running the Application
Once the installation is completed, run the following command to start your application:

npm run start

> link-shortener@0.0.1 start
> nest start

[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [NestFactory] Starting Nest application…
[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [InstanceLoader] AppModule dependencies initialized +24ms
[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [RoutesResolver] AppController {/}: +4ms
[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [RouterExplorer] Mapped {/, GET} route +2ms
[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [NestApplication] Nest application successfully started +1ms

This command starts the app with the HTTP server listening on the port defined in the src/main.ts file. Once the application is successfully running, open your browser and navigate to http://localhost:3000. You should see the “Hello World!” message:

 
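The test and updated service below inject an AppRepository abstraction through an AppRepositoryTag token, along with an in-memory implementation. As a rough sketch, assuming only the names used in the imports, app.repository.ts and app.repository.hashmap.ts might look like this:

// app.repository.ts (sketch)
import { Observable, of } from 'rxjs';

// Injection token used with @Inject() and the providers array.
export const AppRepositoryTag = 'AppRepository';

export interface AppRepository {
  get(hash: string): Observable<string>;
  put(hash: string, url: string): Observable<string>;
}

// app.repository.hashmap.ts (sketch) – in-memory implementation used by the unit test;
// in a real project this lives in its own file and imports AppRepository.
export class AppRepositoryHashmap implements AppRepository {
  private readonly hashMap = new Map<string, string>();

  get(hash: string): Observable<string> {
    return of(this.hashMap.get(hash) ?? '');
  }

  put(hash: string, url: string): Observable<string> {
    this.hashMap.set(hash, url);
    return of(url);
  }
}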
Let’s now add a new test for our new endpoint in app.service.spec.ts:

import { Test, TestingModule } from "@nestjs/testing";
import { AppService } from "./app.service";
import { AppRepositoryTag } from "./app.repository";
import { AppRepositoryHashmap } from "./app.repository.hashmap";
import { mergeMap, tap } from "rxjs";

describe('AppService', () => {
  let appService: AppService;

  beforeEach(async () => {
    const app: TestingModule = await Test.createTestingModule({
      providers: [
        { provide: AppRepositoryTag, useClass: AppRepositoryHashmap },
        AppService,
      ],
    }).compile();

    appService = app.get<AppService>(AppService);
  });

  describe('retrieve', () => {
    it('should retrieve the saved URL', done => {
      const url = 'docker.com';
      appService.shorten(url)
        .pipe(mergeMap(hash => appService.retrieve(hash)))
        .pipe(tap(retrieved => expect(retrieved).toEqual(url)))
        .subscribe({ complete: done });
    });
  });
});

 
Before running our tests, let’s implement the function in app.service.ts:

import { Inject, Injectable } from '@nestjs/common';
import { map, Observable } from 'rxjs';
import { AppRepository, AppRepositoryTag } from './app.repository';

@Injectable()
export class AppService {
  constructor(
    @Inject(AppRepositoryTag) private readonly appRepository: AppRepository,
  ) {}

  getHello(): string {
    return 'Hello World!';
  }

  shorten(url: string): Observable<string> {
    const hash = Math.random().toString(36).slice(7);
    return this.appRepository.put(hash, url).pipe(map(() => hash)); // <-- here
  }

  retrieve(hash: string): Observable<string> {
    return this.appRepository.get(hash); // <-- and here
  }
}

 
Run these tests once more to confirm that everything passes, before we begin storing the data in a real database.
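Assuming the default scripts that the Nest CLI writes into package.json, the Jest suite runs with:

npm run test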
Add a Database
So far, we’re just storing our mappings in memory. That’s fine for testing, but we’ll need to store them somewhere more centralized and durable in production. We’ll use Redis, a popular key-value store available on Docker Hub.
Let’s install the Node.js Redis client by running the following command from the backend/link-shortener directory:

npm install redis@4.1.0 --save

 
Inside /src, create a new implementation of the AppRepository interface that uses Redis. We’ll call this file app.repository.redis.ts:

import { AppRepository } from './app.repository';
import { Observable, from, mergeMap } from 'rxjs';
import { createClient, RedisClientType } from 'redis';

export class AppRepositoryRedis implements AppRepository {
  private readonly redisClient: RedisClientType;

  constructor() {
    const host = process.env.REDIS_HOST || 'redis';
    const port = +process.env.REDIS_PORT || 6379;
    this.redisClient = createClient({
      url: `redis://${host}:${port}`,
    });
    from(this.redisClient.connect()).subscribe({ error: console.error });
    this.redisClient.on('connect', () => console.log('Redis connected'));
    this.redisClient.on('error', console.error);
  }

  get(hash: string): Observable<string> {
    return from(this.redisClient.get(hash));
  }

  put(hash: string, url: string): Observable<string> {
    return from(this.redisClient.set(hash, url)).pipe(
      mergeMap(() => from(this.redisClient.get(hash))),
    );
  }
}

 
Finally, it’s time to change the provider in app.module.ts to our new Redis repository from the in-memory version:

import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { AppRepositoryTag } from './app.repository';
import { AppRepositoryRedis } from './app.repository.redis';

@Module({
  imports: [],
  controllers: [AppController],
  providers: [
    AppService,
    { provide: AppRepositoryTag, useClass: AppRepositoryRedis }, // <-- here
  ],
})
export class AppModule {}

 
Finalize the Backend
Head back to app.controller.ts and create another endpoint for redirect:

import { Body, Controller, Get, Param, Post, Redirect } from '@nestjs/common';
import { AppService } from './app.service';
import { map, Observable, of } from 'rxjs';

interface ShortenResponse {
  hash: string;
}

interface ErrorResponse {
  error: string;
  code: number;
}

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  getHello(): string {
    return this.appService.getHello();
  }

  @Post('shorten')
  shorten(@Body('url') url: string): Observable<ShortenResponse | ErrorResponse> {
    if (!url) {
      return of({ error: `No url provided. Please provide in the body. E.g. {'url':'https://google.com'}`, code: 400 });
    }
    return this.appService.shorten(url).pipe(map(hash => ({ hash })));
  }

  @Get(':hash')
  @Redirect()
  retrieveAndRedirect(@Param('hash') hash): Observable<{ url: string }> {
    return this.appService.retrieve(hash).pipe(map(url => ({ url })));
  }
}

 
Containerizing the TypeScript Application
Docker helps you containerize your TypeScript app, letting you bundle together your complete TypeScript application, runtime, configuration, and OS-level dependencies. This includes everything needed to ship a cross-platform, multi-architecture web application. 
Let’s see how you can easily run this app inside a Docker container using a Docker Official Image. First, you’ll need to download Docker Desktop. Docker Desktop accelerates the image-building process while making useful images more discoverable. Complete the installation process once your download is finished.
Docker uses a Dockerfile to specify an image’s “layers.” Each layer stores important changes building upon the base image’s standard configuration. Create the following empty Dockerfile in your Nest project.
 
touch Dockerfile
 
Use your favorite text editor to open this Dockerfile. You’ll then need to define your base image. Let’s also quickly create a directory to house our image’s application code. This acts as the working directory for your application:
 
WORKDIR /app
 
The following COPY instruction copies the files from the host machine to the container image:
 
COPY . .
 
Finally, this closing line tells Docker which command compiles and runs your application:
 
CMD ["npm", "run", "start:dev"]
 
Here’s your complete Dockerfile:

FROM node:16
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["npm", "run", "start:dev"]

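Because COPY . . pulls everything in the build context into the image, it’s common to add a .dockerignore file alongside the Dockerfile so that local node_modules and build output stay out of the image; a minimal sketch:

node_modules
dist
.git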
 
You’ve effectively learned how to build a Dockerfile for a sample TypeScript app. Next, let’s see how to create an associated Docker Compose file for this application. Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you’ll use a YAML file to configure your services. Then, with a single command, you can create and start every service from your configuration.
Defining Services Using a Compose File
It’s time to define your services in a Docker Compose file:

services:
  redis:
    image: 'redis/redis-stack'
    ports:
      - '6379:6379'
      - '8001:8001'
    networks:
      - urlnet
  dev:
    build:
      context: ./backend/link-shortener
      dockerfile: Dockerfile
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
    ports:
      - '3000:3000'
    volumes:
      - './backend/link-shortener:/app'
    depends_on:
      - redis
    networks:
      - urlnet

networks:
  urlnet:

 
Your example application has the following parts:

Two services backed by Docker images: your Nest application (dev) and your Redis database (redis)
The redis/redis-stack Docker image is an extension of Redis that adds modern data models and processing engines to provide a complete developer experience. We use port 8001 for RedisInsight, a visualization tool for understanding and optimizing Redis data.
The dev service, accessible via port 3000
The depends_on parameter, which ensures the redis service is started before the dev service
A volume that mounts the local project directory into the dev container
The environment variables for your Redis connection

 
Once you’ve stopped the frontend and backend services that we ran in the previous section, let’s build and start our services using the docker-compose up command:

docker compose up -d --build

 
Note: If you’re using Docker Compose v1, the command line syntax is docker-compose with a hyphen. If you’re using v2, which ships with Docker Desktop, the hyphen is omitted and docker compose is correct. 

docker compose ps
NAME COMMAND SERVICE STATUS PORTS
link-shortener-js-dev-1 "docker-entrypoint.s…" dev running 0.0.0.0:3000->3000/tcp
link-shortener-js-redis-1 "/entrypoint.sh" redis running 0.0.0.0:6379->6379/tcp, 0.0.0.0:8001->8001/tcp

 
Just like that, you’ve created and deployed your TypeScript URL shortener! You can use this in your browser like before. If you visit the application at http://localhost:3000, you should see a friendly “Hello World!” message. Use the following curl command to shorten a new link:

curl -XPOST -d "url=https://docker.com" localhost:3000/shorten

 
Here’s your response:

{"hash":"l6r71d"}

 
This hash may differ on your machine. You can use it to redirect to the original link. Open any web browser and visit http://localhost:3000/l6r71d to access Docker’s website.
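If you prefer to stay in the terminal, you can also verify the redirect with curl; without an explicit status code, Nest’s @Redirect() responds with a 302 and a Location header (substitute the hash your own /shorten call returned):

curl -i http://localhost:3000/l6r71d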
Viewing the Redis Keys
You can view the Redis keys with the RedisInsight tool by visiting http://localhost:8001.
 

Viewing the Compose Logs
You can use docker compose logs -f to check and view your Compose logs:

[6:17:19 AM] Starting compilation in watch mode…
link-shortener-js-dev-1 |
link-shortener-js-dev-1 | [6:17:22 AM] Found 0 errors. Watching for file changes.
link-shortener-js-dev-1 |
link-shortener-js-dev-1 | [Nest] 31 - 06/18/2022, 6:17:23 AM LOG [NestFactory] Starting Nest application…
link-shortener-js-dev-1 | [Nest] 31 - 06/18/2022, 6:17:23 AM LOG [InstanceLoader] AppModule dependencies initialized +21ms
link-shortener-js-dev-1 | [Nest] 31 - 06/18/2022, 6:17:23 AM LOG [RoutesResolver] AppController {/}: +3ms
link-shortener-js-dev-1 | [Nest] 31 - 06/18/2022, 6:17:23 AM LOG [RouterExplorer] Mapped {/, GET} route +1ms
link-shortener-js-dev-1 | [Nest] 31 - 06/18/2022, 6:17:23 AM LOG [RouterExplorer] Mapped {/shorten, POST} route +0ms
link-shortener-js-dev-1 | [Nest] 31 - 06/18/2022, 6:17:23 AM LOG [RouterExplorer] Mapped {/:hash, GET} route +1ms
link-shortener-js-dev-1 | [Nest] 31 - 06/18/2022, 6:17:23 AM LOG [NestApplication] Nest application successfully started +1ms
link-shortener-js-dev-1 | Redis connected

You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application:
 

 
You can also inspect important logs via the Docker Dashboard:
 

Conclusion
Congratulations! You’ve successfully learned how to build and deploy a URL shortener with TypeScript and Nest. Using a single YAML file, we demonstrated how Docker Compose helps you easily build and deploy a TypeScript-based URL shortener app in seconds. With just a few extra steps, you can apply this tutorial while building applications with much greater complexity.
Happy coding.
Source: https://blog.docker.com/feed/