Dapr Pub-Sub

In this quickstart, you'll create a publisher microservice and several subscriber microservices to demonstrate how Dapr enables a publish-subscribe pattern. The publisher will generate messages on a specific topic, while subscribers will listen for messages on specific topics. See Why Pub-Sub to understand when this pattern might be a good choice for your software architecture.

Visit the Dapr documentation for more information about Dapr and pub-sub.

This quickstart includes one publisher:

  • React front-end message generator

And three subscribers:

  • Node.js subscriber
  • Python subscriber
  • C# subscriber

Dapr uses pluggable message buses to enable pub-sub, and delivers messages to subscribers in a CloudEvents-compliant message envelope. In this case you'll use Redis Streams (available in Redis version 5 and above). The following architecture diagram illustrates how components interconnect locally:
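For reference, that pluggable message bus is wired up through a Dapr component definition. A minimal local component might look like the following sketch (the component name pubsub matches the name the subscribers reference in this quickstart; the host and empty password are assumptions for a default local Redis):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```

Swapping the message bus (for example, to another supported broker) means changing this component definition, not the application code.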

Architecture Diagram

Dapr allows you to deploy the same microservices from your local machine to the cloud. Accordingly, this quickstart has instructions for deploying this project locally or in Kubernetes.

Prerequisites

Prerequisites to run locally

Prerequisites to Run in Kubernetes

Run locally

In order to run the pub/sub quickstart locally, each of the microservices needs to run with Dapr. Start by running the message subscribers.

Note: These instructions deploy a Node subscriber, a Python subscriber, and a C# subscriber, but if you don't have all of those runtimes installed, feel free to run only a subset.

Clone the quickstarts repository

Clone this quickstarts repository to your local machine:

git clone [-b <dapr_version_tag>] https://github.com/dapr/quickstarts.git

Note: See https://github.com/dapr/quickstarts#supported-dapr-runtime-version for supported tags. Use git clone https://github.com/dapr/quickstarts.git when using the edge version of the Dapr runtime.

Run Node message subscriber with Dapr

  1. Navigate to the Node subscriber directory in your CLI:
cd node-subscriber
  2. Install dependencies:
npm install
  3. Run the Node subscriber app with Dapr:
dapr run --app-id node-subscriber --app-port 3000 node app.js

app-id can be any unique identifier for the microservice, and app-port is the port that the Node application listens on. Finally, the command to run the app, node app.js, is passed last.

Run Python message subscriber with Dapr

  1. Open a new CLI window and navigate to the Python subscriber directory:
cd python-subscriber
  2. Install dependencies:
pip3 install -r requirements.txt

or

python -m pip install -r requirements.txt
  3. Run the Python subscriber app with Dapr:
dapr run --app-id python-subscriber --app-port 5001 python3 app.py

Run C# message subscriber with Dapr

  1. Open a new CLI window and navigate to the C# subscriber directory:
cd csharp-subscriber
  2. Build the ASP.NET Core app:
dotnet build
  3. Run the C# subscriber app with Dapr:
dapr run --app-id csharp-subscriber --app-port 5009 dotnet run csharp-subscriber.csproj

Run the React front end with Dapr

Now, run the React front end with Dapr. The front end will publish different kinds of messages that subscribers will pick up.

  1. Open a new CLI window and navigate to the react-form directory:
cd react-form
  2. Run the React front-end app with Dapr:
npm run buildclient
npm install
dapr run --app-id react-form --app-port 8080 npm run start

This may take a minute, as it downloads dependencies and creates an optimized production build. You'll know it's done when you see == APP == Listening on port 8080! and several Dapr logs.

  3. Open a browser and navigate to http://localhost:8080/. You should see a form with a dropdown for message type and a message text field:

Form Screenshot

  4. Pick a topic, enter some text, and fire off a message! Observe the logs coming through in your respective Dapr CLI windows. Note that the Node.js subscriber receives messages of types "A" and "B", while the Python subscriber receives messages of types "A" and "C". The logs show up in the console window where you ran each subscriber.

Use the CLI to publish messages to subscribers

The Dapr CLI provides a mechanism to publish messages for testing purposes.

  1. Use the Dapr CLI to publish a message:
dapr publish --publish-app-id react-form --pubsub pubsub --topic A --data-file message_a.json
  2. Optional: Try publishing a message of topic B. You'll notice that the Node and C# apps receive this message, while the Python app does not. Similarly, messages of topic C reach only the Python and C# apps.

Note: If you are running in an environment without easy access to a web browser, the following curl commands will simulate a browser request to the node server.

curl -s http://localhost:8080/publish -H Content-Type:application/json --data @message_b.json
curl -s http://localhost:8080/publish -H Content-Type:application/json --data @message_c.json
  3. Cleanup:
dapr stop --app-id node-subscriber
dapr stop --app-id python-subscriber
dapr stop --app-id csharp-subscriber
dapr stop --app-id react-form
  4. If you want more insights into the CloudEvents being published on Redis Streams, keep going. If you want to deploy this same application to Kubernetes, jump to Run in Kubernetes. Otherwise, skip ahead to the How it Works section to understand the code!

Insights into the CloudEvents being published on Redis Streams

If you want to see the CloudEvents that are being published on the Redis stream, follow the steps below:

  1. Redis is running inside of a Docker container, so to access it via CLI, run the following command in a new terminal:
docker exec -it dapr_redis redis-cli
  2. There can be multiple items/orders/messages inside a Redis stream. You can get the number of entries in a stream by using the XLEN command:
XLEN orders
  3. To show the details of each entry in the Redis stream, use the XRANGE command. Specify two IDs, start and end; the range returned includes the elements having start or end as ID. The two special IDs - and + respectively mean the smallest and the greatest ID possible. Add an optional COUNT argument at the end to get only the first N items. For example, the following command shows the first two events:
XRANGE orders - + COUNT 2

Run in Kubernetes

To run the same code in Kubernetes, first set up a Redis store and then deploy the microservices. You'll be using the same microservices, but ultimately the architecture is a bit different:

Architecture Diagram

Set up a Redis store

Dapr uses pluggable message buses to enable pub-sub; in this case, Redis Streams (available in Redis version 5 and above) is used. You'll install Redis into the cluster using Helm, but keep in mind that you could use whichever Redis host you like, as long as the version is 5 or greater.

  1. Follow these steps to create a Redis store using Helm.

    Note: Currently the version of Redis supported by Azure Redis Cache is less than 5, so using Azure Redis Cache will not work.

  2. Once your store is created, add the keys to the redis.yaml file in the deploy directory. Don't worry about applying the redis.yaml, as it will be covered in the next step.

    Note: the redis.yaml file provided in this quickstart takes plain text secrets. In a production-grade application, follow secret management instructions to securely manage your secrets.
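For illustration, a production-leaning variant of the component could pull the password from a Kubernetes secret via secretKeyRef rather than embedding it in plain text. This is a sketch, not the file shipped with the quickstart; the Redis host, secret name, and key are assumptions:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis-master:6379
  - name: redisPassword
    secretKeyRef:
      name: redis        # hypothetical Kubernetes secret name
      key: redis-password
auth:
  secretStore: kubernetes
```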

Deploy assets

Now that the Redis store is set up, you can deploy the assets.

  1. In your CLI window, navigate to the deploy directory.
  2. To deploy the publisher and three subscriber microservices, as well as the Redis configuration you set up in the last step, run:
kubectl apply -f .

Kubernetes deployments are asynchronous, so you'll need to wait for each deployment to complete before moving on. You can do so with the following commands:

kubectl rollout status deploy/node-subscriber
kubectl rollout status deploy/python-subscriber
kubectl rollout status deploy/csharp-subscriber
kubectl rollout status deploy/react-form
  3. To see each pod being provisioned, run:
kubectl get pods
  4. To get the external IP exposed by the react-form microservice, run:
kubectl get svc -w

This may take a few minutes.

Note: Minikube users cannot see the external IP. Instead, you can use minikube service [service_name] to access the load balancer without an external IP.

Use the app

  1. Access the web form.

There are several different ways to access a Kubernetes service depending on which platform you are using. Port forwarding is one consistent way to access a service, whether it is hosted locally or on a cloud Kubernetes provider like AKS.

kubectl port-forward service/react-form 8000:80

This will make your service available on http://localhost:8000.

Optional: If you are using a public cloud provider, you can substitute your EXTERNAL-IP address instead of port forwarding. You can find it with:

kubectl get svc react-form
  2. Create and submit messages of different types.

Open a web browser and navigate to http://localhost:8000. You'll see the same form as in the locally hosted example above.

Note: If you are running in an environment without easy access to a web browser, the following curl commands will simulate a browser request to the node server.

curl -s http://localhost:8000/publish -H Content-Type:application/json --data @message_a.json
curl -s http://localhost:8000/publish -H Content-Type:application/json --data @message_b.json
curl -s http://localhost:8000/publish -H Content-Type:application/json --data @message_c.json
  3. To see the logs generated from your subscribers, run:
kubectl logs --selector app=node-subscriber -c node-subscriber
kubectl logs --selector app=python-subscriber -c python-subscriber
kubectl logs --selector app=csharp-subscriber -c csharp-subscriber
  4. Note that the Node.js subscriber receives messages of types "A" and "B", the Python subscriber receives messages of types "A" and "C", and the C# subscriber receives messages of types "A", "B", and "C".

Cleanup

Once you're done, you can spin down your Kubernetes resources by navigating to the ./deploy directory and running:

kubectl delete -f .

This will spin down each resource defined by the .yaml files in the deploy directory, including the state component.

How it works

Now that you've run the quickstart locally and/or in Kubernetes, let's unpack how it all works. The app is broken up into three subscribers and one publisher:

Node message subscriber

Navigate to the node-subscriber directory and open app.js, the code for the Node.js subscriber. Here, three API endpoints are exposed using Express. The first is a GET endpoint:

app.get('/dapr/subscribe', (_req, res) => {
    res.json([
        {
            pubsubname: "pubsub",
            topic: "A",
            route: "A"
        },
        {
            pubsubname: "pubsub",
            topic: "B",
            route: "B"
        }
    ]);
});

This tells Dapr which topics of which pubsub component to subscribe to. When the app is deployed (locally or in Kubernetes), Dapr calls this endpoint to determine whether the service subscribes to anything. The other two endpoints are POST endpoints:

app.post('/A', (req, res) => {
    console.log("A: ", req.body.data.message);
    res.sendStatus(200);
});

app.post('/B', (req, res) => {
    console.log("B: ", req.body.data.message);
    res.sendStatus(200);
});

These handle the messages coming through for each topic. Note that this simply logs the message; in a more complex application, this is where you would include topic-specific handlers.
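As a sketch of what a topic-specific handler might look like, the logging call can be replaced with per-topic logic. The envelope shape used here (the payload under data.message) matches what this quickstart publishes, while the handler bodies and names are hypothetical:

```javascript
// Hypothetical per-topic handlers; in the quickstart the handlers only log,
// but a real app could route each topic to different business logic.
const handlers = {
  A: (msg) => `processed A: ${msg}`,
  B: (msg) => `queued B: ${msg}`,
};

// Dispatch using the CloudEvents-style body Dapr POSTs to each route,
// where this quickstart's payload lives under data.message.
function handle(topic, body) {
  const handler = handlers[topic];
  if (!handler) throw new Error(`no handler for topic ${topic}`);
  return handler(body.data.message);
}

console.log(handle("A", { data: { message: "hello" } }));
// processed A: hello
```

An unknown topic throws here, but returning a non-2xx status (so Dapr can retry) is another common choice.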

Python message subscriber

Navigate to the python-subscriber directory and open app.py, the code for the Python subscriber. As with the Node.js subscriber, we're exposing three API endpoints, this time using Flask. The first is a GET endpoint:

@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    subscriptions = [{'pubsubname': 'pubsub', 'topic': 'A', 'route': 'A'}, {'pubsubname': 'pubsub', 'topic': 'C', 'route': 'C'}]
    return jsonify(subscriptions)

Again, this is how you tell Dapr which topics of which pubsub component to subscribe to. In this case, the app subscribes to topics "A" and "C" of the pubsub component named pubsub. Messages on those topics are handled by the other two routes:

@app.route('/A', methods=['POST'])
def a_subscriber():
    print(f'A: {request.json}', flush=True)
    print('Received message "{}" on topic "{}"'.format(request.json['data']['message'], request.json['topic']), flush=True)
    return json.dumps({'success':True}), 200, {'ContentType':'application/json'}

@app.route('/C', methods=['POST'])
def c_subscriber():
    print(f'C: {request.json}', flush=True)
    print('Received message "{}" on topic "{}"'.format(request.json['data']['message'], request.json['topic']), flush=True)
    return json.dumps({'success':True}), 200, {'ContentType':'application/json'}

Note: if flush=True is not set, logs will not appear when running kubectl logs .... This is a product of Python's output buffering.

C# message subscriber

Navigate to the csharp-subscriber directory and open Program.cs, the code for the C# subscriber. We're exposing three API endpoints, this time using ASP.NET Core 6 minimal APIs.

Again, this is how you tell Dapr which topics of which pubsub component to subscribe to. In this case, the app subscribes to topics "A", "B", and "C" of the pubsub component named pubsub. Messages on those topics are handled by these three routes:

using Dapr;

var builder = WebApplication.CreateBuilder(args);

var app = builder.Build();

// Dapr configurations
app.UseCloudEvents();

app.MapSubscribeHandler();

app.MapPost("/A", [Topic("pubsub", "A")] (ILogger<Program> logger, MessageEvent item) => {
    logger.LogInformation($"{item.MessageType}: {item.Message}");
    return Results.Ok();
});

app.MapPost("/B", [Topic("pubsub", "B")] (ILogger<Program> logger, MessageEvent item) => {
    logger.LogInformation($"{item.MessageType}: {item.Message}");
    return Results.Ok();
});

app.MapPost("/C", [Topic("pubsub", "C")] (ILogger<Program> logger, Dictionary<string, string> item) => {
    logger.LogInformation($"{item["messageType"]}: {item["message"]}");
    return Results.Ok();
});

app.Run();

internal record MessageEvent(string MessageType, string Message);

React front end

Our publisher is broken up into a client and a server:

Client

The client is a simple single-page React application bootstrapped with Create React App. The relevant client code sits in react-form/client/src/MessageForm.js, where a form is presented to the user. As the user updates the form, React state is updated with the latest aggregated JSON data. By default the data is set to:

{
    messageType: "A",
    message: ""
};

Upon submission of the form, the aggregated JSON data is sent to the server:

fetch('/publish', {
    headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    },
    method:"POST",
    body: JSON.stringify(this.state),
});

Server

The server is a basic Express application that exposes a POST endpoint: /publish. This takes the requests from the client and publishes them against Dapr. Express's built-in JSON middleware is used to parse the JSON out of the incoming requests:

app.use(express.json());

This allows us to determine which topic to publish the message with. To publish messages through Dapr, the URL needs to look like http://localhost:<DAPR_HTTP_PORT>/v1.0/publish/<PUBSUB_NAME>/<TOPIC>, so the publish endpoint builds that URL and posts the JSON to it. The POST request also needs to return a success code in the response upon successful completion.

  const publishUrl = `${daprUrl}/publish/${pubsubName}/${req.body?.messageType}`;
  await axios.post(publishUrl, req.body);
  return res.sendStatus(200);

Note how the daprUrl determines which port Dapr lives on:

const daprUrl = `http://localhost:${process.env.DAPR_HTTP_PORT || 3500}/v1.0`;

By default, Dapr lives on port 3500, but if we're running Dapr locally and set it to a different port (using the --dapr-http-port flag in the CLI run command), then that port will be injected into the application as the DAPR_HTTP_PORT environment variable.

The server also hosts the React application itself by forwarding default home page / route requests to the built client code:

app.get('/', function (_req, res) {
  res.sendFile(path.join(__dirname, 'client/build', 'index.html'));
});

Why Pub-Sub?

Developers use a pub-sub messaging pattern to achieve high scalability and loose coupling.

Scalability

Pub-sub is generally used for large applications that need to be highly scalable. Pub-sub applications often scale better than traditional client-server applications.

Loose coupling

Pub-sub allows us to completely decouple the components. Publishers need not be aware of any of their subscribers, nor must subscribers be aware of publishers. This allows developers to write leaner microservices that don't take an immediate dependency on each other.
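To make the decoupling concrete, here is a toy in-memory bus (illustrative only; in the quickstart, Dapr plus Redis Streams play this role between separate processes). The publisher never references its subscribers:

```javascript
// Illustrative sketch: a minimal in-memory pub-sub bus.
class Bus {
  constructor() {
    this.subscribers = new Map(); // topic -> array of handler functions
  }
  subscribe(topic, handler) {
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, []);
    this.subscribers.get(topic).push(handler);
  }
  publish(topic, message) {
    // The publisher neither knows nor cares who (if anyone) is listening.
    for (const handler of this.subscribers.get(topic) || []) handler(message);
  }
}

const bus = new Bus();
const received = [];
bus.subscribe("A", (msg) => received.push(`subscriber 1 got: ${msg}`));
bus.subscribe("A", (msg) => received.push(`subscriber 2 got: ${msg}`));
bus.publish("A", "hello");
bus.publish("B", "no listeners"); // delivered to nobody; publisher is unaffected

console.log(received);
// [ 'subscriber 1 got: hello', 'subscriber 2 got: hello' ]
```

Adding a third subscriber, or removing both, requires no change to the publisher: that is the loose coupling pub-sub provides.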
