---
url: /guide/durability/marten/ancillary-stores.md
---

# "Separate" or Ancillary Stores

Let's say that you want to use the full "Critter Stack" inside of a [modular monolith architecture](https://jeremydmiller.com/2024/04/01/thoughts-on-modular-monoliths/). With Marten, you might well want to use its ["Separate Store"](https://martendb.io/configuration/hostbuilder.html#working-with-multiple-marten-databases) feature ("ancillary" in Wolverine parlance) to split up the modules so they are accessing different, logical databases -- even if in the end everything is stored in the exact same PostgreSQL database. However, even with separate Marten document stores, you still want Wolverine's:

* Transaction middleware support, including the transactional outbox
* Scheduled message support -- which is really part of the outbox anyway
* Subscriptions to Marten events captured by these separate stores
* Marten side effect model (`MartenOps`)
* Ability to automatically set up the necessary envelope storage tables and functions in each database or separate schema

Well, now you can get that, but there are a few explicit steps to take. First off, you need to explicitly and individually tag each Marten store that you want to be integrated with Wolverine in your bootstrapping. From the Wolverine tests, say you have these two separate stores:

```cs
public interface IPlayerStore : IDocumentStore;

public interface IThingStore : IDocumentStore;
```

We can add Wolverine integration to both through a similar call to `IntegrateWithWolverine()` as normal, as shown below:

```cs
theHost = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // THIS IS IMPORTANT FOR MODULAR MONOLITH USAGE!
        // This helps Wolverine out to always utilize the same envelope storage
        // for all modules for more efficient usage of resources
        opts.Durability.MessageStorageSchemaName = "wolverine";

        opts.Services.AddMarten(Servers.PostgresConnectionString).IntegrateWithWolverine();
        opts.Policies.AutoApplyTransactions();
        opts.Durability.Mode = DurabilityMode.Solo;

        opts.Services.AddMartenStore<IPlayerStore>(m =>
            {
                m.Connection(Servers.PostgresConnectionString);
                m.DatabaseSchemaName = "players";
            })
            .IntegrateWithWolverine()

            // Add a subscription
            .SubscribeToEvents(new ColorsSubscription())

            // Forward events to wolverine handlers
            .PublishEventsToWolverine("PlayerEvents", x =>
            {
                // The event type here is illustrative; the generic
                // argument was lost in this doc's source
                x.PublishEvent<ColorsUpdated>();
            });

        // Look at that, it even works with Marten multi-tenancy through
        // separate databases!
        opts.Services.AddMartenStore<IThingStore>(m =>
        {
            m.MultiTenantedDatabases(tenancy =>
            {
                tenancy.AddSingleTenantDatabase(tenant1ConnectionString, "tenant1");
                tenancy.AddSingleTenantDatabase(tenant2ConnectionString, "tenant2");
                tenancy.AddSingleTenantDatabase(tenant3ConnectionString, "tenant3");
            });

            m.DatabaseSchemaName = "things";
        }).IntegrateWithWolverine(x =>
        {
            x.MainConnectionString = Servers.PostgresConnectionString;
        });

        opts.Services.AddResourceSetupOnStartup();
    }).StartAsync();
```

Let's specifically zoom in on this code from within the big sample above:

```cs
// THIS IS IMPORTANT FOR MODULAR MONOLITH USAGE!
// This helps Wolverine out to always utilize the same envelope storage
// for all modules for more efficient usage of resources
opts.Durability.MessageStorageSchemaName = "wolverine";
```

If you are using separate Marten document stores for different modules in your application, you can easily make Wolverine share the transactional inbox/outbox between modules (you do *want* to do this to save on resource usage) by ensuring that all the document stores use the same database schema for envelope storage.
The `opts.Durability.MessageStorageSchemaName` setting tells Wolverine to share the transactional inbox/outbox storage across all Marten stores that target the same physical database.

Now, moving to message handlers or HTTP endpoints, you will have to explicitly tag either the containing class or individual messages with the `[MartenStore(store type)]` attribute like this simple example below:

```cs
// This will use a Marten session from the
// IPlayerStore rather than the main IDocumentStore
[MartenStore(typeof(IPlayerStore))]
public static class PlayerMessageHandler
{
    // Using a Marten side effect just like normal
    public static IMartenOp Handle(PlayerMessage message)
    {
        return MartenOps.Store(new Player { Id = message.Id });
    }
}
```

::: info
At this point the "Critter Stack" team is opting to make the attribute an explicit requirement rather than trying any kind of conventional application of which handlers/messages/HTTP routes are covered by which Marten document store
:::

So what's possible so far?

* The transactional inbox support is available in all configured Marten stores
* Transactional middleware
* The "aggregate handler workflow"
* Marten side effects
* Subscriptions to Marten events
* Multi-tenancy, both "conjoined" Marten multi-tenancy and multi-tenancy through separate databases
* [Wolverine managed projection or subscription distribution](/guide/durability/marten/distribution)
* The ["Event Forwarding"](/guide/durability/marten/event-forwarding) from Marten to Wolverine, but that is either 100% enabled for all Marten stores through the main Marten store registration or not at all

::: tip
In the case of the ancillary Marten stores, the `IDocumentSession` objects are "lightweight" sessions without any identity map mechanics for better performance.
:::

## What's not (yet) supported

* It is not possible to use more than one ancillary store in the same handler with the middleware
* Fine grained configuration of the `IDocumentSession` objects created for the ancillary stores, so no ability to attach custom `IDocumentSessionListener` objects or control the session type. Listeners could be added through Wolverine middleware though
* The PostgreSQL messaging transport will not span the ancillary databases, but will still work if the ancillary store is targeting the same database

---
url: /guide/durability/marten/event-sourcing.md
---

# Aggregate Handlers and Event Sourcing

::: tip
Only use the "aggregate handler workflow" if you potentially need to write new events to an existing event stream. If all you need in a message handler or HTTP endpoint is a read-only copy of an event streamed aggregate from Marten, use the `[ReadAggregate]` attribute instead, which has a lighter weight runtime within Marten.
:::

::: info
If your message handler or HTTP endpoint uses more than one declarative attribute for retrieving Marten data, Wolverine 5.0+ is able to utilize [Marten's Batch Querying capability](https://martendb.io/documents/querying/batched-queries.html#batched-queries) for more efficient interaction with the database. This batching behavior is supported for all the declarative attributes and for the "aggregate handler workflow" in general described in this page.
:::

See the [OrderEventSourcingSample project on GitHub](https://github.com/JasperFx/wolverine/tree/main/src/Persistence/OrderEventSourcingSample) for more samples.

The Wolverine + Marten combination is optimized for efficient and productive development using a [CQRS architecture style](https://martinfowler.com/bliki/CQRS.html) with [Marten's event sourcing](https://martendb.io/events/) support. Specifically, let's dive into the responsibilities of a typical command handler in a CQRS with event sourcing architecture:

1.
Fetch any current state of the system that's necessary to evaluate or validate the incoming command
2. *Decide* what events should be emitted and captured in response to the incoming command
3. Manage concurrent access to system state
4. Safely commit the new events
5. Selectively publish some of the events based on system needs to other parts of your system or even external systems
6. Instrument all of the above

And then lastly, you're going to want some resiliency and selective retry capabilities for concurrent access violations or just normal infrastructure hiccups.

Let's jump right into an example order management system. I'm going to model the order workflow with this aggregate model:

```cs
public class Item
{
    public string Name { get; set; }
    public bool Ready { get; set; }
}

public class Order
{
    public Order(OrderCreated created)
    {
        foreach (var item in created.Items) Items[item.Name] = item;
    }

    // This would be the stream id
    public Guid Id { get; set; }

    // This is important, by Marten convention this would
    // be the version of the stream
    public int Version { get; set; }

    public DateTimeOffset? Shipped { get; private set; }

    public Dictionary<string, Item> Items { get; set; } = new();

    // These methods are used by Marten to update the aggregate
    // from the raw events
    public void Apply(IEvent<OrderShipped> shipped)
    {
        Shipped = shipped.Timestamp;
    }

    public void Apply(ItemReady ready)
    {
        Items[ready.Name].Ready = true;
    }

    public bool IsReadyToShip()
    {
        return Shipped == null && Items.Values.All(x => x.Ready);
    }
}
```

At a minimum, we're going to want a command handler for this command message that marks an order item as ready to ship and then evaluates, based on the current state of the `Order` aggregate, whether the logical order is ready to be shipped out:

```cs
// OrderId refers to the identity of the Order aggregate
public record MarkItemReady(Guid OrderId, string ItemName, int Version);
```

In the controller code below, we're also utilizing Wolverine's [outbox messaging](/guide/durability/) support to both order and guarantee the delivery of a `ShipOrder` message when the Marten transaction succeeds.

Before getting into Wolverine middleware strategies, let's first build out an MVC controller method for the command above:

```cs
[HttpPost("/orders/itemready")]
public async Task Post(
    [FromBody] MarkItemReady command,
    [FromServices] IDocumentSession session,
    [FromServices] IMartenOutbox outbox
)
{
    // This is important!
    outbox.Enroll(session);

    // Fetch the current value of the Order aggregate
    var stream = await session
        .Events

        // We're also opting into Marten optimistic concurrency checks here
        .FetchForWriting<Order>(command.OrderId, command.Version);

    var order = stream.Aggregate;

    if (order.Items.TryGetValue(command.ItemName, out var item))
    {
        item.Ready = true;

        // Mark that this item is ready
        stream.AppendOne(new ItemReady(command.ItemName));
    }
    else
    {
        // Some crude validation
        throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order");
    }

    // If the order is ready to ship, also emit an OrderReady event
    if (order.IsReadyToShip())
    {
        // Publish a cascading command to do whatever it takes
        // to actually ship the order
        // Note that because the context here is enrolled in a Wolverine
        // outbox, the message is registered, but not "released" to
        // be sent out until SaveChangesAsync() is called down below
        await outbox.PublishAsync(new ShipOrder(command.OrderId));
        stream.AppendOne(new OrderReady());
    }

    // This will also persist and flush out any outgoing messages
    // registered into the context outbox
    await session.SaveChangesAsync();
}
```

Hopefully, that code is easy to understand, but there's some potentially repetitive code (loading aggregates, appending events, committing transactions) that will reoccur across all your command handlers. Likewise, it would be best to completely isolate the business logic that *decides* what new events should be appended away from the infrastructure code so that you can more easily reason about that code and easily test that business logic. To that end, Wolverine supports the [Decider](https://thinkbeforecoding.com/post/2021/12/17/functional-event-sourcing-decider) pattern with Marten using the `[AggregateHandler]` middleware.
Using that middleware, we get this slimmer code:

```cs
[AggregateHandler]
public static IEnumerable<object> Handle(MarkItemReady command, Order order)
{
    if (order.Items.TryGetValue(command.ItemName, out var item))
    {
        // Not doing this in a purist way here, but just
        // trying to illustrate the Wolverine mechanics
        item.Ready = true;

        // Mark that this item is ready
        yield return new ItemReady(command.ItemName);
    }
    else
    {
        // Some crude validation
        throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order");
    }

    // If the order is ready to ship, also emit an OrderReady event
    if (order.IsReadyToShip())
    {
        yield return new OrderReady();
    }
}
```

In the case above, Wolverine is wrapping middleware around our basic command handler to:

1. Fetch the appropriate `Order` aggregate matching the command
2. Append any new events returned from the handle method to the Marten event stream for this `Order`
3. Save any outstanding changes and commit the Marten unit of work

To make this more clear, here's the generated code (with some reformatting and extra comments):

```cs
public class MarkItemReadyHandler1442193977 : MessageHandler
{
    private readonly OutboxedSessionFactory _outboxedSessionFactory;

    public MarkItemReadyHandler1442193977(OutboxedSessionFactory outboxedSessionFactory)
    {
        _outboxedSessionFactory = outboxedSessionFactory;
    }

    public override async Task HandleAsync(MessageContext context, CancellationToken cancellation)
    {
        var markItemReady = (MarkItemReady)context.Envelope.Message;
        await using var documentSession = _outboxedSessionFactory.OpenSession(context);
        var eventStore = documentSession.Events;

        // Loading Marten aggregate
        var eventStream = await eventStore.FetchForWriting<Order>(markItemReady.OrderId, markItemReady.Version, cancellation).ConfigureAwait(false);

        var outgoing1 = MarkItemReadyHandler.Handle(markItemReady, eventStream.Aggregate);
        if (outgoing1 != null)
        {
            // Capturing any possible events returned from the command handlers
            eventStream.AppendMany(outgoing1);
        }

        await documentSession.SaveChangesAsync(cancellation).ConfigureAwait(false);
    }
}
```

As you probably guessed, there are some naming conventions and other details you need to be aware of before you use this middleware strategy.

::: warning
There are some open, let's call them *imperfections*, with Wolverine's code generation against the `[WriteAggregate]` and `[ReadAggregate]` usage. For best results, only use these attributes on a parameter within the main HTTP endpoint method and not in `Validate/Before/Load` methods.
:::

::: info
The `[Aggregate]` and `[WriteAggregate]` attributes *require the requested stream and aggregate to be found by default*, meaning that the handler or HTTP endpoint will be stopped if the requested data is not found. You can explicitly mark individual attributes as `Required = false`.
:::

Alternatively, there is also the newer `[WriteAggregate]` usage, with this example being a functionally equivalent markup:

```cs
public static IEnumerable<object> Handle(
    // The command
    MarkItemReady command,

    // This time we'll mark the parameter as the "aggregate"
    [WriteAggregate] Order order)
{
    if (order.Items.TryGetValue(command.ItemName, out var item))
    {
        // Not doing this in a purist way here, but just
        // trying to illustrate the Wolverine mechanics
        item.Ready = true;

        // Mark that this item is ready
        yield return new ItemReady(command.ItemName);
    }
    else
    {
        // Some crude validation
        throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order");
    }

    // If the order is ready to ship, also emit an OrderReady event
    if (order.IsReadyToShip())
    {
        yield return new OrderReady();
    }
}
```

The `[WriteAggregate]` attribute also opts into the "aggregate handler workflow", but is placed at the parameter level instead of the class level. This was added to extend the "aggregate handler workflow" to operations that involve multiple event streams in one transaction.
::: tip
`[WriteAggregate]` works equally well on message handlers as it does on HTTP endpoints. In fact, the older `[Aggregate]` attribute in Wolverine.Http.Marten is now just a subclass of `[WriteAggregate]`.
:::

## Validation on Stream Existence

By default, the "aggregate handler workflow" does no validation on whether or not the identified event stream actually exists at runtime, so it's possible to receive a null for the aggregate in this example if the stream does not exist:

```cs
public static IEnumerable<object> Handle(
    // The command
    MarkItemReady command,

    // This time we'll mark the parameter as the "aggregate"
    [WriteAggregate] Order order)
{
    if (order.Items.TryGetValue(command.ItemName, out var item))
    {
        // Not doing this in a purist way here, but just
        // trying to illustrate the Wolverine mechanics
        item.Ready = true;

        // Mark that this item is ready
        yield return new ItemReady(command.ItemName);
    }
    else
    {
        // Some crude validation
        throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order");
    }

    // If the order is ready to ship, also emit an OrderReady event
    if (order.IsReadyToShip())
    {
        yield return new OrderReady();
    }
}
```

As long as you handle the case where the requested aggregate is null, you can even effectively start a new stream by emitting events from your handler or HTTP endpoint.
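To make that null-handling case concrete, here is a hedged sketch (not from the official samples) of a handler that starts a new stream when no aggregate is found. The `OrderCreated` constructor shape used here is an illustrative assumption:

```cs
public static class MaybeStartOrderHandler
{
    public static IEnumerable<object> Handle(
        MarkItemReady command,

        // No Required = true here, so the aggregate may come in as null
        [WriteAggregate] Order? order)
    {
        if (order == null)
        {
            // No stream existed for this identity yet, so effectively
            // start a new stream by appending a first event
            yield return new OrderCreated(new[] { new Item { Name = command.ItemName, Ready = true } });
            yield break;
        }

        if (order.Items.TryGetValue(command.ItemName, out var item))
        {
            item.Ready = true;
            yield return new ItemReady(command.ItemName);
        }
    }
}
```

Whether starting a stream implicitly like this is appropriate depends on your domain; the explicit `Required` options below are the safer default when a missing stream indicates bad input.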
If you do want to protect message handlers or HTTP endpoints from acting on missing streams because of bad user inputs (or who knows what -- it's a chaotic world and you should never trust that your system is receiving valid input), you now have some options to mark the aggregate itself as required and even control how Wolverine deals with the aggregate being missing, as shown in these sample signatures below:

```cs
public static class ValidatedMarkItemReadyHandler
{
    public static IEnumerable<object> Handle(
        // The command
        MarkItemReady command,

        // In HTTP this will return a 404 status code and stop
        // the request if the Order is not found
        // In message handlers, this will log that the Order was not found,
        // then stop processing. The message would be effectively
        // discarded
        [WriteAggregate(Required = true)] Order order) => [];

    [WolverineHandler]
    public static IEnumerable<object> Handle2(
        // The command
        MarkItemReady command,

        // In HTTP this will return a 400 status code and
        // write out a ProblemDetails response with a default message explaining
        // the data that could not be found
        [WriteAggregate(Required = true, OnMissing = OnMissing.ProblemDetailsWith400)] Order order) => [];

    [WolverineHandler]
    public static IEnumerable<object> Handle3(
        // The command
        MarkItemReady command,

        // In HTTP this will return a 404 status code and
        // write out a ProblemDetails response with a default message explaining
        // the data that could not be found
        [WriteAggregate(Required = true, OnMissing = OnMissing.ProblemDetailsWith404)] Order order) => [];

    [WolverineHandler]
    public static IEnumerable<object> Handle4(
        // The command
        MarkItemReady command,

        // In HTTP this will return a 404 status code and
        // write out a ProblemDetails response with a custom message.
        // Wolverine will substitute the order identity into the message for "{0}"
        // In message handlers, Wolverine will log using your custom message, then discard the message
        [WriteAggregate(Required = true, OnMissing = OnMissing.ProblemDetailsWith404, MissingMessage = "Cannot find Order {0}")] Order order) => [];
}
```

The `Required`, `OnMissing`, and `MissingMessage` properties behave consistently across all Wolverine attributes like `[Entity]`, `[WriteAggregate]`, or `[ReadAggregate]`.

### Handler Method Signatures

The Marten workflow command handler method signature needs to follow these rules:

* Either explicitly use the `[AggregateHandler]` attribute on the handler method **or use the `AggregateHandler` suffix** on the message handler type to tell Wolverine to opt into the aggregate command workflow
* The first argument should be the command type, just like any other Wolverine message handler
* The 2nd argument should be the aggregate -- either the aggregate itself (`Order`) or wrapped in the Marten `IEventStream<T>` type (`IEventStream<Order>`)

There is an example of that usage below:
There is an example of that usage below: ```cs [AggregateHandler] public static void Handle(OrderEventSourcingSample.MarkItemReady command, IEventStream stream) { var order = stream.Aggregate; if (order.Items.TryGetValue(command.ItemName, out var item)) { // Not doing this in a purist way here, but just // trying to illustrate the Wolverine mechanics item.Ready = true; // Mark that the this item is ready stream.AppendOne(new ItemReady(command.ItemName)); } else { // Some crude validation throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order"); } // If the order is ready to ship, also emit an OrderReady event if (order.IsReadyToShip()) { stream.AppendOne(new OrderReady()); } } ``` snippet source | anchor Just as in other Wolverine [message handlers](/guide/handlers/), you can use additional method arguments for registered services ("method injection"), the `CancellationToken` for the message, and the message `Envelope` if you need access to message metadata. 
As for the return values from these handler methods, you can use:

* It's legal to have **no** return values if you are directly using `IEventStream<T>` to append events
* `IEnumerable<object>` or `object[]` to denote that a value is events to append to the current event stream
* `IAsyncEnumerable<object>` to asynchronously yield events to append to the current event stream
* The `Events` and `OutgoingMessages` types, including in tuples, to distinguish appended events from cascaded messages, as in the example below

Here's an example that returns a tuple of `Events` (events to append to the stream) and `OutgoingMessages` (cascading messages):

```cs
[AggregateHandler]
public static async Task<(Events, OutgoingMessages)> HandleAsync(MarkItemReady command, Order order, ISomeService service)
{
    // All contrived, let's say we need to call some
    // kind of service to get data so this handler has to be
    // async
    var data = await service.FindDataAsync();

    var messages = new OutgoingMessages();
    var events = new Events();

    if (order.Items.TryGetValue(command.ItemName, out var item))
    {
        // Not doing this in a purist way here, but just
        // trying to illustrate the Wolverine mechanics
        item.Ready = true;

        // Mark that this item is ready
        events += new ItemReady(command.ItemName);
    }
    else
    {
        // Some crude validation
        throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order");
    }

    // If the order is ready to ship, also emit an OrderReady event
    if (order.IsReadyToShip())
    {
        events += new OrderReady();
        messages.Add(new ShipOrder(order.Id));
    }

    // This results in both new events being captured
    // and potentially the ShipOrder message going out
    return (events, messages);
}
```

### Determining the Aggregate Identity

Wolverine tries to determine a public member on the command type that refers to the identity of the aggregate type.
You've got two options: either use the implied naming convention below, where the `OrderId` property is assumed to be the identity of the `Order` aggregate by appending "Id" to the aggregate type name (it's not case sensitive, if you were wondering):

```cs
// OrderId refers to the identity of the Order aggregate
public record MarkItemReady(Guid OrderId, string ItemName, int Version);
```

Or if you want to use a different member, bypass the convention, or just don't like conventional magic, you can decorate a public member on the command class with Marten's `[Identity]` attribute like so:

```cs
public class MarkItemReady
{
    // This attribute tells Wolverine that this property will refer to the
    // Order aggregate
    [Identity] public Guid Id { get; init; }

    public string ItemName { get; init; }
}
```

## Validation

Every attribute that triggers the "aggregate handler workflow" includes support for data requirements, as shown below with `[ReadAggregate]`:

```cs
// Straight up 404 on missing
[WolverineGet("/letters1/{id}")]
public static LetterAggregate GetLetter1([ReadAggregate] LetterAggregate letters) => letters;

// Not required
[WolverineGet("/letters2/{id}")]
public static string GetLetter2([ReadAggregate(Required = false)] LetterAggregate letters)
{
    return letters == null ? "No Letters" : "Got Letters";
}

// Straight up 404 & problem details on missing
[WolverineGet("/letters3/{id}")]
public static LetterAggregate GetLetter3([ReadAggregate(OnMissing = OnMissing.ProblemDetailsWith404)] LetterAggregate letters) => letters;
```

## Forwarding Events

See [Event Forwarding](./event-forwarding) for more information.

## Returning the Updated Aggregate

A common use case for the "aggregate handler workflow" has been to respond with the now-updated state of the projected aggregate that has just been updated by appending new events.
Until now, that has effectively meant making a completely separate call to the database through Marten to retrieve the latest updates.

::: info
To understand more about the inner workings of the next section, see the Marten documentation on its [FetchLatest](https://martendb.io/events/projections/read-aggregates.html#fetchlatest) API.
:::

As a quick tip for performance, assuming that you are *not* mutating the projected documents within your command handlers, you can opt for this significant Marten optimization to eliminate extra database round trips while using the aggregate handler workflow:

```csharp
builder.Services.AddMarten(opts =>
{
    // Other Marten configuration

    // Use this setting to get the very best performance out
    // of the UpdatedAggregate workflow and aggregate handler
    // workflow overall
    opts.Events.UseIdentityMapForAggregates = true;
}).IntegrateWithWolverine();
```

::: info
The setting above cannot be a default in Marten because it can break some existing code with a very different workflow than what the Critter Stack team recommends for the aggregate handler workflow.
:::

Wolverine.Marten has a special response type for message handlers or HTTP endpoints that we can use as a directive to tell Wolverine to respond with the latest state of a projected aggregate as part of the command execution.
Let's make this concrete by taking the `MarkItemReady` command handler we've used earlier in this guide and building a slightly new version that produces a response of the latest aggregate:

```cs
[AggregateHandler]
public static (
    // Just tells Wolverine to use Marten's FetchLatest API to respond with
    // the updated version of Order that reflects whatever events were appended
    // in this command
    UpdatedAggregate,

    // The events that should be appended to the event stream for this order
    Events) Handle(OrderEventSourcingSample.MarkItemReady command, Order order)
{
    var events = new Events();

    if (order.Items.TryGetValue(command.ItemName, out var item))
    {
        // Not doing this in a purist way here, but just
        // trying to illustrate the Wolverine mechanics
        item.Ready = true;

        // Mark that this item is ready
        events.Add(new ItemReady(command.ItemName));
    }
    else
    {
        // Some crude validation
        throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order");
    }

    // If the order is ready to ship, also emit an OrderReady event
    if (order.IsReadyToShip())
    {
        events.Add(new OrderReady());
    }

    return (new UpdatedAggregate(), events);
}
```

Note the usage of the `Wolverine.Marten.UpdatedAggregate` response in the handler. That type by itself is just a directive telling Wolverine to generate the necessary code to call `FetchLatest` and respond with the result. The command handler above allows us to use the command in a mediator usage like so:

```cs
public static Task<Order> update_and_get_latest(IMessageBus bus, MarkItemReady command)
{
    // This will return the updated version of the Order
    // aggregate that incorporates whatever events were appended
    // in the course of processing the command
    return bus.InvokeAsync<Order>(command);
}
```

Likewise, you can use `UpdatedAggregate` as the response body of an HTTP endpoint with Wolverine.HTTP [as shown here](/guide/http/marten.html#responding-with-the-updated-aggregate).
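As a hedged sketch of that HTTP usage (the route here is illustrative, and this leans on the `[WriteAggregate]` parameter usage described earlier rather than being the canonical endpoint form):

```cs
public static class MarkItemReadyEndpoint
{
    // UpdatedAggregate in the response tuple tells Wolverine.HTTP to
    // respond with the FetchLatest version of the Order as the body
    [WolverinePost("/orders/item-ready")]
    public static (UpdatedAggregate, Events) Post(MarkItemReady command, [WriteAggregate] Order order)
    {
        var events = new Events();

        if (order.Items.TryGetValue(command.ItemName, out var item))
        {
            item.Ready = true;
            events.Add(new ItemReady(command.ItemName));
        }

        return (new UpdatedAggregate(), events);
    }
}
```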
::: info
This feature has been requested several times, but was finally brought about because of the need to consume Wolverine + Marten commands within Hot Chocolate mutations and always return the current state of the projected aggregate being updated to the user interface.
:::

### Passing the Aggregate to Before/Validate/Load Methods

The "[compound handler](/guide/handlers/#compound-handlers)" feature is a valuable way in Wolverine to organize your handler code, and it is fully supported within the aggregate handler workflow as well. If you have a command handler method marked with `[AggregateHandler]`, or the `[Aggregate]` attribute in HTTP usage, you can also pass the aggregate type as an argument to any `Before` / `LoadAsync` / `Validate` method on that handler to do validation before the main handler method. Here's a sample from the tests of doing just that:

```cs
public record RaiseIfValidated(Guid LetterAggregateId);

public static class RaiseIfValidatedHandler
{
    public static HandlerContinuation Validate(LetterAggregate aggregate)
        => aggregate.ACount == 0 ? HandlerContinuation.Continue : HandlerContinuation.Stop;

    [AggregateHandler]
    public static IEnumerable<object> Handle(RaiseIfValidated command, LetterAggregate aggregate)
    {
        yield return new BEvent();
    }
}
```

## Archiving Streams

To mark a Marten event stream as archived from a Wolverine aggregate handler, just append the special Marten [Archived](https://martendb.io/events/archiving.html#archived-event) event to the stream just like you would append any other event in an aggregate handler.

## Reading the Latest Version of an Aggregate

::: info
This is using Marten's [FetchLatest](https://martendb.io/events/projections/read-aggregates.html#fetchlatest) API and is limited to single stream projections.
:::

If you want to inject the current state of an event-sourced aggregate as a parameter into a message handler method strictly for information, and don't need the heavier "aggregate handler workflow," use the `[ReadAggregate]` attribute like this:

```cs
public record FindAggregate(Guid Id);

public static class FindLettersHandler
{
    // This is admittedly just some weak sauce testing support code
    public static LetterAggregateEnvelope Handle(FindAggregate command, [ReadAggregate] LetterAggregate aggregate)
    {
        return new LetterAggregateEnvelope(aggregate);
    }

    /* ALTERNATIVE VERSION
    [WolverineHandler]
    public static LetterAggregateEnvelope Handle2(
        FindAggregate command,

        // Just showing you that you can disable the validation
        [ReadAggregate(Required = false)] LetterAggregate aggregate)
    {
        return aggregate == null ? null : new LetterAggregateEnvelope(aggregate);
    }
    */
}
```

If the aggregate doesn't exist, the HTTP request will stop with a 404 status code. The aggregate/stream identity is found with the same rules as the `[Entity]` or `[Aggregate]` attributes:

1. You can specify a particular request body property name or route argument
2. Look for a request body property or route argument named after the entity type suffixed with "Id" (e.g. "LetterAggregateId")
3. Look for a request body property or route argument named "Id" or "id"

You can override the validation rules for how Wolverine handles an aggregate / event stream not being found by setting these properties on `[ReadAggregate]` (which is much more useful for HTTP endpoints):

```cs
// Straight up 404 on missing
[WolverineGet("/letters1/{id}")]
public static LetterAggregate GetLetter1([ReadAggregate] LetterAggregate letters) => letters;

// Not required
[WolverineGet("/letters2/{id}")]
public static string GetLetter2([ReadAggregate(Required = false)] LetterAggregate letters)
{
    return letters == null ? "No Letters" : "Got Letters";
}

// Straight up 404 & problem details on missing
[WolverineGet("/letters3/{id}")]
public static LetterAggregate GetLetter3([ReadAggregate(OnMissing = OnMissing.ProblemDetailsWith404)] LetterAggregate letters) => letters;
```

There is also an option with `OnMissing` to throw a `RequiredDataMissingException` if a required data element is missing. This option is probably most useful with message handlers where you may want to key off the exception with custom error handling rules.

## Targeting Multiple Streams at Once

It's now possible to use the "aggregate handler workflow" while needing to append events to more than one event stream at a time.

::: tip
You can use read-only views of event streams through `[ReadAggregate]` at will, and that will use Marten's `FetchLatest()` API underneath. For appending to multiple streams though, for now you will have to directly target `IEventStream<T>` to help Marten know which stream you're appending events to.
:::

Consider the canonical example of a use case where you move money from one account to another account and need both changes to be persisted in one atomic transaction.
Let’s start with a simplified domain model of events and a “self-aggregating” Account type like this:

```cs
public record AccountCreated(double InitialAmount);
public record Debited(double Amount);
public record Withdrawn(double Amount);

public class Account
{
    public Guid Id { get; set; }
    public double Amount { get; set; }

    public static Account Create(IEvent<AccountCreated> e)
        => new Account { Id = e.StreamId, Amount = e.Data.InitialAmount };

    public void Apply(Debited e) => Amount += e.Amount;
    public void Apply(Withdrawn e) => Amount -= e.Amount;
}
```
snippet source | anchor

And you need to handle a command like this:

```cs
public record TransferMoney(Guid FromId, Guid ToId, double Amount);
```
snippet source | anchor

Using the `[WriteAggregate]` attribute to denote the event streams we need to work with, we could write this message handler + HTTP endpoint:

```cs
public static class TransferMoneyHandler
{
    [WolverinePost("/accounts/transfer")]
    public static void Handle(
        TransferMoney command,
        [WriteAggregate(nameof(TransferMoney.FromId))] IEventStream<Account> fromAccount,
        [WriteAggregate(nameof(TransferMoney.ToId))] IEventStream<Account> toAccount)
    {
        // Would already 404 if either referenced account does not exist
        if (fromAccount.Aggregate.Amount >= command.Amount)
        {
            fromAccount.AppendOne(new Withdrawn(command.Amount));
            toAccount.AppendOne(new Debited(command.Amount));
        }
    }
}
```
snippet source | anchor

The `IEventStream<T>` abstraction comes from Marten’s `FetchForWriting()` API, which is our recommended way to interact with Marten streams in typical command handlers. This API is used underneath Wolverine’s “aggregate handler workflow”, but is normally hidden from user written code if you’re only working with one stream at a time. In this case though, we’ll need to work with the raw `IEventStream<Account>` objects that both wrap the projected aggregation of each `Account` and provide a point where we can explicitly append events separately to each event stream.
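To make the relationship to Marten concrete, here's a rough sketch (not the actual generated code) of what Wolverine arranges behind the `[WriteAggregate]` parameters using Marten's own API:

```cs
// Hypothetical longhand version of the TransferMoney handler
public static async Task Handle(TransferMoney command, IDocumentSession session)
{
    // Each [WriteAggregate] parameter resolves through FetchForWriting()
    var fromAccount = await session.Events.FetchForWriting<Account>(command.FromId);
    var toAccount = await session.Events.FetchForWriting<Account>(command.ToId);

    if (fromAccount.Aggregate.Amount >= command.Amount)
    {
        fromAccount.AppendOne(new Withdrawn(command.Amount));
        toAccount.AppendOne(new Debited(command.Amount));
    }

    // Wolverine's transactional middleware normally calls this for you
    await session.SaveChangesAsync();
}
```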
`FetchForWriting()` guarantees that you get the most up to date information for the `Account` view of each event stream regardless of how you have configured Marten’s `ProjectionLifecycle` for `Account` (kind of an important detail here!). The typical Marten transactional middleware within Wolverine calls `SaveChangesAsync()` for us on the Marten `IDocumentSession` unit of work for the command.

If there’s enough funds in the “From” account, this command will append a `Withdrawn` event to the “From” account and a `Debited` event to the “To” account. If either account has been written to between fetching the original information and committing the changes, Marten will reject the changes and throw its `ConcurrencyException` as an optimistic concurrency check.

We could write a unit test for the “happy path”, where there are enough funds to cover the transfer, like this:

```cs
public class when_transfering_money
{
    [Fact]
    public void happy_path_have_enough_funds()
    {
        // StubEventStream is a type that was recently added to Marten
        // specifically to facilitate testing logic like this
        var fromAccount = new StubEventStream<Account>(new Account { Amount = 1000 }) { Id = Guid.NewGuid() };
        var toAccount = new StubEventStream<Account>(new Account { Amount = 100 }) { Id = Guid.NewGuid() };

        TransferMoneyHandler.Handle(new TransferMoney(fromAccount.Id, toAccount.Id, 100), fromAccount, toAccount);

        // Now check the events we expected to be appended
        fromAccount.Events.Single().Data.ShouldBeOfType<Withdrawn>().Amount.ShouldBe(100);
        toAccount.Events.Single().Data.ShouldBeOfType<Debited>().Amount.ShouldBe(100);
    }
}
```
snippet source | anchor

## Strong Typed Identifiers

If you're so inclined, you can use strong typed identifiers from tools like [Vogen](https://github.com/SteveDunn/Vogen) and [StronglyTypedId](https://github.com/andrewlock/StronglyTypedId) within the "Aggregate Handler Workflow."
You can also use hand rolled value types that wrap either `Guid` or `string` depending on your Marten event store configuration (`StreamIdentity`), as long as they conform to [Marten's own rules about value type identifiers](https://martendb.io/documents/identity.html#strong-typed-identifiers). For a message handler, let's start with this example identifier type and aggregate from the Wolverine tests:

```cs
[StronglyTypedId(Template.Guid)]
public readonly partial struct LetterId;

public class StrongLetterAggregate
{
    public StrongLetterAggregate()
    {
    }

    public LetterId Id { get; set; }

    public int ACount { get; set; }
    public int BCount { get; set; }
    public int CCount { get; set; }
    public int DCount { get; set; }

    public void Apply(AEvent _) => ACount++;
    public void Apply(BEvent _) => BCount++;
    public void Apply(CEvent _) => CCount++;
    public void Apply(DEvent _) => DCount++;
}
```
snippet source | anchor

And now let's use that identifier type in message handlers:

```cs
public record IncrementStrongA(LetterId Id);
public record AddFrom(LetterId Id1, LetterId Id2);
public record IncrementBOnBoth(LetterId Id1, LetterId Id2);
public record FetchCounts(LetterId Id);

public static class StrongLetterHandler
{
    public static StrongLetterAggregate Handle(FetchCounts counts, [ReadAggregate] StrongLetterAggregate aggregate) => aggregate;

    public static AEvent Handle(IncrementStrongA command, [WriteAggregate] StrongLetterAggregate aggregate)
    {
        return new();
    }

    public static void Handle(
        IncrementBOnBoth command,
        [WriteAggregate(nameof(IncrementBOnBoth.Id1))] IEventStream<StrongLetterAggregate> stream1,
        [WriteAggregate(nameof(IncrementBOnBoth.Id2))] IEventStream<StrongLetterAggregate> stream2
    )
    {
        stream1.AppendOne(new BEvent());
        stream2.AppendOne(new BEvent());
    }

    public static IEnumerable<object> Handle(
        AddFrom command,
        [WriteAggregate(nameof(AddFrom.Id1))] StrongLetterAggregate _,
        [ReadAggregate(nameof(AddFrom.Id2))] StrongLetterAggregate readOnly)
    {
        for (int i = 0; i < readOnly.ACount; i++)
        {
            yield return new AEvent();
        }

        for (int i = 0; i
< readOnly.BCount; i++)
        {
            yield return new BEvent();
        }

        for (int i = 0; i < readOnly.CCount; i++)
        {
            yield return new CEvent();
        }

        for (int i = 0; i < readOnly.DCount; i++)
        {
            yield return new DEvent();
        }
    }
}
```
snippet source | anchor

And also in some of the equivalent Wolverine.HTTP endpoints:

```cs
[WolverineGet("/sti/aggregate/longhand/{id}")]
public static ValueTask<StrongLetterAggregate> Handle2(LetterId id, IDocumentSession session)
    => session.Events.FetchLatest<StrongLetterAggregate>(id.Value);

// This is an equivalent to the endpoint above
[WolverineGet("/sti/aggregate/{id}")]
public static StrongLetterAggregate Handle(
    [ReadAggregate] StrongLetterAggregate aggregate) => aggregate;
```
snippet source | anchor

These tools do this for you, and value types generated by these tools are legal route argument variables for Wolverine.HTTP now.

---

--- url: /guide/http/integration.md ---

# ASP.Net Core Integration

::: tip
WolverineFx.HTTP is an alternative to Minimal API or MVC Core for crafting HTTP service endpoints, but absolutely tries to be a good citizen within the greater ASP.Net Core ecosystem and heavily utilizes much of the ASP.Net Core technical foundation. It is also perfectly possible to use any mix of WolverineFx.HTTP, Minimal API, and MVC Core controllers within the same code base as you see fit.
:::

The `WolverineFx.HTTP` library extends Wolverine's runtime model to writing HTTP services with ASP.Net Core. As a quick sample, start a new project with:

```bash
dotnet new webapi
```

Then add the `WolverineFx.HTTP` dependency with:

```bash
dotnet add package WolverineFx.HTTP
```

::: tip
The [sample project for this page is on GitHub](https://github.com/JasperFx/wolverine/tree/main/src/Samples/TodoWebService/TodoWebService).
:::

From there, let's jump into the application bootstrapping.
Stealing the [sample "Todo" project idea from the Minimal API documentation](https://learn.microsoft.com/en-us/aspnet/core/tutorials/min-web-api?view=aspnetcore-7.0\&tabs=visual-studio) (and shifting to [Marten](https://martendb.io) for persistence just out of personal preference), this is the application bootstrapping: ```cs using Marten; using JasperFx; using JasperFx.Resources; using Wolverine; using Wolverine.Http; using Wolverine.Marten; var builder = WebApplication.CreateBuilder(args); // Adding Marten for persistence builder.Services.AddMarten(opts => { opts.Connection(builder.Configuration.GetConnectionString("Marten")); opts.DatabaseSchemaName = "todo"; }) .IntegrateWithWolverine(); builder.Services.AddResourceSetupOnStartup(); // Wolverine usage is required for WolverineFx.Http builder.Host.UseWolverine(opts => { // This middleware will apply to the HTTP // endpoints as well opts.Policies.AutoApplyTransactions(); // Setting up the outbox on all locally handled // background tasks opts.Policies.UseDurableLocalQueues(); }); // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); builder.Services.AddWolverineHttp(); var app = builder.Build(); // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } // Let's add in Wolverine HTTP endpoints to the routing tree app.MapWolverineEndpoints(); return await app.RunJasperFxCommands(args); ``` snippet source | anchor Do note that the only thing in that sample that pertains to `WolverineFx.Http` itself is the call to `IEndpointRouteBuilder.MapWolverineEndpoints()`. 
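Distilled down, the WolverineFx.Http-specific wiring in the sample above is just a few calls. This minimal sketch omits the Marten and Swagger setup:

```cs
var builder = WebApplication.CreateBuilder(args);

// Wolverine itself is required for WolverineFx.Http
builder.Host.UseWolverine();

// Registers the services that WolverineFx.Http needs
builder.Services.AddWolverineHttp();

var app = builder.Build();

// Adds all discovered Wolverine endpoints to the ASP.Net Core routing tree
app.MapWolverineEndpoints();

app.Run();
```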
Let's move on to "Hello, World" with a new Wolverine http endpoint from this class we'll add to the sample project:

```cs
public class HelloEndpoint
{
    [WolverineGet("/")]
    public string Get() => "Hello.";
}
```
snippet source | anchor

At application startup, WolverineFx.Http will find the `HelloEndpoint.Get()` method and treat it as a Wolverine http endpoint with the route pattern `GET: /` specified in the `[WolverineGet]` attribute. As you'd expect, that route will write the return value back to the HTTP response and behave as specified by this [Alba](https://jasperfx.github.io/alba) specification:

```cs
[Fact]
public async Task hello_world()
{
    var result = await _host.Scenario(x =>
    {
        x.Get.Url("/");
        x.Header("content-type").SingleValueShouldEqual("text/plain");
    });

    result.ReadAsText().ShouldBe("Hello.");
}
```
snippet source | anchor

Moving on to the actual `Todo` problem domain, let's assume we've got a class like this:

```cs
public class Todo
{
    public int Id { get; set; }
    public string? Name { get; set; }
    public bool IsComplete { get; set; }
}
```
snippet source | anchor

In a sample class called [TodoEndpoints](https://github.com/JasperFx/wolverine/blob/main/src/Samples/TodoWebService/TodoWebService/Endpoints.cs) let's add an HTTP service endpoint for listing all the known `Todo` documents:

```cs
[WolverineGet("/todoitems")]
public static Task<IReadOnlyList<Todo>> Get(IQuerySession session)
    => session.Query<Todo>().ToListAsync();
```
snippet source | anchor

As you'd guess, this method will serialize all the known `Todo` documents from the database into the HTTP response and return a 200 status code. In this particular case the code is a little bit noisier than the Minimal API equivalent, but that's okay, because you can happily use Minimal API and WolverineFx.Http together in the same project. WolverineFx.Http, however, will shine in more complicated endpoints.
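For reference, a rough Minimal API equivalent of that list endpoint (a sketch, not code from the sample project) would be:

```cs
app.MapGet("/todoitems", (IQuerySession session)
    => session.Query<Todo>().ToListAsync());
```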
Consider this endpoint just to return the data for a single `Todo` document:

```cs
// Wolverine can infer the 200/404 status codes for you here
// so there's no code noise just to satisfy OpenAPI tooling
[WolverineGet("/todoitems/{id}")]
public static Task<Todo?> GetTodo(int id, IQuerySession session, CancellationToken cancellation)
    => session.LoadAsync<Todo>(id, cancellation);
```
snippet source | anchor

At this point it's effectively de rigueur for any web service to support [OpenAPI](https://www.openapis.org/) documentation directly in the service. Fortunately, WolverineFx.Http is able to glean most of the necessary metadata to support OpenAPI documentation with [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore) from the method signature up above. The method up above will also cleanly set a status code of 404 if the requested `Todo` document does not exist.

Now, the bread and butter for WolverineFx.Http is using it in conjunction with Wolverine itself. In this sample, let's create a new `Todo` based on submitted data, but also publish a new event message with Wolverine to do some background processing after the HTTP call succeeds. And, oh, yeah, let's make sure this endpoint is actively using Wolverine's [transactional outbox](/guide/durability/) support for consistency:

```cs
[WolverinePost("/todoitems")]
public static async Task<IResult> Create(CreateTodo command, IDocumentSession session, IMessageBus bus)
{
    var todo = new Todo { Name = command.Name };
    session.Store(todo);

    // Going to raise an event within our system to be processed later
    await bus.PublishAsync(new TodoCreated(todo.Id));

    return Results.Created($"/todoitems/{todo.Id}", todo);
}
```
snippet source | anchor

The endpoint code above is automatically enrolled in the Marten transactional middleware by simple virtue of having a dependency on Marten's `IDocumentSession`.
By also taking in the `IMessageBus` dependency, WolverineFx.Http is wrapping the transactional outbox behavior around the method so that the `TodoCreated` message is only sent after the database transaction succeeds.

::: tip
WolverineFx.Http allows you to place any number of endpoint methods on any public class that follows the naming conventions, but we strongly recommend isolating any kind of complicated endpoint method to its own endpoint class.
:::

Lastly for this page, consider the need to update a `Todo` from a `PUT` call. Your HTTP endpoint may vary its handling and response by whether or not the document actually exists. Just to show off Wolverine's "composite handler" functionality and also how WolverineFx.Http supports middleware, consider this more complex endpoint:

```cs
public static class UpdateTodoEndpoint
{
    public static async Task<(Todo? todo, IResult result)> LoadAsync(UpdateTodo command, IDocumentSession session)
    {
        var todo = await session.LoadAsync<Todo>(command.Id);
        return todo != null
            ? (todo, new WolverineContinue())
            : (todo, Results.NotFound());
    }

    [WolverinePut("/todoitems")]
    public static void Put(UpdateTodo command, Todo todo, IDocumentSession session)
    {
        todo.Name = command.Name;
        todo.IsComplete = command.IsComplete;
        session.Store(todo);
    }
}
```
snippet source | anchor

## How it Works

WolverineFx.Http takes advantage of ASP.Net Core endpoint routing to add additional routes to ASP.Net Core's routing tree. In Wolverine's case though, the underlying `RequestDelegate` is compiled at runtime (or ahead of time for faster cold starts!) with the same code weaving strategy as Wolverine's message handling. Wolverine is able to utilize the same [middleware model as the message handlers](/guide/handlers/middleware), with some extensions for recognizing the ASP.Net Core `IResult` model.
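As a purely illustrative sketch (this is not the actual generated code), the compiled `RequestDelegate` for the single-`Todo` endpoint shown earlier conceptually does something along these lines:

```cs
// Hypothetical approximation of the woven endpoint code
public static async Task Handle(HttpContext httpContext, IQuerySession session)
{
    // Route argument parsing is woven in ahead of the endpoint method call
    var id = int.Parse((string)httpContext.Request.RouteValues["id"]!);

    // The user-written endpoint method
    var todo = await session.LoadAsync<Todo>(id, httpContext.RequestAborted);

    if (todo == null)
    {
        // Wolverine infers the 404 branch from the nullable return type
        httpContext.Response.StatusCode = 404;
        return;
    }

    await httpContext.Response.WriteAsJsonAsync(todo);
}
```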
## Discovery

::: tip
The HTTP endpoint method discovery is very similar to the [handler discovery](/guide/handlers/discovery) and will scan the same assemblies as with the handlers.
:::

WolverineFx.Http discovers endpoint methods automatically by doing type scanning within your application. The assemblies scanned are:

1. The entry assembly for your application
2. Any assembly marked with the `[assembly: WolverineModule]` attribute
3. Any assembly that is explicitly added in the `UseWolverine()` configuration as a handler assembly as shown in the following sample code:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // This gives you the option to programmatically
        // add other assemblies to the discovery of HTTP endpoints
        // or message handlers
        var assembly = Assembly.Load("my other assembly name that holds HTTP endpoints or handlers");
        opts.Discovery.IncludeAssembly(assembly);
    }).StartAsync();
```
snippet source | anchor

::: info
Wolverine 1.6.0 added the looser discovery rules to just go look for any method on public, concrete types that is decorated with a Wolverine route attribute.
:::

In the aforementioned assemblies, Wolverine will look for **public, concrete, closed** types whose names are suffixed by `Endpoint` or `Endpoints` **and also any public, concrete class with methods that are decorated by any `[WolverineVerb]` attribute**. Within these types, Wolverine is looking for **public** methods that are decorated with one of Wolverine's HTTP method attributes:

* `[WolverineGet]`
* `[WolverinePut]`
* `[WolverinePost]`
* `[WolverineDelete]`
* `[WolverineOptions]`
* `[WolverineHead]`

The usage is suspiciously similar to the older `[HttpGet]` type attributes in MVC Core.

## OpenAPI Metadata

Wolverine is trying to replicate the necessary OpenAPI metadata to fully support Swashbuckle usage with Wolverine endpoints. This is a work in progress though.
At this point it can at least expose:

* HTTP status codes
* HTTP methods
* Input and output types when an http method either takes in JSON bodies or writes JSON responses
* Authorization rules -- or really any ASP.Net Core attribute like `[Authorize]`

---

--- url: /guide/http/security.md ---

# Authentication and Authorization

Wolverine.HTTP endpoints are just routes within your ASP.Net Core application, and will happily work with all existing ASP.Net Core middleware. Likewise, the built-in `[AllowAnonymous]` and `[Authorize]` attributes from ASP.Net Core are valid on Wolverine HTTP endpoints.

To require authorization on all endpoints (which is overridden by `[AllowAnonymous]`), use this syntax:

```csharp
app.MapWolverineEndpoints(opts =>
{
    opts.RequireAuthorizeOnAll();
});
```

or more selectively, the code above is just syntactical sugar for:

```cs
/// <summary>
/// Equivalent of calling RequireAuthorization() on all wolverine endpoints
/// </summary>
public void RequireAuthorizeOnAll()
{
    ConfigureEndpoints(e => e.RequireAuthorization());
}
```
snippet source | anchor

---

--- url: /guide/basics.md ---

# Basics

![Wolverine Messaging Architecture](/messages.jpeg)

One way or another, Wolverine is all about messages within your system or between systems (Wolverine considers HTTP to just be a different flavor of message 😃). Staying inside a single Wolverine system, a message is typically just a .NET class or struct or C#/F# record. A message generally represents either a "command" that should trigger an operation or an "event" that just lets another part of your system know that something happened. Just know that as far as Wolverine is concerned, those are roles, and unlike some other messaging frameworks, they have no impact whatsoever on Wolverine's handling or implementation.
Here are a couple of simple samples:

```cs
// A "command" message
public record DebitAccount(long AccountId, decimal Amount);

// An "event" message
public record AccountOverdrawn(long AccountId);
```
snippet source | anchor

The next concept in Wolverine is a message handler, which is just a method that "knows" how to process an incoming message. Here's an extremely simple example:

```cs
public static class DebitAccountHandler
{
    public static void Handle(DebitAccount account)
    {
        Console.WriteLine($"I'm supposed to debit {account.Amount} from account {account.AccountId}");
    }
}
```
snippet source | anchor

Wolverine can act as a completely local mediator tool that allows your code to invoke the handler for a message at any time without having to know anything about exactly how that message is processed with this usage:

```cs
public async Task invoke_debit_account(IMessageBus bus)
{
    // Debit $250 from the account #2222
    await bus.InvokeAsync(new DebitAccount(2222, 250));
}
```
snippet source | anchor

While there's certainly some value in Wolverine just being a command bus running inside of a single process, Wolverine also allows you to both publish and process messages received through external infrastructure like [Rabbit MQ](https://www.rabbitmq.com/) or [Pulsar](https://pulsar.apache.org/).

To put this into perspective, here's how a Wolverine application could be connected to the outside world:

![Wolverine Messaging Architecture](/WolverineMessaging.png)

:::tip
The diagram above should just say "Message Handler" as Wolverine makes no structural differentiation between commands or events, but Jeremy is being too lazy to fix the diagram.
:::

## Terminology

* *Message* -- Typically just a .NET class or C# record that can be easily serialized.
See [messages and serialization](/guide/messages) for more information
* *Envelope* -- Wolverine's [Envelope Wrapper](https://www.enterpriseintegrationpatterns.com/patterns/messaging/EnvelopeWrapper.html) model that wraps the raw messages with metadata
* *Message Handler* -- A method or function that "knows" how to process an incoming message. See [Message Handlers](/guide/handlers/) for more information
* *Transport* -- The support within Wolverine for external messaging infrastructure tools like [Rabbit MQ](/guide/messaging/transports/rabbitmq/), [Amazon SQS](/guide/messaging/transports/sqs/), [Azure Service Bus](/guide/messaging/transports/azure-service-bus/), or Wolverine's built in [TCP transport](/guide/messaging/transports/tcp)
* *Endpoint* -- The configuration for a Wolverine connection to some sort of external resource like a Rabbit MQ exchange or an Amazon SQS queue. The [Async API](https://www.asyncapi.com/) specification refers to this as a *channel*, and Wolverine may very well change its nomenclature in the future to be consistent with Async API.
* *Sending Agent* -- One of Wolverine's internal adapters that publishes outgoing messages to transport endpoints; you won't use this directly in your own code
* *Listening Agent* -- Again, an internal detail of Wolverine that receives messages from external transport endpoints, and mediates between the transports and executing the message handlers
* *Node* -- Not to be confused with Node.js or a Kubernetes "Node"; in this documentation, "node" is just meant to be a running instance of your Wolverine application within an application cluster of any sort
* *Agent* -- Wolverine has a concept of stateful software "agents" that run on a single node, with Wolverine controlling the distribution of the agents. This is mostly used behind the scenes, but just know that it exists
* *Message Store* -- Database storage for Wolverine's [inbox/outbox persistent messaging](/guide/durability/).
A durable message store is necessary for Wolverine to support leader election, node/agent assignments, durable scheduled messaging in most cases, and its [transactional inbox/outbox](/guide/durability/) support * *Durability Agent* -- An internal subsystem in Wolverine that runs in a background service to interact with the message store for Wolverine's [transactional inbox/outbox](https://microservices.io/patterns/data/transactional-outbox.html) functionality --- --- url: /guide/handlers/batching.md --- # Batch Message Processing Sometimes you might want to process a stream of incoming messages in batches rather than one at a time. This might be for performance reasons, or maybe there's some kind of business logic that makes more sense to calculate for batches, or maybe you want a logical ["debounce"](https://medium.com/@jamischarles/what-is-debouncing-2505c0648ff1) in how your system responds to the incoming messages. ::: info The batching is supported both for messages published in process to local queues and from incoming messages from external transports. ::: Regardless, Wolverine has a mechanism to locally batch incoming messages and forward them to a batch handler. First, let's say that you have a message type called `Item`: ```cs public record Item(string Name); ``` snippet source | anchor And for whatever reason, we need to process these messages in batches. 
To do that, we first need to have a message handler for an array of `Item` like so:

```cs
public static class ItemHandler
{
    public static void Handle(Item[] items)
    {
        // Handle this just like a normal message handler,
        // just that the message type is Item[]
    }
}
```
snippet source | anchor

::: warning
At this point, Wolverine **only** supports an array of the message type for the batched handler
:::

::: tip
Batch message handlers are just like any other message handler and have no special rules about their capabilities
:::

With that in our system, now we need to tell Wolverine to group `Item` messages, and we do that with the following syntax:

```cs
theHost = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.BatchMessagesOf<Item>(batching =>
            {
                // Really the maximum batch size
                batching.BatchSize = 500;

                // You can alternatively override the local queue
                // for the batch publishing.
                batching.LocalExecutionQueueName = "items";

                // We can tell Wolverine to wait longer for incoming
                // messages before kicking out a batch if there
                // are fewer waiting messages than the maximum
                // batch size
                batching.TriggerTime = 1.Seconds();
            })

            // The object returned here is the local queue configuration that
            // will handle the batched messages. This may be useful for fine
            // tuning the behavior of the batch processing
            .Sequential();
    }).StartAsync();
```
snippet source | anchor

And that's that!
Just to bring this a little more into focus, here's an end to end test from the Wolverine codebase:

```cs
[Fact]
public async Task send_end_to_end_with_batch()
{
    // Items to publish
    var item1 = new Item("one");
    var item2 = new Item("two");
    var item3 = new Item("three");
    var item4 = new Item("four");

    Func<IMessageContext, Task> publish = async c =>
    {
        // I'm publishing the 4 items in sequence
        await c.PublishAsync(item1);
        await c.PublishAsync(item2);
        await c.PublishAsync(item3);
        await c.PublishAsync(item4);
    };

    // This is the "act" part of the test
    var session = await theHost.TrackActivity()

        // Wolverine testing helper to "wait" until
        // the tracking receives a message of Item[]
        .WaitForMessageToBeReceivedAt<Item[]>(theHost)
        .ExecuteAndWaitAsync(publish);

    // The four Item messages should be processed as a single
    // batch message
    var items = session.Executed.SingleMessage<Item[]>();

    items.Length.ShouldBe(4);
    items.ShouldContain(item1);
    items.ShouldContain(item2);
    items.ShouldContain(item3);
    items.ShouldContain(item4);
}
```
snippet source | anchor

Alright, with all that being said, here are a few more facts about the batch messaging support:

1. There is absolutely no need to create a specific message handler for the `Item` message, and in fact, you should not do so
2. The message batching is able to group the message batches by tenant id *if* your Wolverine system uses multi-tenancy

## What about durable messaging ("inbox")?

The durable inbox behaves just a little bit differently for message batching. Wolverine will technically "handle" the individual messages, but does not mark them as handled in the message store until a batch message that refers to the original message is completely processed.

## Custom Batching Strategies

::: info
This feature was originally added for a [JasperFx Software](https://jasperfx.net) customer who needed to batch messages by a logical saga id.
:::

By default, Wolverine is simply batching messages of type `Item` into a message of type `Item[]`.
But what if you need to do something a little more custom? Like batching messages by a logical saga id or some kind of entity identity?

As an example, let's say that you are building some kind of task tracking system where you might easily have dozens of sub tasks for each parent task that could be getting marked complete in rapid succession. That's maybe a good example of where batching would be handy. Let's say that we have two message types for the individual item message and a custom task for the batched message like so:

```cs
// Messages at the granular level that might be streaming in
// very quickly
public record SubTaskCompleted(string TaskId, string SubTaskId);

// A custom message type for processing a batch of sub task
// completed messages. Note that it's batched by the TaskId
public record SubTaskCompletedBatch(string TaskId, string[] SubTaskIdList);
```
snippet source | anchor

To teach Wolverine how to batch up our `SubTaskCompleted` messages into our custom batch message, we need to supply our own implementation of Wolverine's built in `Wolverine.Runtime.Batching.IMessageBatcher` type:

```cs
/// <summary>
/// Plugin strategy for creating custom grouping of messages
/// </summary>
public interface IMessageBatcher
{
    /// <summary>
    /// Main method that batches items
    /// </summary>
    IEnumerable<Envelope> Group(IReadOnlyList<Envelope> envelopes);

    /// <summary>
    /// The actual message type being built that is assumed to contain
    /// all the batched items
    /// </summary>
    Type BatchMessageType { get; }
}
```
snippet source | anchor

A custom implementation of that interface in this case would look like this:

```cs
public class SubTaskCompletedBatcher : IMessageBatcher
{
    public IEnumerable<Envelope> Group(IReadOnlyList<Envelope> envelopes)
    {
        var groups = envelopes
            // You can trust that the message will be a non-null SubTaskCompleted here
            .GroupBy(x => x.Message!.As<SubTaskCompleted>().TaskId)
            .ToArray();

        foreach (var group in groups)
        {
            var subTaskIdList = group
                .Select(x => x.Message)
                .OfType<SubTaskCompleted>()
                .Select(x => x.SubTaskId)
                .ToArray();

            var message = new
SubTaskCompletedBatch(group.Key, subTaskIdList);

            // It's important here to pass along the group of envelopes that make up
            // this batched message for Wolverine's transactional inbox/outbox
            // tracking
            yield return new Envelope(message, group);
        }
    }

    public Type BatchMessageType => typeof(SubTaskCompletedBatch);
}
```
snippet source | anchor

And of course, this doesn't work without a matching message handler for our custom message type:

```cs
public static class SubTaskCompletedBatchHandler
{
    public static Task LoadAsync(SubTaskCompletedBatch batch, ITrackedTaskRepository repository)
    {
        return repository.LoadAsync(batch.TaskId);
    }

    public static Task Handle(SubTaskCompletedBatch batch)
    {
        // actually do something here....
        return Task.CompletedTask;
    }
}
```
snippet source | anchor

And finally, we need to tell Wolverine about the batching and the strategy for batching the `SubTaskCompleted` message type:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.BatchMessagesOf<SubTaskCompleted>(x =>
        {
            // We just have to let Wolverine know about our custom
            // message batcher
            x.Batcher = new SubTaskCompletedBatcher();
        });
    }).StartAsync();
```
snippet source | anchor

---

--- url: /introduction/best-practices.md ---

# Best Practices

The Wolverine community certainly wants you to be successful with Wolverine, so we're using this page to gather whatever advice we can offer. This advice falls into two areas: generic tips for creating maintainable code when using asynchronous messaging that would apply to any messaging or command executor tool, and specific tips for Wolverine itself.

## Dividing Handlers

Wolverine does not enforce or require today any kind of explicit assignment of incoming message handler to endpoint (i.e. Rabbit MQ queue, AWS SQS queue, Kafka topic). That being said, you will frequently want to define your message routing to isolate incoming message types into separate endpoints for the sake of throughput or parallelized work.
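For instance, isolating a high-volume message type to its own queue and listener might look like this sketch using the Rabbit MQ transport (the message type and queue name here are hypothetical):

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq().AutoProvision();

        // Hypothetical: give a chatty message type its own queue...
        opts.PublishMessage<ItemScanned>().ToRabbitQueue("scans");

        // ...and a dedicated listening endpoint for it
        opts.ListenToRabbitQueue("scans");
    }).StartAsync();
```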
Likewise, if message ordering is important, you may want to purposely route multiple message types to the same listening endpoint.

While Wolverine will happily allow you to implement multiple message handler methods or multiple HTTP endpoints in the same class, you may get better results by only allowing a single message handler method per class, especially for larger, more complex message handling or HTTP request handling.

## Avoid Abstracting Wolverine

This is generic advice for just about any infrastructure tool. You will lose out on Wolverine functionality by trying to abstract it, and very likely just create an abstraction that merely mimics a subset of Wolverine for little gain. If you are concerned about the testability of your message handlers, we recommend using [cascading messages](/guide/handlers/cascading) instead, which would completely remove the need for abstracting Wolverine.

## Lean on Wolverine Error Handling

Rather than explicitly catching exceptions in message handlers, we recommend leaning on Wolverine's configurable error handling policies. This will save you explicit code that can obfuscate your actual functionality, while still providing robust responses to errors and Wolverine's built in observability (error logging, circuit breakers, execution statistics).

## Pre-Generate Types to Optimize Production Usage

Wolverine has an admittedly unusual runtime architecture in that it depends much more on runtime generated code than the IoC container tricks that many other .NET frameworks use today. That's great for performance, and definitely helps Wolverine to enable much lower ceremony code, but it comes with a potentially significant memory usage and [cold start](https://en.wikipedia.org/wiki/Cold_start_\(computing\)#:~:text=Cold%20start%20in%20computing%20refers,cache%20or%20starting%20up%20subsystems.) problem.
If you see any of the issues I just described, or want to get in front of this issue, utilize the "pre-generated types" functionality described in [Working with Code Generation](/guide/codegen).

## Prefer Pure Functions for Business Logic

As much as possible, we recommend that you try to create [pure functions](https://en.wikipedia.org/wiki/Pure_function) for any business logic or workflow routing logic that is responsible for "deciding" what to do next. The goal here is to make that code relatively easy to test inside of isolated unit tests that are completely decoupled from infrastructure. Moreover, using pure functions allows you to largely eschew the usage of mock objects inside of unit tests, which can become problematic when overused.

Wolverine has a lot of specific functionality to move infrastructure concerns out of the way of your business or workflow logic. For tips on how to create pure functions for your Wolverine message handlers or HTTP endpoints, see:

* [A-Frame Architecture with Wolverine](https://jeremydmiller.com/2023/07/19/a-frame-architecture-with-wolverine/)
* [Testing Without Mocks: A Pattern Language by Jim Shore](https://www.jamesshore.com/v2/projects/nullables/testing-without-mocks)
* [Compound Handlers in Wolverine](https://jeremydmiller.com/2023/03/07/compound-handlers-in-wolverine/)
* [Isolating Side Effects from Wolverine Handlers](https://jeremydmiller.com/2023/04/24/isolating-side-effects-from-wolverine-handlers/)

## Make All Side Effects Apparent from the Root Message Handler

::: info
This advice arose from Jeremy's involvement with a legacy system using a Wolverine competitor, where a message handler had a huge dependency tree of services that depended on other services, and deep, deep down the call stack, a service published a message through that service bus. The point here is to make your code easy to reason about by being able to easily scan and see the side effects and outcomes of the original message.
:::

Very frequently, you'll need to publish additional messages from either an HTTP endpoint or a message handler. You can technically have the current `IMessageBus` for the message injected into your handler's dependencies and have outgoing messages published deeper in your call stack -- but we strongly recommend against doing that **because it can make your code and your system hard to reason about**. Instead, as much as possible, the Wolverine team recommends that these outgoing, "cascading" messages only be published from the root method as shown below:

```csharp
// Using cascading messages is certainly fine
public static SecondMessage Handle(FirstMessage message, IService1 service1)
{
    return new SecondMessage();
}

// This is fine too if you prefer the more explicit code model and don't mind
// the tighter coupling
public static async ValueTask Handle(FirstMessage message, IService1 service1, IMessageBus bus)
{
    // Little more coupling, but some folks will prefer the more explicit style
    await bus.PublishAsync(new SecondMessage());
}

public static SecondMessage? Handle(FirstMessage message, IService1 service1)
{
    // Call into another service to *decide* whether or not to send
    // the cascading message
    if (service1.ComplicatedLogicTest(message))
    {
        return BuildUpComplicatedMessage();
    }

    return null; // no cascading message
}
```

This advice does **not** mean that you have to cascade every possible follow-up step from a message handler; the goal is just to make your code easy to reason about by not hiding "side effect" actions deep in the stack trace underneath the message handler.

Consider this case as an anti-pattern to avoid:

1. Your message handler method calls a method on `IService1`
2. Which might call a method on `IService2`
3.
Which might call `IMessageBus.PublishAsync()` to publish a new message as part of your original message handling

In the case above, it can become very easy to lose sight of the workflow of the system, and the Wolverine team has encountered systems built using other messaging frameworks that suffered from this problem.

## Keep Your Call Stacks Short

Honestly, as a follow-up to the previous statement, we highly advise you to keep your "call stacks" short within message handlers to help make your code easier to reason about. And by "call stack," we mean: how many different code files or types underneath the message handler will you have to jump through to really understand how a command or event message is handled? What we've found in our own development is that whatever value layering provides for loose coupling, it can easily do even more damage to your ability to reason about and modify the code as a whole.

To be blunt, the Wolverine team is not a fan of Onion/Clean Architecture approaches with a lot of layering. Wolverine leans hard into the "A-Frame Architecture" idea as a way of creating loose coupling between technical concerns and business logic, with simpler code than we think you can achieve with more typical layered, hexagonal architecture approaches.
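To make that A-Frame idea concrete, here's a minimal sketch with hypothetical `DebitAccount` / `AccountOverdrawn` message types and an `Account` entity (none of these types come from Wolverine itself): the handler stays a pure function that receives already-loaded state, applies the business rule, and returns the outcome for Wolverine to dispatch:

```cs
public record DebitAccount(Guid AccountId, decimal Amount);
public record AccountOverdrawn(Guid AccountId);

public class Account
{
    public Guid Id { get; set; }
    public decimal Balance { get; set; }
}

public static class DebitAccountHandler
{
    // A pure function: state in, decision out. Loading and saving the
    // Account is pushed out to middleware or other return values
    public static AccountOverdrawn? Handle(DebitAccount command, Account account)
    {
        account.Balance -= command.Amount;

        // Cascade a message only when the business rule says to
        return account.Balance < 0
            ? new AccountOverdrawn(command.AccountId)
            : null;
    }
}
```

A unit test can construct an `Account`, call `Handle()` directly, and assert on the returned message -- no infrastructure, no mocks.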
## Attaining IMessageBus

The Wolverine team recommendation is to utilize [cascading messages](/guide/handlers/cascading) as much as possible to publish additional messages, but when you do need a reference to an `IMessageBus` object, we suggest taking that service into your message handler or HTTP endpoint methods as a method argument like so:

```csharp
public static async Task HandleAsync(MyMessage message, IMessageBus messageBus)
{
    // handle the message...
}
```

or if you really prefer the little extra ceremony of constructor injection because that's how the .NET ecosystem has worked for the past 15-20 years, do this:

```csharp
public class MyMessageHandler
{
    private readonly IMessageBus _messageBus;

    public MyMessageHandler(IMessageBus messageBus)
    {
        _messageBus = messageBus;
    }

    public async Task HandleAsync(MyMessage message)
    {
        // handle the message...
    }
}
```

Avoid ever trying to resolve `IMessageBus` at runtime through a scoped container like this:

```csharp
services.AddScoped<Service>(s =>
{
    var bus = s.GetRequiredService<IMessageBus>();
    return new Service { TenantId = bus.TenantId };
});
```

The usage above will give you a completely different `IMessageBus` object than the current `MessageContext` being used by Wolverine to track behavior and state.

## IoC Container Usage

::: tip
Wolverine is trying really hard **not** to use an IoC container whatsoever at runtime. It's not going to work to try to pass state from the outside world into Wolverine through ASP.NET Core scoped containers, for example.
:::

Honestly, you'll get better performance and better results all the way around if you can avoid doing any kind of "opaque" service registrations in your IoC container that require runtime resolution.
In effect, this means staying away from any kind of `Scoped` or `Transient` Lambda registration like:

```csharp
services.AddScoped<Database>(s =>
{
    var c = s.GetRequiredService<IConfiguration>();
    return new Database(c.GetConnectionString("foo"));
});
```

This might be more about preference than a hard advantage (it is a performance improvement though), but the Wolverine team recommends using method injection over the older, traditional constructor injection approach as shown below:

```csharp
// Wolverine prefers this:
public static class Message1Handler
{
    public static Task HandleAsync(Message1 message, IService service)
    {
        // Do stuff
    }
}

// This certainly works with Wolverine, but it's more code and more
// runtime overhead
public class Message1Handler
{
    private readonly IService1 _service1;

    public Message1Handler(IService1 service1)
    {
        _service1 = service1;
    }

    public Task HandleAsync(Message1 message)
    {
        // Do stuff
    }
}
```

By and large, Wolverine kind of wants you to use fewer abstractions and keep a shorter call stack. See the earlier section about "pure function" handlers for alternatives to jamming more abstracted services into an IoC container in order to create separation of concerns and testability.

::: warning Troubleshooting
When familiarizing yourself with Wolverine, it can be daunting to troubleshoot the failure modes of the different approaches to IoC and dependency management.
Below are some steps to take if you run into issues.
:::

#### Inspect the generated code

Pre-generate the code to see exactly what is happening at runtime:

```bash
dotnet run -- codegen write
```

Look at the generated code in the `./Internal/Generated/WolverineHandlers/` folder and look for the creation of an `IServiceScope` as below:

```csharp
using var serviceScope = _serviceScopeFactory.CreateScope();
```

If this line is present, any services resolved through this scope will not share state with the Wolverine runtime.
What this means in practice is that services resolved through the `IServiceProvider` that have a dependency on Wolverine resources such as the current message `Envelope` or the `IMessageContext` will have invalid objects for the current operation.

#### Example

```csharp
public class BadService(IServiceProvider serviceProvider, IMessageContext context)
{
    // The `context` parameter here would be different from the context parameter in `GoodService`,
    // and the tenant id would be *DEFAULT*
    public void DoThings()
    {
        Console.WriteLine($"Current tenantId is {context.TenantId}");
    }
}

public class GoodService(IMessageContext context)
{
    public void DoThings()
    {
        Console.WriteLine($"Current tenantId is {context.TenantId}");
    }
}
```

## Vertical Slice Architecture

::: tip
Despite its name, "Vertical Slice Architecture" is really just an idea about organizing code and not what I would normally think of as a true architectural pattern. You could technically follow any kind of ports and adapters style of coding like the Clean Architecture while still organizing your code in vertical slices instead of horizontal layers.
:::

There's nothing stopping you from using Wolverine as part of a typical [Clean Architecture](https://www.youtube.com/watch?v=yF9SwL0p0Y0) or [Onion Architecture](https://jeffreypalermo.com/2008/07/the-onion-architecture-part-1/) project layout where technical concerns are generally spread out into different projects per technical concern. Wolverine, though, has quite a few features specifically to support a "Vertical Slice Architecture" code layout, and you may be able to utilize Wolverine to create a maintainable codebase with much less complexity than the state of the art Clean/Onion layered approach.

See [Low Ceremony Vertical Slice Architecture with Wolverine](https://jeremydmiller.com/2023/07/10/low-ceremony-vertical-slice-architecture-with-wolverine/)

## Graceful Shutdown of Nodes

Hey, don't worry about this quite so much anymore.
One of the big changes in Wolverine 3.0 was making Wolverine a **lot** more tolerant of how the application was shut down. Go forth, debug as needed, and just get things done!

It's helpful to note that Wolverine operates in `Balanced` mode by default, which enables it to operate as a cluster of nodes. Ideally, each node process should be gracefully shut down to prevent failures when other nodes attempt to communicate with it. A health check process determines which nodes are stale. If you are running Wolverine in a container, ensure that the orchestrator correctly sends a TERM signal and allows enough time before it forcefully kills the process. For reference, you can check [Pod termination](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) in Kubernetes.

---

--- url: /guide/messaging/broadcast-to-topic.md ---

# Broadcast Messages to a Specific Topic

If you're using a transport endpoint that supports publishing messages by topic, such as this example using Rabbit MQ from the Wolverine tests:

```cs
theSender = Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq("host=localhost;port=5672").AutoProvision();
        opts.PublishAllMessages().ToRabbitTopics("wolverine.topics", exchange =>
        {
            exchange.BindTopic("color.green").ToQueue("green");
            exchange.BindTopic("color.blue").ToQueue("blue");
            exchange.BindTopic("color.*").ToQueue("all");

            // Need this to be able to go to ONLY the green receiver for a test
            exchange.BindTopic("special").ToQueue("green");
        });

        opts.Discovery.DisableConventionalDiscovery()
            .IncludeType();

        opts.ServiceName = "TheSender";

        opts.PublishMessagesToRabbitMqExchange("wolverine.topics", m => m.TopicName);
    }).Start();
```

snippet source | anchor

You can explicitly publish a message to a topic through this syntax:

```cs
var publisher = theSender.Services
    .GetRequiredService<IMessageBus>();

await publisher.BroadcastToTopicAsync("color.purple", new Message1());
```

snippet source | anchor

::: warning
If you wish to use this functionality, you have to configure at least one sending endpoint subscription like a Rabbit MQ topic exchange in your application. Wolverine has to know how to send messages with your topic.
:::

## Topic Sending as Cascading Message

Wolverine is pretty serious about enabling as many message handlers or HTTP endpoints as possible to be [pure functions](https://en.wikipedia.org/wiki/Pure_function) where the unit testing is easier, so there's an option to broadcast messages to a particular topic as a cascaded message:

```cs
public class ManuallyRoutedTopicResponseHandler
{
    public IEnumerable<object> Consume(MyMessage message, Envelope envelope)
    {
        // Go North now at the "direction" queue
        yield return new GoNorth().ToTopic($"direction/{envelope.TenantId}");
    }
}
```

snippet source | anchor

---

--- url: /guide/http/caching.md ---

# Caching

For caching HTTP responses, Wolverine can simply work with the [Response Caching Middleware in ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/performance/caching/middleware?view=aspnetcore-10.0). Wolverine.HTTP *will* respect your usage of the `[ResponseCache]` attribute on either the endpoint handler class or method to write out both the `vary` and `cache-control` HTTP headers -- with an attribute on the method taking precedence. Here's an example or two:

```cs
// This is all it takes:
[WolverineGet("/cache/one"), ResponseCache(Duration = 3, VaryByHeader = "accept-encoding", NoStore = false)]
public static string GetOne()
{
    return "one";
}

[WolverineGet("/cache/two"), ResponseCache(Duration = 10, NoStore = true)]
public static string GetTwo()
{
    return "two";
}
```

snippet source | anchor

Wolverine.HTTP will also modify the OpenAPI metadata to reflect the caching as well as embed the metadata in the ASP.NET Core `Endpoint` for utilization inside of the response caching middleware.
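Note that Wolverine.HTTP only writes the headers and metadata; to actually get cached responses, the ASP.NET Core middleware itself still has to be registered and added to the pipeline. A minimal sketch of that wiring (standard ASP.NET Core APIs, with nothing Wolverine-specific assumed beyond `MapWolverineEndpoints()`):

```cs
using Wolverine.Http;

var builder = WebApplication.CreateBuilder(args);

// Register the ASP.NET Core response caching services
builder.Services.AddResponseCaching();

var app = builder.Build();

// The middleware honors the cache-control and vary headers that
// Wolverine.HTTP writes from your [ResponseCache] attributes
app.UseResponseCaching();

app.MapWolverineEndpoints();

await app.RunAsync();
```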
---

--- url: /guide/handlers/cascading.md ---

# Cascading Messages

::: tip
Cascading messages are just the equivalent of `IMessageBus.PublishAsync()`. Just know that unlike [Side Effects](/guide/handlers/side-effects), cascaded messages are handled separately on a later thread and with a completely independent "retry loop" from the originating message.
:::

::: info
One of Wolverine's advantages over previous .NET messaging frameworks is the ability to express many message handlers as pure functions for better testability and hopefully more self-explanatory code. Cascading messages are one of the ways that Wolverine enables pure function handler methods.
:::

Many times during the processing of a message you will need to create and send out other messages. Maybe you need to respond back to the original sender with a reply, maybe you need to trigger a subsequent action, or send out additional messages to start some kind of background processing. You can do that by just having your handler class use the `IMessageContext` interface as shown in this sample:

```cs
public class NoCascadingHandler
{
    private readonly IMessageContext _bus;

    public NoCascadingHandler(IMessageContext bus)
    {
        _bus = bus;
    }

    public void Consume(MyMessage message)
    {
        // do whatever work you need to for MyMessage,
        // then send out a new MyResponse
        _bus.SendAsync(new MyResponse());
    }
}
```

snippet source | anchor

The code above certainly works, and this is consistent with most of the competing service bus tools.
However, Wolverine supports the concept of *cascading messages* that allow you to automatically send out objects returned from your handler methods without having to use `IMessageContext`, as shown below:

```cs
public class CascadingHandler
{
    public MyResponse Consume(MyMessage message)
    {
        return new MyResponse();
    }
}
```

snippet source | anchor

When Wolverine executes `CascadingHandler.Consume(MyMessage)`, it "knows" that the `MyResponse` return value should be sent through the service bus as part of the same transaction, with whatever routing rules apply to `MyResponse`.

A couple things to note here:

* Cascading messages returned from handler methods will not be sent out until after the original message succeeds, and are part of the underlying transport transaction
* Nulls returned by handler methods are simply ignored
* The cascading message feature was explicitly designed to make unit testing handler actions easier by shifting the test strategy to [state-based](http://blog.jayfields.com/2008/02/state-based-testing.html) testing, where you mostly need to verify the state of the response objects instead of mock-heavy testing against calls to `IMessageContext`.

See [return types](/guide/handlers/return-values) for more information on valid handler signatures.

In terms of response types that become cascading messages, the return types of your message handlers can be:

1. A specific message type
2. `object` if the cascaded message type is variable
3. An object that implements `ISendMyself` to customize how a cascaded message is sent (timeouts? specific destinations?)
4. `IEnumerable<object>` or `object[]` or `Task` to make multiple responses
5. `IAsyncEnumerable<object>` to make multiple cascading messages out of an asynchronous handler
6.
A [Tuple](https://docs.microsoft.com/en-us/dotnet/csharp/tuples) type to express the exact kinds of responses your message handler returns

## To Specific Endpoints

Sometimes you'll want to explicitly send messages to specific endpoints rather than relying on Wolverine's message routing. You can still use cascading messages sent to an endpoint by name or by destination `Uri` like so:

```cs
public class ManuallyRoutedResponseHandler
{
    public IEnumerable<object> Consume(MyMessage message)
    {
        // Go North now at the "important" queue
        yield return new GoNorth().ToEndpoint("important");

        // Go West in a lower priority queue
        yield return new GoWest().ToDestination(new Uri("rabbitmq://queue/low"));
    }
}
```

snippet source | anchor

You can also add optional `DeliveryOptions` to the outgoing messages to fine-tune how the message is to be published.

## Using OutgoingMessages

::: tip
In the case of mixing different return values from a handler (side effects, Marten events, etc.), it might well make your code more intention revealing to use `OutgoingMessages`
:::

You can return a value from your handlers called `OutgoingMessages` that is just a collection of outgoing messages. This helps Wolverine "know" that these messages should be cascaded after the initial message is successful. The usage of this is shown below:

```cs
public static OutgoingMessages Handle(Incoming incoming)
{
    // You can use collection initializers for OutgoingMessages in C#
    // as a shorthand.
    var messages = new OutgoingMessages
    {
        new Message1(),
        new Message2(),
        new Message3(),
    };

    // Send a specific message back to the original sender
    // of the incoming message
    messages.RespondToSender(new Message4());

    // Send a message with a 5 minute delay
    messages.Delay(new Message5(), 5.Minutes());

    // Schedule a message to be sent at a specific time
    messages.Schedule(new Message5(), new DateTimeOffset(2023, 4, 5, 0, 0, 0, 0.Minutes()));

    return messages;
}
```

snippet source | anchor

Do note that the value of `OutgoingMessages` is probably greatest when used in a tuple response from a handler that's a mix of cascading messages and other side effects.

## Scheduled, Delayed, or other Customized Message Publishing

The basic cascading messages effectively do a straight-up `IMessageBus.PublishAsync()` on each object returned from a handler -- but that takes away a lot of the power of Wolverine. Not to worry, you've got a couple helpers to get both the testability and pure function goodness of cascading messages **and** full access to the power of Wolverine. Here's some example usages:

```cs
public static IEnumerable<object> Consume(Incoming incoming)
{
    // Delay the message delivery by 10 minutes
    yield return new Message1().DelayedFor(10.Minutes());

    // Schedule the message delivery for a certain time
    yield return new Message2().ScheduledAt(new DateTimeOffset(DateTime.Today.AddDays(2)));

    // Customize the message delivery however you please...
    yield return new Message3()
        .WithDeliveryOptions(new DeliveryOptions().WithHeader("foo", "bar"));

    // Send back to the original sender
    yield return Respond.ToSender(new Message4());
}
```

snippet source | anchor

## Request/Reply Scenarios

::: warning
Just know that in the case of using `InvokeAsync()` for request/reply, the reply type of `T` **is not also published as a cascaded message**. Instead, it is only returned to the original caller.
:::

Normally, cascading messages are just sent out according to the configured subscription rules for that message type, but there's an exception case. If the original sender requested a response, Wolverine will automatically send the cascading message returned from the action to the original sender if the cascading message type matches the reply that the sender had requested. If you're examining the `Envelope` objects for the message, you'll see that the "reply-requested" header is "MyResponse."

Let's say that we have two running service bus nodes named "Sender" and "Receiver." If this code below is called from the "Sender" node:

```cs
public class Requester
{
    private readonly IMessageContext _bus;

    public Requester(IMessageContext bus)
    {
        _bus = bus;
    }

    public ValueTask GatherResponse()
    {
        return _bus.SendAsync(new MyMessage(), DeliveryOptions.RequireResponse<MyResponse>());
    }
}
```

snippet source | anchor

and inside Receiver we have this code:

```cs
public class CascadingHandler
{
    public MyResponse Consume(MyMessage message)
    {
        return new MyResponse();
    }
}
```

snippet source | anchor

Assuming that `MyMessage` is configured to be sent to "Receiver," the following steps take place:

1. Sender sends a `MyMessage` message to the Receiver node with the "reply-requested" header value of "MyResponse"
2. Receiver handles the `MyMessage` message by calling the `CascadingHandler.Consume(MyMessage)` method
3. Receiver sees that the value of the "reply-requested" header matches the response, so it sends the `MyResponse` object back to Sender
4. When Sender receives the matching `MyResponse` message that corresponds to the original `MyMessage`, it sets the completion of the Task returned by the `IMessageContext.Request()` method

## Conditional Responses

You may need some conditional logic within your handler to know what the cascading message is going to be.
If you need to return different types of cascading messages based on some kind of logic, you can still do that by making your handler method's return type `object`, as in this sample shown below:

```cs
public class ConditionalResponseHandler
{
    public object Consume(DirectionRequest request)
    {
        switch (request.Direction)
        {
            case "North":
                return new GoNorth();
            case "South":
                return new GoSouth();
        }

        // This does nothing
        return null;
    }
}
```

snippet source | anchor

## Schedule Response Messages

You may want to raise a delayed or scheduled response. In this case you will need to return an `Envelope` for the response as shown below:

```cs
public class ScheduledResponseHandler
{
    public Envelope Consume(DirectionRequest request)
    {
        return new Envelope(new GoWest()).ScheduleDelayed(TimeSpan.FromMinutes(5));
    }

    public Envelope Consume(MyMessage message)
    {
        // Process GoEast at 8 PM local time
        return new Envelope(new GoEast()).ScheduleAt(DateTime.Today.AddHours(20));
    }
}
```

snippet source | anchor

## Multiple Cascading Messages

You can also raise any number of cascading messages by returning any type that can be cast to `IEnumerable<object>`, and Wolverine will treat each element as a separate cascading message. An empty enumerable is just ignored.

```cs
public class MultipleResponseHandler
{
    public IEnumerable<object> Consume(MyMessage message)
    {
        // Go North now
        yield return new GoNorth();

        // Go West in an hour
        yield return new GoWest().DelayedFor(1.Hours());
    }
}
```

snippet source | anchor

## Using C# Tuples as Return Values

Sometimes you may well need to return multiple cascading messages from your original message action. In FubuMVC, Wolverine's forebear, you had to return either `object[]` or `IEnumerable<object>` as the return type of your action -- which had the unfortunate side effect of partially obfuscating your code by making it less clear what message types were being cascaded from your handler without carefully reading the method body.
In Wolverine, we still support the "mystery meat" `object` return value signatures, but now you can also use C# tuples to better denote the cascading message types. This handler that cascades a pair of messages:

```cs
public class MultipleResponseHandler
{
    public IEnumerable<object> Consume(MyMessage message)
    {
        // Go North now
        yield return new GoNorth();

        // Go West in an hour
        yield return new GoWest().DelayedFor(1.Hours());
    }
}
```

snippet source | anchor

can be rewritten with C# 7 tuples to:

```cs
public class TupleResponseHandler
{
    // Both GoNorth and GoWest will be interpreted as
    // cascading messages
    public (GoNorth, GoWest) Consume(MyMessage message)
    {
        return (new GoNorth(), new GoWest());
    }
}
```

snippet source | anchor

The sample above still treats both `GoNorth` and `GoWest` as cascading messages. The Wolverine team thinks that the tuple-ized signature makes the code more self-documenting and easier to unit test.

## Responding to Sender

If in a message handler you need to send a message directly back to the original sender, you can use this cascaded message option:

```cs
public object Handle(PingMessage message)
{
    var pong = new PongMessage { Id = message.Id };

    // This will send the pong message back
    // to the original sender of the PingMessage
    return Respond.ToSender(pong);
}
```

snippet source | anchor

---

--- url: /guide/command-line.md ---

# Command Line Integration

@[youtube](3C5bacH0akU)

With help from its [JasperFx](https://github.com/JasperFx) teammate [Oakton](https://jasperfx.github.io/oakton), Wolverine supports quite a few command line diagnostic and resource management tools.
To get started, apply Oakton as the command line parser in your applications as shown in the last line of code in this sample application bootstrapping from Wolverine's [Getting Started](/tutorials/getting-started):

```cs
using JasperFx;
using Quickstart;
using Wolverine;

var builder = WebApplication.CreateBuilder(args);

// The almost inevitable inclusion of Swashbuckle:)
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

// For now, this is enough to integrate Wolverine into
// your application, but there'll be *many* more
// options later of course :-)
builder.Host.UseWolverine();

// Some in memory services for our application, the
// only thing that matters for now is that these are
// systems built by the application's IoC container
builder.Services.AddSingleton();
builder.Services.AddSingleton();

var app = builder.Build();

// An endpoint to create a new issue that delegates to Wolverine as a mediator
app.MapPost("/issues/create", (CreateIssue body, IMessageBus bus) => bus.InvokeAsync(body));

// An endpoint to assign an issue to an existing user that delegates to Wolverine as a mediator
app.MapPost("/issues/assign", (AssignIssue body, IMessageBus bus) => bus.InvokeAsync(body));

// Swashbuckle inclusion
app.UseSwagger();
app.UseSwaggerUI();

app.MapGet("/", () => Results.Redirect("/swagger"));

// Opt into using JasperFx for command line parsing
// to unlock built in diagnostics and utility tools within
// your Wolverine application
return await app.RunJasperFxCommands(args);
```

snippet source | anchor

From this project's root in the command line terminal tool of your choice, type:

```bash
dotnet run -- help
```

and you *should* get this hopefully helpful rundown of available command options:

```bash
The available commands are:

  Alias       Description
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  check-env   Execute all environment checks against the application
  codegen     Utilities for
working with JasperFx.CodeGeneration and JasperFx.RuntimeCompiler
  describe    Writes out a description of your running application to either the console or a file
  help        List all the available commands
  resources   Check, setup, or teardown stateful resources of this system
  run         Start and run this .Net application
  storage     Administer the Wolverine message storage

Use dotnet run -- ? [command name] or dotnet run -- help [command name] to see usage help about a specific command
```

## Describe a Wolverine Application

::: tip
While Wolverine certainly knows upfront what message types it handles, you may need to help Wolverine "know" what types will be outgoing messages later with the [message discovery](/guide/messages.html#message-discovery) support.
:::

Wolverine is admittedly a configuration-heavy framework, and some combinations of conventions, policies, and explicit configuration could easily lead to confusion about how the system is going to behave. To help ameliorate that possible situation -- but also to help the Wolverine team remotely support folks using Wolverine -- you have this command line tool:

```bash
dotnet run -- describe
```

At this time, a Wolverine application will spit out command line reports about its configuration that describe:

* "Wolverine Options" - the basic properties as configured, including what Wolverine thinks is the application assembly and any registered extensions
* "Wolverine Listeners" - a tabular list of all the configured listening endpoints within the system, including local queues, and information about how they are configured
* "Wolverine Message Routing" - a tabular list of all the message routing for *known* messages published within the system
* "Wolverine Sending Endpoints" - a tabular list of all *known*, configured endpoints that send messages externally
* "Wolverine Error Handling" - a preview of the message failure policies active within the system
* "Wolverine Http Endpoints" - shows all Wolverine HTTP
endpoints. This is only active if WolverineFx.HTTP is used within the system

## Exporting System Capabilities

This command:

```bash
dotnet run -- capabilities wolverine.json
```

will write a JSON file to "wolverine.json" that completely describes all the configured settings, message types, message store, messaging endpoints, and even event stores configured for this application. The Wolverine team may ask you for this file to help you troubleshoot issues in the future.

This functionality was originally built for consumption in the "CritterWatch" add-on tool, but was requested by a [JasperFx Software](https://jasperfx.net) client to provide a mechanism to detect any unintentional changes to Wolverine application configuration.

## Other Highlights

* See the [code generation support](./codegen)
* The `storage` command helps manage the [durable messaging support](./durability/)
* Wolverine has direct support for [Oakton](https://jasperfx.github.io/oakton) environment checks and resource management that can be very helpful for Wolverine integrations with message brokers or database servers

---

--- url: /guide/configuration.md ---

# Configuration

::: info
As of 3.0, Wolverine **does not require the usage of the [Lamar](https://jasperfx.github.io/lamar) IoC container**, and will no longer replace the built-in .NET container with Lamar. Wolverine 3.0 *is* tested with both the built-in `ServiceProvider` and Lamar. It's theoretically possible to use other IoC containers now as long as they conform to the .NET conforming container model, but this isn't tested by the Wolverine team.
:::

Wolverine is configured with the `IHostBuilder.UseWolverine()` or `HostApplicationBuilder` extension methods, with the actual configuration living on a single `WolverineOptions` object.
The `WolverineOptions` is the configuration model for your Wolverine application, and as such it can be used to configure directives about: * Basic elements of your Wolverine system like the system name itself * Connections to [external messaging infrastructure](/guide/messaging/introduction) through Wolverine's *transport* model * Messaging endpoints for either listening for incoming messages or subscribing endpoints * [Subscription rules](/guide/messaging/subscriptions) for outgoing messages * How [message handlers](/guide/messages) are discovered within your application and from what assemblies * Policies to control how message handlers function, or endpoints are configured, or error handling policies ![Wolverine Configuration Model](/configuration-model.png) ## With ASP.NET Core ::: info Do note that there's some [additional configuration to use WolverineFx.HTTP](/guide/http/integration) as well. ::: Below is a sample of adding Wolverine to an ASP.NET Core application that is bootstrapped with `WebApplicationBuilder`: ```cs using JasperFx; using Quickstart; using Wolverine; var builder = WebApplication.CreateBuilder(args); // The almost inevitable inclusion of Swashbuckle:) builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); // For now, this is enough to integrate Wolverine into // your application, but there'll be *many* more // options later of course :-) builder.Host.UseWolverine(); // Some in memory services for our application, the // only thing that matters for now is that these are // systems built by the application's IoC container builder.Services.AddSingleton(); builder.Services.AddSingleton(); var app = builder.Build(); // An endpoint to create a new issue that delegates to Wolverine as a mediator app.MapPost("/issues/create", (CreateIssue body, IMessageBus bus) => bus.InvokeAsync(body)); // An endpoint to assign an issue to an existing user that delegates to Wolverine as a mediator app.MapPost("/issues/assign", (AssignIssue 
body, IMessageBus bus) => bus.InvokeAsync(body)); // Swashbuckle inclusion app.UseSwagger(); app.UseSwaggerUI(); app.MapGet("/", () => Results.Redirect("/swagger")); // Opt into using JasperFx for command line parsing // to unlock built in diagnostics and utility tools within // your Wolverine application return await app.RunJasperFxCommands(args); ``` snippet source | anchor ## "Headless" Applications :::tip The `WolverineOptions.Services` property can be used to add additional IoC service registrations with the standard .NET `IServiceCollection` syntax. ::: For "headless" console applications with no user interface or HTTP service endpoints, the bootstrapping can be done with just the `HostBuilder` mechanism as shown below: ```cs return await Host.CreateDefaultBuilder(args) .UseWolverine(opts => { opts.ServiceName = "Subscriber1"; opts.Discovery.DisableConventionalDiscovery().IncludeType(); opts.ListenAtPort(MessagingConstants.Subscriber1Port); opts.UseRabbitMq().AutoProvision(); opts.ListenToRabbitQueue(MessagingConstants.Subscriber1Queue); // Publish to the other subscriber opts.PublishMessage().ToRabbitQueue(MessagingConstants.Subscriber2Queue); // Add Open Telemetry tracing opts.Services.AddOpenTelemetryTracing(builder => { builder .SetResourceBuilder(ResourceBuilder .CreateDefault() .AddService("Subscriber1")) .AddJaegerExporter() // Add Wolverine as a source .AddSource("Wolverine"); }); }) // Executing with Oakton as the command line parser to unlock // quite a few utilities and diagnostics in our Wolverine application .RunOaktonCommands(args); ``` snippet source | anchor As of Wolverine 3.0, you can also use the `HostApplicationBuilder` mechanism: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var connectionString = builder.Configuration.GetConnectionString("database"); opts.Services.AddDbContextWithWolverineIntegration(x => { x.UseSqlServer(connectionString); }); // Add the auto transaction
middleware attachment policy opts.Policies.AutoApplyTransactions(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor And lastly, you can just use `IServiceCollection.AddWolverine()` by itself. ## Replacing ServiceProvider with Lamar If you run into any trouble whatsoever with code generation after upgrading to Wolverine 3.0, please: 1. Please [raise a GitHub issue in Wolverine](https://github.com/JasperFx/wolverine/issues/new/choose) with some description of the offending message handler or http endpoint 2. Fall back to Lamar for your IoC tool To use Lamar, add this Nuget to your main project: ```bash dotnet add package Lamar.Microsoft.DependencyInjection ``` If you're using `IHostBuilder` like you might for a simple console app, it's: ```cs // With IHostBuilder var builder = Host.CreateDefaultBuilder(); builder.UseLamar(); ``` snippet source | anchor In a web application, it's: ```csharp var builder = WebApplication.CreateBuilder(args); builder.Host.UseLamar(); ``` and with `HostApplicationBuilder`, try: ```csharp var builder = Host.CreateApplicationBuilder(); // Little ugly, and Lamar *should* have a helper for this... builder.ConfigureContainer(new LamarServiceProviderFactory()); ``` ## Splitting Configuration Across Modules To keep your `UseWolverine()` configuration from becoming too huge or to keep specific configuration maybe within different modules within your system, you can use [Wolverine extensions](/guide/extensions). 
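For instance, a module could package its own messaging setup as an extension that the host project then includes. Here's a sketch with hypothetical module, service, and message names:

```cs
using Microsoft.Extensions.DependencyInjection;
using Wolverine;

// Hypothetical types owned by an "Orders" module
public interface IOrderPricer;
public class OrderPricer : IOrderPricer;
public record OrderPlaced(Guid OrderId);

// The module's Wolverine configuration, packaged as an extension
public class OrdersModuleExtension : IWolverineExtension
{
    public void Configure(WolverineOptions options)
    {
        // Registrations and messaging rules owned by the Orders module
        options.Services.AddScoped<IOrderPricer, OrderPricer>();
        options.PublishMessage<OrderPlaced>().ToLocalQueue("orders");
    }
}
```

In the host project, calling `opts.Include<OrdersModuleExtension>()` inside `UseWolverine()` would apply the module's configuration.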
You can also use the `IServiceCollection.ConfigureWolverine()` method to add configuration to your Wolverine application from outside the main `UseWolverine()` code as shown below: ```cs var builder = Host.CreateApplicationBuilder(); // Baseline Wolverine configuration builder.Services.AddWolverine(opts => { }); // This would be applied as an extension builder.Services.ConfigureWolverine(w => { // There is a specific helper for this, but just go for it // as an easy example w.Durability.Mode = DurabilityMode.Solo; }); using var host = builder.Build(); host.Services.GetRequiredService<IWolverineRuntime>() .Options .Durability .Mode .ShouldBe(DurabilityMode.Solo); ``` snippet source | anchor --- --- url: /guide/extensions.md --- # Configuration Extensions ::: warning As of Wolverine 3.0 and our move to directly support non-Lamar IoC containers, it is no longer possible to alter service registrations through Wolverine extensions that are themselves registered in the IoC container at bootstrapping time. ::: Wolverine supports the concept of extensions for modularizing Wolverine configuration with implementations of the `IWolverineExtension` interface: ```cs /// <summary> /// Use to create loadable extensions to Wolverine applications /// </summary> public interface IWolverineExtension { /// <summary> /// Make any alterations to the WolverineOptions for the application /// </summary> /// <param name="options"></param> void Configure(WolverineOptions options); } ``` snippet source | anchor Here's a sample: ```cs public class SampleExtension : IWolverineExtension { public void Configure(WolverineOptions options) { // Add service registrations options.Services.AddTransient(); // Alter settings within the application options .UseNewtonsoftForSerialization(settings => settings.TypeNameHandling = TypeNameHandling.None); } } ``` snippet source | anchor Extensions can be applied programmatically against the `WolverineOptions` like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Including a single extension opts.Include<SampleExtension>(); //
Or add a Wolverine extension that needs // to use IoC services opts.Services.AddWolverineExtension(); }) .ConfigureServices(services => { // This is the same logical usage, just showing that it // can be done directly against IServiceCollection services.AddWolverineExtension(); }) .StartAsync(); ``` snippet source | anchor Lastly, you can also add `IWolverineExtension` types to your IoC container registration that will be applied to `WolverineOptions` just before bootstrapping Wolverine at runtime. This was originally added to allow for test automation scenarios where you might want to override part of the Wolverine setup during tests. As an example, consider this common usage for disabling external transports during testing: ```cs // This is using Alba to bootstrap a Wolverine application // for integration tests, but it's using WebApplicationFactory // to do the actual bootstrapping await using var host = await AlbaHost.For(x => { // I'm overriding x.ConfigureServices(services => services.DisableAllExternalWolverineTransports()); }); ``` snippet source | anchor Behind the scenes, Wolverine has a small extension like this: ```cs internal class DisableExternalTransports : IWolverineExtension { public void Configure(WolverineOptions options) { options.ExternalTransportsAreStubbed = true; } } ``` snippet source | anchor And that extension is just added to the application's IoC container at test bootstrapping time like this: ```cs public static IServiceCollection DisableAllExternalWolverineTransports(this IServiceCollection services) { services.AddSingleton(); return services; } ``` snippet source | anchor In usage, the `IWolverineExtension` objects added to the IoC container are applied *after* the inner configuration inside your application's `UseWolverine()` set up. 
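That ordering means a container-registered extension can override what `UseWolverine()` declared, which is exactly what makes the test-time overrides possible. Here's a sketch with hypothetical values:

```cs
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Wolverine;

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // The application's "real" setting
        opts.Durability.Mode = DurabilityMode.Balanced;
    })
    .ConfigureServices(services =>
    {
        // Applied *after* the UseWolverine() lambda above,
        // so the extension's value wins
        services.AddSingleton<IWolverineExtension, SoloModeExtension>();
    })
    .StartAsync();

// Hypothetical extension forcing "Solo" durability, e.g. for local testing
public class SoloModeExtension : IWolverineExtension
{
    public void Configure(WolverineOptions options)
    {
        options.Durability.Mode = DurabilityMode.Solo;
    }
}
```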
As another example, `IWolverineExtension` objects added to the IoC container can also use services injected into the extension object from the IoC container as shown in this example that uses the .NET `IConfiguration` service: ```cs public class ConfigurationUsingExtension : IWolverineExtension { private readonly IConfiguration _configuration; // Use constructor injection from your DI container at runtime public ConfigurationUsingExtension(IConfiguration configuration) { _configuration = configuration; } public void Configure(WolverineOptions options) { // Configure the wolverine application using // the information from IConfiguration } } ``` snippet source | anchor There's also the small `AddWolverineExtension<T>()` helper method shown earlier to register Wolverine extensions. ## Modifying Transport Configuration If your Wolverine extension needs to apply some kind of extra configuration to the transport integration, most of the transport packages support a `WolverineOptions.Configure[TransportName]()` extension method that will let you make additive configuration changes to the transport integration for items like declaring extra queues, topics, exchanges, subscriptions or overriding dead letter queue behavior. For example: 1. `ConfigureRabbitMq()` 2. `ConfigureKafka()` 3. `ConfigureAzureServiceBus()` 4. `ConfigureAmazonSqs()` ## Asynchronous Extensions ::: tip This was added to Wolverine 2.3, specifically for a user needing to use the [Feature Flag library](https://learn.microsoft.com/en-us/azure/azure-app-configuration/use-feature-flags-dotnet-core) from Microsoft. ::: There is also an option for creating Wolverine extensions that need to use asynchronous methods to configure the `WolverineOptions` using the `IAsyncWolverineExtension` interface.
A sample is shown below: ```cs public class SampleAsyncExtension : IAsyncWolverineExtension { private readonly IFeatureManager _features; public SampleAsyncExtension(IFeatureManager features) { _features = features; } public async ValueTask Configure(WolverineOptions options) { if (await _features.IsEnabledAsync("Module1")) { // Make any kind of Wolverine configuration options .PublishMessage() .ToLocalQueue("module1-high-priority") .Sequential(); } } } ``` snippet source | anchor Which can be added to your application with this extension method on `IServiceCollection`: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.Services.AddFeatureManagement(); opts.Services.AddSingleton(featureManager); // Adding the async extension to the underlying IoC container opts.Services.AddAsyncWolverineExtension(); }).StartAsync(); ``` snippet source | anchor ### Asynchronous Extensions and Wolverine.HTTP Just a heads up, there's a timing issue between the application of asynchronous Wolverine extensions and the usage of the Wolverine.HTTP `MapWolverineEndpoints()` method. If you need the asynchronous extensions to apply to the HTTP configuration, you need to help Wolverine out by explicitly calling this method in your `Program` file *after* building the `WebApplication`, but before calling `MapWolverineEndpoints()` like so: ```cs var app = builder.Build(); // In order for async Wolverine extensions to apply to Wolverine.HTTP configuration, // you will need to explicitly call this *before* MapWolverineEndpoints() await app.Services.ApplyAsyncWolverineExtensions(); ``` snippet source | anchor ## Wolverine Plugin Modules ::: warning This functionality will likely be eliminated in Wolverine 3.0. 
::: ::: tip Use this sparingly, but it might be advantageous for adding extra instrumentation or extra middleware ::: If you want to create a Wolverine extension assembly that automatically loads itself into an application just by being referenced by the project, you can use a combination of `IWolverineExtension` and the `[WolverineModule]` assembly attribute. Assuming that you have an implementation of `IWolverineExtension` named `Module1Extension`, you can mark your module library with this attribute to automatically add that extension to Wolverine: ```cs [assembly: WolverineModule] ``` snippet source | anchor ## Disabling Assembly Scanning Some Wolverine users have seen rare issues with the assembly scanning cratering an application with out of memory exceptions in the case of an application directory being the same as the root of a Docker container. *If* you experience that issue, or just want a faster start up time, you can disable the automatic extension discovery using this syntax: ```cs using var host = await Microsoft.Extensions.Hosting.Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.DisableConventionalDiscovery(); }, ExtensionDiscovery.ManualOnly) .StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/sqs/queues.md --- # Configuring Queues --- --- url: /guide/messaging/transports/rabbitmq/multiple-brokers.md --- # Connecting to Multiple Brokers If you have a need to exchange messages with multiple Rabbit MQ brokers from one application, you have the option to add additional, named brokers identified by Wolverine's `BrokerName` identity.
Here's the syntax to work with extra, named brokers: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Connect to the "main" Rabbit MQ broker for this application opts.UseRabbitMq(builder.Configuration.GetConnectionString("internal-rabbit-mq")); // Listen for incoming messages on the main broker at the queue named "incoming" opts.ListenToRabbitQueue("incoming"); // Let's say there's one Rabbit MQ broker for internal communications // and a second one for external communications var external = new BrokerName("external"); // BUT! Let's also use a second broker opts.AddNamedRabbitMqBroker(external, factory => { factory.Uri = new Uri(builder.Configuration.GetConnectionString("external-rabbit-mq")); }); // Listen to a queue on the named, secondary broker opts.ListenToRabbitQueueOnNamedBroker(external, "incoming"); // Other options for publishing messages to the named broker opts.PublishAllMessages().ToRabbitExchangeOnNamedBroker(external, "exchange1"); opts.PublishAllMessages().ToRabbitQueueOnNamedBroker(external, "outgoing"); opts.PublishAllMessages().ToRabbitRoutingKeyOnNamedBroker(external, "exchange1", "key2"); opts.PublishAllMessages().ToRabbitTopicsOnNamedBroker(external, "topics"); }); ``` snippet source | anchor The `Uri` values for endpoints to the additional broker follows the same rules as the normal usage of the Rabbit MQ transport, but the `Uri.Scheme` is the name of the additional broker. For example, connecting to a queue named "incoming" at a broker named by `new BrokerName("external")` would be `external://queue/incoming`. --- --- url: /guide/messaging/transports/azureservicebus/conventional-routing.md --- # Conventional Message Routing Lastly, you can have Wolverine automatically determine message routing to Azure Service Bus based on conventions as shown in the code snippet below. 
By default, this approach assumes that each outgoing message type should be sent to queue named with the [message type name](/guide/messages.html#message-type-name-or-alias) for that message type. Likewise, Wolverine sets up a listener for a queue named similarly for each message type known to be handled by the application. ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // One way or another, you're probably pulling the Azure Service Bus // connection string out of configuration var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); // Connect to the broker in the simplest possible way opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision() .UseConventionalRouting(convention => { // Optionally override the default queue naming scheme convention.QueueNameForSender(t => t.Namespace) // Optionally override the default queue naming scheme .QueueNameForListener(t => t.Namespace) // Fine tune the conventionally discovered listeners .ConfigureListeners((listener, builder) => { var messageType = builder.MessageType; var runtime = builder.Runtime; // Access to basically everything // customize the new queue listener.CircuitBreaker(queue => { }); // other options... 
}) // Fine tune the conventionally discovered sending endpoints .ConfigureSending((subscriber, builder) => { // Similarly, use the message type and/or wolverine runtime // to customize the message sending }); }); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor ## Route to Topics and Subscriptions You can also opt into conventional routing using topics and subscriptions named after the message type names like this: ```cs opts.UseAzureServiceBusTesting() .UseTopicAndSubscriptionConventionalRouting(convention => { // Optionally control every aspect of the convention and // its applicability to types // as well as overriding any listener, sender, topic, or subscription // options // Can't use the full name because of limitations on name length convention.SubscriptionNameForListener(t => t.Name.ToLowerInvariant()); convention.TopicNameForListener(t => t.Name.ToLowerInvariant()); convention.TopicNameForSender(t => t.Name.ToLowerInvariant()); }) .AutoProvision() .AutoPurgeOnStartup(); ``` snippet source | anchor ## Separated Handler Behavior In the case of using the `MultipleHandlerBehavior.Separated` mode, this convention will create a subscription for each separate handler using the handler type to derive the subscription name and the message type to derive the topic name. Both the topic and subscription are declared by the transport if using the `AutoProvision()` setting. --- --- url: /guide/messaging/transports/gcp-pubsub/conventional-routing.md --- # Conventional Message Routing You can have Wolverine automatically determine message routing to GCP Pub/Sub based on conventions as shown in the code snippet below. By default, this approach assumes that each outgoing message type should be sent to topic named with the [message type name](/guide/messages.html#message-type-name-or-alias) for that message type. 
Likewise, Wolverine sets up a listener for a topic named similarly for each message type known to be handled by the application. ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UsePubsub("your-project-id") .UseConventionalRouting(convention => { // Optionally override the default queue naming scheme convention.TopicNameForSender(t => t.Namespace) // Optionally override the default queue naming scheme .QueueNameForListener(t => t.Namespace) // Fine tune the conventionally discovered listeners .ConfigureListeners((listener, builder) => { var messageType = builder.MessageType; var runtime = builder.Runtime; // Access to basically everything // customize the new queue listener.CircuitBreaker(queue => { }); // other options... }) // Fine tune the conventionally discovered sending endpoints .ConfigureSending((subscriber, builder) => { // Similarly, use the message type and/or wolverine runtime // to customize the message sending }); }); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/sqs/conventional-routing.md --- # Conventional Message Routing As an example, you can apply conventional routing with the Amazon SQS transport like so: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransport() .UseConventionalRouting(); }).StartAsync(); ``` snippet source | anchor In this case any outgoing message types that aren't handled locally or have an explicit subscription will be automatically routed to an Amazon SQS queue named after the Wolverine message type name of the message type. --- --- url: /guide/messaging/transports/rabbitmq/conventional-routing.md --- # Conventional Routing ::: tip All Rabbit MQ objects are declared as durable by default, just meaning that the Rabbit MQ objects will live independently of the lifecycle of the Rabbit MQ connections from your Wolverine application. 
::: Wolverine comes with an option to set up conventional routing rules for Rabbit MQ so you can bypass having to set up explicit message routing. Here's the easiest possible usage: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRabbitMq() // Opt into conventional Rabbit MQ routing .UseConventionalRouting(); }).StartAsync(); ``` snippet source | anchor With the defaults from above, for each message that the application can handle (as determined by the discovered [message handlers](/guide/handlers/discovery)) the conventional routing will create: 1. A durable queue using Wolverine's [message type name logic](/guide/messages.html#message-type-name-or-alias) 2. A listening endpoint to the queue above configured with a single, inline listener and **without any enrollment in the durable outbox** Likewise, for every outgoing message type, the routing convention will *on demand at runtime*: 1. Declare a fanout exchange named with the Wolverine message type alias name (usually the full name of the message type) 2. Create the exchange if auto provisioning is enabled and the exchange does not already exist 3.
Create a [subscription rule](/guide/messaging/subscriptions) for that message type to the new exchange within the system Of course, you may want your own slightly different behavior, so there's plenty of hooks to customize the Rabbit MQ routing conventions as shown below: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRabbitMq() // Opt into conventional Rabbit MQ routing .UseConventionalRouting(x => { // Customize the naming convention for the outgoing exchanges x.ExchangeNameForSending(type => type.Name + "Exchange"); // Customize the naming convention for incoming queues x.QueueNameForListener(type => type.FullName.Replace('.', '-')); // Or maybe you want to conditionally configure listening endpoints x.ConfigureListeners((listener, context) => { if (context.MessageType.IsInNamespace("MyApp.Messages.Important")) { listener.UseDurableInbox().ListenerCount(5); } else { // If not important, let's make the queue be // volatile and purge older messages automatically listener.TimeToLive(2.Minutes()); } }) // Or maybe you want to conditionally configure the outgoing exchange .ConfigureSending((ex, _) => { ex.ExchangeType(ExchangeType.Direct); }); }); }).StartAsync(); ``` snippet source | anchor ## Adjusting Routing Conventions If the exchange/queue routing defaults don't suit your message routing requirements, they can be overridden as below. This keeps existing naming conventions intact and avoids the need to drop down to manual exchange/queue definitions. 
```cs var sender = WolverineHost.For(opts => { opts.UseRabbitMq() .UseConventionalRouting(conventions => { conventions.ExchangeNameForSending(type => type.Name + "_custom"); conventions.ConfigureSending((x, c) => { // Route messages via headers exchange whilst taking advantage of conventional naming if (c.MessageType == typeof(HeadersMessage)) { x.ExchangeType(ExchangeType.Headers); } }); }); }); var receiver = WolverineHost.For(opts => { opts.UseRabbitMq() .UseConventionalRouting(conventions => { conventions.ExchangeNameForSending(type => type.Name + "_custom"); conventions.ConfigureListeners((x, c) => { if (c.MessageType == typeof(HeadersMessage)) { // Bind our queue based on the headers tenant-id x.BindToExchange(ExchangeType.Headers, arguments: new Dictionary() { { "tenant-id", "tenant-id" } }); } }); }); }); ``` snippet source | anchor ## Separated Handler Behavior In the case of using the `MultipleHandlerBehavior.Separated` mode, this convention will create an exchange for the message type, then a separate queue for each handler using the handler type to create the name *and* finally a binding from that queue to the exchange. --- --- url: /guide/durability/cosmosdb.md --- # CosmosDb Integration Wolverine supports an [Azure CosmosDB](https://learn.microsoft.com/en-us/azure/cosmos-db/) backed message persistence strategy option as well as CosmosDB-backed transactional middleware and saga persistence. 
To get started, add the `WolverineFx.CosmosDb` dependency to your application: ```bash dotnet add package WolverineFx.CosmosDb ``` and in your application, tell Wolverine to use CosmosDB for message persistence: ```cs var builder = Host.CreateApplicationBuilder(); // Register CosmosClient with DI builder.Services.AddSingleton(new CosmosClient( "your-connection-string", new CosmosClientOptions { /* options */ } )); builder.UseWolverine(opts => { // Tell Wolverine to use CosmosDB, specifying the database name opts.UseCosmosDbPersistence("your-database-name"); // The CosmosDB integration supports basic transactional // middleware just fine opts.Policies.AutoApplyTransactions(); }); ``` ## Container Setup Wolverine uses a single CosmosDB container named `wolverine` with a partition key path of `/partitionKey`. The container is automatically created during database migration if it does not exist. All Wolverine document types are stored in the same container, differentiated by a `docType` field: * `incoming` - Incoming message envelopes * `outgoing` - Outgoing message envelopes * `deadletter` - Dead letter queue messages * `node` - Node registration documents * `agent-assignment` - Agent assignment documents * `lock` - Distributed lock documents ## Message Persistence The [durable inbox and outbox](/guide/durability/) options in Wolverine are completely supported with CosmosDB as the persistence mechanism. This includes scheduled execution (and retries), dead letter queue storage, and the ability to replay designated messages in the dead letter queue storage. ## Saga Persistence The CosmosDB integration can serve as a [Wolverine Saga persistence mechanism](/guide/durability/sagas). The only limitation is that all saga identity values must be `string` types. The saga id is used as both the CosmosDB document id and partition key. 
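Since CosmosDB saga identities must be strings, a saga document might look like the following sketch (the type and message names are purely illustrative):

```cs
using Wolverine;

// Hypothetical messages for this workflow
public record StartShipment(string ShipmentId);
public record ShipmentDelivered(string ShipmentId);

public class ShipmentSaga : Saga
{
    // Must be a string for the CosmosDB integration; this value
    // doubles as the CosmosDB document id and partition key
    public string Id { get; set; } = string.Empty;

    // Starts a new saga document for a new shipment
    public static ShipmentSaga Start(StartShipment command)
        => new ShipmentSaga { Id = command.ShipmentId };

    public void Handle(ShipmentDelivered message)
    {
        // Completing the saga removes the underlying document
        MarkCompleted();
    }
}
```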
## Transactional Middleware Wolverine's CosmosDB integration supports [transactional middleware](/guide/durability/marten/transactional-middleware) using the CosmosDB `Container` type. When using `AutoApplyTransactions()`, Wolverine will automatically detect handlers that use `Container` and apply the transactional middleware. ## Storage Side Effects (ICosmosDbOp) Use `ICosmosDbOp` as return values from handlers for a cleaner approach to CosmosDB operations: ```cs public static class MyHandler { public static ICosmosDbOp Handle(CreateOrder command) { var order = new Order { id = command.Id, Name = command.Name }; return CosmosDbOps.Store(order); } } ``` Available side effect operations: * `CosmosDbOps.Store(document)` - Upsert a document * `CosmosDbOps.Delete(id, partitionKey)` - Delete a document by id and partition key ## Outbox Pattern You can use the `ICosmosDbOutbox` interface to combine CosmosDB operations with outgoing messages in a single logical transaction: ```cs public class MyService { private readonly ICosmosDbOutbox _outbox; public MyService(ICosmosDbOutbox outbox) { _outbox = outbox; } public async Task DoWorkAsync(Container container) { _outbox.Enroll(container); // Send messages through the outbox await _outbox.SendAsync(new MyMessage()); // Flush outgoing messages await _outbox.SaveChangesAsync(); } } ``` ## Dead Letter Queue Management Dead letter messages are stored in the same CosmosDB container with `docType = "deadletter"` and can be managed through the standard Wolverine dead letter queue APIs. Messages can be marked as replayable and will be moved back to the incoming queue. ## Distributed Locking The CosmosDB integration implements distributed locking using document-based locks with ETag-based optimistic concurrency. Lock documents have a 5-minute expiration time and are automatically reclaimed if a node fails to renew them. 
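The general shape of that ETag-based locking pattern, shown here as a conceptual sketch with the Cosmos SDK rather than Wolverine's actual implementation, is to read the lock document, then only write it back if the ETag still matches:

```cs
using System.Net;
using Microsoft.Azure.Cosmos;

// Hypothetical shape of a lock document
public record LockDoc(string id, string partitionKey, string owner, DateTimeOffset expires);

public static class LockSketch
{
    // Try to take over an expired lock; returns false if another node wins the race
    public static async Task<bool> TryReclaimAsync(Container container, string lockId, string nodeId)
    {
        // Read the current lock document and remember its ETag
        var response = await container.ReadItemAsync<LockDoc>(lockId, new PartitionKey(lockId));
        var current = response.Resource;
        if (current.expires > DateTimeOffset.UtcNow) return false; // still held by another node

        var updated = current with { owner = nodeId, expires = DateTimeOffset.UtcNow.AddMinutes(5) };
        try
        {
            // This write only succeeds if the document is unchanged since the read above
            await container.ReplaceItemAsync(updated, lockId, new PartitionKey(lockId),
                new ItemRequestOptions { IfMatchEtag = response.ETag });
            return true;
        }
        catch (CosmosException e) when (e.StatusCode == HttpStatusCode.PreconditionFailed)
        {
            return false; // another node reclaimed the lock first
        }
    }
}
```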
--- --- url: /tutorials/middleware.md --- # Custom Middleware While reviewing a very large system that used asynchronous messaging I noticed a common pattern in many of the message handlers: 1. Attempt to load account data referenced by the incoming command 2. If the account didn't exist, log that the account referenced by the command didn't exist and stop the processing Like this code: ```cs public static async Task Handle(DebitAccount command, IDocumentSession session, ILogger logger) { // Try to find a matching account for the incoming command var account = await session.LoadAsync(command.AccountId); if (account == null) { logger.LogInformation("Referenced account {AccountId} does not exist", command.AccountId); return; } // do the real processing } ``` snippet source | anchor That added up to a lot of repetitive code, and it'd be nice if we introduced some kind of middleware to eliminate the duplication -- so let's do just that! Using Wolverine's [conventional middleware approach](/guide/handlers/middleware.html#conventional-middleware) strategy, we'll start by lifting a common interface for command message types that reference an `Account` like so: ```cs public interface IAccountCommand { Guid AccountId { get; } } ``` snippet source | anchor So a command message might look like this: ```cs public record CreditAccount(Guid AccountId, decimal Amount) : IAccountCommand; ``` snippet source | anchor Skipping ahead a little bit, if we had a handler for the `CreditAccount` command type above that was counting on some kind of middleware to just "push" the matching `Account` data in, the handler might just be this: ```cs public static class CreditAccountHandler { public static void Handle( CreditAccount command, // Wouldn't it be nice to just have Wolverine "push" // the right account into this method? 
Account account, // Using Marten for persistence here IDocumentSession session) { account.Balance += command.Amount; // Just mark this account as needing to be updated // in the database session.Store(account); } } ``` snippet source | anchor You'll notice at this point that the message handler is synchronous because it's no longer doing any calls to the database. Besides removing some repetitive code, this approach arguably makes the Wolverine message handler methods easier to unit test now that you can happily "push" in system state rather than fool around with stubs or mocks. Next, let's build the actual middleware that will attempt to load an `Account` matching a command's `AccountId`, then determine if the message handling should continue or be aborted. Here's sample code to do exactly that: ```cs // This is *a* way to build middleware in Wolverine by basically just // writing functions/methods. There's a naming convention that // looks for Before/BeforeAsync or After/AfterAsync public static class AccountLookupMiddleware { // The message *has* to be first in the parameter list // Before or BeforeAsync tells Wolverine this method should be called before the actual action public static async Task<(HandlerContinuation, Account?, OutgoingMessages)> LoadAsync( IAccountCommand command, ILogger logger, // This app is using Marten for persistence IDocumentSession session, CancellationToken cancellation) { var messages = new OutgoingMessages(); var account = await session.LoadAsync<Account>(command.AccountId, cancellation); if (account == null) { logger.LogInformation("Unable to find an account for {AccountId}, aborting the requested operation", command.AccountId); messages.RespondToSender(new InvalidAccount(command.AccountId)); return (HandlerContinuation.Stop, null, messages); } // messages would be empty here return (HandlerContinuation.Continue, account, messages); } } ``` snippet source | anchor Now, some notes about the code above: * Wolverine has a convention that
generates a call to the middleware's `LoadAsync()` method before the actual message handler method (`CreditAccountHandler.Handle()`) * The `ILogger` would be the `ILogger<T>` for the message type that is currently being handled. So in the case of the `CreditAccount`, the logger would be `ILogger<CreditAccount>` * Wolverine can wire up the `Account` object returned from the middleware method to the actual `Handle()` method's `Account` argument * By returning `HandlerContinuation` from the `LoadAsync()` method, we can conditionally tell Wolverine to abort the message processing Lastly, let's apply the newly built middleware to only the message handlers that work against some kind of `IAccountCommand` message: ```cs builder.Host.UseWolverine(opts => { // This middleware should be applied to all handlers where the // command type implements the IAccountCommand interface that is the // "detected" message type of the middleware opts.Policies.ForMessagesOfType<IAccountCommand>().AddMiddleware(typeof(AccountLookupMiddleware)); opts.UseFluentValidation(); // Explicit routing for the AccountUpdated // message handling. This has precedence over conventional routing opts.PublishMessage<AccountUpdated>() .ToLocalQueue("signalr") // Throw the message away if it's not successfully // delivered within 10 seconds .DeliverWithin(10.Seconds()) // Not durable .BufferedInMemory(); }); ``` snippet source | anchor
::: For simple input validation of your messages, the [Data Annotation Attributes](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations?view=net-10.0) are a good choice. The `WolverineFx.DataAnnotationsValidation` NuGet package will add support for the built-in and custom attributes via middleware that will stop invalid messages from reaching the message handlers. To get started, add the NuGet package and configure your Wolverine application: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Apply the validation middleware opts.UseDataAnnotationsValidation(); }).StartAsync(); ``` snippet source | anchor Now you can decorate your messages with the built-in or custom `ValidationAttributes`: ```cs public record CreateCustomer( // you can use the attributes on a record, but you need to // add the `property` modifier to the attribute [property: Required] string FirstName, [property: MinLength(5)] string LastName, [property: PostalCodeValidator] string PostalCode ) : IValidatableObject { public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) { // you can implement `IValidatableObject` for custom // validation logic yield break; } }; public class PostalCodeValidatorAttribute : ValidationAttribute { public override bool IsValid(object? value) { // custom attributes are supported return true; } } public static class CreateCustomerHandler { public static void Handle(CreateCustomer customer) { // do whatever you'd do here, but this won't be called // at all if the DataAnnotations Validation rules fail } } ``` snippet source | anchor In the case above, the validation check will happen at runtime *before* the call to the handler methods. If the validation fails, the middleware will throw a `ValidationException` and stop all processing.
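When a message is invoked inline (using Wolverine as a mediator), that exception surfaces to the caller, so a quick way to confirm the middleware is active is a sketch like the following -- the `CreateCustomer` record is the one defined above, but the exact exception type and message text are assumptions to verify in your own application:

```cs
// Illustrative sketch only: send an invalid CreateCustomer through
// Wolverine as a mediator and watch the middleware reject it
var bus = host.Services.GetRequiredService<IMessageBus>();

try
{
    // FirstName is [Required] and "Lee" violates [MinLength(5)],
    // so this message should never reach CreateCustomerHandler
    await bus.InvokeAsync(new CreateCustomer(null!, "Lee", "12345"));
}
catch (ValidationException ex)
{
    // The middleware stopped processing before the handler ran
    Console.WriteLine(ex.Message);
}
```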
Some notes about the middleware: * The middleware is applied to all message handler types as there is no easy way of knowing if a message has some sort of validation attribute defined. * The registration also adds an error handling policy to discard messages when a `ValidationException` is thrown ## Customizing the Validation Failure Behavior Out of the box, the DataAnnotations validation middleware will throw a `DataAnnotationsValidation.ValidationException` containing all of the validation failures when validation fails. To customize that behavior, you can plug in a custom implementation of the `IFailureAction` interface. This behaves exactly the same as the [Fluent Validation Customisation](/guide/handlers/fluent-validation). --- --- url: /guide/durability/efcore/migrations.md --- # Database Migrations Wolverine uses [Weasel](https://github.com/JasperFx/weasel) for schema management of EF Core `DbContext` types rather than EF Core's own migration system. This approach provides a consistent schema management experience across the entire "critter stack" (Wolverine + Marten) and avoids issues with EF Core's `Database.EnsureCreatedAsync()` bypassing migration history. ## How It Works When you register a `DbContext` with Wolverine using `AddDbContextWithWolverineIntegration()` or call `UseEntityFrameworkCoreWolverineManagedMigrations()`, Wolverine will: 1. **Read the EF Core model** — Wolverine inspects your `DbContext`'s entity types, properties, and relationships to build a Weasel schema representation 2. **Compare against the actual database** — Weasel connects to the database and compares the expected schema with the current state 3. **Apply deltas** — Only the necessary changes (new tables, added columns, foreign keys) are applied This all happens automatically at application startup when you use `UseResourceSetupOnStartup()` or through Wolverine's resource management commands.
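The startup hook mentioned above is wired in during bootstrapping. Here is a minimal sketch, using the `AddResourceSetupOnStartup()` service registration that appears elsewhere in these docs (the connection string name is an assumption, and the exact registration method may vary with your JasperFx version):

```csharp
var builder = Host.CreateApplicationBuilder();

builder.UseWolverine(opts =>
{
    // "sqlserver" is a hypothetical connection string name
    var connectionString = builder.Configuration
        .GetConnectionString("sqlserver");

    opts.PersistMessagesWithSqlServer(connectionString);

    // Apply any outstanding schema deltas -- Wolverine envelope
    // tables and Weasel-managed DbContext tables -- at startup
    opts.Services.AddResourceSetupOnStartup();
});
```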
## Enabling Weasel-Managed Migrations To opt into Weasel-managed migrations for your EF Core `DbContext` types, add this to your Wolverine configuration: ```csharp builder.UseWolverine(opts => { opts.PersistMessagesWithSqlServer(connectionString); // "MyDbContext" stands in for your own DbContext type opts.Services.AddDbContextWithWolverineIntegration<MyDbContext>( x => x.UseSqlServer(connectionString)); // Enable Weasel-managed migrations for all registered DbContext types opts.UseEntityFrameworkCoreWolverineManagedMigrations(); }); ``` With this in place, Wolverine will create and update your EF Core tables using Weasel at startup, alongside any Wolverine envelope storage tables. ## What Gets Migrated Weasel will manage the following schema elements from your EF Core model: * **Tables** — Created from entity types registered in `DbSet<TEntity>` properties * **Columns** — Mapped from entity properties, including types, nullability, and default values * **Primary keys** — Derived from `DbContext` key configuration * **Foreign keys** — Including cascade delete behavior * **Schema names** — Respects EF Core's `ToSchema()` configuration Entity types excluded from migrations via EF Core's `ExcludeFromMigrations()` are also excluded from Weasel management. ## Programmatic Migration You can also trigger migrations programmatically using the Weasel extension methods on `IServiceProvider`: ```csharp // Create a migration plan for a specific DbContext await using var migration = await serviceProvider .CreateMigrationAsync(dbContext, CancellationToken.None); // Apply the migration (only applies if there are actual differences) await migration.ExecuteAsync(AutoCreate.CreateOrUpdate, CancellationToken.None); ``` The `CreateMigrationAsync()` method compares the EF Core model against the actual database schema and produces a `DbContextMigration` object. Calling `ExecuteAsync()` applies any necessary changes.
### Creating the Database If you need to ensure the database itself exists (not just the tables), use: ```csharp await serviceProvider.EnsureDatabaseExistsAsync(dbContext); ``` This uses Weasel's provider-specific database creation logic, which only creates the database catalog — it does not create any tables or schema objects. ## Multi-Tenancy For multi-tenant setups where each tenant has its own database, Wolverine will automatically ensure each tenant database exists and apply schema migrations when using the tenanted `DbContext` builder. See [Multi-Tenancy](./multi-tenancy) for details. ## Weasel vs EF Core Migrations

| Feature | Weasel (Wolverine) | EF Core Migrations |
|---------|-------------------|-------------------|
| Migration tracking | Compares live schema | Migration history table |
| Code generation | None needed | `dotnet ef migrations add` |
| Additive changes | Automatic | Requires new migration |
| Works with Marten | Yes, unified approach | No |
| Rollback support | No | Yes, via `Down()` method |

::: tip Weasel migrations are **additive** — they can create tables and add columns, but will not drop columns or tables automatically. This makes them safe for `CreateOrUpdate` scenarios in production. ::: ::: warning If you are already using EF Core's migration system (`dotnet ef migrations add`, `Database.MigrateAsync()`), you should choose one approach or the other. Mixing EF Core migrations with Weasel-managed migrations can lead to conflicts. Wolverine's Weasel-managed approach is recommended for applications in the "critter stack" ecosystem. ::: ## CLI Commands When Weasel-managed migrations are enabled, you can use Wolverine's built-in resource management: ```bash # Apply all pending schema changes dotnet run -- resources setup # Check current database status dotnet run -- resources list # Reset all state (development only!)
dotnet run -- resources clear ``` These commands manage both Wolverine's internal tables and your EF Core entity tables together. --- --- url: /guide/messaging/transports/azureservicebus/deadletterqueues.md --- # Dead Letter Queues The behavior of Wolverine.AzureServiceBus dead letter queuing depends on the endpoint mode: ### Inline Endpoints For inline endpoints, Wolverine uses native [Azure Service Bus dead letter queueing](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dead-letter-queues). Failed messages are moved directly to the dead letter subqueue of the source queue. Note that inline endpoints do not use Wolverine's inbox for message persistence, so retries and dead lettering rely entirely on Azure Service Bus mechanisms. To configure an endpoint for inline processing: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // One way or another, you're probably pulling the Azure Service Bus // connection string out of configuration var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); // Connect to the broker opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision(); // Use inline processing with native Azure Service Bus DLQ opts.ListenToAzureServiceBusQueue("inline-queue") .ProcessInline(); }); using var host = builder.Build(); await host.StartAsync(); ``` ### Buffered Endpoints For buffered endpoints, Wolverine sends failed messages to a designated dead letter queue. By default, this queue is named `wolverine-dead-letter-queue`. 
To customize the dead letter queue for buffered endpoints: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision(); // Customize the dead letter queue name for buffered endpoint opts.ListenToAzureServiceBusQueue("buffered-queue") .BufferedInMemory() .ConfigureDeadLetterQueue("my-custom-dlq"); }); using var host = builder.Build(); await host.StartAsync(); ``` ### Durable Endpoints Durable endpoints behave similarly to buffered endpoints, with dead lettering to the configured dead letter queue, while leveraging Wolverine's persistence for reliability. To customize the dead letter queue for durable endpoints: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision(); // Customize the dead letter queue name for durable endpoint opts.ListenToAzureServiceBusQueue("durable-queue") .UseDurableInbox() .ConfigureDeadLetterQueue("my-custom-dlq"); }); using var host = builder.Build(); await host.StartAsync(); ``` ## Disabling Dead Letter Queues You can disable dead letter queuing for specific endpoints if needed: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision(); // Disable dead letter queuing for this endpoint opts.ListenToAzureServiceBusQueue("no-dlq") .DisableDeadLetterQueueing(); }); using var host = builder.Build(); await host.StartAsync(); ``` --- --- url: /guide/messaging/transports/rabbitmq/deadletterqueues.md --- # Dead Letter Queues ::: info The end result is 
the same regardless, but Wolverine bypasses this functionality to move messages to the dead letter queue in `Buffered` or `Durable` queue endpoints. ::: By default, Wolverine's Rabbit MQ transport supports the [native dead letter exchange](https://www.rabbitmq.com/dlx.html) functionality in Rabbit MQ itself. If running completely with default behavior, Wolverine will: * Declare a queue named `wolverine-dead-letter-queue` as the system dead letter queue for the entire application -- but don't worry, that can be overridden queue by queue * Add the `x-dead-letter-exchange` argument to each non-system queue created by Wolverine in Rabbit MQ Great, but someone will inevitably want to alter the dead letter queue functionality to use differently named queues like so: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Use a different default dead letter queue name opts.UseRabbitMq() .CustomizeDeadLetterQueueing(new DeadLetterQueue("error-queue")) // or conventionally .ConfigureListeners(l => { l.DeadLetterQueueing(new DeadLetterQueue($"{l.QueueName}-errors")); }); // Use a different dead letter queue for this specific queue opts.ListenToRabbitQueue("incoming") .DeadLetterQueueing(new DeadLetterQueue("incoming-errors")); }).StartAsync(); ``` snippet source | anchor ::: warning You will need this if you are interoperating against NServiceBus! ::: But wait, there's more! Other messaging tools or previous usages of Rabbit MQ in your environment may have already declared the Rabbit MQ queues without the `x-dead-letter-exchange` argument, meaning that Wolverine will not be able to declare queues for you, or might do so in a way that interferes with *other* messaging tools.
To avoid all that hassle, you can opt out of native Rabbit MQ dead letter queues with the `InteropFriendly` option: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Use a different default dead letter queue name opts.UseRabbitMq() .CustomizeDeadLetterQueueing( new DeadLetterQueue("error-queue", DeadLetterQueueMode.InteropFriendly)) // or conventionally .ConfigureListeners(l => { l.DeadLetterQueueing(new DeadLetterQueue($"{l.QueueName}-errors", DeadLetterQueueMode.InteropFriendly)); }); // Use a different dead letter queue for this specific queue opts.ListenToRabbitQueue("incoming") .DeadLetterQueueing(new DeadLetterQueue("incoming-errors", DeadLetterQueueMode.InteropFriendly)); }).StartAsync(); ``` snippet source | anchor And lastly, if you don't particularly want to have any Rabbit MQ dead letter queues and you quite like the [database backed dead letter queues](/guide/durability/dead-letter-storage) you get with Wolverine's message durability, you can disable the native Rabbit MQ dead letter queueing entirely and let Wolverine fall back to its durable message storage: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Disable dead letter queueing by default opts.UseRabbitMq() .DisableDeadLetterQueueing() // or conventionally .ConfigureListeners(l => { // Really does the same thing as the first usage l.DisableDeadLetterQueueing(); }); // Disable the dead letter queue for this specific queue opts.ListenToRabbitQueue("incoming").DisableDeadLetterQueueing(); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/sqs/deadletterqueues.md --- # Dead Letter Queues By default, Wolverine will try to move dead letter messages in SQS to a single, global queue named "wolverine-dead-letter-queue."
That can be overridden on a single queue at a time (or by conventions too of course) like: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransport(); // No dead letter queueing opts.ListenToSqsQueue("incoming") .DisableDeadLetterQueueing(); // Use a different dead letter queue opts.ListenToSqsQueue("important") .ConfigureDeadLetterQueue("important_errors", q => { // optionally configure how the dead letter queue itself // is built by Wolverine q.MaxNumberOfMessages = 1000; }); }).StartAsync(); ``` snippet source | anchor ## Disabling All Native Dead Letter Queueing In one stroke, you can disable all usage of native SQS queues for dead letter queueing with this syntax: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransportLocally() // Disable all native SQS dead letter queueing .DisableAllNativeDeadLetterQueues() .AutoProvision(); opts.ListenToSqsQueue("incoming"); }).StartAsync(); ``` snippet source | anchor This would force Wolverine to use any persistent envelope storage for dead letter queueing. --- --- url: /tutorials/dead-letter-queues.md --- # Dead Letter Queues --- --- url: /guide/durability/dead-letter-storage.md --- # Dead Letter Storage If [message storage](/guide/durability/) is configured for your application, and you're using either the local queues or messaging transports where Wolverine doesn't (yet) support native [dead letter queueing](https://en.wikipedia.org/wiki/Dead_letter_queue), Wolverine is actually moving messages to the `wolverine_dead_letters` table in your database in lieu of native dead letter queueing. You can browse the messages in this table and see some of the exception details that caused them to be moved to the dead letter queue.
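Before recovering or deleting anything, it helps to see what is actually in that table. A browse query along these lines works in PostgreSQL -- `exception_type` and `replayable` appear in the recovery example on this page, but treat the other column names as assumptions to check against your actual schema:

```sql
-- Browse recent dead lettered messages; column names other than
-- exception_type and replayable are illustrative, so verify them
-- against your own wolverine_dead_letters table
select id, message_type, exception_type, exception_message, replayable
from wolverine_dead_letters
order by id desc
limit 25;
```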
To recover messages from the dead letter queue after possibly fixing a production support issue, you can update this table's `replayable` column for any messages you want to recover with some kind of SQL command like: ```sql update wolverine_dead_letters set replayable = true where exception_type = 'InvalidAccountException'; ``` When you do this, Wolverine's durability agent that manages the inbox and outbox processing in the background will move these messages back into active incoming message handling. Just note that this process happens through some polling, so it won't be instantaneous. To replay dead lettered messages back to the incoming table, you also have a command line option: ```bash dotnet run -- storage replay ``` ## Dead Letter Expiration ::: tip You could see poor performance over time if the dead letter queue storage in the database gets excessively large, so Wolverine does have an "opt in" feature to let old messages expire and be expunged from the storage. ::: It's off by default (for backwards compatibility), but you can enable Wolverine to assign expiration times to dead letter queue messages persisted to durable storage like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // This is required opts.Durability.DeadLetterQueueExpirationEnabled = true; // Default is 10 days. This is the retention period opts.Durability.DeadLetterQueueExpiration = 3.Days(); }).StartAsync(); ``` snippet source | anchor Note that Wolverine will use the message's `DeliverBy` value as the expiration if that exists; otherwise, Wolverine will just add the `DeadLetterQueueExpiration` time to the current time. The actual stored messages are deleted by a background process, so the cleanup won't be quite real time. ## Integrating Dead Letters REST API into Your Application Integrating the Dead Letters REST API into your WolverineFX application provides an elegant and powerful way to manage dead letter messages directly through HTTP requests.
This capability is crucial for applications that require a robust mechanism for dealing with message processing failures, enabling developers and administrators to query, replay, or delete dead letter messages as needed. Below, we detail how to add this functionality to your application and describe the usage of each endpoint. To get started, install the NuGet package: ```bash dotnet add package WolverineFx.Http ``` ### Adding Dead Letters REST API to Your Application To integrate the Dead Letters REST API into your WolverineFX application, you simply need to register the endpoints in your application's startup process. This is done by calling `app.MapDeadLettersEndpoints();` within the `Configure` method of your `Startup` class or the application initialization block if using minimal API patterns. This method call adds the necessary routes and handlers for dead letter management to your application's routing table. ```cs app.MapDeadLettersEndpoints() // It's a Minimal API endpoint group, // so you can add whatever authorization // or OpenAPI metadata configuration you need // for just these endpoints //.RequireAuthorization("Admin") ; ``` snippet source | anchor ### Using the Dead Letters REST API #### Query Dead Letters Endpoint * **Path**: `/dead-letters/` * **Method**: `POST` * **Request Body**: `DeadLetterEnvelopeGetRequest` * `Limit` (uint): Number of records to return per page. * `StartId` (Guid?): Start fetching records after the specified ID. * `MessageType` (string?): Filter by message type. * `ExceptionType` (string?): Filter by exception type. * `ExceptionMessage` (string?): Filter by exception message. * `From` (DateTimeOffset?): Start date for fetching records. * `Until` (DateTimeOffset?): End date for fetching records. * `TenantId` (string?): Tenant identifier for multi-tenancy support. * **Response**: `DeadLetterEnvelopesFoundResponse` containing a list of `DeadLetterEnvelopeResponse` objects and an optional `NextId` for pagination.
**Request Example**: ```json { "Limit": 50, "MessageType": "OrderPlacedEvent", "ExceptionType": "InvalidOrderException" } ``` **Response Example**: ```json { "Messages": [ { "Id": "4e3d5e88-e01f-4bcb-af25-6e4c14b0a867", "ExecutionTime": "2024-04-06T12:00:00Z", "Body": { "OrderId": 123456, "OrderStatus": "Failed", "Reason": "Invalid Payment Method" }, "MessageType": "OrderFailedEvent", "ReceivedAt": "2024-04-06T12:05:00Z", "Source": "OrderService", "ExceptionType": "PaymentException", "ExceptionMessage": "The payment method provided is invalid.", "SentAt": "2024-04-06T12:00:00Z", "Replayable": true }, { "Id": "5f2c3d1e-3f3d-46f9-ba29-dac8e0f9b078", "ExecutionTime": null, "Body": { "CustomerId": 78910, "AccountBalance": -150.75 }, "MessageType": "AccountOverdrawnEvent", "ReceivedAt": "2024-04-06T15:20:00Z", "Source": "AccountService", "ExceptionType": "OverdrawnException", "ExceptionMessage": "Account balance cannot be negative.", "SentAt": "2024-04-06T15:15:00Z", "Replayable": false } ], "NextId": "8a1d77f2-f91b-4edb-8b51-466b5a8a3a6f" } ``` #### Replay Dead Letters Endpoint * **Path**: `/dead-letters/replay` * **Method**: `POST` * **Description**: Marks specified dead letter messages as replayable. This operation signals the system to attempt reprocessing the messages, ideally after the cause of the initial failure has been resolved. **Request Example**: ```json { "Ids": ["d3b07384-d113-4ec8-98c4-b3bf34e2c572", "d3b07384-d113-4ec8-98c4-b3bf34e2c573"] } ``` #### Delete Dead Letters Endpoint * **Path**: `/dead-letters/` * **Method**: `DELETE` * **Description**: Permanently removes specified dead letter messages from the system. Use this operation to clear messages that are no longer needed or cannot be successfully reprocessed.
**Request Example**: ```json { "Ids": ["d3b07384-d113-4ec8-98c4-b3bf34e2c574", "d3b07384-d113-4ec8-98c4-b3bf34e2c575"] } ``` ### Conclusion By integrating the Dead Letters REST API into your WolverineFX application, you gain fine-grained control over the management of dead letter messages. This feature not only aids in debugging and resolving processing issues but also enhances the overall reliability of your message-driven applications. --- --- url: /guide/messaging/transports/gcp-pubsub/deadlettering.md --- # Dead Lettering By default, dead lettering through GCP Pub/Sub is disabled for that transport, and Wolverine instead uses any persistent envelope storage for dead letters. You can opt in to dead lettering through GCP Pub/Sub globally as shown below. ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UsePubsub("your-project-id") // Enable dead lettering for all Wolverine endpoints .EnableDeadLettering( // Optionally configure how the GCP Pub/Sub dead letter itself // is created by Wolverine options => { options.Topic.MessageRetentionDuration = Duration.FromTimeSpan(TimeSpan.FromDays(14)); options.Subscription.MessageRetentionDuration = Duration.FromTimeSpan(TimeSpan.FromDays(14)); } ); }).StartAsync(); ``` snippet source | anchor When enabled, Wolverine will try to move dead letter messages in GCP Pub/Sub to a single, global topic named "wlvrn.dead-letter".
That can be overridden on a single endpoint at a time (or by conventions too of course) like: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UsePubsub("your-project-id") .EnableDeadLettering(); // No dead letter queueing opts.ListenToPubsubTopic("incoming") .DisableDeadLettering(); // Use a different dead letter queue opts.ListenToPubsubTopic("important") .ConfigureDeadLettering( "important_errors", // Optionally configure how the dead letter itself // is built by Wolverine e => { e.TelemetryEnabled = true; } ); }).StartAsync(); ``` snippet source | anchor --- --- url: /tutorials/concurrency.md --- # Dealing with Concurrency ![Lions and tigers and bears, oh my!](/wolverines-wizard-of-oz.png) With a little bit of research today -- and unfortunately my own experience -- here's a list of *some* of the problems that can be caused by concurrent message processing in your system trying to access or modify the same resources or data: * Race conditions * [Deadlocks](https://en.wikipedia.org/wiki/Deadlock) * Consistency errors when multiple threads may be overwriting the same data and some changes get lost * Out of order processing that may lead to erroneous results * Exceptions from tools like Marten that helpfully try to stop concurrent changes through [optimistic concurrency](https://en.wikipedia.org/wiki/Optimistic_concurrency_control) Because these issues are so common in the kind of systems you would want to use a tool like Wolverine on in the first place, the Wolverine community has invested quite heavily in features to help you manage concurrent access in your system. ## Error Retries on Concurrency Errors If you don't expect many concurrency exceptions, you can probably get away with some kind of optimistic concurrency. 
Using the [aggregate handler workflow](/guide/durability/marten/event-sourcing) integration with Marten as an example, there is some built-in optimistic concurrency in Marten just to protect your system from simultaneous writes to the same event stream. In the case when Marten determines that *something* else has written to an event stream between your command handling starting and it trying to commit changes, Marten will throw the `JasperFx.ConcurrencyException`. If we're doing simplistic optimistic checks, we might be perfectly fine with a global error handler that simply [retries any failure](/guide/handlers/error-handling) due to this exception a few times: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts // On optimistic concurrency failures from Marten .OnException<ConcurrencyException>() .RetryWithCooldown(100.Milliseconds(), 250.Milliseconds(), 500.Milliseconds()) .Then.MoveToErrorQueue(); }); ``` snippet source | anchor Of course though, sometimes you are opting into a more stringent form of optimistic concurrency where the handler should fail fast if an event stream has advanced beyond a specific version number, as in the usage of this command message: ```csharp public record MarkItemReady(Guid OrderId, string ItemName, int Version); ``` In that case, there's absolutely no value in retrying the message, so we should use a different error handling policy to move that message off immediately like one of these: ```cs public static class MarkItemReadyHandler { // This will let us specify error handling policies specific // to only this message handler public static void Configure(HandlerChain chain) { // Can't ever process this message, so send it directly // to the DLQ // Do not pass Go, do not collect $200... chain.OnException<ConcurrencyException>() .MoveToErrorQueue(); // Or instead... // Can't ever process this message, so just throw it away // Do not pass Go, do not collect $200...
chain.OnException<ConcurrencyException>() .Discard(); } public static IEnumerable<object> Post( MarkItemReady command, // Wolverine + Marten will assert that the Order stream // in question has not advanced from command.Version [WriteAggregate] Order order) { // process the message and emit events yield break; } } ``` snippet source | anchor ## Exclusive Locks or Serializable Transactions You can try to deal with concurrency problems by utilizing whatever database tooling you're using for whatever exclusive locks or serializable transaction support they might have. The integration with Marten has an option for exclusive locks with the "Aggregate Handler Workflow." With EF Core, you should be able to opt into starting your own serializable transaction. The Wolverine team considers these approaches to be maybe a necessary evil, but hopefully only a temporary solution. We would probably recommend in most cases that you protect your system from concurrent access through selective queueing as much as possible as discussed in the next section. ## Using Queueing In many cases you can use queueing of some sort to reduce concurrent access to sensitive resources within your system. The most draconian way to do this is to say that all messages in a given queue will be executed single file in strict order on one single node within your application like so: ```cs var builder = Host.CreateApplicationBuilder() .UseWolverine(opts => { opts.UseRabbitMq(); // Wolverine will *only* listen to this queue // on one single node and process messages in strict // order opts.ListenToRabbitQueue("control").ListenWithStrictOrdering(); opts.Publish(x => { // Just keying off a made up marker interface x.MessagesImplementing(); x.ToRabbitQueue("control"); }); }); ``` snippet source | anchor The strict ordering usage definitely limits the throughput in your system while largely eliminating issues due to concurrency.
This option is useful for fast processing messages where you may be coordinating long running work throughout the rest of your system. This has proven useful in file ingestion processes or systems that have to manage long running processes in other nodes. More likely though, to both protect resources that are prone to issues with concurrent access *and* allow for greater throughput, you may want to reach for either: * [Session Identifier and FIFO queue support for Azure Service Bus](/guide/messaging/transports/azureservicebus/session-identifiers) * Wolverine's [Partitioned Sequential Messaging](/guide/messaging/partitioning) feature introduced in Wolverine 5.0 that was designed specifically to alleviate problems with concurrency within Wolverine systems. --- --- url: /guide/diagnostics.md --- # Diagnostics Wolverine can be configuration intensive, allows for quite a bit of customization if you want to go down that road, and involves quite a bit of external infrastructure. All of those things can be problematic, so Wolverine tries to provide diagnostic tools to unwind what's going on inside the application and the application's configuration. Many of the diagnostics explained on this page are part of the [JasperFx command line integration](https://jasperfx.github.io/oakton).
As a reminder, to utilize this command line integration, you need to apply JasperFx as your command line parser as shown in the last line of the quickstart sample `Program.cs` file:

```cs
using JasperFx;
using Quickstart;
using Wolverine;

var builder = WebApplication.CreateBuilder(args);

// The almost inevitable inclusion of Swashbuckle:)
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

// For now, this is enough to integrate Wolverine into
// your application, but there'll be *many* more
// options later of course :-)
builder.Host.UseWolverine();

// Some in memory services for our application, the
// only thing that matters for now is that these are
// systems built by the application's IoC container
builder.Services.AddSingleton<UserRepository>();
builder.Services.AddSingleton<IssueRepository>();

var app = builder.Build();

// An endpoint to create a new issue that delegates to Wolverine as a mediator
app.MapPost("/issues/create", (CreateIssue body, IMessageBus bus) => bus.InvokeAsync(body));

// An endpoint to assign an issue to an existing user that delegates to Wolverine as a mediator
app.MapPost("/issues/assign", (AssignIssue body, IMessageBus bus) => bus.InvokeAsync(body));

// Swashbuckle inclusion
app.UseSwagger();
app.UseSwaggerUI();

app.MapGet("/", () => Results.Redirect("/swagger"));

// Opt into using JasperFx for command line parsing
// to unlock built in diagnostics and utility tools within
// your Wolverine application
return await app.RunJasperFxCommands(args);
```

snippet source | anchor

## Command Line Description

From the command line at the root of your project, you can get a textual report about your Wolverine application including discovered handlers, messaging endpoints, and error handling through this command:

```bash
dotnet run -- describe
```

## Previewing Generated Code

If you ever have any question about the applicability of Wolverine (or custom) conventions or the middleware that is configured for your application, you can see the exact code
that Wolverine generates around your messaging handlers or HTTP endpoint methods from the command line.

To write out all the generated source code to the `/Internal/Generated/WolverineHandlers` folder of your application (or designated application assembly), use this command:

```bash
dotnet run -- codegen write
```

The naming convention for the files is `[Message Type Name]Handler#######` where the numbers are just a hashed suffix to disambiguate message types with the same name, but in different namespaces.

Or if you just want to preview the code in your terminal window, you can also say:

```bash
dotnet run -- codegen preview
```

## Environment Checks

::: info
Wolverine 4.0 will embrace the new [IHealthCheck](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.diagnostics.healthchecks.ihealthcheck?view=net-8.0) model in .NET as a replacement for the older, JasperFx-centric environment check model in this section.
:::

Wolverine's external messaging transports and the durable inbox/outbox support expose [Oakton's environment checks](https://jasperfx.github.io/oakton/guide/host/environment.html) facility to help make your Wolverine applications self-diagnosing on configuration or connectivity issues like:

* Can the application connect to its configured database?
* Can the application connect to its configured Rabbit MQ / Amazon SQS / Azure Service Bus message brokers?
* Are the underlying IoC container registrations valid?

To exercise this functionality, try:

```bash
dotnet run -- check-env
```

You can also have the environment checks executed at application startup, but just realize that the application will shut down if any checks fail.

## Troubleshooting Handler Discovery

Wolverine's handler discovery has admittedly been a little challenging for some new users to get used to.
If you are not seeing Wolverine discover and use a message handler type and method, try this mechanism temporarily so that Wolverine can explain why it is or is not picking up that type and method as a message handler:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Surely plenty of other configuration for Wolverine...

        // This *temporary* line of code will write out a full report about why or
        // why not Wolverine is finding this handler and its candidate handler messages
        Console.WriteLine(opts.DescribeHandlerMatch(typeof(MyMissingMessageHandler)));
    }).StartAsync();
```

snippet source | anchor

## Troubleshooting Message Routing

Among other information, you can find a preview of how Wolverine will route known message types through the command line with:

```bash
dotnet run -- describe
```

Part of this output is a table of the known message types and the routed destination of any subscriptions. You can enhance this diagnostic by helping Wolverine to [discover message types](/guide/messages#message-discovery) in your system.

And lastly, there's a programmatic way to "preview" the Wolverine message routing at runtime that might be helpful:

```cs
public static void using_preview_subscriptions(IMessageBus bus)
{
    // Preview where Wolverine is wanting to send a message
    var outgoing = bus.PreviewSubscriptions(new BlueMessage());
    foreach (var envelope in outgoing)
    {
        // The URI value here will identify the endpoint where the message is
        // going to be sent (Rabbit MQ exchange, Azure Service Bus topic, Kafka topic, local queue, etc.)
        Debug.WriteLine(envelope.Destination);
    }
}
```

snippet source | anchor

---

--- url: /guide/durability.md ---

# Durable Messaging

::: info
A major goal of Wolverine 4.0 is to bring the EF Core integration capabilities (including multi-tenancy support) up to match the current integration with Marten, add event sourcing support for SQL Server, and at least envelope storage integration with CosmosDb.
:::

Wolverine can integrate with several database engines and persistence tools for:

* Durable messaging through the transactional inbox and outbox pattern
* Transactional middleware to simplify your application code
* Saga persistence
* Durable, scheduled message handling
* Durable & replayable dead letter queueing
* Node and agent assignment persistence that is necessary for Wolverine to do agent assignments (its virtual actor capability)

## Transactional Inbox/Outbox

See the blog post [Transactional Outbox/Inbox with Wolverine and why you care](https://jeremydmiller.com/2022/12/15/transactional-outbox-inbox-with-wolverine-and-why-you-care/) for more context.

One of Wolverine's most important features is durable message persistence using your application's database for reliable "[store and forward](https://en.wikipedia.org/wiki/Store_and_forward)" queueing with all possible Wolverine transport options, including the [lightweight TCP transport](/guide/messaging/transports/tcp) and external transports like the [Rabbit MQ transport](/guide/messaging/transports/rabbitmq).

It's a chaotic world out there when high-volume systems need to interact with other systems. Your system may fail, other systems may be down, there are network hiccups and occasional failures -- and you still need your systems to get to a consistent state without messages just getting lost en route.

Consider this sample message handler from Wolverine's [AppWithMiddleware sample project](https://github.com/JasperFx/wolverine/tree/main/src/Samples/Middleware):

```cs
[Transactional]
public static async Task Handle(
    DebitAccount command,
    Account account,
    IDocumentSession session,
    IMessageContext messaging)
{
    account.Balance -= command.Amount;

    // This just marks the account as changed, but
    // doesn't actually commit changes to the database
    // yet. That actually matters as I hopefully explain
    session.Store(account);

    // Conditionally trigger other, cascading messages
    if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
    {
        await messaging.SendAsync(new LowBalanceDetected(account.Id));
    }
    else if (account.Balance < 0)
    {
        await messaging.SendAsync(new AccountOverdrawn(account.Id), new DeliveryOptions{DeliverWithin = 1.Hours()});

        // Give the customer 10 days to deal with the overdrawn account
        await messaging.ScheduleAsync(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
    }

    // "messaging" is a Wolverine IMessageContext or IMessageBus service
    // Do the deliver within rule on individual messages
    await messaging.SendAsync(new AccountUpdated(account.Id, account.Balance),
        new DeliveryOptions { DeliverWithin = 5.Seconds() });
}
```

snippet source | anchor

The handler code above is committing changes to an `Account` in the underlying database and potentially sending out additional messages based on the state of the `Account`. For folks who are experienced with asynchronous messaging systems who hear me say that Wolverine does not support any kind of 2 phase commits between the database and message brokers, you're probably already concerned with some potential problems in that code above:

* Maybe the database changes fail, but there are "ghost" messages already queued that pertain to data changes that never actually happened
* Maybe the messages actually manage to get through to their downstream handlers and are applied erroneously because the related database changes have not yet been applied. That's a race condition that absolutely happens if you're not careful (ask me how I know 😦 )
* Maybe the database changes succeed, but the messages fail to be sent because of a network hiccup or who knows what problem happens with the message broker

What you need is a guarantee that both the outgoing messages and the database changes succeed or fail together, and that the new messages are not actually published until the database transaction succeeds. To that end, Wolverine relies on message persistence within your application database as its implementation of the [Transactional Outbox](https://microservices.io/patterns/data/transactional-outbox.html) pattern. Using the "outbox" pattern is a way to avoid the need for problematic and slow [distributed transactions](https://en.wikipedia.org/wiki/Distributed_transaction) while still maintaining eventual consistency between database changes and the outgoing messages that are part of the logical transaction.

Wolverine's implementation of the outbox pattern also includes a separate *message relay* process that will send the persisted outgoing messages in background processes (it's done by marshalling the outgoing message envelopes through [TPL Dataflow](https://docs.microsoft.com/en-us/dotnet/standard/parallel-programming/dataflow-task-parallel-library) queues if you're curious). If any node of a Wolverine system that uses durable messaging goes down before all the messages are processed, the persisted messages will be loaded from storage and processed when the system is restarted.
Wolverine does this through its [DurabilityAgent](https://github.com/JasperFx/wolverine/blob/main/src/Wolverine/Persistence/Durability/DurabilityAgent.cs) that runs within your application through Wolverine's [IHostedService](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-6.0\&tabs=visual-studio) runtime that is automatically registered in your system through the `UseWolverine()` extension method.

::: tip
At the moment, Wolverine only supports PostgreSQL, Sql Server, and RavenDb as the underlying database and either [Marten](/guide/durability/marten) or [Entity Framework Core](/guide/durability/efcore) as the application persistence framework.
:::

There are three things you need to enable for the transactional outbox (and inbox for incoming messages):

1. Set up message storage in your application, and manage the storage schema objects -- don't worry though, Wolverine comes with a lot of tooling to help you with that
2. Enroll outgoing subscriber or listener endpoints in the durable storage at configuration time
3. Enable Wolverine's transactional middleware or utilize one of Wolverine's outbox publishing services

The last point varies a little bit between the [Marten integration](/guide/durability/marten) and the [EF Core integration](/guide/durability/efcore), so see the specific documentation on each for more details.

## Using the Outbox for Outgoing Messages

::: tip
It might be valuable to leave some endpoints as "buffered" or "inline" for message types that have limited lifetimes. See the blog post [Ephemeral Messages with Wolverine](https://jeremydmiller.com/2022/12/20/ephemeral-messages-with-wolverine/) for an example of this.
:::

To make the Wolverine outbox feature persist messages in the durable message storage, you need to explicitly configure the outgoing subscriber endpoints (Rabbit MQ queues or exchange/binding, Azure Service Bus queues, TCP port, etc.) to be durable.
That can be done either on specific endpoints like this sample:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PublishAllMessages().ToPort(5555)

            // This option makes just this one outgoing subscriber use
            // durable message storage
            .UseDurableOutbox();
    }).StartAsync();
```

snippet source | anchor

Or globally through a built in policy:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // This forces every outgoing subscriber to use durable
        // messaging
        opts.Policies.UseDurableOutboxOnAllSendingEndpoints();
    }).StartAsync();
```

snippet source | anchor

### Bumping out Stale Inbox/Outbox Messages

::: warning
Do **not** make the inbox timeout too low or you could accidentally make Wolverine try to replay messages that are happily floating around in retries or are just plain slow. Make the `InboxStaleTime` at least longer than your longest expected message execution time, with a couple retries added for good measure. Ask us how we know this is a potential problem...

Idempotency protections will help keep your system from ending up in an inconsistent state when a message is accidentally handled multiple times, but it's always best to not make your system work so hard.
:::

It should *not* be possible for a message to get "stuck" in the outbox tables without eventually being sent by the originating node or recovered by a different node if the original node goes down first. However, it's an imperfect world. If you are using one of the relational database backed message stores for Wolverine (SQL Server or PostgreSQL at this point), you can "bump" a persisted record in the `wolverine_outgoing_envelopes` table to be recovered and sent by the outbox by setting the `owner_id` field to zero.
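As a sketch of that manual intervention, releasing a stalled record back to "global" ownership might look like the statement below. The `wolverine` schema name and the literal envelope id are assumptions for illustration only; use whatever schema your application configured for envelope storage:

```sql
-- Release a stalled outgoing message back to "global" ownership
-- (owner_id = 0) so that any node's durability agent can recover
-- and send it. The 'wolverine' schema is an assumption; substitute
-- your configured envelope storage schema.
UPDATE wolverine.wolverine_outgoing_envelopes
SET owner_id = 0
WHERE id = '11111111-1111-1111-1111-111111111111'; -- the envelope id to release
```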
::: info
Just be aware that opting into the `OutboxStaleTime` or `InboxStaleTime` threshold will require database changes through Wolverine's database migration subsystem
:::

You also have this setting to force Wolverine to automatically "bump" any older messages that seem to be stalled in the outbox or inbox tables:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Bump any persisted message in the outbox tables
        // that is more than an hour old to be globally owned
        // so that the durability agent can recover it and force
        // it to be sent
        opts.Durability.OutboxStaleTime = 1.Hours();

        // Same for the inbox, but it's configured independently
        // This should *never* be necessary and the Wolverine
        // team has no clue why this could ever happen and a message
        // could get "stuck", but yet, here this is:
        opts.Durability.InboxStaleTime = 10.Minutes();
    }).StartAsync();
```

snippet source | anchor

Note that this will still respect the "deliver by" semantics. This is part of the polling that Wolverine normally does against the inbox/outbox/node storage tables, and it will only happen if the settings above have non-null values.

## Using the Inbox for Incoming Messages

On the incoming side, external transport endpoint listeners can be enrolled into Wolverine's transactional inbox mechanics, where received messages are immediately persisted to the durable message storage and tracked there until the message is successfully processed, expires, is discarded due to error conditions, or is moved to dead letter storage.
To enroll individual listening endpoints or all listening endpoints in the Wolverine inbox mechanics, use one of these options:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.ListenAtPort(5555)

            // Make specific endpoints be enrolled
            // in the durable inbox
            .UseDurableInbox();

        // Make every single listener endpoint use
        // durable message storage
        opts.Policies.UseDurableInboxOnAllListeners();
    }).StartAsync();
```

snippet source | anchor

## Local Queues

When you mark a [local queue](/guide/messaging/transports/local) as durable, you're telling Wolverine to ensure that every message published to that queue is stored in the backing message database until it is successfully processed. Doing so makes even the local queues able to guarantee eventual delivery, even if the node where the message was published fails before the message is processed.

To configure durability on individual local queues, or to set it by some kind of convention, consider these possible usages:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.UseDurableLocalQueues();

        // or
        opts.LocalQueue("important").UseDurableInbox();

        // or conventionally, make the local queues for messages in a certain namespace
        // be durable
        opts.Policies.ConfigureConventionalLocalRouting().CustomizeQueues((type, queue) =>
        {
            if (type.IsInNamespace("MyApp.Commands.Durable"))
            {
                queue.UseDurableInbox();
            }
        });
    }).StartAsync();
```

snippet source | anchor

## Message Identity

Wolverine was originally conceived for a world in which micro-services were all the rage for software architectures. The world changed on us though, as folks are now interested in pursuing [Modular Monolith architectures](/tutorials/modular-monolith) where you may be trying to effectively jam what used to be separate micro-services into a single process.
In the "classic" Wolverine configuration, incoming messages to the Wolverine transactional inboxes use the message id of the incoming `Envelope` objects as the primary key in message stores, which breaks down if you have something like this:

![Receiving Same Message 2 or More Times](/receive-message-twice.png)

In the diagram above, I'm trying to show what might happen (and it has happened) when the same Wolverine message is sent through an external broker and delivered more than once to the same downstream Wolverine application. In the "classic" mode, Wolverine will treat all but the first message as duplicate messages and reject them -- even though you mean these messages to be handled separately by different message handlers in your modular monolith. Not to worry, you can now opt into this setting to identify an incoming message by the combination of message id *and* destination:

```cs
var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "receiver2");

        // This setting changes the internal message storage identity
        opts.Durability.MessageIdentity = MessageIdentity.IdAndDestination;
    })
    .StartAsync();
```

snippet source | anchor

This might be an important setting for [modular monolith architectures](/tutorials/modular-monolith).

## Stale Inbox and Outbox Thresholds

::: info
This is more a "defense in depth" feature than a common problem with the inbox/outbox mechanics. These flags are "opt in" only because they require database schema changes.
:::

It should not ever be possible for messages to get "stuck" in the transactional inbox or outbox, but it's an imperfect world and occasionally there are hiccups that might lead to that situation.
To that end, you have these "opt in" settings to tell Wolverine to "bump" apparently stalled or stale messages back into play *just in case*:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // configure the actual message persistence...

        // This directs Wolverine to "bump" any messages marked
        // as being owned by a specific node but older than
        // these thresholds as being open to any node pulling
        // them in

        // TL;DR: make Wolverine go grab stale messages and make
        // sure they are processed or sent to the messaging brokers
        opts.Durability.InboxStaleTime = 5.Minutes();
        opts.Durability.OutboxStaleTime = 5.Minutes();
    }).StartAsync();
```

snippet source | anchor

::: info
These settings will be defaults in Wolverine 6.0.
:::

---

--- url: /guide/messaging/policies.md ---

# Endpoint Policies

---

--- url: /guide/messaging/endpoint-operations.md ---

# Endpoint Specific Operations

You can also explicitly send any message to a named endpoint in the system. You might do this to programmatically distribute work in your system, or when you need to do more programmatic routing as to which downstream system should handle the outgoing message. Regardless, that usage is shown below.
Just note that you can give a name to any type of Wolverine endpoint:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PublishAllMessages().ToPort(5555)
            .Named("One");

        opts.PublishAllMessages().ToPort(5555)
            .Named("Two");
    }).StartAsync();

var bus = host.Services
    .GetRequiredService<IMessageBus>();

// Explicitly send a message to a named endpoint
await bus.EndpointFor("One").SendAsync(new SomeMessage());

// Or invoke remotely
await bus.EndpointFor("One").InvokeAsync(new SomeMessage());

// Or request/reply
var answer = await bus.EndpointFor("One")
    .InvokeAsync<Answer>(new Question());
```

snippet source | anchor

There's another option to reference a messaging endpoint by `Uri` as shown below:

```cs
// Or access operations on a specific endpoint using a Uri
await bus.EndpointFor(new Uri("rabbitmq://queue/rabbit-one"))
    .InvokeAsync(new SomeMessage());
```

snippet source | anchor

---

--- url: /guide/durability/efcore.md ---

# Entity Framework Core Integration

Wolverine supports [Entity Framework Core](https://learn.microsoft.com/en-us/ef/core/) through the `WolverineFx.EntityFrameworkCore` Nuget.

* Transactional middleware - Wolverine will both call `DbContext.SaveChangesAsync()` and flush any persisted messages for you
* EF Core as a saga storage mechanism - As long as one of your registered `DbContext` services has a mapping for the stateful saga type
* Outbox integration - Wolverine can directly use a `DbContext` that has mappings for the Wolverine durable messaging, or at least use the database connection and current database transaction from a `DbContext` as part of durable, outbox message persistence.
* [Multi-Tenancy with EF Core](./multi-tenancy)

## Getting Started

The first step is to install the `WolverineFx.EntityFrameworkCore` Nuget:

```bash
dotnet add package WolverineFx.EntityFrameworkCore
```

::: warning
For right now, it's perfectly possible to use multiple `DbContext` types with one Wolverine application, and Wolverine is perfectly capable of using the correct `DbContext` type for `Saga` types. **But**, Wolverine can only use the transactional inbox/outbox with a single database registration. This limitation will be lifted later as folks are eventually going to hit it with modular monolith approaches.
:::

With that in place, there are two basic things you need in order to fully use EF Core with Wolverine as shown below:

```cs
var builder = Host.CreateApplicationBuilder();

var connectionString = builder.Configuration.GetConnectionString("sqlserver");

// Register a DbContext or multiple DbContext types as normal
builder.Services.AddDbContext<ItemsDbContext>(
    x => x.UseSqlServer(connectionString),

    // This is actually a significant performance gain
    // for Wolverine's sake
    optionsLifetime:ServiceLifetime.Singleton);

// Register Wolverine
builder.UseWolverine(opts =>
{
    // You'll need to independently tell Wolverine where and how to
    // store messages as part of the transactional inbox/outbox
    opts.PersistMessagesWithSqlServer(connectionString);

    // Adding EF Core transactional middleware, saga support,
    // and EF Core support for Wolverine storage operations
    opts.UseEntityFrameworkCoreTransactions();
});

// Rest of your bootstrapping...
```

snippet source | anchor

Do note that I purposely configured the `ServiceLifetime` of the `DbContextOptions` for our `DbContext` type to be `Singleton`. That makes for a non-trivial performance optimization in how Wolverine can treat `DbContext` types at runtime.
Or alternatively, you can do this in one step with this equivalent approach:

```cs
var builder = Host.CreateApplicationBuilder();

var connectionString = builder.Configuration.GetConnectionString("sqlserver");

builder.UseWolverine(opts =>
{
    // You'll need to independently tell Wolverine where and how to
    // store messages as part of the transactional inbox/outbox
    opts.PersistMessagesWithSqlServer(connectionString);

    // Registers the DbContext type in your IoC container, sets the DbContextOptions
    // lifetime to "Singleton" to optimize Wolverine usage, and also makes sure that
    // your Wolverine service has all the EF Core transactional middleware, saga support,
    // and storage operation helpers activated for this application
    opts.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(
        x => x.UseSqlServer(connectionString));
});
```

snippet source | anchor

Right now, we've tested Wolverine with EF Core using both [SQL Server](/guide/durability/sqlserver) and [PostgreSQL](/guide/durability/postgresql) persistence.

---

--- url: /guide/handlers/error-handling.md ---

# Error Handling

@[youtube](k5WdzL85kGs)

It's an imperfect world and almost inevitable that your Wolverine message handlers will occasionally throw exceptions as message handling fails. Maybe a piece of infrastructure is down, maybe you get transient network issues, or maybe a database is overloaded.

Wolverine comes with two flavors of error handling (so far). First, you can define error handling policies on message failures with fine-grained control over how various exceptions are handled for different message types. In addition, Wolverine supports a per-endpoint [circuit breaker](https://martinfowler.com/bliki/CircuitBreaker.html) approach that will temporarily pause message processing on a single listening endpoint in the case of a high rate of failures at that endpoint.
## Error Handling Rules

::: warning
When using `IMessageBus.InvokeAsync()` to execute a message inline, only the "Retry" and "Retry With Cooldown" error policies are applied to the execution **automatically**. In other words, Wolverine will attempt to use retries inside the call to `InvokeAsync()` as configured. Custom actions can be explicitly enabled for execution inside of `InvokeAsync()` as shown in a section below.
:::

Error handling rules in Wolverine are defined by three things:

1. The scope of the rule. Really just per message type or global at this point.
2. Exception matching
3. One or more actions (retry the message? discard it? move it to an error queue?)

## What to do on an error?

| Action               | Description                                                                                                                               |
|----------------------|-------------------------------------------------------------------------------------------------------------------------------------------|
| Retry                | Immediately retry the message *inline* without any pause                                                                                   |
| Retry with Cooldown  | Wait a short amount of time, then retry the message inline                                                                                 |
| Requeue              | Put the message at the back of the line for the receiving endpoint                                                                         |
| Schedule Retry       | Schedule the message to be retried at a certain time                                                                                       |
| Discard              | Log, but otherwise discard the message and do not attempt to execute again                                                                 |
| Move to Error Queue  | Move the message to a dedicated [dead letter queue](https://en.wikipedia.org/wiki/Dead_letter_queue) and do not attempt to execute again   |
| Pause the Listener   | Stop all message processing on the current listener for a set duration of time                                                             |

While we think the options above will suffice for most scenarios, it's possible to create your own action through Wolverine's `IContinuation` interface. So what to do in any particular scenario?
Here's some initial guidance:

* If the exception is a common, transient error like timeout conditions or database connectivity errors, build in a limited set of retries and potentially use [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) to avoid overloading your system (sample of this below)
* If the exception tells you that the message is invalid or could never be processed, discard the message
* If an exception happens on multiple attempts, move the message to a "dead letter queue" where it might be possible to replay it at some later time
* If an exception tells you that the system or part of the system itself is completely down, you may opt to pause the message listening altogether

## Moving Messages to an Error Queue

::: tip
The actual mechanics of the error or "dead letter queue" vary between messaging transports
:::

By default, a message will be moved to an error queue when it exhausts all of its configured retry/requeue slots, dependent upon the exception filter. You can, however, explicitly short circuit the retries and immediately send a message to the error queue like so:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Don't retry, immediately send to the error queue
        opts.OnException().MoveToErrorQueue();
    }).StartAsync();
```

snippet source | anchor

## Discarding Messages

If you can detect that an exception means that the message is invalid in your system and could never be processed, just tell Wolverine to discard it:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Bad message, get this thing out of here!
        opts.OnException()
            .Discard();
    }).StartAsync();
```

snippet source | anchor

You have to explicitly discard a message, or it will eventually be sent to a dead letter queue once it has exhausted its configured retries or requeues.
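When the invalidity of a message can only be determined from details of the exception, the same filter overload of `OnException()` that appears in the scoping examples on this page can be combined with `Discard()`. Here's a minimal sketch; `InvalidOrderException` is a hypothetical application exception type, not a Wolverine type:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Discard only when the exception signals a permanently
        // invalid message. InvalidOrderException is a hypothetical
        // exception thrown by your own handlers
        opts.Policies
            .OnException(ex => ex is InvalidOrderException)
            .Discard();
    }).StartAsync();
```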
## Exponential Backoff

::: tip
This error handling strategy is effective for slowing down or throttling processing to give a distressed subsystem a chance to recover
:::

Exponential backoff error handling is easy with either the `RetryWithCooldown()` syntax shown below:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Retry the message again, but wait for the specified time
        // The message will be dead lettered if it exhausts the delay
        // attempts
        opts
            .OnException()
            .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
    }).StartAsync();
```

snippet source | anchor

Or through attributes on a single message:

```cs
[RetryNow(typeof(SqlException), 50, 100, 250)]
public class MessageWithBackoff
{
    // whatever members
}
```

snippet source | anchor

## Pausing Listening on Error Conditions

::: tip
This feature exists in Wolverine because of the exact scenario described as an example in this section. Wish we'd had Wolverine then...
:::

A common usage of asynchronous messaging frameworks is to make calls to an external API as a discrete step within a discrete message handler to isolate the calls to that external API from the rest of your application and put those calls into their own, isolated retry loop in the case of failures. Great! But what if something happens to that external API such that it's completely unable to accept any requests without manual intervention? You don't want to keep retrying messages that will just fail and eventually land in a dead letter queue where they can't be easily retried without manual intervention.
Instead, let's just tell Wolverine to immediately pause all message processing on the incoming message listener when a certain exception is detected, like so:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // The failing message is requeued for later processing, then
        // the specific listener is paused for 10 minutes
        opts.OnException()
            .Requeue().AndPauseProcessing(10.Minutes());
    }).StartAsync();
```

snippet source | anchor

## Scoping

::: tip
To be clear, the error rules are "fall through," meaning that the rules are evaluated in order.
:::

In order of precedence, exception handling rules can be defined at either the specific message type level or globally. As a third possibility, you can use a chain policy to specify exception handling rules with any kind of user defined logic -- usually against a subset of message types.

::: tip
The Wolverine team recommends using one style (attributes or fluent interface) or another, but not to mix and match styles too much within the same application so as to make reasoning about the error handling too difficult.
:::

First off, you can define error handling rules for a specific message type by placing attributes on either the handler method or the message type itself as shown below:

```cs
public class AttributeUsingHandler
{
    [ScheduleRetry(typeof(IOException), 5)]
    [RetryNow(typeof(SqlException), 50, 100, 250)]
    [RequeueOn(typeof(InvalidOperationException))]
    [MoveToErrorQueueOn(typeof(DivideByZeroException))]
    [MaximumAttempts(2)]
    public void Handle(InvoiceCreated created)
    {
        // handle the invoice created message
    }
}
```

You can also use the fluent interface approach on a specific message type if you put a method with the signature `public static void Configure(HandlerChain chain)` on the handler class itself as in this sample:

```cs
public class MyErrorCausingHandler
{
    // This method signature is meaningful
    public static void Configure(HandlerChain chain)
    {
        // Requeue on IOException for a maximum
        // of 3 attempts
        chain.OnException<IOException>()
            .Requeue(3);
    }

    public void Handle(InvoiceCreated created)
    {
        // handle the invoice created message
    }

    public void Handle(InvoiceApproved approved)
    {
        // handle the invoice approved message
    }
}
```

To specify global error handling rules, use the fluent interface directly on `WolverineOptions.Policies` as shown below:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.OnException<TimeoutException>().ScheduleRetry(5.Seconds());
        opts.Policies.OnException<SqlException>().MoveToErrorQueue();

        // You can also apply an additional filter on the
        // exception type for finer grained policies
        opts.Policies
            .OnException(ex => ex.Message.Contains("not responding"))
            .ScheduleRetry(5.Seconds());
    }).StartAsync();
```

Lastly, you can use chain policies to add error handling rules to a selected subset of message handlers.
First, here's a sample policy that applies an error handling rule based on `SqlException` errors to all message types from a certain namespace:

```cs
// This error policy will apply to all message types in the namespace
// 'MyApp.Messages', and add a "requeue on SqlException" rule to all of these
// message handlers
public class ErrorHandlingPolicy : IHandlerPolicy
{
    public void Apply(IReadOnlyList<HandlerChain> chains, GenerationRules rules, IServiceContainer container)
    {
        var matchingChains = chains
            .Where(x => x.MessageType.IsInNamespace("MyApp.Messages"));

        foreach (var chain in matchingChains) chain.OnException<SqlException>().Requeue(2);
    }
}
```

## Exception Filtering

::: tip
While many of the examples on this page have shown simple policies based on the type `SqlException`, in real life you would probably want to filter on specific error codes to fine tune your error handling for SQL failures that are transient versus failures that imply the message could never be processed.
:::

The attributes are limited to the exception type, but the fluent interface has quite a few options to filter exceptions further with additional filters, inner exception tests, and compound filters: sample_filtering_by_exception_type

## Custom Actions

::: tip
For the sake of granular error handling, it's recommended that your custom error handler code limit itself to publishing additional messages rather than trying to do work inline
:::

Wolverine enables you to create custom exception handling actions as additional steps to take during message failures. As an example, let's say that when your system is sent a `ShipOrder` message, you'd like to send the original sending service a corresponding `ShippingFailed` message when that `ShipOrder` message fails during processing.
The following code shows how to do this with an inline function:

```cs
theReceiver = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.ListenAtPort(receiverPort);
        opts.ServiceName = "Receiver";

        opts.Policies.OnException<Exception>()
            .Discard().And(async (_, context, _) =>
            {
                if (context.Envelope?.Message is ShipOrder cmd)
                {
                    await context.RespondToSenderAsync(new ShippingFailed(cmd.OrderId));
                }
            });
    }).StartAsync();
```

Optionally, you can implement a new type to handle this same custom logic by subclassing the `Wolverine.ErrorHandling.UserDefinedContinuation` type like so:

```cs
public class ShippingOrderFailurePolicy : UserDefinedContinuation
{
    public ShippingOrderFailurePolicy() : base(
        $"Send a {nameof(ShippingFailed)} back to the sender on shipping order failures")
    {
    }

    public override async ValueTask ExecuteAsync(IEnvelopeLifecycle lifecycle, IWolverineRuntime runtime,
        DateTimeOffset now, Activity activity)
    {
        if (lifecycle.Envelope?.Message is ShipOrder cmd)
        {
            await lifecycle
                .RespondToSenderAsync(new ShippingFailed(cmd.OrderId));
        }
    }
}
```

and register that secondary action like this:

```cs
theReceiver = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.ListenAtPort(receiverPort);
        opts.ServiceName = "Receiver";

        opts.Policies.OnException<Exception>()
            .Discard().And<ShippingOrderFailurePolicy>();
    }).StartAsync();
```

## Circuit Breaker

::: tip
At this point, the circuit breaker mechanics need to be applied on an endpoint by endpoint basis
:::

Wolverine also supports a [circuit breaker](https://martinfowler.com/bliki/CircuitBreaker.html) strategy for handling errors. The purpose of a circuit breaker is to pause message handling *for a single endpoint* if there is a significant percentage of message failures, in order to allow the system to catch up and possibly allow for a distressed subsystem to recover and stabilize.
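Before looking at the API, it may help to see the arithmetic a circuit breaker performs. The following is a simplified, hypothetical sketch of the trip decision using the same option names as Wolverine's configuration -- it is illustrative only, not Wolverine's actual implementation:

```cs
// Hypothetical sketch of a circuit breaker's trip decision.
// The breaker only trips when enough messages have been observed within
// the tracking period AND the failure percentage crosses the threshold.
public class CircuitBreakerSketch
{
    public int MinimumThreshold { get; set; } = 10;
    public double FailurePercentageThreshold { get; set; } = 10;

    public bool ShouldTrip(int totalInWindow, int failuresInWindow)
    {
        // Not enough traffic in the window to make a meaningful judgment
        if (totalInWindow < MinimumThreshold) return false;

        var failurePercentage = 100.0 * failuresInWindow / totalInWindow;
        return failurePercentage >= FailurePercentageThreshold;
    }
}
```

With the defaults above, 1 failure out of 20 messages (5%) leaves the circuit closed, while 3 failures out of 20 (15%) would trip it and pause the listener for the configured `PauseTime`.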
The usage of the Wolverine circuit breaker is shown below:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.OnException<InvalidOperationException>()
            .Discard();

        opts.ListenToRabbitQueue("incoming")
            .CircuitBreaker(cb =>
            {
                // Minimum number of messages encountered within the tracking period
                // before the circuit breaker will be evaluated
                cb.MinimumThreshold = 10;

                // The time to pause the message processing before trying to restart
                cb.PauseTime = 1.Minutes();

                // The tracking period for the evaluation. Statistics tracking
                cb.TrackingPeriod = 5.Minutes();

                // If the failure percentage is higher than this number, trip
                // the circuit and stop processing
                cb.FailurePercentageThreshold = 10;

                // Optional allow list
                cb.Include<SqlException>(e => e.Message.Contains("Failure"));
                cb.Include<TimeoutException>();

                // Optional ignore list
                cb.Exclude<InvalidOperationException>();
            });
    }).StartAsync();
```

Note that the exception includes and excludes are optional. If there are no explicit `Include()` calls, the circuit breaker will assume that every exception should be considered a failure. Likewise, if there are no `Exclude()` calls, the circuit breaker will not disregard any exception types. Also note that **it probably makes no sense to define both `Include()` and `Exclude()` rules on the same circuit breaker**.

## Custom Actions for InvokeAsync()

::: info
This usage was built for a [JasperFx Software](https://jasperfx.net) customer who is using Wolverine by calling `IMessageBus.InvokeAsync()` directly underneath [Hot Chocolate mutations](https://chillicream.com/docs/hotchocolate/v13/defining-a-schema/mutations). In their case, if the mutation action failed more than X number of times, they wanted to send a different message that would try to jumpstart the long running workflow that is somehow stalled.
:::

This is maybe a little specialized, but let's say you have a reason for calling `IMessageBus.InvokeAsync()` inline, and that you want to carry out some kind of custom action if the message handler exceeds a certain number of retries (retries being the only error handling action that applies automatically to `InvokeAsync()`). You can now opt custom actions into applying to exceptions thrown by your message handlers during a call to `InvokeAsync()` by specifying an `InvokeResult` value of `Stop` or `TryAgain` on a custom action. Here's a sample that uses a `CompensatingAction()` helper method for raising other messages on failures:

```cs
public record ApproveInvoice(string InvoiceId);
public record RequireIntervention(string InvoiceId);

public static class InvoiceHandler
{
    public static void Configure(HandlerChain chain)
    {
        chain.OnAnyException().RetryTimes(3)
            .Then
            .CompensatingAction<ApproveInvoice>(
                (message, ex, bus) => bus.PublishAsync(new RequireIntervention(message.InvoiceId)),

                // By specifying a value here for InvokeResult, I'm making
                // this action apply to failures inside of IMessageBus.InvokeAsync()
                InvokeResult.Stop);

        // This is just a long hand way of doing the same thing as CompensatingAction
        // .CustomAction(async (runtime, lifecycle, _) =>
        // {
        //     if (lifecycle.Envelope.Message is ApproveInvoice message)
        //     {
        //         var bus = new MessageBus(runtime);
        //         await bus.PublishAsync(new RequireIntervention(message.InvoiceId));
        //     }
        //
        // }, "Send a compensating action", InvokeResult.Stop);
    }

    public static int SucceedOnAttempt = 0;

    public static void Handle(ApproveInvoice invoice, Envelope envelope)
    {
        if (envelope.Attempts >= SucceedOnAttempt) return;
        throw new Exception();
    }

    public static void Handle(RequireIntervention message)
    {
        Debug.WriteLine($"Got: {message}");
    }
}
```

### Running custom actions indefinitely

In some scenarios you want your custom action to control the retry lifecycle across multiple attempts (e.g., reschedule with a delay until some external
condition is met), instead of Wolverine moving the message to the error queue after the first attempt. For that, use `CustomActionIndefinitely(...)`.

`CustomActionIndefinitely` keeps invoking your custom action on subsequent attempts until your code explicitly stops the process. Inside the delegate you can, for example:

* Reschedule the message (e.g., with backoff, or with a delay computed from the exception's payload) via `lifecycle.ReScheduleAsync(...)`
* Requeue if appropriate
* Or stop further processing by calling `lifecycle.CompleteAsync()` (optionally after logging or publishing a compensating message)

Example:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies
            .OnException<SpecialException>()
            .CustomActionIndefinitely(async (runtime, lifecycle, ex) =>
            {
                // Stop after 10 attempts
                if (lifecycle.Envelope.Attempts >= 10)
                {
                    // Decide to stop trying; you could also move to an error queue
                    await lifecycle.CompleteAsync();
                    return;
                }

                // Keep trying later with a delay
                await lifecycle.ReScheduleAsync(DateTimeOffset.UtcNow.AddSeconds(15));
            }, "Handle SpecialException with conditional reschedule/stop");
    }).StartAsync();
```

Note that custom actions *always* apply to exceptions thrown during asynchronous message handling.

---

--- url: /guide/durability/marten/event-forwarding.md ---

# Event Forwarding

::: tip
As of Wolverine 2.2, you can use `IEvent<T>` as the message type in a handler as part of the event forwarding when you need to utilize Marten metadata
:::

::: warning
The Wolverine team recommends against combining this functionality with **also** using events as either handler responses or cascaded messages, as the behavior can easily become confusing. Prefer custom types for handler responses or HTTP response bodies rather than the raw event types when using event forwarding.
::: The "Event Forwarding" feature immediately pushes any event captured by Marten through Wolverine's persistent outbox where there is a known subscriber (either a local message handler or a known subscriber rule to that event type). The "Event Forwarding" publishes the new events as soon as the containing transaction is successfully committed. This is different from the [Event Subscriptions](./subscriptions) in that there is no ordering guarantee, and does require you to use the Wolverine transactional middleware for Marten. ::: tip The strong recommendation is to use either subscriptions or event forwarding, but not both in the same application. ::: To be clear, this will work for: * Any event type where the Wolverine application has a message handler for either the event type itself, or `IEvent` where `T` is the event type * Any event type where there is a known message subscription for that event type or its wrapping `IEvent` to an external transport Timing wise, the "event forwarding" happens at the time of committing the transaction for the original message that spawned the new events, and the resulting event messages go out as cascading messages only after the original transaction succeeds -- just like any other outbox usage. **There is no guarantee about ordering in this case.** Instead, Wolverine is trying to have these events processed as soon as possible. To opt into this feature, chain the Wolverine `AddMarten().EventForwardingToWolverine()` call as shown in this application bootstrapping sample shown below: ```cs builder.Services.AddMarten(opts => { var connString = builder .Configuration .GetConnectionString("marten"); opts.Connection(connString); // There will be more here later... opts.Projections .Add(ProjectionLifecycle.Async); // OR ??? 
    // opts.Projections
    //     .Add(ProjectionLifecycle.Inline);
    opts.Projections.Add(ProjectionLifecycle.Inline);

    opts.Projections
        .Snapshot(SnapshotLifecycle.Async);
})

// This adds a hosted service to run
// asynchronous projections in a background process
.AddAsyncDaemon(DaemonMode.HotCold)

// I added this to enroll Marten in the Wolverine outbox
.IntegrateWithWolverine()

// I also added this to opt into events being forwarded to
// the Wolverine outbox during SaveChangesAsync()
.EventForwardingToWolverine();
```

This does need to be paired with a little bit of Wolverine configuration to add subscriptions to event types like so:

```cs
builder.Host.UseWolverine(opts =>
{
    // I'm choosing to process any ChartingFinished event messages
    // in a separate, local queue with persistent messages for the inbox/outbox
    opts.PublishMessage<ChartingFinished>()
        .ToLocalQueue("charting")
        .UseDurableInbox();

    // If we encounter a concurrency exception, just try it immediately
    // up to 3 times total
    opts.Policies.OnException<ConcurrencyException>().RetryTimes(3);

    // It's an imperfect world, and sometimes transient connectivity errors
    // to the database happen
    opts.Policies.OnException<NpgsqlException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());

    // Automatic usage of transactional middleware as
    // Wolverine recognizes that an HTTP endpoint or message handler
    // persists data
    opts.Policies.AutoApplyTransactions();
});
```

This forwarding of events uses an outbox that can be awaited in your tests using this extension method:

```cs
public static Task SaveInMartenAndWaitForOutgoingMessagesAsync(this IHost host, Action<IDocumentSession> action,
    int timeoutInMilliseconds = 5000)
{
    var factory = host.Services.GetRequiredService<OutboxedSessionFactory>();

    return host.ExecuteAndWaitAsync(async context =>
    {
        var session = factory.OpenSession(context);
        action(session);
        await session.SaveChangesAsync();

        // Shouldn't be necessary, but real life says do it anyway
        await context.As<MessageContext>().FlushOutgoingMessagesAsync();
    },
    timeoutInMilliseconds);
}
```

To be used in your tests such as this:

```cs
[Fact]
public async Task execution_of_forwarded_events_can_be_awaited_from_tests()
{
    var host = await Host.CreateDefaultBuilder()
        .UseWolverine()
        .ConfigureServices(services =>
        {
            services.AddMarten(Servers.PostgresConnectionString)
                .IntegrateWithWolverine().EventForwardingToWolverine(opts =>
                {
                    opts.SubscribeToEvent<SecondEvent>().TransformedTo(e =>
                        new SecondMessage(e.StreamId, e.Sequence));
                });
        }).StartAsync();

    var aggregateId = Guid.NewGuid();
    await host.SaveInMartenAndWaitForOutgoingMessagesAsync(session =>
    {
        session.Events.Append(aggregateId, new SecondEvent());
    }, 100_000);

    using var store = host.Services.GetRequiredService<IDocumentStore>();
    await using var session = store.LightweightSession();
    var events = await session.Events.FetchStreamAsync(aggregateId);

    events.Count.ShouldBe(2);
    events[0].Data.ShouldBeOfType<SecondEvent>();
    events[1].Data.ShouldBeOfType<FourthEvent>();
}
```

Where the result contains `FourthEvent` because `SecondEvent` was forwarded as `SecondMessage`, and that persisted `FourthEvent` in a handler such as:

```cs
public static Task HandleAsync(SecondMessage message, IDocumentSession session)
{
    session.Events.Append(message.AggregateId, new FourthEvent());
    return session.SaveChangesAsync();
}
```

---

--- url: /tutorials/cqrs-with-marten.md ---

# Event Sourcing and CQRS with Marten

::: info
Sadly enough, what's now Wolverine was mostly an abandoned project during the COVID years. It was rescued and rebooted specifically to form a full blown CQRS with Event Sourcing stack in combination with Marten using what we now call the "aggregate handler workflow." At this point, the "Critter Stack" team firmly believes that this is the most robust and productive tooling for CQRS with Event Sourcing in the entire .NET ecosystem.
:::

::: tip
This guide assumes some familiarity with Event Sourcing nomenclature, but if you're relatively new to that style of development, see [Understanding Event Sourcing with Marten](https://martendb.io/events/learning.html) from the Marten documentation.
:::

@[youtube](U9zTGdo0Ps8)

Let's get the entire "Critter Stack" (Wolverine + [Marten](https://martendb.io)) assembled and build a system using CQRS with Event Sourcing! We'll be using the [IncidentService](https://github.com/jasperfx/wolverine/tree/main/src/Samples/IncidentService) example service to show an example of using Wolverine with Marten in a headless web service with its accompanying test harness.

The problem domain is pretty familiar to all of us developers because our lives are somewhat managed by issue tracking systems of some sort. Starting with some [Event Storming](https://jeremydmiller.com/2023/11/28/building-a-critter-stack-application-event-storming/), the first couple of events and triggering commands might be something like this:

![Event Storming](/event-storming.png)

We're going to start with a simple, headless ASP.NET Core project like so (and delete the silly weather forecast stuff):

```bash
dotnet new webapi
```

Next, add the `WolverineFx.Http.Marten` NuGet to get Marten, Wolverine itself, and the full Wolverine + Marten integration including the HTTP integration.
Inside the bootstrapping in the `Program` file, we'll start with this to bootstrap just Marten: ```csharp builder.Services.AddMarten(opts => { var connectionString = builder.Configuration.GetConnectionString("Marten"); opts.Connection(connectionString); opts.DatabaseSchemaName = "incidents"; }) // This adds configuration with Wolverine's transactional outbox and // Marten middleware support to Wolverine .IntegrateWithWolverine(); ``` For Wolverine itself, we'll start simply: ```csharp builder.Host.UseWolverine(opts => { // This is almost an automatic default to have // Wolverine apply transactional middleware to any // endpoint or handler that uses persistence services opts.Policies.AutoApplyTransactions(); }); // To add Wolverine.HTTP services to the IoC container builder.Services.AddWolverineHttp(); ``` ::: info We had to separate the IoC service registrations from the addition of the Wolverine endpoints when Wolverine was decoupled from Lamar as its only IoC tool. Two steps forward, one step back. ::: Next, let's add support for [Wolverine.HTTP]() endpoints: ```csharp app.MapWolverineEndpoints(); ``` And *lastly*, let's add the extended command line support through [Oakton](https://jasperfx.github.io/oakton) (don't worry, that's a transitive dependency of Wolverine and you're good to go): ```csharp // Using the expanded command line options for the Critter Stack // that are helpful for code generation, database migrations, and diagnostics return await app.RunOaktonCommands(args); ``` ## Event Types and a Projected Aggregate ::: tip In Marten parlance, a "Projection" is the mechanism of taking raw Marten events and "projecting" them into some kind of view, which could be a .NET object that may or may not be persisted to the database as JSON (PostgreSQL JSONB to be precise) or [flat table projections](https://martendb.io/events/projections/flat.html) that write to old fashioned relational database tables. 
The phrase "aggregate" is hopelessly overloaded in Event Sourcing and DDD communities. In Marten world we mostly just use the word "aggregate" to mean a projected document that is built up by a stream or cross stream of events. ::: In a real project, the event types and especially any projected documents will be designed as you go and will probably evolve through subsequent user stories. We're starting from an existing sample project, so we're going to skip ahead to some of our initial event types: ```cs public class Incident { public Guid Id { get; set; } // THIS IS IMPORTANT! Marten will set this itself, and you // can use this to communicate the current version to clients // as a way to opt into optimistic concurrency checks to prevent // problems from concurrent access public int Version { get; set; } public IncidentStatus Status { get; set; } = IncidentStatus.Pending; public IncidentCategory? Category { get; set; } public bool HasOutstandingResponseToCustomer { get; set; } = false; // Make serialization easy public Incident() { } public void Apply(IncidentLogged _) { } public void Apply(AgentRespondedToIncident _) => HasOutstandingResponseToCustomer = false; public void Apply(CustomerRespondedToIncident _) => HasOutstandingResponseToCustomer = true; public void Apply(IncidentResolved _) => Status = IncidentStatus.Resolved; public void Apply(ResolutionAcknowledgedByCustomer _) => Status = IncidentStatus.ResolutionAcknowledgedByCustomer; public void Apply(IncidentClosed _) => Status = IncidentStatus.Closed; public bool ShouldDelete(Archived @event) => true; } ``` snippet source | anchor ::: info You can use immutable `record` types for the aggregate documents, and sometimes you might want to. I think the code comes out a little simpler without the immutability, so I converted the `Incident` type to be mutable as part of writing out this guide. Also, it's a touch less efficient to use immutability due to the extra object allocations. No free lunch folks. 
:::

And here's a smattering of some of the first events we'll capture:

```cs
public record IncidentLogged(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);

public record IncidentCategorised(
    Guid IncidentId,
    IncidentCategory Category,
    Guid CategorisedBy
);

public record IncidentPrioritised(
    Guid IncidentId,
    IncidentPriority Priority,
    Guid PrioritisedBy,
    DateTimeOffset PrioritisedAt
);

public record IncidentClosed(
    Guid ClosedBy
);
```

Many people -- myself included -- prefer to use `record` types for the event types. I would deviate from that, though, if the code is easier to read by doing property assignments when there are a *lot* of values to copy from a command to the event objects. In other words, I'm just not a fan of really big constructor function signatures.

## Start a New Stream

So of course we're going to use a [Vertical Slice Architecture](/tutorials/vertical-slice-architecture) approach for our code, so here's the first cut at the HTTP endpoint that will log a new incident by starting a new event stream for the incident in one file:

```cs
public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);

public static class LogIncidentEndpoint
{
    [WolverinePost("/api/incidents")]
    public static (CreationResponse<Guid>, IStartStream) Post(LogIncident command)
    {
        var (customerId, contact, description, loggedBy) = command;
        var logged = new IncidentLogged(customerId, contact, description, loggedBy);
        var start = MartenOps.StartStream<Incident>(logged);

        var response = new CreationResponse<Guid>("/api/incidents/" + start.StreamId, start.StreamId);

        return (response, start);
    }
}
```

And maybe there are a few details to unpack.
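As an aside on the earlier point about really big constructor signatures: an event type with many members can alternatively be written with init-only properties and object initializer syntax. This is a stylistic sketch only -- the type below is hypothetical and not part of the IncidentService sample:

```cs
// A stylistic alternative to a long positional record constructor:
// init-only properties assigned through an object initializer.
// This type is illustrative and not from the sample project.
public record IncidentPrioritisedViaProperties
{
    public Guid IncidentId { get; init; }
    public IncidentPriority Priority { get; init; }
    public Guid PrioritisedBy { get; init; }
    public DateTimeOffset PrioritisedAt { get; init; }
}

// Usage reads name-by-name instead of by argument position:
// var @event = new IncidentPrioritisedViaProperties
// {
//     IncidentId = incidentId,
//     Priority = IncidentPriority.High,
//     PrioritisedBy = userId,
//     PrioritisedAt = DateTimeOffset.UtcNow
// };
```

You still get record value equality either way, so the choice is purely about readability at the call sites.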
It might help to [see the code](/guide/codegen) that Wolverine generates for this HTTP endpoint: ```csharp public class POST_api_incidents : Wolverine.Http.HttpHandler { private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions; private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime; private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory; public POST_api_incidents(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory) : base(wolverineHttpOptions) { _wolverineHttpOptions = wolverineHttpOptions; _wolverineRuntime = wolverineRuntime; _outboxedSessionFactory = outboxedSessionFactory; } public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext) { var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime); // Building the Marten session await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext); // Reading the request body via JSON deserialization var (command, jsonContinue) = await ReadJsonAsync(httpContext); if (jsonContinue == Wolverine.HandlerContinuation.Stop) return; // The actual HTTP request handler execution (var creationResponse_response, var startStream) = IncidentService.LogIncidentEndpoint.Post(command); if (startStream != null) { // Placed by Wolverine's ISideEffect policy startStream.Execute(documentSession); } // This response type customizes the HTTP response ApplyHttpAware(creationResponse_response, httpContext); // Save all pending changes to this Marten session await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false); // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536 await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false); // Writing the 
response body to JSON because this was the first 'return variable' in the method signature
        await WriteJsonAsync(httpContext, creationResponse_response);
    }
}
```

Just to rewind to the bootstrapping code up above, we had this line of code in the Wolverine setup to turn on [transactional middleware](/guide/durability/marten/transactional-middleware) by default:

```csharp
// This is almost an automatic default to have
// Wolverine apply transactional middleware to any
// endpoint or handler that uses persistence services
opts.Policies.AutoApplyTransactions();
```

That directive tells Wolverine to use a Marten `IDocumentSession`, enroll it in the Wolverine transactional outbox just in case, and finally to call `SaveChangesAsync()` after the main handler.

The `IStartStream` interface is a [Marten specific "side effect"](/guide/durability/marten/operations) type that tells Wolverine that this endpoint is applying changes to Marten. `MartenOps.StartStream()` is assigning a new sequential `Guid` value for the new incident.

The [`CreationResponse`](/guide/http/metadata.html#ihttpaware-or-iendpointmetadataprovider-models) type is a special type in Wolverine used to:

1. Embed the new incident id as the `Value` property in the JSON sent back to the client
2. Write out a 201 http status code to denote that a new resource was created
3. Communicate the Url of the new resource created, which in this case is the intended Url for a `GET` endpoint we'll write later to return the `Incident` state for a given event stream

One of the biggest advantages of Wolverine is that it allows you to use [pure functions](https://jeremydmiller.com/2024/01/10/building-a-critter-stack-application-easy-unit-testing-with-pure-functions/) for many handlers or HTTP endpoints, and this is no different.
That endpoint above is admittedly using some Wolverine types to express the intended functionality through return values, but the unit test becomes just this:

```cs
[Fact]
public void unit_test()
{
    var contact = new Contact(ContactChannel.Email);
    var command = new LogIncident(Guid.NewGuid(), contact, "It's broken", Guid.NewGuid());

    // Pure function FTW!
    var (response, startStream) = LogIncidentEndpoint.Post(command);

    // Should only have the one event
    startStream.Events.ShouldBe([
        new IncidentLogged(command.CustomerId, command.Contact, command.Description, command.LoggedBy)
    ]);
}
```

::: tip
Encouraging testability is something the "Critter Stack" community takes a lot of pride in. I would like to note that in many other event sourcing tools you can only effectively test command handlers through end-to-end integration tests
:::

## Creating an Integration Test Harness

::: info
This section was a request from a user, hope it makes sense. Alba is part of the same [JasperFx GitHub organization]() as Wolverine and Marten. In case you're curious, the company [JasperFx Software](https://github.com/JasperFx) was named after the GitHub organization, which in turn is named after the ancestral hometown of one of our core team.
:::

While we're definitely watching the [TUnit project](https://github.com/thomhurst/TUnit) and some of our customers happily use [NUnit](https://nunit.org/), I'm going to use a combination of [xUnit.Net](https://xunit.net/) and the [JasperFx Alba project](https://jasperfx.github.io/alba/) to author integration tests against our application. What I'm showing here is **a way** to do this, and certainly not the only possible way to write integration tests. My preference is to mostly use the application's `Program` bootstrapping with maybe just a few overrides so that you are mostly using the application **as it is actually configured in production**.
As a little tip, I've added this bit of marker code to the very bottom of our `Program` file:

```cs
// Adding this just makes it easier to bootstrap your
// application in a test harness project. Only a convenience
public partial class Program{}
```

Having that above, I'll switch to the test harness project and create a shared fixture to bootstrap the `IHost` for the application throughout the integration tests:

```cs
public class AppFixture : IAsyncLifetime
{
    public IAlbaHost? Host { get; private set; }

    public async Task InitializeAsync()
    {
        JasperFxEnvironment.AutoStartHost = true;

        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        Host = await AlbaHost.For<Program>(x =>
        {
            // Just showing that you *can* override service
            // registrations for testing if that's useful
            x.ConfigureServices(services =>
            {
                // If wolverine were using Rabbit MQ / SQS / Azure Service Bus,
                // turn that off for now
                services.DisableAllExternalWolverineTransports();

                /// THIS IS IMPORTANT!
                services.MartenDaemonModeIsSolo();
                services.RunWolverineInSoloMode();
            });
        });
    }

    public async Task DisposeAsync()
    {
        await Host!.StopAsync();
        Host.Dispose();
    }
}
```

And I like to add a base class for integration tests with some convenience methods that have been useful here and there:

```cs
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>;

[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
    private readonly AppFixture _fixture;

    protected IntegrationContext(AppFixture fixture)
    {
        _fixture = fixture;
        Runtime = (WolverineRuntime)fixture.Host!.Services.GetRequiredService<IWolverineRuntime>();
    }

    public WolverineRuntime Runtime { get; }

    public IAlbaHost Host => _fixture.Host!;
    public IDocumentStore Store => _fixture.Host!.Services.GetRequiredService<IDocumentStore>();

    async Task IAsyncLifetime.InitializeAsync()
    {
        // Using Marten, wipe out all data and reset the state
        // back to exactly what we described in InitialAccountData
        await Store.Advanced.ResetAllData();

        // Switch to this instead please!!!! A superset of the above ^^^
        await Host.ResetAllMartenDataAsync();
    }

    // This is required because of the IAsyncLifetime
    // interface. Note that I do *not* tear down database
    // state after the test.
That's purposeful
    public Task DisposeAsync()
    {
        return Task.CompletedTask;
    }

    public Task<IScenarioResult> Scenario(Action<Scenario> configure)
    {
        return Host.Scenario(configure);
    }

    // This method allows us to make HTTP calls into our system
    // in memory with Alba, but do so within Wolverine's test support
    // for message tracking to both record outgoing messages and to ensure
    // that any cascaded work spawned by the initial command is completed
    // before passing control back to the calling test
    protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
    {
        IScenarioResult result = null!;

        // The outer part is tying into Wolverine's test support
        // to "wait" for all detected message activity to complete
        var tracked = await Host.ExecuteAndWaitAsync(async () =>
        {
            // The inner part here is actually making an HTTP request
            // to the system under test with Alba
            result = await Host.Scenario(configuration);
        });

        return (tracked, result);
    }
}
```

With all of that in place (and if you're using Docker for your infrastructure, a quick `docker compose up -d` command), we can write an end-to-end test for the `LogIncident` endpoint like this:

```cs
[Fact]
public async Task happy_path_end_to_end()
{
    var contact = new Contact(ContactChannel.Email);
    var command = new LogIncident(Guid.NewGuid(), contact, "It's broken", Guid.NewGuid());

    // Log a new incident first
    var initial = await Scenario(x =>
    {
        x.Post.Json(command).ToUrl("/api/incidents");
        x.StatusCodeShouldBe(201);
    });

    // Read the response body by deserialization
    var response = initial.ReadAsJson<CreationResponse<Guid>>();

    // Reaching into Marten to build the current state of the new Incident
    // just to check the expected outcome
    using var session = Host.DocumentStore().LightweightSession();

    // This wallpapers over the exact projection lifecycle....
var incident = await session.Events.FetchLatest<Incident>(response.Value); incident.Status.ShouldBe(IncidentStatus.Pending); } ``` snippet source | anchor ## Appending Events to an Existing Stream ::: info Others and I have frequently compared the "aggregate handler workflow" to the [Decider pattern](https://thinkbeforecoding.com/post/2021/12/17/functional-event-sourcing-decider) from the functional programming community, and it is similar in intent, but we think the Wolverine aggregate handler workflow does a better job of managing complexity and testability in non-trivial projects than the "Decider" pattern that can easily devolve into being just a massive switch statement. ::: This time let's write a simple HTTP endpoint to accept a `CategoriseIncident` command that may decide to append a new event to an `Incident` event stream. For exactly this kind of command handler, Wolverine has the [aggregate handler workflow](/guide/durability/marten/event-sourcing) that allows you to express most command handlers that target Marten event sourcing as pure functions. On to the code: ```cs public record CategoriseIncident( IncidentCategory Category, Guid CategorisedBy, int Version ); public static class CategoriseIncidentEndpoint { // This is Wolverine's form of "Railway Programming" // Wolverine will execute this before the main endpoint, // and stop all processing if the ProblemDetails is *not* // "NoProblems" public static ProblemDetails Validate(Incident incident) { return incident.Status == IncidentStatus.Closed ? new ProblemDetails { Detail = "Incident is already closed" } // All good, keep going!
: WolverineContinue.NoProblems; } // This tells Wolverine that the first "return value" is NOT the response // body [EmptyResponse] [WolverinePost("/api/incidents/{incidentId:guid}/category")] public static IncidentCategorised Post( // the actual command CategoriseIncident command, // Wolverine is generating code to look up the Incident aggregate // data for the event stream with this id [Aggregate("incidentId")] Incident incident) { // This is a simple case where we're just appending a single event to // the stream. return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy); } } ``` snippet source | anchor In this case, I'm sourcing the `Incident` value using the `incidentId` route argument as the identity with the [\[Aggregate\] attribute](/guide/http/marten.html#marten-aggregate-workflow) that's specific to the `WolverineFx.Http.Marten` library. Behind the scenes, Wolverine is using Marten's [`FetchForWriting` API](https://martendb.io/scenarios/command_handler_workflow.html#fetchforwriting).
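If you were writing that persistence code by hand instead of leaning on the `[Aggregate]` middleware, the equivalent Marten usage would look roughly like the sketch below. The method and variable names here are illustrative, not part of Wolverine's generated code:

```cs
// A rough, hand-written sketch of what the [Aggregate] middleware does for us
public static async Task CategoriseByHand(
    CategoriseIncident command, Guid incidentId, IDocumentSession session, CancellationToken ct)
{
    // Fetch the Incident with an optimistic concurrency check against the expected version
    var stream = await session.Events.FetchForWriting<Incident>(incidentId, command.Version, ct);

    var @event = new IncidentCategorised(stream.Aggregate.Id, command.Category, command.CategorisedBy);
    stream.AppendOne(@event);

    // The version check is enforced again when the session is saved
    await session.SaveChangesAsync(ct);
}
```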
It's ugly, but the full generated code from Wolverine is: ```csharp public class POST_api_incidents_incidentId_category : Wolverine.Http.HttpHandler { private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions; private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime; private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory; public POST_api_incidents_incidentId_category(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory) : base(wolverineHttpOptions) { _wolverineHttpOptions = wolverineHttpOptions; _wolverineRuntime = wolverineRuntime; _outboxedSessionFactory = outboxedSessionFactory; } public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext) { // Reading the request body via JSON deserialization var (command, jsonContinue) = await ReadJsonAsync<IncidentService.CategoriseIncident>(httpContext); if (jsonContinue == Wolverine.HandlerContinuation.Stop) return; var version = command.Version; var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime); if (!System.Guid.TryParse((string)httpContext.GetRouteValue("incidentId"), out var incidentId)) { httpContext.Response.StatusCode = 404; return; } await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext); var eventStore = documentSession.Events; var eventStream = await documentSession.Events.FetchForWriting<IncidentService.Incident>(incidentId, version, httpContext.RequestAborted); if (eventStream.Aggregate == null) { await Microsoft.AspNetCore.Http.Results.NotFound().ExecuteAsync(httpContext); return; } var problemDetails1 = IncidentService.CategoriseIncidentEndpoint.Validate(eventStream.Aggregate); // Evaluate whether the processing should stop if there are any problems if (!(ReferenceEquals(problemDetails1, Wolverine.Http.WolverineContinue.NoProblems))) { await WriteProblems(problemDetails1,
httpContext).ConfigureAwait(false); return; } // The actual HTTP request handler execution var incidentCategorised = IncidentService.CategoriseIncidentEndpoint.Post(command, eventStream.Aggregate); eventStream.AppendOne(incidentCategorised); await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false); // Wolverine automatically sets the status code to 204 for empty responses if (!httpContext.Response.HasStarted) httpContext.Response.StatusCode = 204; } } ``` The usage of the `FetchForWriting()` API under the covers sets us up to append the events returned by our main command endpoint method to the right stream identified by the route argument. It also opts us into [optimistic concurrency checks](https://en.wikipedia.org/wiki/Optimistic_concurrency_control) both at the time the current `Incident` state is fetched and when the `IDocumentSession.SaveChangesAsync()` call is made. If you'll refer back to the `CategoriseIncident` command type, you'll see that it has a `Version` property on it. By convention, Wolverine is going to pipe that value in the command to the `FetchForWriting` API to facilitate the optimistic concurrency checks. ::: info There is also an option to use pessimistic locking through native PostgreSQL row locking, but please be cautious with this usage as it can lead to worse performance by serializing requests and possibly deadlock issues. It's probably more of a "break glass if necessary" approach. ::: You'll also notice that the HTTP endpoint above is broken up into two methods, the main `Post()` method and a `Validate()` method. As the names imply, Wolverine will call the `Validate()` method first as a filter to decide whether or not to proceed on to the main method. If `Validate()` returns a `ProblemDetails` that actually contains problems, that stops the processing with a 400 HTTP status code and writes out the `ProblemDetails` to the response.
This is part of Wolverine's [compound handler](/guide/handlers/#compound-handlers) technique that acts as a sort of [Railway Programming technique](./railway-programming) for Wolverine. You can learn more about Wolverine's built in support for [ProblemDetails here](/guide/http/problemdetails). ::: tip Wolverine.HTTP is able to glean more OpenAPI metadata from the signatures of the `Validate` methods that return `ProblemDetails`. Moreover, by using these validate methods to handle validation concerns and "sad path" failures, you're much more likely to be able to just return the response body directly from the endpoint method -- which also helps Wolverine.HTTP be able to generate OpenAPI metadata from the type signatures without forcing you to clutter up your code with more attributes just for OpenAPI. ::: Now, back to the `FetchForWriting` API usage. Besides the support for concurrency protection, `FetchForWriting` wallpapers over which projection lifecycle you're using to give you the compiled `Incident` data for a single stream. In the absence of any other configuration, Marten is building it `Live`, which means that inside of the call to `FetchForWriting`, Marten is fetching all the raw events for the `Incident` stream and running those through the implied [single stream projection](https://martendb.io/events/projections/aggregate-projections.html#aggregate-by-stream) of the `Incident` type to give you the latest information that is then passed into your endpoint method as just an argument. Now though, unlike many other Event Sourcing tools, Marten can reliably support "snapshotting" of the aggregate data and you can use that to improve performance in your CQRS command handlers. 
To make that concrete, let's go back to our `Program` file where we're bootstrapping Marten and we're going to add this code to update the `Incident` aggregates `Inline` with event capture: ```csharp builder.Services.AddMarten(opts => { var connectionString = builder.Configuration.GetConnectionString("Marten"); opts.Connection(connectionString); opts.DatabaseSchemaName = "incidents"; opts.Projections.Snapshot<Incident>(SnapshotLifecycle.Inline); // Recent optimization you'd want with FetchForWriting up above opts.Projections.UseIdentityMapForAggregates = true; }) // Another performance optimization if you're starting from // scratch .UseLightweightSessions() // This adds configuration with Wolverine's transactional outbox and // Marten middleware support to Wolverine .IntegrateWithWolverine(); ``` ::: tip The `Json.WriteById()` API is in the [Marten.AspNetCore Nuget](https://martendb.io/documents/aspnetcore). ::: In this usage, the `Incident` projection gets updated every single time you append events, so that you can load the current data straight out of the database and know it's consistent with the event state. Switching to the "read side", if you are using `Inline` as the projection lifecycle, you can write a `GET` endpoint for a single `Incident` like this: ```csharp public static class GetIncidentEndpoint { // For right now, you have to help out the OpenAPI metadata [Produces(typeof(Incident))] [WolverineGet("/api/incidents/{id}")] public static async Task Get(Guid id, IDocumentSession session, HttpContext context) { await session.Json.WriteById<Incident>(id, context); } } ``` The code up above is very efficient as all it's doing is taking the raw JSON stored in PostgreSQL and streaming it byte by byte right down to the HTTP response. No deserialization to the `Incident` .NET type just to immediately serialize it to a string and write it back out.
Of course this does require you to make your Marten JSON serialization settings exactly match what your clients want, but that's perfectly possible. If you decide to use `Live` or `Async` aggregation with [Marten's Async Daemon](https://martendb.io/events/projections/async-daemon.html) functionality, you could change the `GET` endpoint to this to ensure that you have the right state that matches the current event stream: ```csharp public static class GetIncidentEndpoint { // For right now, you have to help out the OpenAPI metadata [WolverineGet("/api/incidents/{id}")] public static async Task<Incident?> Get( Guid id, IDocumentSession session, // This will be the HttpContext.RequestAborted CancellationToken token) { return await session.Events.FetchLatest<Incident>(id, token); } } ``` The `Events.FetchLatest()` API in Marten will also wallpaper over the actual projection lifecycle of the `Incident` projection, but does it in a lighter weight "read only" way compared to `FetchForWriting()`. ## Publishing or Handling Events With an [Event Driven Architecture](https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/event-driven) approach, you may want to do work against the events that are persisted to Marten. You can always explicitly publish messages through Wolverine at the same time you are appending events, but what if it's just easier to use the events themselves as messages to other message handlers or even to other services? The Wolverine + Marten combination comes with two main ways to do exactly that: [Event Forwarding](/guide/durability/marten/event-forwarding) is a lightweight way to immediately publish events that are appended to Marten within a Wolverine message handler through Wolverine's messaging infrastructure. Events can be handled either in process through local queues or published to external message brokers depending on the [message routing subscriptions](/guide/messaging/subscriptions) for that event type.
Just note that event forwarding comes with **no ordering guarantees**. [Event Subscriptions](/guide/durability/marten/subscriptions) utilize a **strictly ordered mechanism** to read in and process event data from the Marten event store. Wolverine supports three modes of event subscriptions from Marten: 1. Executing each event with a known Wolverine message handler (either the event type itself or wrapped in the Marten `IEvent` envelope) in strict order. This is essentially just calling [`IMessageBus.InvokeAsync()`](/guide/messaging/message-bus.html#invoking-message-execution) event by event in strict order from the Marten event store. 2. Publishing the events as messages through Wolverine. Essentially calling [`IMessageBus.PublishAsync()`](/guide/messaging/message-bus.html#sending-or-publishing-messages) on each event in strict order. 3. User-defined operations on a batch of events at a time, again in the strict order in which the events were appended to the Marten event store. In all cases, the Event Subscriptions are running in a background process managed either by Marten itself with its [Async Daemon](https://martendb.io/events/projections/async-daemon.html) or by the [Projection/Subscription Distribution](/guide/durability/marten/distribution) feature in Wolverine. ## Scaling Marten Projections ::: info The feature in this section was originally intended to be a commercial add-on, but we decided to pull it into Wolverine core. ::: Wolverine has the ability to distribute the asynchronous projections and subscriptions to Marten events evenly across an application cluster for better scalability. See [Projection/Subscription Distribution](/guide/durability/marten/distribution) for more information.
## Observability Both Marten and Wolverine have strong support for [OpenTelemetry](https://opentelemetry.io/) (Otel) tracing as well as emitting performance metrics that can be used in conjunction with tools like Prometheus or Grafana to monitor and troubleshoot systems in production. See [Wolverine's Otel and Metrics](/guide/logging.html#open-telemetry) support and [Marten's Otel and Metrics](https://martendb.io/otel.html#open-telemetry-and-metrics) support for more information. --- --- url: /guide/durability/marten/subscriptions.md --- # Event Subscriptions ::: tip The older [Event Forwarding](./event-forwarding) feature is a subset of subscriptions that relies on the Marten transactional middleware in message handlers or HTTP endpoints, but happens at the time of event capture whereas the event subscriptions are processed in strict order in a background process through Marten's [async daemon](https://martendb.io/events/projections/async-daemon.html) subsystem **and do not require you to use the Marten transactional middleware for every operation**. The **strong suggestion from the Wolverine team is to use one or the other approach, but not both in the same system**. ::: Wolverine has the ability to extend Marten's [event subscription functionality](https://martendb.io/events/subscriptions.html) to carry out message processing by Wolverine on the events being captured by Marten in strict order. 
This new functionality works through Marten's [async daemon](https://martendb.io/events/projections/async-daemon.html). There are easy recipes for processing events through Wolverine message handlers, and also for just publishing events through Wolverine's normal message publishing to be processed locally or by being propagated through asynchronous messaging to other systems: ![Wolverine Subscription Recipes](/wolverine-subscriptions.png) ::: info Note that in all cases Marten itself will guarantee that each subscription (for each tenant database) is only running on one active node at a time. You may want to purposely segment subscriptions by event types to better distribute work across a running cluster of system nodes. ::: ## Publish Events as Messages ::: tip Unless you really want to publish every single event captured by Marten, set up event type filters to make the subscription do less work at runtime. No sense fetching and deserializing event data from the database that you end up not using at all! ::: The simplest recipe is to just ask Marten to publish events -- in strict order -- to Wolverine subscribers as shown with the usage of the `PublishEventsToWolverine()` API below that is chained after the `AddMarten()` declaration: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.Services .AddMarten() // Just pulling the connection information from // the IoC container at runtime.
.UseNpgsqlDataSource() // You don't absolutely have to have the Wolverine // integration active here for subscriptions, but it's // more than likely that you will want this anyway .IntegrateWithWolverine() // The Marten async daemon must be active .AddAsyncDaemon(DaemonMode.HotCold) // This would attempt to publish every non-archived event // from Marten to Wolverine subscribers .PublishEventsToWolverine("Everything") // You wouldn't do this *and* the above option, but just to show // the filtering .PublishEventsToWolverine("Orders", relay => { // Filtering relay.FilterIncomingEventsOnStreamType(typeof(Order)); // Optionally, tell Marten to only subscribe to new // events whenever this subscription is first activated relay.Options.SubscribeFromPresent(); }); }).StartAsync(); ``` snippet source | anchor ::: tip Be careful with this feature if you are using any kind of automatic or conventional message routing that automatically routes messages based on the message type names or other criteria. In this case, you may want to filter the subscription to create an allow list of event types. ::: First off, what's a "subscriber"? *That* would mean any event that Wolverine recognizes as having: * A local message handler in the application for the specific event type, which would effectively direct Wolverine to publish the event data to a local queue * A local message handler in the application for the specific `IEvent<T>` type, which would effectively direct Wolverine to publish the event with its `IEvent<T>` Marten metadata wrapper to a local queue * Any event type where Wolverine can discover subscribers through routing rules All the Wolverine subscription is doing is effectively calling `IMessageBus.PublishAsync()` against the event data or the `IEvent<T>` wrapper. You can make the subscription run more efficiently by applying event or stream type filters for the subscription.
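To make "subscriber" concrete, a plain Wolverine message handler for an event type is all it takes for the subscription to start relaying that event. A minimal sketch, where the `OrderShipped` event type is purely illustrative:

```cs
public record OrderShipped(Guid OrderId);

public static class OrderShippedHandler
{
    // Because this handler exists, the relay shown above will publish
    // OrderShipped events captured by Marten to a local queue where this runs.
    // Taking IEvent<OrderShipped> instead of the raw OrderShipped also gives
    // you Marten's event metadata like the stream id and timestamp
    public static void Handle(IEvent<OrderShipped> e, ILogger logger)
    {
        logger.LogInformation("Order {OrderId} shipped at {Time}", e.Data.OrderId, e.Timestamp);
    }
}
```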
If you need to do a transformation of the raw `IEvent` or the internal event type to some kind of external event type for publishing to external systems when you want to avoid directly coupling other subscribers to your system's internals, you can accomplish that by just building a message handler that does the transformation and publishes a cascading message like so: ```cs public record OrderCreated(string OrderNumber, Guid CustomerId); // I wouldn't use this kind of suffix in real life, but it helps // document *what* this is for the sample in the docs:) public record OrderCreatedIntegrationEvent(string OrderNumber, string CustomerName, DateTimeOffset Timestamp); // We're going to use the Marten IEvent metadata and some other Marten reference // data to transform the internal OrderCreated event // to an OrderCreatedIntegrationEvent that will be more appropriate for publishing to // external systems public static class InternalOrderCreatedHandler { public static Task<Customer?> LoadAsync(IEvent<OrderCreated> e, IQuerySession session, CancellationToken cancellationToken) { return session.LoadAsync<Customer>(e.Data.CustomerId, cancellationToken); } public static OrderCreatedIntegrationEvent Handle(IEvent<OrderCreated> e, Customer customer) { return new OrderCreatedIntegrationEvent(e.Data.OrderNumber, customer.Name, e.Timestamp); } } ``` snippet source | anchor ## Process Events as Messages in Strict Order In some cases you may want the events to be executed by Wolverine message handlers in strict order, which you can set up with the recipe below: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.Services .AddMarten(o => { // This is the default setting, but just showing // you that Wolverine subscriptions will be able // to skip over messages that fail without // shutting down the subscription o.Projections.Errors.SkipApplyErrors = true; }) // Just pulling the connection information from // the IoC container at runtime.
.UseNpgsqlDataSource() // You don't absolutely have to have the Wolverine // integration active here for subscriptions, but it's // more than likely that you will want this anyway .IntegrateWithWolverine() // The Marten async daemon must be active .AddAsyncDaemon(DaemonMode.HotCold) // Notice the allow list filtering of event types and the possibility of overriding // the starting point for this subscription at runtime .ProcessEventsWithWolverineHandlersInStrictOrder("Orders", o => { // It's more important to create an allow list of event types that can be processed o.IncludeType<OrderCreated>(); // Optionally mark the subscription as only starting from events from a certain time o.Options.SubscribeFromTime(new DateTimeOffset(new DateTime(2023, 12, 1))); }); }).StartAsync(); ``` snippet source | anchor In this recipe, Marten & Wolverine are working together to call `IMessageBus.InvokeAsync()` on each event in order. You can use either the actual event type (`OrderCreated`) or the wrapped Marten event type (`IEvent<OrderCreated>`) as the message type for your message handler. ::: tip Wolverine will log all exceptions regardless of your configuration ::: In the case of exceptions from processing the event with Wolverine: 1. Any built-in "retry" error handling will kick in to retry the event processing inline 2. If the retries are exhausted, and the Marten setting for `StoreOptions.Projections.Errors.SkipApplyErrors` is `true`, Wolverine will persist the event to its PostgreSQL-backed dead letter queue and proceed to the next event. This setting is the default with Marten when the daemon is running continuously in the background, but `false` in rebuilds or replays 3. If the retries are exhausted, and `SkipApplyErrors = false`, Wolverine will direct Marten to pause the subscription. See the [Marten asynchronous daemon error handling](https://martendb.io/events/projections/async-daemon.html#error-handling) for more information.
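The inline retries in step 1 come from Wolverine's normal error handling policies, so you can shape them yourself. A minimal sketch, where the exception type and retry count are just placeholders for your own policy:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Retry a failing event a few times inline before Wolverine consults
        // SkipApplyErrors to either dead-letter the event or pause the subscription
        opts.OnException<TimeoutException>()
            .RetryTimes(3);
    }).StartAsync();
```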
## Custom Subscriptions ::: info The example below is pretty well exactly the first usage of this feature for a [JasperFx Software](https://jasperfx.net) client. ::: The base type for all Wolverine subscriptions is the `Wolverine.Marten.Subscriptions.BatchSubscription` class. If you need to do something completely custom, or just to take action on a batch of events at one time, subclass that type. Here is an example usage where I'm using [event-carried state transfer](https://martinfowler.com/articles/201701-event-driven.html) to publish batches of reference data about customers being activated or deactivated within our system: ```cs public record CompanyActivated(string Name); public record CompanyDeactivated; public record NewCompany(Guid Id, string Name); // Message type we're going to publish to external // systems to keep them up to date on new companies public class CompanyActivations { public List<NewCompany> Additions { get; set; } = new(); public List<Guid> Removals { get; set; } = new(); public void Add(Guid companyId, string name) { Removals.Remove(companyId); // Fill is an extension method in JasperFx.Core that adds the // record to a list if the value does not already exist Additions.Fill(new NewCompany(companyId, name)); } public void Remove(Guid companyId) { Removals.Fill(companyId); Additions.RemoveAll(x => x.Id == companyId); } } public class CompanyTransferSubscription : BatchSubscription { public CompanyTransferSubscription() : base("CompanyTransfer") { IncludeType<CompanyActivated>(); IncludeType<CompanyDeactivated>(); } public override async Task ProcessEventsAsync(EventRange page, ISubscriptionController controller, IDocumentOperations operations, IMessageBus bus, CancellationToken cancellationToken) { var activations = new CompanyActivations(); foreach (var e in page.Events) { switch (e) { // In all cases, I'm assuming that the Marten stream id is the identifier for a customer case IEvent<CompanyActivated> activated: activations.Add(activated.StreamId, activated.Data.Name); break; case IEvent<CompanyDeactivated> deactivated:
activations.Remove(deactivated.StreamId); break; } } // At the end of all of this, publish a single message // In case you're wondering, this will opt into Wolverine's // transactional outbox with the same transaction as any changes // made by Marten's IDocumentOperations passed in, including Marten's // own work to track the progression of this subscription await bus.PublishAsync(activations); } } ``` snippet source | anchor And the related code to register this subscription: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRabbitMq(); // There needs to be *some* kind of subscriber for CompanyActivations // for this to work at all opts.PublishMessage<CompanyActivations>() .ToRabbitExchange("activations"); opts.Services .AddMarten() // Just pulling the connection information from // the IoC container at runtime. .UseNpgsqlDataSource() .IntegrateWithWolverine() // The Marten async daemon must be active .AddAsyncDaemon(DaemonMode.HotCold) // Register the new subscription .SubscribeToEvents(new CompanyTransferSubscription()); }).StartAsync(); ``` snippet source | anchor ## Using IoC Services in Subscriptions To use IoC services in your subscription, you can use constructor injection within the actual subscription class and register the subscription with this slightly different usage of the `SubscribeToEventsWithServices()` API: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRabbitMq(); // There needs to be *some* kind of subscriber for CompanyActivations // for this to work at all opts.PublishMessage<CompanyActivations>() .ToRabbitExchange("activations"); opts.Services .AddMarten() // Just pulling the connection information from // the IoC container at runtime.
.UseNpgsqlDataSource() .IntegrateWithWolverine() // The Marten async daemon must be active .AddAsyncDaemon(DaemonMode.HotCold) // Register the new subscription // With this alternative you can inject services into your subscription's constructor // function .SubscribeToEventsWithServices<CompanyTransferSubscription>(ServiceLifetime.Scoped); }).StartAsync(); ``` snippet source | anchor See the [Marten documentation on subscriptions](/guide/durability/marten/subscriptions.html#using-ioc-services-in-subscriptions) for more information about the lifecycle and mechanics. --- --- url: /guide/messaging/exclusive-node-processing.md --- # Exclusive Node Processing Sometimes you need to ensure that only one node in your cluster processes messages from a specific queue or topic, but you still want to take advantage of parallel processing for better throughput. This is different from strict ordering, which processes messages one at a time. ## When to Use Exclusive Node Processing Use exclusive node processing when you need: * **Singleton processing**: Background jobs or scheduled tasks that should only run on one node * **Resource constraints**: Operations that access limited resources that can't be shared across nodes * **Stateful processing**: When maintaining in-memory state that shouldn't be distributed * **Ordered event streams**: Processing events in order while still maintaining throughput ## Basic Configuration ### Exclusive Node with Parallelism Configure a listener to run exclusively on one node while processing multiple messages in parallel: ```cs var builder = Host.CreateDefaultBuilder(); builder.UseWolverine(opts => { opts.ListenToRabbitQueue("important-jobs") .ExclusiveNodeWithParallelism(maxParallelism: 5); }); ``` This configuration ensures: * Only one node in the cluster will process this queue * Up to 5 messages can be processed in parallel on that node * If the exclusive node fails, another node will take over ### Default Parallelism If you don't specify the parallelism level, it
defaults to 10: ```csharp opts.ListenToRabbitQueue("background-tasks") .ExclusiveNodeWithParallelism(); // Defaults to 10 parallel messages ``` ## Session-Based Ordering For scenarios where you need to maintain ordering within specific groups (like Azure Service Bus sessions), use exclusive node with session ordering: ```cs opts.ListenToAzureServiceBusQueue("ordered-events") .ExclusiveNodeWithSessionOrdering(maxParallelSessions: 5); ``` This ensures: * Only one node processes the queue * Multiple sessions can be processed in parallel (up to 5 in this example) * Messages within each session are processed in order * Different sessions can be processed concurrently ## Azure Service Bus Specific Configuration Azure Service Bus has special support for exclusive node processing with sessions: ```cs opts.ListenToAzureServiceBusQueue("user-events") .ExclusiveNodeWithSessions(maxParallelSessions: 8); ``` This is a convenience method that: 1. Enables session support with the specified parallelism 2. Configures exclusive node processing 3. 
Ensures proper session handling For topic subscriptions without sessions: ```cs opts.ListenToAzureServiceBusSubscription("notifications", "email-sender") .ExclusiveNodeWithParallelism(maxParallelism: 3); ``` ## Combining with Other Options Exclusive node processing can be combined with other listener configurations: ```cs opts.ListenToRabbitQueue("critical-tasks") .ExclusiveNodeWithParallelism(maxParallelism: 5) .UseDurableInbox() // Use durable inbox for reliability .TelemetryEnabled(true) // Enable telemetry .Named("CriticalTaskProcessor"); // Give it a friendly name ``` ## Comparison with Other Modes | Mode | Nodes | Parallelism | Ordering | Use Case | |------|-------|-------------|----------|----------| | **Default (Competing Consumers)** | All nodes | Configurable | No guarantee | High throughput, load balancing | | **Sequential** | Current node | 1 | Yes (local) | Local ordering, single thread | | **ListenWithStrictOrdering** | One (exclusive) | 1 | Yes (global) | Global ordering, single thread | | **ExclusiveNodeWithParallelism** | One (exclusive) | Configurable | No | Singleton with throughput | | **ExclusiveNodeWithSessionOrdering** | One (exclusive) | Configurable | Yes (per session) | Singleton with session ordering | ## Implementation Notes ### Leader Election When using exclusive node processing, Wolverine uses its leader election mechanism to ensure only one node claims the exclusive listener. This requires: 1. A persistence layer (SQL Server, PostgreSQL, or RavenDB) 2. Node agent support enabled ```cs opts.PersistMessagesWithSqlServer(connectionString) .EnableNodeAgentSupport(); // Required for leader election opts.ListenToRabbitQueue("singleton-queue") .ExclusiveNodeWithParallelism(5); ``` ### Failover Behavior If the node running an exclusive listener fails: 1. Other nodes detect the failure through the persistence layer 2. A new node is elected to take over the exclusive listener 3. Processing resumes on the new node 4. 
Any in-flight messages are handled according to your durability settings ### Local Queues Exclusive node processing is not supported for local queues since they are inherently single-node: ```cs // This will throw NotSupportedException opts.LocalQueue("local") .ExclusiveNodeWithParallelism(5); // ❌ Not supported ``` ## Testing Exclusive Node Processing When testing exclusive node processing: 1. **Unit Tests**: Test the configuration separately from the execution 2. **Integration Tests**: Use `DurabilityMode.Solo` to simplify testing 3. **Load Tests**: Verify that parallelism improves throughput as expected ```cs // In tests, use Solo mode to avoid leader election complexity opts.Durability.Mode = DurabilityMode.Solo; opts.ListenToRabbitQueue("test-queue") .ExclusiveNodeWithParallelism(5); ``` ## Performance Considerations * **Parallelism Level**: Set based on your message processing time and resource constraints * **Session Count**: For session-based ordering, balance between parallelism and memory usage * **Failover Time**: Leader election typically takes a few seconds; plan accordingly * **Message Distribution**: Ensure your message grouping (sessions) distributes evenly for best performance * **Resource Implications**: Higher parallelism values increase memory usage and thread pool consumption. Each parallel message processor maintains its own execution context. For CPU-bound operations, setting parallelism higher than available CPU cores may decrease performance. For I/O-bound operations, higher values can improve throughput but monitor memory usage carefully. ## Troubleshooting ### Messages Not Processing If messages aren't being processed: 1. Check that node agents are enabled 2. Verify the persistence layer is configured 3. Look for leader election errors in logs 4. Ensure only one node is claiming the exclusive listener ### Lower Than Expected Throughput If throughput is lower than expected: 1. Increase the parallelism level 2. 
Check for blocking operations in message handlers 3. Verify that sessions (if used) are well-distributed 4. Monitor CPU and memory usage on the exclusive node ### Failover Not Working If failover isn't working properly: 1. Check network connectivity between nodes 2. Verify all nodes can access the persistence layer 3. Look for timeout or deadlock issues in logs 4. Ensure node agent support is enabled on all nodes --- --- url: /guide/messaging/transports/external-tables.md --- # Externally Controlled Database Tables Let's say that you'd like to publish messages to a Wolverine application from an existing system where it's not feasible to utilize Wolverine, and that system does not currently have any kind of messaging capability. And of course, you want the messaging to Wolverine to be robust through some sort of transactional outbox, but you certainly don't want to have to build custom infrastructure to manage that. Wolverine provides a capability to scrape an externally controlled database table for incoming messages in a reliable way.
Assuming that you are using one of the relational database options for persisting messages already like [PostgreSQL](/guide/durability/postgresql) or [Sql Server](/guide/durability/sqlserver), you can tell Wolverine to poll a table *in the same database as the message store* for incoming messages like this: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.UsePostgresqlPersistenceAndTransport(builder.Configuration.GetConnectionString("postgres")); // Or // opts.UseSqlServerPersistenceAndTransport(builder.Configuration.GetConnectionString("sqlserver")); // Or // opts.Services // .AddMarten(builder.Configuration.GetConnectionString("postgres")) // .IntegrateWithWolverine(); // Directing Wolverine to "listen" for messages in an externally controlled table // You have to explicitly tell Wolverine about the schema name and table name opts.ListenForMessagesFromExternalDatabaseTable("exports", "messaging", table => { // The primary key column for this table, default is "id" table.IdColumnName = "id"; // What column has the actual JSON data? Default is "json" table.JsonBodyColumnName = "body"; // Optionally tell Wolverine that the message type name is this // column. table.MessageTypeColumnName = "message_type"; // Add a column for the current time when a message was inserted // Strictly for diagnostics table.TimestampColumnName = "added"; // How often should Wolverine poll this table? Default is 10 seconds table.PollingInterval = 1.Seconds(); // Maximum number of messages that each node should try to pull in at // any one time. Default is 100 table.MessageBatchSize = 50; // Is Wolverine allowed to try to apply automatic database migrations for this // table at startup time? Default is true. // Also overridden by WolverineOptions.AutoBuildMessageStorageOnStartup table.AllowWolverineControl = true; // Wolverine uses a database advisory lock so that only one node at a time // can ever be polling for messages at any one time. 
Default is 12000 // It might relieve contention to vary the advisory lock if you have multiple // incoming tables or applications targeting the same database table.AdvisoryLock = 12001; // Tell Wolverine what the default message type is coming from this // table to aid in deserialization table.MessageType = typeof(ExternalMessage); }) // Just showing that you have all the normal options for configuring and // fine tuning the behavior of a message listening endpoint here .Sequential(); }); ``` snippet source | anchor So a couple of things to know: * The external table has to have a single primary key column that uses `Guid` as the .NET type. So `uuid` for PostgreSQL or `uniqueidentifier` for Sql Server * There must be a single column that holds the incoming message as JSON. For Sql Server this is `varbinary(max)` and `JSONB` for PostgreSQL * If there is a column mapped for the message type, Wolverine uses its message type naming to determine the actual .NET message type. See [Message Type Name or Alias](/guide/messages.html#message-type-name-or-alias) for information about how to use this or even add custom type mapping to synchronize between the upstream system and your Wolverine using system * If the upstream system is not sending a message type name, you will be limited to only accepting a single message type, and you will have to tell Wolverine the default message type as shown above in the code sample.
This is common in interop with non-Wolverine systems * All "external table" endpoints in Wolverine are "durable" endpoints, and the incoming messages get moved to the incoming envelope tables * Likewise, the dead letter queueing for these messages is done with the typical database message store --- --- url: /guide/messaging/transports/sqs/fifo-queues.md --- # FIFO Queues [Amazon SQS FIFO queues](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html) guarantee that messages are processed exactly once, in the exact order they are sent. Wolverine has built-in support for SQS FIFO queues. ## Naming Convention Wolverine detects FIFO queues by the `.fifo` suffix on the queue name — this follows the AWS naming requirement. Simply use a queue name ending in `.fifo` when configuring your endpoints: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransportLocally() .AutoProvision(); opts.PublishMessage<OrderPlaced>() .ToSqsQueue("orders.fifo") .ConfigureQueueCreation(request => { // Required for FIFO queues request.Attributes[QueueAttributeName.FifoQueue] = "true"; // Enable content-based deduplication so you don't have to // supply a DeduplicationId on every message request.Attributes[QueueAttributeName.ContentBasedDeduplication] = "true"; }); opts.ListenToSqsQueue("orders.fifo", queue => { queue.Configuration.Attributes[QueueAttributeName.FifoQueue] = "true"; queue.Configuration.Attributes[QueueAttributeName.ContentBasedDeduplication] = "true"; }); }).StartAsync(); ``` ## Message Group Id and Deduplication Id SQS FIFO queues use a **Message Group Id** to determine ordering — messages within the same group are delivered in order. Optionally, a **Message Deduplication Id** prevents duplicate delivery within a 5-minute window.
When Wolverine sends messages to a FIFO queue, it automatically maps: * `Envelope.GroupId` → SQS `MessageGroupId` * `Envelope.DeduplicationId` → SQS `MessageDeduplicationId` You can set these values using `DeliveryOptions` when publishing: ```cs await messageBus.PublishAsync(new OrderPlaced(orderId), new DeliveryOptions { GroupId = orderId.ToString(), DeduplicationId = $"order-placed-{orderId}" }); ``` If you enable `ContentBasedDeduplication` on the queue (as shown above), you can omit the `DeduplicationId` and SQS will generate one based on the message body. ## Dead Letter Queues for FIFO When using dead letter queues with FIFO queues, the dead letter queue must also be a FIFO queue. Make sure to name it with a `.fifo` suffix: ```cs opts.ListenToSqsQueue("orders.fifo", queue => { queue.Configuration.Attributes[QueueAttributeName.FifoQueue] = "true"; queue.Configuration.Attributes[QueueAttributeName.ContentBasedDeduplication] = "true"; }).ConfigureDeadLetterQueue("orders-errors.fifo", dlq => { dlq.Attributes[QueueAttributeName.FifoQueue] = "true"; dlq.Attributes[QueueAttributeName.ContentBasedDeduplication] = "true"; }); ``` ## Partitioned Publishing with FIFO Queues For high-throughput scenarios where you need ordered processing *per group* but want parallelism *across groups*, consider using Wolverine's [partitioned sequential messaging](/guide/messaging/partitioning) with sharded SQS FIFO queues. This distributes messages across multiple queues based on their group id, giving you the best of both worlds — strict ordering within a group and horizontal scaling across groups. --- --- url: /guide/handlers/fluent-validation.md --- # Fluent Validation Middleware ::: warning Wolverine's `UseFluentValidation()` does "type scanning" to discover validators unless you explicitly tell Wolverine not to. Be careful not to double register validators through some other mechanism in addition to Wolverine's.
Do note that Wolverine makes some performance optimizations around the `ServiceLifetime` of the DI registrations for validators. ::: ::: tip There is also an HTTP specific middleware for WolverineFx.Http that uses the `ProblemDetails` specification. See [Fluent Validation Middleware for HTTP](/guide/http/fluentvalidation) for more information. ::: ::: warning If you need to use IoC services in a Fluent Validation `IValidator` that might force Wolverine to use a service locator pattern in the generated code (basically from `AddScoped(s => build it at runtime)`), we recommend instead using a more explicit `Validate` or `ValidateAsync()` method directly in your message handler class for the data input. ::: You will frequently want or need to validate the messages coming into your Wolverine system for correctness or at least the presence of vital information. To that end, Wolverine has support for integrating the popular [Fluent Validation](https://docs.fluentvalidation.net/en/latest/) library via an unobtrusive middleware strategy where the middleware will stop invalid messages from even reaching the message handlers.
To get started, add the `WolverineFx.FluentValidation` nuget to your project, and add this line to your Wolverine application bootstrapping: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Apply the validation middleware *and* discover and register // Fluent Validation validators opts.UseFluentValidation(); // Or if you'd prefer to deal with all the DI registrations yourself opts.UseFluentValidation(RegistrationBehavior.ExplicitRegistration); // Just a prerequisite for some of the test validators opts.Services.AddSingleton(); }).StartAsync(); ``` snippet source | anchor And now to situate this within the greater application, let's say you have a message and handler for creating a new customer, and you also have a Fluent Validation validator for your `CreateCustomer` message type in your codebase: ```cs public class CreateCustomerValidator : AbstractValidator<CreateCustomer> { public CreateCustomerValidator() { RuleFor(x => x.FirstName).NotNull(); RuleFor(x => x.LastName).NotNull(); RuleFor(x => x.PostalCode).NotNull(); } } public record CreateCustomer ( string FirstName, string LastName, string PostalCode ); public static class CreateCustomerHandler { public static void Handle(CreateCustomer customer) { // do whatever you'd do here, but this won't be called // at all if the Fluent Validation rules fail } } ``` snippet source | anchor In the case above, the Fluent Validation check will happen at runtime *before* the call to the handler methods. If the validation fails, the middleware will throw a `ValidationException` and stop all processing.
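To make that failure mode concrete, here's a hypothetical sketch (the calling class and the invalid field value are illustrative, not from the docs) of what a caller observes when a command fails its validator:

```cs
// Hypothetical caller: CreateCustomerValidator requires LastName,
// so the middleware throws before CreateCustomerHandler.Handle() ever runs
public static class ValidationFailureSample
{
    public static async Task TryInvalidCommand(IMessageBus bus)
    {
        try
        {
            // LastName is null, so the NotNull() rule fails
            await bus.InvokeAsync(new CreateCustomer("Jane", null!, "84604"));
        }
        catch (FluentValidation.ValidationException ex)
        {
            // ex.Errors enumerates every failed rule from the validator
            foreach (var failure in ex.Errors)
            {
                Console.WriteLine(failure.ErrorMessage);
            }
        }
    }
}
```

When the same message arrives through asynchronous messaging rather than `InvokeAsync()`, the registered error handling policy (described in the notes below) takes over instead of the exception surfacing to a caller.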
Some notes about the middleware: * The middleware is not applied to any message handler type that has no known validators in the application's IoC container * Wolverine uses a slightly different version of the middleware based on whether or not there is a single validator or multiple validators in the underlying IoC container * The registration also adds an error handling policy to discard messages when a `ValidationException` is thrown ## Customizing the Validation Failure Behavior ::: tip Unless there's a good reason not to, register your custom `IFailureAction<T>` with singleton scope for a performance optimization within the Wolverine pipeline. ::: Out of the box, the Fluent Validation middleware will throw a `FluentValidation.ValidationException` with all the validation failures if the validation fails. To customize that behavior, you can plug in a custom implementation of the `IFailureAction<T>` interface as shown below: ```cs public class MySpecialException : Exception { public MySpecialException(string? message) : base(message) { } } public class CustomFailureAction<T> : IFailureAction<T> { public void Throw(T message, IReadOnlyList<ValidationFailure> failures) { throw new MySpecialException("Your message stinks!: " + failures.Select(x => x.ErrorMessage).Join(", ")); } } ``` snippet source | anchor and with the corresponding override: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Apply the validation middleware *and* discover and register // Fluent Validation validators opts.UseFluentValidation(); // Override the service registration for IFailureAction opts.Services.AddSingleton(typeof(IFailureAction<>), typeof(CustomFailureAction<>)); // Just a prerequisite for some of the test validators opts.Services.AddSingleton(); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/http/fluentvalidation.md --- # Fluent Validation Middleware for HTTP See the [Validation Page](./validation).
--- --- url: /introduction/getting-started.md --- # Getting Started Also see the full [quickstart code](https://github.com/JasperFx/wolverine/tree/main/src/Samples/Quickstart) on GitHub. For a first application, let's build a very simple issue tracking system for our own usage. If you're reading this web page, it's a pretty safe bet you spend quite a bit of time working with an issue tracking system. :) Let's start a new web api project for this new system with: ```bash dotnet new webapi ``` Next, let's add Wolverine to our project with: ```bash dotnet add package WolverineFx ``` To start off, we're just going to build two API endpoints that accept a POST from the client that: 1. Creates a new `Issue`, stores it, and triggers an email to internal personnel. 2. Assigns an `Issue` to an existing `User` and triggers an email to that user letting them know there's more work on their plate The two *commands* for the POST endpoints are below: ```cs public record CreateIssue(Guid OriginatorId, string Title, string Description); ``` snippet source | anchor ```cs public record AssignIssue(Guid IssueId, Guid AssigneeId); ``` snippet source | anchor Let's jump right into the `Program.cs` file of our new web service: ```cs using JasperFx; using Quickstart; using Wolverine; var builder = WebApplication.CreateBuilder(args); // The almost inevitable inclusion of Swashbuckle:) builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); // For now, this is enough to integrate Wolverine into // your application, but there'll be *many* more // options later of course :-) builder.Host.UseWolverine(); // Some in memory services for our application, the // only thing that matters for now is that these are // systems built by the application's IoC container builder.Services.AddSingleton<UserRepository>(); builder.Services.AddSingleton<IssueRepository>(); var app = builder.Build(); // An endpoint to create a new issue that delegates to Wolverine as a mediator app.MapPost("/issues/create", (CreateIssue body,
IMessageBus bus) => bus.InvokeAsync(body)); // An endpoint to assign an issue to an existing user that delegates to Wolverine as a mediator app.MapPost("/issues/assign", (AssignIssue body, IMessageBus bus) => bus.InvokeAsync(body)); // Swashbuckle inclusion app.UseSwagger(); app.UseSwaggerUI(); app.MapGet("/", () => Results.Redirect("/swagger")); // Opt into using JasperFx for command line parsing // to unlock built in diagnostics and utility tools within // your Wolverine application return await app.RunJasperFxCommands(args); ``` snippet source | anchor ::: tip `IMessageBus` is the entrypoint into all message invocation, publishing, or scheduling. Pretty much everything at runtime will start with this service. Wolverine registers `IMessageBus` as a scoped service inside your application's DI container as part of the `UseWolverine()` mechanism. ::: Alright, let's talk about what we wrote up above: 1. We integrated Wolverine into the new system through the call to `IHostBuilder.UseWolverine()` 2. We registered the `UserRepository` and `IssueRepository` services 3. We created a couple [Minimal API](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/minimal-apis?view=aspnetcore-6.0) endpoints See also: [Wolverine as Command Bus](/guide/messaging/transports/local.html) The two Web API functions directly delegate to Wolverine's `IMessageBus.InvokeAsync()` method. In that method, Wolverine will direct the command to the correct handler and invoke that handler inline. 
In a simplistic form, here is the entire handler file for the `CreateIssue` command: ```cs namespace Quickstart; public class CreateIssueHandler { private readonly IssueRepository _repository; public CreateIssueHandler(IssueRepository repository) { _repository = repository; } // The IssueCreated event message being returned will be // published as a new "cascaded" message by Wolverine after // the original message and any related middleware has // succeeded public IssueCreated Handle(CreateIssue command) { var issue = new Issue { Title = command.Title, Description = command.Description, IsOpen = true, Opened = DateTimeOffset.Now, OriginatorId = command.OriginatorId }; _repository.Store(issue); return new IssueCreated(issue.Id); } } ``` snippet source | anchor Hopefully that code is simple enough, but let's talk about what you do not see in this code or the initial `Program` code up above. Wolverine uses a [naming convention](/guide/handlers/#rules-for-message-handlers) to automatically discover message handler actions in your application assembly, so at no point did we have to explicitly register the `CreateIssueHandler` in any way. Wolverine does not require the use of marker interfaces or attributes to discover handlers. ::: info These conventions are just some of the ways Wolverine keeps out of the way of your application code whilst enabling developers to write more concise, decoupled code. ::: As mentioned earlier, we want our API to create an email whenever a new issue is created. In this case we're opting to have that email generation and email sending happen in a second message handler that will run after the initial command. You might also notice that the `CreateIssueHandler.Handle()` method returns an `IssueCreated` event. When Wolverine sees that a handler creates what we call a [cascading message](/guide/handlers/cascading), Wolverine will publish the `IssueCreated` event to an in memory queue after the initial message handler succeeds.
The advantage of doing this is allowing the slower email generation and sending process to happen in background processes instead of holding up the initial web service call. The `IssueCreated` event message will be handled by this code: ```cs public static class IssueCreatedHandler { public static async Task Handle(IssueCreated created, IssueRepository repository) { var issue = repository.Get(created.Id); var message = await BuildEmailMessage(issue); using var client = new SmtpClient(); client.Send(message); } // This is a little helper method I made public // Wolverine will not expose this as a message handler internal static Task<MailMessage> BuildEmailMessage(Issue issue) { // Build up a templated email message, with // some sort of async method to look up additional // data just so we can show off an async // Wolverine Handler return Task.FromResult(new MailMessage()); } } ``` snippet source | anchor Now, you'll notice that Wolverine is happy to allow you to use static methods as handler actions. And also notice that the `Handle()` method takes in an argument for `IssueRepository`. Wolverine always assumes that the first argument of a handler method is the message type, but other arguments are inferred to be services from the system's underlying IoC container. By supporting [method injection](https://www.tatvasoft.com/outsourcing/2023/11/dependency-injection-in-csharp.html#Method) like this, Wolverine is able to cut down on even more of the typical cruft code forced upon you by other .NET tools. *You might be saying that this sounds like the conventional method injection behavior of Minimal API in .NET 6, and it is. But we'd like to point out that Wolverine had this years before the ASP.NET team got around to it. :-)* This page introduced the basic usage of Wolverine, how to wire Wolverine into .NET applications, and some rudimentary `Handler` usage. Of course, this all was local with in memory usage and Wolverine can do so much more.
Dive deeper and learn more about its other [Handlers and Messages](/guide/handlers/) capabilities. --- --- url: /guide/messaging/introduction.md --- # Getting Started with Wolverine as Message Bus ::: tip As of Wolverine 3.0, you can now connect to multiple Rabbit MQ brokers from one application. We will extend this support to other message broker types in the future. ::: There's certainly some value in Wolverine just being a command bus running inside of a single process, but now it's time to utilize Wolverine to both publish and process messages received through external infrastructure like [Rabbit MQ](https://www.rabbitmq.com/) or [Pulsar](https://pulsar.apache.org/). ## Terminology To put this into perspective, here's how a Wolverine application could be connected to the outside world: ![Wolverine Messaging Architecture](/WolverineMessaging.png) :::tip The diagram above should just say "Message Handler" as Wolverine makes no structural differentiation between commands or events, but Jeremy is being too lazy to fix the diagram. ::: ## Configuring Messaging There are a couple of moving parts to using Wolverine as a messaging bus. You'll need to configure connectivity to external infrastructure like Rabbit MQ brokers, set up listening endpoints, and create routing rules to teach Wolverine where and how to send your messages. The [TCP transport](/guide/messaging/transports/tcp) is built in, and the ["local" in memory queues](/guide/messaging/transports/local) can be used like a transport, but you'll need to configure connectivity for every other type of messaging transport adapter to external infrastructure. In all cases so far, the connectivity to external transports is done through an extension method on `WolverineOptions` using the `Use[ToolName]()` idiom that is now common across .NET tools.
For an example, here's connecting to a Rabbit MQ broker: ```cs using JasperFx; using Wolverine; using Wolverine.RabbitMQ; var builder = WebApplication.CreateBuilder(args); builder.Host.UseWolverine(opts => { // Using the Rabbit MQ URI specification: https://www.rabbitmq.com/uri-spec.html opts.UseRabbitMq(new Uri(builder.Configuration["rabbitmq"])); // Or connect locally as you might for development purposes opts.UseRabbitMq(); // Or do it more programmatically: opts.UseRabbitMq(rabbit => { rabbit.HostName = builder.Configuration["rabbitmq_host"]; rabbit.VirtualHost = builder.Configuration["rabbitmq_virtual_host"]; rabbit.UserName = builder.Configuration["rabbitmq_username"]; // and you get the point, you get full control over the Rabbit MQ // connection here for the times you need that }); }); ``` snippet source | anchor ## Listening Endpoint Configuration ## Sending Endpoint Configuration --- --- url: /guide/http/endpoints.md --- # HTTP Endpoints ::: warning While Wolverine.HTTP has a relaxed view of naming conventions since it depends on the routing attributes for discovery, it is very possible to utilize the same method as both an HTTP endpoint and Wolverine message handler if the method both follows the correct naming conventions for message handler discovery and is decorated with one of the `[WolverineVerb]` attributes. This can lead to unexpected code generation errors on the message handler side if the method refers to HTTP route arguments, query string values, or other AspNetCore services. Our strong advice is to use the `Endpoint` class name nomenclature for HTTP endpoints unless you are explicitly meaning for a method to be both an HTTP endpoint and message handler. ::: First, a little terminology about Wolverine HTTP endpoints.
Consider the following endpoint method: ```cs [WolverinePost("/question")] public static ArithmeticResults PostJson(Question question) { return new ArithmeticResults { Sum = question.One + question.Two, Product = question.One * question.Two }; } ``` snippet source | anchor In the method signature above, `Question` is the "request" type (the payload sent from the client to the server) and `ArithmeticResults` is the "resource" type (what is being returned to the client). If instead that method were asynchronous like this: ```cs [WolverinePost("/question2")] public static Task<ArithmeticResults> PostJsonAsync(Question question) { var results = new ArithmeticResults { Sum = question.One + question.Two, Product = question.One * question.Two }; return Task.FromResult(results); } ``` snippet source | anchor The resource type is still `ArithmeticResults`. Likewise, if an endpoint returns `ValueTask<ArithmeticResults>`, the resource type is also `ArithmeticResults`, and Wolverine will worry about the asynchronous (or `return Task.CompletedTask;`) mechanisms for you in the generated code. ## Legal Endpoint Signatures ::: info It's actually possible to create custom conventions for how Wolverine resolves method parameters to the endpoint methods using the `IParameterStrategy` plugin interface explained later in this page. ::: First off, every endpoint method must be a `public` method on a `public` type to accommodate the runtime code generation. After that, you have quite a bit of flexibility.
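As a hypothetical illustration of that flexibility (the `ApproveInvoice` command, `IInvoiceService`, and the route are assumed names, not from the docs), a single endpoint method can mix route values, the request body, and IoC services:

```cs
public static class ApproveInvoiceEndpoint
{
    [WolverinePost("/invoices/{id}/approve")]
    public static Task Approve(
        // "id" matches the {id} route parameter
        Guid id,
        // the first concrete, "not simple" parameter is deserialized
        // from the JSON request body
        ApproveInvoice command,
        // explicitly sourced from the IoC container
        [FromServices] IInvoiceService service)
    {
        return service.ApproveAsync(id, command.ApproverName);
    }
}
```

How each of those parameters is resolved follows the precedence rules laid out in the table that follows.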
In terms of the legal parameters to your endpoint method, Wolverine uses these rules *in order of precedence* to determine how to source that parameter at runtime: | Type or Description | Behavior | |--------------------------------------------|-------------------------------------------------------------------------------------------------------------------------| | Decorated with `[FromServices]` | The argument is resolved as an IoC service | | `IMessageBus` | Creates a new Wolverine message bus object | | `HttpContext` or its members | See the section below on accessing the HttpContext | | Parameter name matches a route parameter | See the [routing page](/guide/http/routing) for more information | | Decorated with `[FromHeader]` | See [working with headers](/guide/http/headers) for more information | | `string`, `int`, `Guid`, etc. | All other "simple" .NET types are assumed to be [query string values](/guide/http/querystring) | | The first concrete, "not simple" parameter | Deserializes the HTTP request body as JSON to this type | | Everything else | Wolverine will try to source the type as an IoC service | You can force Wolverine to ignore a parameter as the request body type by decorating the parameter with the `[NotBody]` attribute like this: ```cs [WolverinePost("/notbody")] // The Recorder parameter will be sourced as an IoC service // instead of being treated as the HTTP request body public string PostNotBody([NotBody] Recorder recorder) { recorder.Actions.Add("Called AttributesEndpoints.Post()"); return "all good"; } ``` snippet source | anchor ::: warning You can return any type that can technically be serialized to JSON, which means even primitive values like numbers, strings, or dates. Just know that there is special handling for `int` and any invalid HTTP status code may result in a web browser hanging -- and that's not typically what you'd like to happen!
::: In terms of the response type, you can use: | Type | Body | Status Code | Notes | |--------------------------------|--------------|-------------------|---------------------------------------------------------------------------| | `void` / `Task` / `ValueTask` | Empty | 200 | | | `string` | "text/plain" | 200 | Writes the result to the response | | `int` | Empty | Value of response | **Note**, this must be a valid HTTP status code or bad things may happen! | | Type that implements `IResult` | Varies | Varies | The `IResult.ExecuteAsync()` method is executed | | `CreationResponse` or subclass | JSON | 201 | The response is serialized, and writes a `location` response header | | `AcceptResponse` or subclass | JSON | 202 | The response is serialized, and writes a `location` response header | | Any other type | JSON | 200 | The response is serialized to JSON | In all cases up above, if the endpoint method is asynchronous using either `Task<T>` or `ValueTask<T>`, the `T` is the response type. In other words, a response of `Task<string>` has the same rules as a response of `string` and `ValueTask<int>` behaves the same as a response of `int`. And now to complicate *everything*, but I promise this is potentially valuable, you can also use [Tuples](https://learn.microsoft.com/en-us/dotnet/api/system.tuple?view=net-7.0) as the return type of an HTTP endpoint. In this case, the first item in the tuple is the official response type that is treated by the rules above.
To make that concrete, consider this sample that we wrote in the introduction to Wolverine.Http: ```cs // Introducing this special type just for the http response // gives us back the 201 status code public record TodoCreationResponse(int Id) : CreationResponse("/todoitems/" + Id); // The "Endpoint" suffix is meaningful, but you could use // any name if you don't mind adding extra attributes or a marker interface // for discovery public static class TodoCreationEndpoint { [WolverinePost("/todoitems")] public static (TodoCreationResponse, TodoCreated) Post(CreateTodo command, IDocumentSession session) { var todo = new Todo { Name = command.Name }; // Just telling Marten that there's a new entity to persist, // but I'm assuming that the transactional middleware in Wolverine is // handling the asynchronous persistence outside of this handler session.Store(todo); // By Wolverine.Http conventions, the first "return value" is always // assumed to be the Http response, and any subsequent values are // handled independently return ( new TodoCreationResponse(todo.Id), new TodoCreated(todo.Id) ); } } ``` snippet source | anchor In the case above, `TodoCreationResponse` is the first item in the tuple, so Wolverine treats that as the response for the HTTP endpoint. The second `TodoCreated` value in the tuple is treated as a [cascading message](/guide/messaging/transports/local) that will be published through Wolverine's messaging (or a local queue depending on the routing). How Wolverine handles those extra "return values" follows the same [return value rules](/guide/handlers/return-values) from the messaging handlers. In the case of wanting to leverage Wolverine "return value" actions but you want your endpoint to return an empty response body, you can use the `[Wolverine.Http.EmptyResponse]` attribute to tell Wolverine *not* to use any return values as the endpoint response and to return an empty response with a `204` status code.
Here's an example from the tests: ```cs [AggregateHandler] [WolverinePost("/orders/ship"), EmptyResponse] // The OrderShipped return value is treated as an event being posted // to a Marten event stream // instead of as the HTTP response body because of the presence of // the [EmptyResponse] attribute public static OrderShipped Ship(ShipOrder command, Order order) { return new OrderShipped(); } ``` snippet source | anchor ## JSON Handling See [JSON serialization for more information](/guide/http/json) ## Returning Strings To create an endpoint that writes a string with `content-type` = "text/plain", just return a string as your resource type, so `string`, `Task<string>`, or `ValueTask<string>` from your endpoint method like so: ```cs public class HelloEndpoint { [WolverineGet("/")] public string Get() => "Hello."; } ``` snippet source | anchor ## Using IResult ::: tip The `IResult` mechanics are applied to the return value of any type that can be cast to `IResult` ::: Wolverine will execute an ASP.Net Core `IResult` object returned from an HTTP endpoint method. ```cs [WolverinePost("/choose/color")] public IResult Redirect(GoToColor request) { switch (request.Color) { case "Red": return Results.Redirect("/red"); case "Green": return Results.Redirect("/green"); default: return Results.Content("Choose red or green!"); } } ``` snippet source | anchor ## Using IoC Services Wolverine HTTP endpoint methods happily support "method injection" of service types that are known in the IoC container. If there's any potential for confusion between the request type argument and what should be coming from the IoC container, you can decorate parameters with the `[FromServices]` attribute from ASP.Net Core to give Wolverine a hint. Otherwise, Wolverine is asking the underlying Lamar container if it knows how to resolve the service from the parameter argument.
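For instance, in this hypothetical sketch (`RegisterUser`, `IUserService`, and the route are assumed names, not from the docs), the attribute removes any ambiguity between the request body and an injected service:

```cs
public static class RegisterUserEndpoint
{
    [WolverinePost("/users/register")]
    public static Task<string> Post(
        // treated as the JSON request body
        RegisterUser request,
        // [FromServices] guarantees this comes from the IoC container
        // rather than being mistaken for the request body
        [FromServices] IUserService users)
    {
        return users.RegisterAsync(request.Email);
    }
}
```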
## Accessing HttpContext

Simply expose a parameter of any of these types to get either the current `HttpContext` for the current request or children members of `HttpContext`:

1. `HttpContext`
2. `HttpRequest`
3. `HttpResponse`
4. `CancellationToken`
5. `ClaimsPrincipal`

You can also get at the trace identifier for the current `HttpContext` by a parameter like this:

```cs
[WolverineGet("/http/identifier")]
public string UseTraceIdentifier(string traceIdentifier)
{
    return traceIdentifier;
}
```

snippet source | anchor

## Customizing Parameter Handling

There's actually a way to customize how Wolverine handles parameters in HTTP endpoints to create your own conventions. To do so, you'd need to write an implementation of the `IParameterStrategy` interface from Wolverine.Http:

```cs
/// <summary>
/// Apply custom handling to a Wolverine.Http endpoint/chain based on a parameter within the
/// implementing Wolverine http endpoint method
/// </summary>
/// <param name="variable">The Variable referring to the input of this parameter</param>
public interface IParameterStrategy
{
    bool TryMatch(HttpChain chain, IServiceContainer container, ParameterInfo parameter, out Variable? variable);
}
```

snippet source | anchor

As an example, let's say that you want any parameter of type `DateTimeOffset` that's named "now" to receive the current system time. To do that, we can write this class:

```cs
public class NowParameterStrategy : IParameterStrategy
{
    public bool TryMatch(HttpChain chain, IServiceContainer container, ParameterInfo parameter,
        out Variable? variable)
    {
        if (parameter.Name == "now" && parameter.ParameterType == typeof(DateTimeOffset))
        {
            // This is tying into Wolverine's code generation model
            variable = new Variable(typeof(DateTimeOffset),
                $"{typeof(DateTimeOffset).FullNameInCode()}.{nameof(DateTimeOffset.UtcNow)}");
            return true;
        }

        variable = default;
        return false;
    }
}
```

snippet source | anchor

and register that strategy within our `MapWolverineEndpoints()` setup like so:

```cs
// Customizing parameter handling
opts.AddParameterHandlingStrategy<NowParameterStrategy>();
```

snippet source | anchor

And lastly, here's the application within an HTTP endpoint for extra context:

```cs
[WolverineGet("/now")]
public static string GetNow(DateTimeOffset now) // using the custom parameter strategy for "now"
{
    return now.ToString();
}
```

snippet source | anchor

## Http Endpoint / Message Handler Combo

Here's a common scenario that has come up from Wolverine users. Let's say that you have some kind of logical command message that your system needs to handle that might come in from the outside from either HTTP clients or from asynchronous messaging. Folks have frequently asked about how to reuse code between the message handling invocation and the HTTP endpoint. You've got a handful of options:

1. Build a message handler and have the HTTP endpoint just delegate to `IMessageBus.InvokeAsync()` with the message
2. Have both the message handler and HTTP endpoint delegate to shared code, whether that be a shared service, just a static method somewhere, or even have the HTTP endpoint code directly call the concrete message handler
3. Use a hybrid Message Handler / HTTP Endpoint because Wolverine can do that!

To make a single class and method be both a message handler and HTTP endpoint, just add a `[Wolverine{HttpVerb}]` attribute with the route directly on your message handler. As long as that method follows Wolverine's normal naming rules for message discovery, Wolverine will treat it as both a message handler and as an HTTP endpoint.
Here's an example from our tests: ```cs public static class NumberMessageHandler { public static ProblemDetails Validate(NumberMessage message) { if (message.Number > 5) { return new ProblemDetails { Detail = "Number is bigger than 5", Status = 400 }; } // All good, keep on going! return WolverineContinue.NoProblems; } // This "Before" method would only be utilized as // an HTTP endpoint [WolverineBefore(MiddlewareScoping.HttpEndpoints)] public static void BeforeButOnlyOnHttp(HttpContext context) { Debug.WriteLine("Got an HTTP request for " + context.TraceIdentifier); CalledBeforeOnlyOnHttpEndpoints = true; } // This "Before" method would only be utilized as // a message handler [WolverineBefore(MiddlewareScoping.MessageHandlers)] public static void BeforeButOnlyOnMessageHandlers() { CalledBeforeOnlyOnMessageHandlers = true; } // Look at this! You can use this as an HTTP endpoint too! [WolverinePost("/problems2")] public static void Handle(NumberMessage message) { Debug.WriteLine("Handled " + message); Handled = true; } // These properties are just a cheap trick in Wolverine internal tests public static bool Handled { get; set; } public static bool CalledBeforeOnlyOnMessageHandlers { get; set; } public static bool CalledBeforeOnlyOnHttpEndpoints { get; set; } } ``` snippet source | anchor If you are using Wolverine.HTTP in your application, Wolverine is able to treat `ProblemDetails` similar to the built in `HandlerContinuation` when running inside of message handlers. If you have some middleware methods that should only apply specifically when running as a handler or when running as an HTTP endpoint, you can utilize `MiddlewareScoping` directives with `[WolverineBefore]`, `[WolverineAfter]`, or `[WolverineFinally]` attributes to limit the applicability of individual middleware methods. 
::: info
There is no runtime filtering here because the `MiddlewareScoping` impacts the generated code around your hybrid message handler / HTTP endpoint method, and Wolverine already generates code separately for the two use cases.
:::

As of Wolverine 5.7, you can also technically use `HttpContext` arguments in the message handler usage *if* you are carefully accounting for that being null as shown in this sample:

```cs
public record DoHybrid(string Message);

public static class HybridHandler
{
    [WolverinePost("/hybrid")]
    public static async Task HandleAsync(DoHybrid command, HttpContext? context)
    {
        // Watch this, because it will be null if this is used within
        // a message handler!
        if (context != null)
        {
            context.Response.ContentType = "text/plain";
            await context.Response.WriteAsync(command.Message);
        }
    }
}
```

snippet source | anchor

---

--- url: /guide/http/transport.md ---

# HTTP Messaging Transport

On top of everything else that Wolverine does, the `WolverineFx.HTTP` Nuget also contains the ability to use HTTP as a messaging transport for Wolverine messaging. Assuming you have that library attached to your AspNetCore project, add this declaration to your `WebApplication` in your `Program.Main()` method:

```cs
app.MapWolverineHttpTransportEndpoints();
```

snippet source | anchor

The declaration above is actually using Minimal API rather than native Wolverine.HTTP endpoints, but that's perfectly fine in this case. That declaration also enables you to use Minimal API's Fluent Interface to customize the authorization rules against the HTTP endpoints for Wolverine messaging.
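As a sketch of that Fluent Interface usage, you might lock down the transport endpoints with an authorization policy. The "messaging" policy name here is purely hypothetical, standing in for whatever your application registers with `AddAuthorization()`:

```cs
// Hypothetical: "messaging" is an authorization policy your
// application has already registered with AddAuthorization()
app.MapWolverineHttpTransportEndpoints()
    .RequireAuthorization("messaging");
```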
To establish publishing rules from your application to a remote endpoint in another system, use this syntax:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PublishAllMessages()
            .ToHttpEndpoint("https://binary.com/api");
    })
    .StartAsync();
```

snippet source | anchor

::: tip
This functionality is very new, and you may want to reach out through Discord for any questions.
:::

---

--- url: /guide/http/middleware.md ---

# HTTP Middleware

Creating and applying middleware to HTTP endpoints is very similar to [middleware for message handlers](/guide/handlers/middleware) with just a couple of differences:

1. As shown in a later section, you can use `IResult` from ASP.Net Core for conditionally stopping request handling *in addition to* the `HandlerContinuation` approach from message handlers
2. Use the `IHttpPolicy` interface instead of `IHandlerPolicy` to conventionally apply middleware to only HTTP endpoints
3. Your middleware types can take in `HttpContext` and any other related services that Wolverine supports for HTTP endpoints in addition to IoC services

The `[Middleware]` attribute from message handlers works on HTTP endpoint methods.

## Conditional Endpoint Continuations

For message handlers, you use the [`HandlerContinuation`](/guide/handlers/middleware.html#conditionally-stopping-the-message-handling) to conditionally stop message handling in middleware that executes before the main handler. Likewise, you can do the same thing in HTTP endpoints, but instead use the [IResult](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/minimal-apis/responses?view=aspnetcore-7.0) concept from ASP.Net Core Minimal API. As an example, let's say that you are using some middleware to do custom authentication filtering that stops processing with a 401 status code.
Here's a bit of middleware from the Wolverine tests that does just that:

```cs
public class FakeAuthenticationMiddleware
{
    public static IResult Before(IAmAuthenticated message)
    {
        return message.Authenticated
            // This tells Wolverine to just keep going
            ? WolverineContinue.Result()
            // If the IResult is not WolverineContinue, Wolverine
            // will execute the IResult and stop processing
            : Results.Unauthorized();
    }
}
```

snippet source | anchor

Which is registered like this (or as described in [`Registering Middleware by Message Type`](/guide/handlers/middleware.html#registering-middleware-by-message-type)):

```cs
opts.AddMiddlewareByMessageType(typeof(FakeAuthenticationMiddleware));
opts.AddMiddlewareByMessageType(typeof(CanShipOrderMiddleWare));
```

snippet source | anchor

The key point to notice there is that `IResult` is a "return value" of the middleware. In the case of an HTTP endpoint, Wolverine will check if that `IResult` is a `WolverineContinue` object, and if so, will continue processing. If the `IResult` object is anything else, Wolverine will execute that `IResult` and stop processing the HTTP request.

For a little more complex example, here's part of the Fluent Validation middleware for Wolverine.Http:

```cs
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static async Task<IResult> ExecuteOne<T>(IValidator<T> validator, IProblemDetailSource<T> source, T message)
{
    // First, validate the incoming request of type T
    var result = await validator.ValidateAsync(message);

    // If there are any errors, create a ProblemDetails result and return
    // that to write out the validation errors and otherwise stop processing
    if (result.Errors.Any())
    {
        var details = source.Create(message, result.Errors);
        return Results.Problem(details);
    }

    // Everything is good, full steam ahead!
return WolverineContinue.Result(); } ``` snippet source | anchor Likewise, you can also just return a `null` from middleware for `IResult` and Wolverine will interpret that as "just continue" as shown in this sample: ```cs public class ValidatedCompoundEndpoint2 { public static User? Load(BlockUser2 cmd) { return cmd.UserId.IsNotEmpty() ? new User(cmd.UserId) : null; } // This method would be called, and if the NotFound value is // not null, will stop the rest of the processing // Likewise, Wolverine will use the NotFound type to add // OpenAPI metadata public static NotFound? Validate(User? user) { if (user == null) return (NotFound?)Results.NotFound(user); return null; } [WolverineDelete("/optional/result")] public static string Handle(BlockUser2 cmd, User user) { return "Ok - user blocked"; } } ``` snippet source | anchor ## Using Configure(chain) Methods You can make explicit modifications to HTTP processing for middleware or OpenAPI metadata for a single endpoint (really all endpoint methods on that type) using the `public static void Configure(HttpChain)` convention. Let's say you have a bit of custom middleware for HTTP endpoints like so: ```cs public class StopwatchMiddleware { private readonly Stopwatch _stopwatch = new(); public void Before() { _stopwatch.Start(); } public void Finally(ILogger logger, HttpContext context) { _stopwatch.Stop(); logger.LogDebug("Request for route {Route} ran in {Duration} milliseconds", context.Request.Path, _stopwatch.ElapsedMilliseconds); } } ``` snippet source | anchor And you want to apply it to a single HTTP endpoint without having to dirty your hands with an attribute. 
You can use that naming convention up above like so:

```cs
public class MeasuredEndpoint
{
    // The signature is meaningful here
    public static void Configure(HttpChain chain)
    {
        // Call this method before the normal endpoint
        chain.Middleware.Add(MethodCall.For<StopwatchMiddleware>(x => x.Before()));

        // Call this method after the normal endpoint
        chain.Postprocessors.Add(MethodCall.For<StopwatchMiddleware>(x => x.Finally(null, null)));
    }

    [WolverineGet("/timed")]
    public async Task<string> Get()
    {
        await Task.Delay(100.Milliseconds());
        return "how long did I take?";
    }
}
```

snippet source | anchor

## Apply Middleware by Policy

To apply middleware to selected HTTP endpoints by some kind of policy, you can use the `IHttpPolicy` type to analyze and apply middleware to some subset of HTTP endpoints. As an example from Wolverine.Http itself, this middleware is applied to any endpoint that also uses Wolverine message publishing to apply tracing information from the `HttpContext` to subsequent Wolverine messages published during the request:

```cs
public static class RequestIdMiddleware
{
    public const string CorrelationIdHeaderKey = "X-Correlation-ID";

    // Remember that most Wolverine middleware can be done with "just" a method
    public static void Apply(HttpContext httpContext, IMessageContext messaging)
    {
        if (httpContext.Request.Headers.TryGetValue(CorrelationIdHeaderKey, out var correlationId))
        {
            messaging.CorrelationId = correlationId.First();
        }
    }
}
```

snippet source | anchor

And a matching `IHttpPolicy` to apply that middleware to any HTTP endpoint where there is a dependency on Wolverine's `IMessageContext` or `IMessageBus`:

```cs
internal class RequestIdPolicy : IHttpPolicy
{
    public void Apply(IReadOnlyList<HttpChain> chains, GenerationRules rules, IServiceContainer container)
    {
        foreach (var chain in chains)
        {
            var serviceDependencies = chain.ServiceDependencies(container, Type.EmptyTypes).ToArray();
            if (serviceDependencies.Contains(typeof(IMessageContext)) || serviceDependencies.Contains(typeof(IMessageBus)))
            {
chain.Middleware.Insert(0,
                    new MethodCall(typeof(RequestIdMiddleware), nameof(RequestIdMiddleware.Apply)));
            }
        }
    }
}
```

snippet source | anchor

Lastly, this particular policy is included by default, but if it wasn't, this is the code to apply it explicitly:

```cs
// app is a WebApplication
app.MapWolverineEndpoints(opts =>
{
    // add the policy to Wolverine HTTP endpoints
    opts.AddPolicy<RequestIdPolicy>();
});
```

snippet source | anchor

For simpler middleware application, you could also use this feature:

```cs
app.MapWolverineEndpoints(opts =>
{
    // Fake policy to add authentication middleware to any endpoint classes under
    // an application namespace
    opts.AddMiddleware(typeof(MyAuthenticationMiddleware),
        c => c.HandlerCalls().Any(x => x.HandlerType.IsInNamespace("MyApp.Authenticated")));
});
```

snippet source | anchor

## Required Inputs

Here's a common pattern in HTTP service development. Based on a route argument, you first load some kind of entity from persistence. If the data is not found, return a status code 404 that means the resource was not found, but otherwise continue working against that entity data you just loaded. To help remove boilerplate code, Wolverine.Http 1.2 introduced support for this pattern using the standard `[Required]` attribute on the parameters of the inputs to the HTTP handler methods. Here's an example that tries to apply an update to an existing `Todo` entity:

```cs
public record UpdateRequest(string Name, bool IsComplete);

public static class UpdateEndpoint
{
    // Find required Todo entity for the route handler below
    public static Task<Todo?> LoadAsync(int id, IDocumentSession session)
        => session.LoadAsync<Todo>(id);

    [WolverinePut("/todos/{id:int}")]
    public static StoreDoc<Todo> Put(
        // Route argument
        int id,

        // The request body
        UpdateRequest request,

        // Entity loaded by the method above,
        // but note the [Required] attribute
        [Required] Todo?
todo) { todo.Name = request.Name; todo.IsComplete = request.IsComplete; return MartenOps.Store(todo); } } ``` snippet source | anchor You'll notice that the `LoadAsync()` method is looking up the `Todo` entity for the route parameter, where Wolverine would normally be passing that value to the matching `Todo` parameter of the main `Put` method. In this case though, because of the `[Required]` attribute, Wolverine.Http will stop processing with a 404 status code if the `Todo` cannot be found. You can see this behavior in the generated code below: ```csharp public class PUT_todos_id : Wolverine.Http.HttpHandler { private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions; private readonly Marten.ISessionFactory _sessionFactory; public PUT_todos_id(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Marten.ISessionFactory sessionFactory) : base(wolverineHttpOptions) { _wolverineHttpOptions = wolverineHttpOptions; _sessionFactory = sessionFactory; } public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext) { await using var documentSession = _sessionFactory.OpenSession(); if (!int.TryParse((string)httpContext.GetRouteValue("id"), out var id)) { httpContext.Response.StatusCode = 404; return; } var (request, jsonContinue) = await ReadJsonAsync(httpContext); if (jsonContinue == Wolverine.HandlerContinuation.Stop) return; var todo = await WolverineWebApi.Samples.UpdateEndpoint.LoadAsync(id, documentSession).ConfigureAwait(false); // 404 if this required object is null if (todo == null) { httpContext.Response.StatusCode = 404; return; } var storeDoc = WolverineWebApi.Samples.UpdateEndpoint.Put(id, request, todo); // Placed by Wolverine's ISideEffect policy storeDoc.Execute(documentSession); // Commit any outstanding Marten changes await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false); // Wolverine automatically sets the status code to 204 for empty responses 
httpContext.Response.StatusCode = 204; } } ``` Lastly, Wolverine is also updating the OpenAPI metadata to reflect the possibility of a 404 response. --- --- url: /guide/http.md --- # Http Services with Wolverine ::: info Wolverine.Http is strictly designed for building web services, but you can happily mix and match Wolverine HTTP endpoints with ASP.Net Core MVC handling Razor views. ::: ::: warning If you are moving to Wolverine.Http from 2.\* or earlier, just know that there is now a required `IServiceCollection.AddWolverineHttp()` call in your `Program.Main()` bootstrapping. Wolverine.Http will "remind" you on startup by throwing an exception if the extra service registration is missing. This was a side effect of the change to support `ServiceProvider` and other IoC tools. ::: Server side applications are frequently built with some mixture of HTTP web services, asynchronous processing, and asynchronous messaging. Wolverine by itself can help you with the asynchronous processing through its [local queue functionality](/guide/messaging/transports/local), and it certainly covers all common [asynchronous messaging](/guide/messaging/introduction) requirements. Wolverine also has its Wolverine.Http library that utilizes Wolverine's execution pipeline for ASP.Net Core web services. Besides generally being a lower code ceremony option to MVC Core or Minimal API, Wolverine.HTTP provides very strong integration with Wolverine's transactional inbox/outbox support for durable messaging (something that has in the past been very poorly supported if at all by older .NET messaging tools) as a very effective tooling solution for Event Driven Architectures that include HTTP services. Moreover, Wolverine.HTTP's coding model is conducive to "vertical slice architecture" approaches with significantly lower code ceremony than other .NET web frameworks. 
Lastly, Wolverine.HTTP can help you create code where the business or workflow logic is easily unit tested in isolation without having to resort to complicated layering in code or copious usage of mock objects in your test code.

For a simplistic example, let's say that we're inevitably building a "Todo" application where we want a web service endpoint that allows our application to create a new `Todo` entity, save it to a database, and raise a `TodoCreated` event that will be handled later and off to the side by Wolverine.

## Getting Started

Even in this simple example usage, that endpoint *should* be developed such that the creation of the new `Todo` entity and the corresponding `TodoCreated` event message either succeed or fail together to avoid putting the system into an inconsistent state. That's a perfect use case for Wolverine's [transactional outbox](/guide/durability/).

While the Wolverine team believes that Wolverine's outbox functionality is significantly easier to use outside of the context of message handlers than other .NET messaging tools, it's still easiest to use within the context of a message handler, so let's just build out a Wolverine message handler for the `CreateTodo` command:

```cs
public class CreateTodoHandler
{
    public static (Todo, TodoCreated) Handle(CreateTodo command, IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };

        // Just telling Marten that there's a new entity to persist,
        // but I'm assuming that the transactional middleware in Wolverine is
        // handling the asynchronous persistence outside of this handler
        session.Store(todo);

        return (todo, new TodoCreated(todo.Id));
    }
}
```

snippet source | anchor

Okay, but we still need to expose a web service endpoint for this functionality.
We *could* utilize Wolverine within an MVC controller as a "mediator" tool like so:

```cs
public class TodoController : ControllerBase
{
    [HttpPost("/todoitems")]
    [ProducesResponseType(201, Type = typeof(Todo))]
    public async Task<IActionResult> Post(
        [FromBody] CreateTodo command,
        [FromServices] IMessageBus bus)
    {
        // Delegate to Wolverine and capture the response
        // returned from the handler
        var todo = await bus.InvokeAsync<Todo>(command);
        return Created($"/todoitems/{todo.Id}", todo);
    }
}
```

snippet source | anchor

Or we could do the same thing with Minimal API:

```cs
// app in this case is a WebApplication object
app.MapPost("/todoitems", async (CreateTodo command, IMessageBus bus) =>
{
    var todo = await bus.InvokeAsync<Todo>(command);
    return Results.Created($"/todoitems/{todo.Id}", todo);
}).Produces<Todo>(201);
```

snippet source | anchor

While the code above is certainly functional, and many teams are succeeding today using a similar strategy with older tools like [MediatR](https://github.com/jbogard/MediatR), the Wolverine team thinks there are some areas to improve in the code above:

1. When you look into the internals of the runtime, there's some potentially unnecessary performance overhead as every single call to that web service performs service location and dictionary lookups that could be eliminated
2. There's some opportunity to reduce object allocations on each request -- and that *can* be a big deal for performance and scalability
3. It's not that bad, but there's some boilerplate code above that serves no purpose at runtime but helps in the generation of [OpenAPI documentation](https://www.openapis.org/) through Swashbuckle

At this point, let's look at some tooling in the `WolverineFx.Http` Nuget library that can help you incorporate Wolverine into ASP.Net Core applications in a potentially more successful way than trying to "just" use Wolverine as a mediator tool.
After adding the `WolverineFx.Http` Nuget to our Todo web service, I could use this option for a little bit more efficient delegation to the underlying Wolverine message handler:

```cs
// This is *almost* an equivalent, but you'd get a status
// code of 200 instead of 201. If you care about that anyway.
app.MapPostToWolverine<CreateTodo, Todo>("/todoitems");
```

snippet source | anchor

The code up above is very close to a functional equivalent of our earlier Minimal API or MVC Controller usage, but there are a couple of differences:

1. In this case the HTTP endpoint will return a status code of `200` instead of the slightly more correct `201` that denotes a creation. **Most of us aren't really going to care, but we'll come back to this a little later**
2. In the call to `MapPostToWolverine()`, Wolverine.HTTP is able to make a couple of performance optimizations that completely eliminate any usage of the application's IoC container at runtime and bypass some dictionary lookups and object allocation that would have to occur in the simple "mediator" approach

I personally find the indirection of delegating to a mediator tool to add more code ceremony and indirection than I prefer, but many folks like that approach because of how bloated MVC Controller types can become in enterprise systems over time. What if instead we just had a much cleaner way to code an HTTP endpoint that *still* helped us out with OpenAPI documentation? That's where the Wolverine.Http ["endpoint" model](/guide/http/endpoints) comes into play.
Let's take the same Todo creation endpoint and use Wolverine to build an HTTP endpoint: ```cs // Introducing this special type just for the http response // gives us back the 201 status code public record TodoCreationResponse(int Id) : CreationResponse("/todoitems/" + Id); // The "Endpoint" suffix is meaningful, but you could use // any name if you don't mind adding extra attributes or a marker interface // for discovery public static class TodoCreationEndpoint { [WolverinePost("/todoitems")] public static (TodoCreationResponse, TodoCreated) Post(CreateTodo command, IDocumentSession session) { var todo = new Todo { Name = command.Name }; // Just telling Marten that there's a new entity to persist, // but I'm assuming that the transactional middleware in Wolverine is // handling the asynchronous persistence outside of this handler session.Store(todo); // By Wolverine.Http conventions, the first "return value" is always // assumed to be the Http response, and any subsequent values are // handled independently return ( new TodoCreationResponse(todo.Id), new TodoCreated(todo.Id) ); } } ``` snippet source | anchor The code above will actually generate the exact same OpenAPI documentation as the MVC Controller or Minimal API samples earlier in this post, but there's significantly less boilerplate code needed to expose that information. Instead, Wolverine.Http relies on type signatures to "know" what the OpenAPI metadata for an endpoint should be. In conjunction with Wolverine's [Marten integration](/guide/durability/marten/) (or Wolverine's [EF Core integration](/guide/durability/efcore) too!), you potentially get a very low ceremony approach to writing HTTP services that *also* utilizes Wolverine's [durable outbox](/guide/durability/) without giving up anything in regards to crafting effective and accurate OpenAPI metadata about your services. 
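Because the endpoint method above is just a static function of its inputs, the business logic is easy to unit test without spinning up a web host. Here's a sketch of such a test; xUnit, NSubstitute, and Shouldly are assumed test libraries here, and the `CreateTodo(string Name)` record shape is likewise an assumption for illustration:

```cs
public class TodoCreationEndpointTests
{
    [Fact]
    public void returns_matching_response_and_cascading_event()
    {
        // The session is a test double; the method only calls Store() on it,
        // so no database is involved in this test
        var session = Substitute.For<IDocumentSession>();

        var (response, created) = TodoCreationEndpoint.Post(new CreateTodo("shovel snow"), session);

        // The HTTP response and the cascading event should agree on the new id,
        // and the new entity should have been handed to Marten for persistence
        response.Id.ShouldBe(created.Id);
        session.Received().Store(Arg.Any<Todo>());
    }
}
```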
## Eager Warmup

Wolverine.HTTP has a known issue with applications that make simultaneous requests to the same endpoint at startup, where the runtime code generation can blow up if the first requests come in together. While the Wolverine team works on this, the simple amelioration is to "just" pre-generate the code ahead of time. See [Working with Code Generation](/guide/codegen) for more information on this. Alternatively, you can opt for `Eager` initialization of the HTTP endpoints to sidestep this problem in development when pre-generating types isn't viable:

```cs
var app = builder.Build();

app.MapWolverineEndpoints(x => x.WarmUpRoutes = RouteWarmup.Eager);

return await app.RunJasperFxCommands(args);
```

snippet source | anchor

## Using the HttpContext.RequestServices

::: tip
The opt in behavior to share the scoped services with the rest of the AspNetCore pipeline is useful for using Wolverine endpoints underneath AspNetCore middleware that "smuggles" state through the IoC container. Custom multi-tenancy middleware or custom authorization or other security middleware frequently does this. We think this will be helpful for mixed systems where Wolverine.HTTP is used for some routes while other routes are served by MVC Core or Minimal API or even some other kind of AspNetCore `Endpoint`.
:::

By default, any time [Wolverine has to revert to using a service locator](/guide/codegen.html#wolverine-code-generation-and-ioc) to generate the adapter code for an HTTP endpoint, Wolverine uses an isolated `IServiceScope` (or Lamar `INestedContainer`) within the generated code. But with Wolverine 5.0+ you can opt into Wolverine just using the `HttpContext.RequestServices` so that you can share services with AspNetCore middleware. You can also configure *some* service types to be pulled from the `HttpContext.RequestServices` even if Wolverine is otherwise generating more efficient constructor calls for all other dependencies.
Here's an example using both of these opt in behaviors:

```cs
var builder = WebApplication.CreateBuilder();
builder.UseWolverine(opts =>
{
    // more configuration
});

// Just pretend that this IUserContext is being set up by custom
// middleware (the UserContext implementation here is a stand-in)
builder.Services.AddScoped<IUserContext, UserContext>();

builder.Services.AddWolverineHttp();

var app = builder.Build();

// Custom middleware that is somehow configuring our IUserContext
// that might be getting used within Wolverine HTTP endpoints
// (UserContextMiddleware is a stand-in name)
app.UseMiddleware<UserContextMiddleware>();

app.MapWolverineEndpoints(opts =>
{
    // Opt into using the shared HttpContext.RequestServices scoped
    // container any time Wolverine has to use a service locator
    opts.ServiceProviderSource = ServiceProviderSource.FromHttpContextRequestServices;

    // OR this is the default behavior to be backwards compatible:
    opts.ServiceProviderSource = ServiceProviderSource.IsolatedAndScoped;

    // We're telling Wolverine that the IUserContext should always
    // be pulled from HttpContext.RequestServices
    // and this happens regardless of the ServiceProviderSource!
    opts.SourceServiceFromHttpContext<IUserContext>();
});

return await app.RunJasperFxCommands(args);
```

snippet source | anchor

Notice the call to `SourceServiceFromHttpContext<T>()`. That directs Wolverine.HTTP to always pull the service `T` from the `HttpContext.RequestServices` scoped container so that Wolverine.HTTP can play nicely with custom AspNetCore middleware or whatever else you have around your Wolverine.HTTP endpoints.

::: warning
The Wolverine team believes that smuggling important state between upstream middleware and downstream handlers leads to code that is hard to reason about and hence, potentially buggy in real life usage. Alas, you could easily need this functionality in the real world, so here you go.
:::

---

--- url: /tutorials/idempotency.md ---

# Idempotency in Messaging

::: tip
Wolverine's built in idempotency detection can only be used in conjunction with configured envelope storage.
:::

Wolverine should never be trying to publish or process the exact same message at the same endpoint more than once, but it's an imperfect world and things can go weird in real life usage. Assuming that you have some sort of [message storage](/guide/durability/) enabled, Wolverine can use its transactional inbox support to enforce message idempotency. As usual, we're going to reach for the EIP book and what they described as an [Idempotent Receiver](https://www.enterpriseintegrationpatterns.com/patterns/messaging/IdempotentReceiver.html):

> Design a receiver to be an Idempotent Receiver--one that can safely receive the same message multiple times.

In practical terms, this means that Wolverine is able to use its incoming message storage to "know" whether it has already processed an incoming message and discard any duplicate message with the same Wolverine message id that *somehow, some way* manages to arrive twice from an external transport. While the mechanism is a little bit different depending on the [Wolverine listening endpoint mode](/guide/runtime.html#endpoint-types), it's always keying off the message id assigned by Wolverine. Unless of course you're pursuing a [modular monolith architecture](/tutorials/modular-monolith) where you might be expecting the same identified message to arrive and be processed separately in separate endpoints. In which case, this setting:

```cs
var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "receiver2");

        // This setting changes the internal message storage identity
        opts.Durability.MessageIdentity = MessageIdentity.IdAndDestination;
    })
    .StartAsync();
```

snippet source | anchor

means that the uniqueness is the message id + the endpoint destination, which Wolverine stores as a `Uri` string in the various envelope storage databases.
In all cases, Wolverine simply detects a [primary key](https://en.wikipedia.org/wiki/Primary_key) violation on the incoming envelope storage to "know" that the message has already been handled.

::: info
There are built in error policies in Wolverine (introduced in 5.3) to automatically [discard](/guide/handlers/error-handling.html#discarding-messages) any message that is determined to be a duplicate. This is done through exception filters and matching based on exceptions thrown by the underlying message storage database, and there's certainly a chance you *might* have to occasionally help Wolverine out with more exception filter rules to discard these messages that can never be successfully processed.
:::

## In Durable Endpoints

::: info
Wolverine 5.2 and 5.3 both included improvements to the idempotency tracking and this documentation reflects those versions. Before 5.2, Wolverine would try to mark the message as `Handled` after the full message was handled, but outside of any transaction during the message handling.
:::

Idempotency checking is turned on by default with `Durable` endpoints. When messages are received at a `Durable` endpoint, this is the sequence of steps:

1. The Wolverine listener creates the Wolverine `Envelope` for the incoming message
2. The Wolverine listener will try to insert the new incoming `Envelope` into the transactional inbox storage
3. If the `IMessageStore` for the system throws a `DuplicateIncomingEnvelopeException` on that operation, that's a duplicate, so Wolverine logs that and discards that message by "ack-ing" the message broker (that's a little different based on the actual underlying message transport technology)
4. Assuming the message is correctly stored in the inbox storage, Wolverine "acks" the message with the broker and puts the message into the in memory channel for processing
5. With at least the Marten or EF Core transactional middleware support, Wolverine will try to update the storage for the current message with the status `Handled` as part of the message handling transaction
6. If the envelope was not previously marked as `Handled`, the Wolverine listener will try to mark the stored message as `Handled` after the message completely succeeds

Also see the later section on message retention.

## Buffered or Inline Endpoints

::: tip
The idempotency checking is only possible within message handlers that have the transactional middleware applied.
:::

::: info
For `Buffered` or `Inline` endpoints, Wolverine is **only** storing metadata about the message and not the actual message body or `Envelope`. It's just enough information to feed the idempotency checks and to satisfy expected database data constraints.
:::

::: warning
As of 5.4.1, **every** usage of explicit idempotency outside of durable listeners will use `Eager` checking regardless of the configuration. The `Optimistic` mode has thus far proven to be too buggy to be useful.
:::

Idempotency checking within message handlers executing within `Buffered` or more likely `Inline` listeners will require you to "opt in." First though, the idempotency check in this case can be done in one of two modes:

1. `Eager` -- just means that Wolverine will apply some middleware around the handler such that it will make an early database call to try to insert a skeleton placeholder in the transactional inbox storage
2. `Optimistic` -- Wolverine will try to insert the skeleton message information as part of the message handling transaction to try to avoid extra database round trips

To be honest, the EF Core integration will always use the `Eager` approach no matter what. Marten supports both modes, and the `Optimistic` approach may be valuable if all the activity of your message handler is in changes to that same database so everything can still be rolled back by the idempotency check failing.
For another example, if your message handler involves a web service call to an external system or really any kind of action that potentially makes state changes outside of the current transaction, you have to use the `Eager` mode.

With all of that being said, you can either opt into the idempotency checks one at a time with an overload of the `[Transactional]` attribute like this:

```cs
[Transactional(IdempotencyStyle.Eager)]
public static void Handle(DoSomething msg)
{
}
```

Or you can use an overload of the auto apply transactions policy:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.AutoApplyTransactions(IdempotencyStyle.Eager);
    })
    .StartAsync();
```

::: tip
The idempotency check and the process of marking an incoming envelope are themselves "idempotent" within Wolverine to prevent Wolverine from making unnecessary database calls.
:::

## Idempotency on Non Transactional Handlers

::: tip
Idempotency checks are automatic for any message handler that uses any kind of transactional middleware.
:::

::: warning
This functionality does require some kind of message persistence to be configured for your application as it utilizes Wolverine's inbox functionality
:::

Every usage you've seen so far has utilized Wolverine's transactional middleware support on handlers that use [EF Core](/guide/durability/efcore/transactional-middleware) or [Marten](/guide/durability/marten/transactional-middleware). But of course, you may have message handlers that don't need to touch your underlying storage at all. For example, a message handler might do nothing but call an external web service. You may want to make this message handler be idempotent to protect against duplicated calls to that web service.
You're in luck, because Wolverine exposes this policy to do exactly that:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Durability.Mode = DurabilityMode.Solo;

        opts.Services.AddDbContextWithWolverineIntegration(x => x.UseSqlServer(Servers.SqlServerConnectionString));

        opts.Services.AddResourceSetupOnStartup(StartupAction.ResetState);

        opts.Policies.AutoApplyTransactions(IdempotencyStyle.Eager);

        opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "idempotency");
        opts.UseEntityFrameworkCoreTransactions();

        // THIS RIGHT HERE
        opts.Policies.AutoApplyIdempotencyOnNonTransactionalHandlers();
    }).StartAsync();
```

Specifically, see the call to `WolverineOptions.Policies.AutoApplyIdempotencyOnNonTransactionalHandlers()` above. What that is doing is:

1. Inserting a call to assert that the current message doesn't already exist in your application's default envelope storage by the Wolverine message id. If the message is already marked as `Handled` in the inbox, Wolverine will reject and discard the current message processing
2. Assuming the message is all new, Wolverine will try to persist the `Handled` state in the default inbox storage. In the case of failures to the database storage (stuff happens), Wolverine will attempt to retry out of band, but allow the message processing to go through otherwise without triggering error policies so the message is not retried

::: tip
While we're talking about call outs to external web services, the Wolverine team recommends isolating the call to that web service in its own handler with isolated error handling and maybe even a circuit breaker for outages of that service. Or at least making that your default practice.
:::

You can also opt into this behavior on a message type by message type basis by decorating the message handler type or handler method with the Wolverine `[Idempotent]` attribute.
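To make that concrete, here's a minimal sketch of the attribute-based opt in. The message, gateway, and handler names below are hypothetical, invented purely for illustration of a handler that only calls out to an external system:

```cs
// Hypothetical message and external gateway types, just for illustration
public record RecordPayment(Guid PaymentId, decimal Amount);

public interface IPaymentGateway
{
    Task PostAsync(Guid paymentId, decimal amount);
}

// Decorating the handler type opts this one message type into the
// idempotency checks, even though the handler never touches our own
// database and therefore has no transactional middleware applied
[Idempotent]
public class RecordPaymentHandler
{
    public Task Handle(RecordPayment message, IPaymentGateway gateway)
        => gateway.PostAsync(message.PaymentId, message.Amount);
}
```

With this in place, a duplicate `RecordPayment` message should be discarded by the inbox check instead of hitting the payment gateway a second time.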
## Handled Message Retention The way that the idempotency checks work is to keep track of messages that have already been processed in the persisted transactional inbox storage. But of course, you don't want that storage to grow forever and choke off the performance of your system, so Wolverine has a background process to delete messages marked as `Handled` older than a configured threshold with the setting shown below: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // The default is 5 minutes, but if you want to keep // messages around longer (or shorter) in case of duplicates, // this is how you do it opts.Durability.KeepAfterMessageHandling = 10.Minutes(); }).StartAsync(); ``` snippet source | anchor The default is to keep messages for at least 5 minutes. --- --- url: /guide/durability/idempotency.md --- # Idempotent Message Delivery ::: tip There is nothing you need to do to opt into idempotent, no more than once message deduplication other than to be using the durable inbox on any Wolverine listening endpoint where you want this behavior. ::: When applying the [durable inbox](/guide/durability/#using-the-inbox-for-incoming-messages) to [message listeners](/guide/messaging/listeners), you also get a no more than once, [idempotent](https://en.wikipedia.org/wiki/Idempotence) message delivery guarantee. This means that Wolverine will discard any received message that it can detect has been previously handled. Wolverine does this with its durable inbox storage to check on receipt of a new message if that message is already known by its Wolverine identifier. Instead of immediately deleting message storage for a successfully completed message, Wolverine merely marks that the message is handled and keeps that message in storage for a default of 5 minutes to protect against duplicate incoming messages. 
To override that setting, you have this option:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // The default is 5 minutes, but if you want to keep
        // messages around longer (or shorter) in case of duplicates,
        // this is how you do it
        opts.Durability.KeepAfterMessageHandling = 10.Minutes();
    }).StartAsync();
```

---

--- url: /guide/logging.md ---

# Instrumentation and Metrics

Wolverine logs through the standard .NET `ILogger` abstraction, and there's nothing special you need to do to enable that logging other than using one of the standard approaches for bootstrapping a .NET application using `IHostBuilder`. Wolverine is logging all messages sent, received, and executed inline.

::: info
Inside of message handling, Wolverine is using `ILogger<T>` where `T` is the **message type**. So if you want to selectively filter logging levels in your application, rely on the message type rather than the handler type.
:::

## Configuring Message Logging Levels

Wolverine automatically logs the execution start and stop of all message handling with `LogLevel.Debug`. Likewise, Wolverine logs the successful completion of all messages (including the capture of cascading messages and all middleware) with `LogLevel.Information`. However, many folks have found this logging to be too intrusive. Not to worry, you can quickly override the log levels within Wolverine for your system like so:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Turn off all logging of the message execution starting and finishing
        // The default is Debug
        opts.Policies.MessageExecutionLogLevel(LogLevel.None);

        // Turn down Wolverine's built in logging of all successful
        // message processing
        opts.Policies.MessageSuccessLogLevel(LogLevel.Debug);
    }).StartAsync();
```

The sample up above turns down the logging on a global, application level.
If you have some kind of command message where you don't want logging for that particular message type, but do for all other message types, you can override the log level for only that specific message type like so: ```cs public class CustomizedHandler { public void Handle(SpecialMessage message) { // actually handle the SpecialMessage } public static void Configure(HandlerChain chain) { chain.Middleware.Add(new CustomFrame()); // Turning off all execution tracking logging // from Wolverine for just this message type // Error logging will still be enabled on failures chain.SuccessLogLevel = LogLevel.None; chain.ProcessingLogLevel = LogLevel.None; } } ``` snippet source | anchor Methods on message handler types with the signature: ```csharp public static void Configure(HandlerChain chain) ``` will be called by Wolverine to apply message type specific overrides to Wolverine's message handling. ## Configuring Health Check Tracing Wolverine's node agent controller performs health checks periodically (every 10 seconds by default) to maintain node assignments and cluster state. By default, these health checks emit Open Telemetry traces named `wolverine_node_assignments`, which can result in high trace volumes in observability platforms. You can control this tracing behavior through the `DurabilitySettings`: ```cs // Disable the "wolverine_node_assignments" traces entirely opts.Durability.NodeAssignmentHealthCheckTracingEnabled = false; // Or, sample those traces to only once every 10 minutes // opts.Durability.NodeAssignmentHealthCheckTraceSamplingPeriod = TimeSpan.FromMinutes(10); ``` snippet source | anchor ## Controlling Message Specific Logging and Tracing While Open Telemetry tracing can be disabled on an endpoint by endpoint basis, you may want to disable Open Telemetry tracing for specific message types. You may also want to modify the log levels for message success and message execution on a message type by message type basis. 
While you *can* also do that with custom handler chain policies, the easiest way to do that is to use the `[WolverineLogging]` attribute on either the handler type or the handler method as shown below: ```cs public record QuietMessage; public record VerboseMessage; public class QuietAndVerboseMessageHandler { [WolverineLogging( telemetryEnabled:false, successLogLevel: LogLevel.None, executionLogLevel:LogLevel.Trace)] public void Handle(QuietMessage message) { Console.WriteLine("Hush!"); } [WolverineLogging( // Enable Open Telemetry tracing TelemetryEnabled = true, // Log on successful completion of this message SuccessLogLevel = LogLevel.Information, // Log on execution being complete, but before Wolverine does its own book keeping ExecutionLogLevel = LogLevel.Information, // Throw in yet another contextual logging statement // at the beginning of message execution MessageStartingLevel = LogLevel.Debug)] public void Handle(VerboseMessage message) { Console.WriteLine("Tell me about it!"); } } ``` snippet source | anchor ## Log Message Execution Start Wolverine is absolutely meant for "grown up development," so there's a few options for logging and instrumentation. While Open Telemetry logging is built in and will always give you the activity span for message execution start and finish, you may want the start of each message execution to be logged as well. Rather than force your development teams to write repetitive logging statements for every single message handler method, you can ask Wolverine to do that for you: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Opt into having Wolverine add a log message at the beginning // of the message execution opts.Policies.LogMessageStarting(LogLevel.Information); }).StartAsync(); ``` snippet source | anchor This will append log entries looking like this: ```text [09:41:00 INF] Starting to process () ``` With only the defaults, Wolverine is logging the type of message and the message id. 
As shown in the next section, you can also add additional context to these log messages. In conjunction with the "audited members" that are added to these logging statements, all the logging in Wolverine uses structured logging for better searching within your logs.

## Contextual Logging with Audited Members

::: tip
As of version 5.5, Wolverine will automatically audit any property that refers to a [saga identity](/guide/durability/sagas) or to an event stream identity within the [aggregate handler workflow](/guide/durability/marten/event-sourcing) with Marten event sourcing.
:::

::: warning
Be cognizant of the information you're writing to log files or Open Telemetry data and whether or not that data is some kind of protected data like personal data identifiers.
:::

Wolverine gives you the ability to mark public fields or properties on message types as "audited members" that will be part of the logging messages at the beginning of message execution described in the previous section, and also in the Open Telemetry support described in the next section.
To explicitly mark members as "audited", you *can* use attributes within your message types (and these are inherited) like so:

```cs
public class AuditedMessage
{
    [Audit] public string Name { get; set; }

    [Audit("AccountIdentifier")] public int AccountId;
}
```

Or if you are okay using a common message interface for common identification like "this message targets an account/organization/tenant/client", use the `IAccountMessage` shown below:

```cs
// Marker interface
public interface IAccountMessage
{
    public int AccountId { get; }
}

// A possible command that uses our marker interface above
public record DebitAccount(int AccountId, decimal Amount) : IAccountMessage;
```

You can specify audited members through this syntax:

```cs
// opts is WolverineOptions inside of a UseWolverine() call
opts.Policies.ForMessagesOfType<IAccountMessage>().Audit(x => x.AccountId);
```

This will extend your log entries to look like this:

```text
[09:41:00 INFO] Starting to process IAccountMessage ("018761ad-8ed2-4bc9-bde5-c3cbb643f9f3") with AccountId: "c446fa0b-7496-42a5-b6c8-dd53c65c96c8"
```

## Open Telemetry

Wolverine also supports the [Open Telemetry](https://opentelemetry.io/docs/instrumentation/net/) standard for distributed tracing.
To enable the collection of Open Telemetry data, you need to add Wolverine as a data source as shown in this code sample:

```cs
// builder.Services is an IServiceCollection object
builder.Services.AddOpenTelemetryTracing(x =>
{
    x.SetResourceBuilder(ResourceBuilder
            .CreateDefault()
            .AddService("OtelWebApi")) // <-- sets service name
        .AddJaegerExporter()
        .AddAspNetCoreInstrumentation()

        // This is absolutely necessary to collect the Wolverine
        // open telemetry tracing information in your application
        .AddSource("Wolverine");
});
```

```cs
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => { tracing.AddSource("Wolverine"); })
    .WithMetrics(metrics => { metrics.AddMeter("Wolverine"); })
    .UseOtlpExporter();
```

::: tip
Wolverine 1.7 added the ability to disable Open Telemetry tracing on an endpoint by endpoint basis, and **finally** turned off Otel tracing of internal Wolverine messages
:::

Open Telemetry tracing can be selectively disabled on an endpoint by endpoint basis with this API:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts
            .PublishAllMessages()
            .ToPort(2222)

            // Disable Open Telemetry data collection on
            // all messages sent, received, or executed
            // from this endpoint
            .TelemetryEnabled(false);
    }).StartAsync();
```

Note that this `TelemetryEnabled()` method is available on all possible subscriber and listener types within Wolverine. This flag applies to all messages sent, received, or executed at a particular endpoint.

Wolverine endeavors to publish OpenTelemetry spans or activities for meaningful actions within a Wolverine application.
Here are the specific span names, activity names, and tag names emitted by Wolverine:

```cs
/// <summary>
/// ActivityEvent marking when an incoming envelope is discarded
/// </summary>
public const string EnvelopeDiscarded = "wolverine.envelope.discarded";

/// <summary>
/// ActivityEvent marking when an incoming envelope is being moved to the error queue
/// </summary>
public const string MovedToErrorQueue = "wolverine.error.queued";

/// <summary>
/// ActivityEvent marking when an incoming envelope does not have a known message
/// handler and is being shunted to registered "NoHandler" actions
/// </summary>
public const string NoHandler = "wolverine.no.handler";

/// <summary>
/// ActivityEvent marking when a message failure is configured to pause the message listener
/// where the message was handled. This is tied to error handling policies
/// </summary>
public const string PausedListener = "wolverine.paused.listener";

/// <summary>
/// Span that is emitted when a listener circuit breaker determines that there are too many
/// failures and listening should be paused
/// </summary>
public const string CircuitBreakerTripped = "wolverine.circuit.breaker.triggered";

/// <summary>
/// Span emitted when a listening agent is started or restarted
/// </summary>
public const string StartingListener = "wolverine.starting.listener";

/// <summary>
/// Span emitted when a listening agent is stopping
/// </summary>
public const string StoppingListener = "wolverine.stopping.listener";

/// <summary>
/// Span emitted when a listening agent is being paused
/// </summary>
public const string PausingListener = "wolverine.pausing.listener";

/// <summary>
/// ActivityEvent marking that an incoming envelope is being requeued after a message
/// processing failure
/// </summary>
public const string EnvelopeRequeued = "wolverine.envelope.requeued";

/// <summary>
/// ActivityEvent marking that an incoming envelope is being retried after a message
/// processing failure
/// </summary>
public const string EnvelopeRetry = "wolverine.envelope.retried";

/// <summary>
/// ActivityEvent marking that an incoming envelope has been rescheduled for later
/// execution after a failure
/// </summary>
public const string ScheduledRetry = "wolverine.envelope.rescheduled";

/// <summary>
/// Tag name trying to explain why a sender or listener was stopped or paused
/// </summary>
public const string StopReason = "wolverine.stop.reason";

/// <summary>
/// The Wolverine Uri that identifies what sending or listening endpoint the activity
/// refers to
/// </summary>
public const string EndpointAddress = "wolverine.endpoint.address";

/// <summary>
/// A stop reason when back pressure policies call for a pause in processing in a single endpoint
/// </summary>
public const string TooBusy = "TooBusy";

/// <summary>
/// A span emitted when a sending agent for a specific endpoint is paused
/// </summary>
public const string SendingPaused = "wolverine.sending.pausing";

/// <summary>
/// A span emitted when a sending agent is resuming after having been paused
/// </summary>
public const string SendingResumed = "wolverine.sending.resumed";

/// <summary>
/// A stop reason when sending agents are paused after too many sender failures
/// </summary>
public const string TooManySenderFailures = "TooManySenderFailures";
```

## Message Correlation

::: tip
Each individual message transport technology like Rabbit MQ, Azure Service Bus, or Amazon SQS has its own flavor of *Envelope Wrapper*, but Wolverine uses its own `Envelope` structure internally and maps between its canonical representation and the transport specific envelope wrappers at runtime.
:::

As part of Wolverine's instrumentation, it tracks the causality between messages received and published by Wolverine. It also enables you to correlate Wolverine activity back to inputs from outside of Wolverine like ASP.Net Core request ids. The key item here is Wolverine's `Envelope` class (see the [Envelope Wrapper](https://www.enterpriseintegrationpatterns.com/patterns/messaging/EnvelopeWrapper.html) pattern discussed in the venerable Enterprise Integration Patterns) that holds the message and all the metadata for the message within Wolverine handling.
| Property       | Type                | Source                                                           | Description                                                                                  |
|----------------|---------------------|------------------------------------------------------------------|----------------------------------------------------------------------------------------------|
| Id             | `Guid` (Sequential) | Assigned by Wolverine                                            | Identifies a specific Wolverine message                                                      |
| CorrelationId  | `string`            | See the following discussion                                     | Correlating identifier for the logical workflow or system action across multiple actions     |
| ConversationId | `Guid`              | Assigned by Wolverine                                            | Id of the immediate message or workflow that caused this envelope to be sent                 |
| SagaId         | `string`            | Assigned by Wolverine                                            | Identifies the current stateful saga that this message refers to, if part of a stateful saga |
| TenantId       | `string`            | Assigned by user on IMessageBus, but transmitted across messages | User defined tenant identifier for multi-tenancy strategies                                  |

Correlation is a little bit complicated. The correlation id is originally owned at the `IMessageBus` or `IMessageContext` level. By default, the `IMessageBus.CorrelationId` is set to be the [root id of the current System.Diagnostics.Activity](https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.activity.rootid?view=net-7.0#system-diagnostics-activity-rootid). That's convenient, because it will hopefully tie your Wolverine behavior automatically to outside activity like ASP.Net Core HTTP requests.

If you are publishing messages within the context of a Wolverine handler -- either with `IMessageBus` / `IMessageContext` or through cascading messages -- the correlation id of any outgoing messages will be the correlation id of the original message that is being currently handled. If there is no existing correlation id from either a current activity or a previous message, Wolverine will assign a new correlation id as a `Guid` value converted to a string.
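If you want that correlation metadata in your own application logs as well, one low ceremony option is a small piece of "before" middleware that reads it off the current `Envelope`. This is only a sketch of our own making -- `CorrelationLoggingMiddleware` is not a built in Wolverine type -- and it assumes Wolverine's usual convention of injecting `Envelope` and `ILogger` into middleware methods:

```cs
public static class CorrelationLoggingMiddleware
{
    // Runs before each message handler; Wolverine can pass in the
    // current Envelope so we can read the correlation metadata
    public static void Before(Envelope envelope, ILogger logger)
    {
        logger.LogDebug(
            "Handling message {MessageId} with correlation id {CorrelationId} and conversation id {ConversationId}",
            envelope.Id, envelope.CorrelationId, envelope.ConversationId);
    }
}
```

You would attach this with the same `AddMiddleware()` policy mechanism shown elsewhere in this guide.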
## Metrics

Wolverine is automatically tracking several performance related metrics through the [System.Diagnostics.Metrics](https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.metrics?view=net-8.0) types, which sets Wolverine users up for being able to export their system's performance metrics to third party observability tools like Honeycomb or Datadog that support Open Telemetry metrics. The current set of metrics in Wolverine are shown below:

::: warning
The metrics for the inbox, outbox, and scheduled message counts were unfortunately lost when Wolverine introduced multi-tenancy. They will be added back to Wolverine in 4.0.
:::

| Metric Name                  | Metric Type                                                                               | Description                                                                                                                                                                                                                                                                           |
|------------------------------|-------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| wolverine-messages-sent      | [Counter](https://opentelemetry.io/docs/reference/specification/metrics/api/#counter)     | Number of messages sent                                                                                                                                                                                                                                                               |
| wolverine-execution-time     | [Histogram](https://opentelemetry.io/docs/reference/specification/metrics/api/#histogram) | Execution time in milliseconds                                                                                                                                                                                                                                                        |
| wolverine-messages-succeeded | Counter                                                                                   | Number of messages successfully processed                                                                                                                                                                                                                                             |
| wolverine-dead-letter-queue  | Counter                                                                                   | Number of messages moved to dead letter queues                                                                                                                                                                                                                                        |
| wolverine-effective-time     | Histogram                                                                                 | Effective time between a message being sent and being completely handled in milliseconds. Right now this works between Wolverine to Wolverine application sending and from NServiceBus applications sending to Wolverine applications through Wolverine's NServiceBus interoperability. |
| wolverine-execution-failure  | Counter                                                                                   | Number of message execution failures. Tagged by exception type                                                                                                                                                                                                                        |

As a sample set up for publishing metrics, here's a proof of concept built with Honeycomb as the metrics collector:

```csharp
var host = Host.CreateDefaultBuilder(args)
    .UseWolverine((context, opts) =>
    {
        opts.ServiceName = "Metrics";

        // Open Telemetry *should* cover this anyway, but
        // if you want Wolverine to log a message for *beginning*
        // to execute a message, try this
        opts.Policies.LogMessageStarting(LogLevel.Debug);

        // For both Open Telemetry span tracing and the "log message starting..."
        // option above, add the AccountId as a tag for any command that implements
        // the IAccountCommand interface
        opts.Policies.ForMessagesOfType<IAccountCommand>().Audit(x => x.AccountId);

        // Setting up metrics and Open Telemetry activity tracing
        // to Honeycomb
        var honeycombOptions = context.Configuration.GetHoneycombOptions();
        honeycombOptions.MetricsDataset = "Wolverine:Metrics";

        opts.Services.AddOpenTelemetry()
            // enable metrics
            .WithMetrics(x =>
            {
                // Export metrics to Honeycomb
                x.AddHoneycomb(honeycombOptions);
            })
            // enable Otel span tracing
            .WithTracing(x =>
            {
                x.AddHoneycomb(honeycombOptions);
                x.AddSource("Wolverine");
            });
    })
    .UseResourceSetupOnStartup()
    .Build();

await host.RunAsync();
```

### Additional Metrics Tags

You can add additional tags to the performance metrics per message type for system specific correlation in tooling like Datadog, Grafana, or Honeycomb. From an example use case that I personally work with, let's say that our system handles multiple message types that all refer to a specific client entity we're going to call "Organization Code." For the sake of performance correlation and troubleshooting later, we would like to have an idea about how the system performance varies between organizations. To do that, we will be adding the "Organization Code" as a tag to the performance metrics.
First, let's start with a common interface called `IOrganizationRelated` that just provides a common way of exposing the `OrganizationCode` for these message types handled by Wolverine. Next, the mechanism for adding the "Organization Code" to the metrics is to use the `Envelope.SetMetricsTag()` method to tag the current message being processed. Going back to the `IOrganizationRelated` marker interface, we can add some middleware that acts on `IOrganizationRelated` messages to add the metrics tag as shown below:

```cs
// Common interface on message types within our system
public interface IOrganizationRelated
{
    string OrganizationCode { get; }
}

// Middleware just to add a metrics tag for the organization code
public static class OrganizationTaggingMiddleware
{
    public static void Before(IOrganizationRelated command, Envelope envelope)
    {
        envelope.SetMetricsTag("org.code", command.OrganizationCode);
    }
}
```

Finally, we'll add the new middleware to all message handlers where the message implements the `IOrganizationRelated` interface like so:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Add this middleware to all handlers where the message can be cast to
        // IOrganizationRelated
        opts.Policies.ForMessagesOfType<IOrganizationRelated>().AddMiddleware(typeof(OrganizationTaggingMiddleware));
    }).StartAsync();
```

### Tenant Id Tagging

```cs
public static async Task publish_operation(IMessageBus bus, string tenantId, string name)
{
    // All outgoing messages or executed messages from this
    // IMessageBus object will be tagged with the tenant id
    bus.TenantId = tenantId;
    await bus.PublishAsync(new SomeMessage(name));
}
```

---

--- url: /guide/http/marten.md ---

# Integration with Marten

New in Wolverine 1.10.0 is the `Wolverine.Http.Marten` library that adds the ability to more deeply integrate Marten into Wolverine.HTTP by utilizing information from route arguments.
To install that library, use:

```bash
dotnet add package WolverineFx.Http.Marten
```

## Passing Marten Documents to Endpoint Parameters

::: tip
The `[Document]` attribute is still valid, but it's the exact same behavior as the generalized `[Entity]` attribute that is supported by message handlers as well.
:::

::: info
Strong typed identifiers are supported for this usage as of Wolverine 5.0
:::

Consider this very common use case: you have an HTTP endpoint that needs to work on a Marten document that will be loaded using the value of one of the route arguments as that document's identity. In a long hand way, that could look like this:

```cs
[WolverineGet("/invoices/longhand/{id}")]
[ProducesResponseType(404)]
[ProducesResponseType(200, Type = typeof(Invoice))]
public static async Task<IResult> GetInvoice(
    Guid id,
    IQuerySession session,
    CancellationToken cancellationToken)
{
    var invoice = await session.LoadAsync<Invoice>(id, cancellationToken);
    if (invoice == null) return Results.NotFound();

    return Results.Ok(invoice);
}
```

Pretty straightforward, but it's a little annoying to have to scatter in all the attributes for OpenAPI and there's definitely some repetitive code. So let's introduce the new `[Document]` parameter and look at an exact equivalent for both the actual functionality and for the OpenAPI metadata:

```cs
[WolverineGet("/invoices/{id}")]
public static Invoice Get([Document] Invoice invoice)
{
    return invoice;
}
```

Notice that the `[Document]` attribute was able to use the "id" route parameter. By default, Wolverine is looking first for a route variable named "invoiceId" (the document type name + "Id"), then falling back to looking for "id".
You can of course explicitly override the matching of the route argument like so:

```cs
[WolverinePost("/invoices/{number}/approve")]
public static IMartenOp Approve([Document("number")] Invoice invoice)
{
    invoice.Approved = true;
    return MartenOps.Store(invoice);
}
```

In the code above, if the `Invoice` document does not exist, the route will stop and return a status code 404 for Not Found. If you, for whatever reason, want your handler executed even if the document does not exist, then you can set the `DocumentAttribute.Required` property to `false`.

::: info
Starting with Wolverine 3 `DocumentAttribute.Required = true` is the default behavior. In previous versions the default value was `false`.
:::

However, if the document is soft-deleted your endpoint will still be executed. If you want soft-deleted documents to be treated as `NULL` for an endpoint, you can set `MaybeSoftDeleted` to `false`. In combination with `Required = true` that means the endpoint will return 404 for missing and soft-deleted documents.

```cs
[WolverineGet("/invoices/soft-delete/{id}")]
public static Invoice GetSoftDeleted([Document(Required = true, MaybeSoftDeleted = false)] Invoice invoice)
{
    return invoice;
}
```

## Marten Aggregate Workflow

The http endpoints can play inside the full "critter stack" combination with [Marten](https://martendb.io) with Wolverine's [specific support for Event Sourcing and CQRS](/guide/durability/marten/event-sourcing). Originally this has been done by just mimicking the command handler mechanism and having all the inputs come in through the request body (aggregate id, version). Wolverine 1.10 added a more HTTP-centric approach using route arguments.
Because folks always want to insert strong typed identifiers in every possible nook and cranny of their application code, Wolverine 5.0 introduced support for using these custom value types as the stream and/or aggregate identity in all usages of the aggregate handler workflow with Wolverine.HTTP.

### Using Route Arguments

::: tip
The `[Aggregate]` attribute was originally meant for the "aggregate handler workflow" where Wolverine is interacting with Marten with the assumption that it will be appending events to Marten streams and getting you ready for versioning assertions. If all you need is a read only copy of Marten aggregate data, the `[ReadAggregate]` attribute is a lighter weight option. Also, the `[WriteAggregate]` attribute has the exact same behavior as the older `[Aggregate]`, but is available in both message handlers and HTTP endpoints. You may want to prefer `[WriteAggregate]` just to be more clear in the code about what's happening.
:::

To opt into the Wolverine + Marten "aggregate workflow", but use data from route arguments for the aggregate id, use the new `[Aggregate]` attribute from Wolverine.Http.Marten on endpoint method parameters like shown below:

```cs
[WolverinePost("/orders/{orderId}/ship2"), EmptyResponse]
// The OrderShipped return value is treated as an event being posted
// to a Marten event stream instead of as the HTTP response body
// because of the presence of the [EmptyResponse] attribute
public static OrderShipped Ship(ShipOrder2 command, [Aggregate] Order order)
{
    if (order.HasShipped)
        throw new InvalidOperationException("This has already shipped!");

    return new OrderShipped();
}
```

Using this version of the "aggregate workflow", you no longer have to supply a command in the request body, so you could have an endpoint signature like this:

```cs
[WolverinePost("/orders/{orderId}/ship3"), EmptyResponse]
// The OrderShipped return value is treated as an event being posted
// to a Marten event stream instead of as
// the HTTP response body because of the presence of
// the [EmptyResponse] attribute
public static OrderShipped Ship3([Aggregate] Order order)
{
    return new OrderShipped();
}
```

A couple other notes:

* The return value handling for events follows the same rules as shown in the next section
* The endpoints will return a 404 response code if the aggregate in question does not exist
* The aggregate id can be set explicitly like `[Aggregate("number")]` to match against a route argument named "number", or by default the behavior will try to match first on "{camel case name of aggregate type}Id", then a route argument named "id"
* This usage will automatically apply the transactional middleware for Marten

### Using Request Body

::: tip
This usage only requires Wolverine.Marten and does not require the Wolverine.Http.Marten library because there's nothing happening here in regards to Marten that is using AspNetCore
:::

For some context, let's say that we have the following events and [Marten aggregate](https://martendb.io/events/projections/aggregate-projections.html#aggregate-by-stream) to model the workflow of an `Order`:

```cs
// OrderId refers to the identity of the Order aggregate
public record MarkItemReady(Guid OrderId, string ItemName, int Version);

public record OrderShipped;
public record OrderCreated(Item[] Items);
public record OrderReady;
public record OrderConfirmed;

public interface IShipOrder
{
    Guid OrderId { init; }
}

public record ShipOrder(Guid OrderId) : IShipOrder;
public record ShipOrder2(string Description);
public record ItemReady(string Name);

public class Item
{
    public string Name { get; set; }
    public bool Ready { get; set; }
}

public class Order
{
    // For JSON serialization
    public Order(){}

    public Order(OrderCreated created)
    {
        foreach (var item in created.Items) Items[item.Name] = item;
    }

    // This would be the stream id
    public Guid Id { get; set; }

    // This is important, by Marten convention this would
    // be the version of the aggregate
    public int Version { get; set; }

    public DateTimeOffset? Shipped { get; private set; }

    public Dictionary<string, Item> Items { get; set; } = new();

    public bool HasShipped { get; set; }

    // These methods are used by Marten to update the aggregate
    // from the raw events
    public void Apply(IEvent<OrderShipped> shipped)
    {
        Shipped = shipped.Timestamp;
    }

    public void Apply(ItemReady ready)
    {
        Items[ready.Name].Ready = true;
    }

    public void Apply(OrderConfirmed confirmed)
    {
        IsConfirmed = true;
    }

    public bool IsConfirmed { get; set; }

    public bool IsReadyToShip()
    {
        return Shipped == null && Items.Values.All(x => x.Ready);
    }

    public bool IsShipped() => Shipped.HasValue;
}
```

To append a single event to an event stream from an HTTP endpoint, you can use a return value like so:

```cs
[AggregateHandler]
[WolverinePost("/orders/ship"), EmptyResponse]
// The OrderShipped return value is treated as an event being posted
// to a Marten event stream instead of as the HTTP response body
// because of the presence of the [EmptyResponse] attribute
public static OrderShipped Ship(ShipOrder command, Order order)
{
    return new OrderShipped();
}
```

Or potentially append multiple events using the `Events` type as a return value like this sample:

```cs
[AggregateHandler]
[WolverinePost("/orders/itemready")]
public static (OrderStatus, Events) Post(MarkItemReady command, Order order)
{
    var events = new Events();

    if (order.Items.TryGetValue(command.ItemName, out var item))
    {
        item.Ready = true;

        // Mark that this item is ready
        events += new ItemReady(command.ItemName);
    }
    else
    {
        // Some crude validation
        throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order");
    }

    // If the order is ready to ship, also emit an OrderReady event
    if (order.IsReadyToShip())
    {
        events += new OrderReady();
    }

    return (new OrderStatus(order.Id, order.IsReadyToShip()), events);
}
```

### Responding with the Updated Aggregate

See the documentation from the
message handlers on using [UpdatedAggregate](/guide/durability/marten/event-sourcing.html#returning-the-updated-aggregate) for more background on this topic. To return the updated state of a projected aggregate from Marten as the HTTP response from an endpoint using the aggregate handler workflow, return the `UpdatedAggregate` marker type as the first "response value" of your HTTP endpoint like so:

```cs
[AggregateHandler]
[WolverinePost("/orders/{id}/confirm2")]
// The updated version of the Order aggregate will be returned as the response body
// from requesting this endpoint at runtime
public static (UpdatedAggregate, Events) ConfirmDifferent(ConfirmOrder command, Order order)
{
    return (
        new UpdatedAggregate(),
        [new OrderConfirmed()]
    );
}
```

If you should happen to have a message handler or HTTP endpoint signature that uses multiple event streams, but you want the `UpdatedAggregate` to **only** apply to one of the streams, you can use the generic `UpdatedAggregate<T>` to tip off Wolverine about that like in this sample:

```cs
public static class MakePurchaseHandler
{
    // See how we used the generic version
    // of UpdatedAggregate to tell Wolverine we
    // want *only* the XAccount as the response
    // from this handler
    public static UpdatedAggregate<XAccount> Handle(
        MakePurchase command,
        [WriteAggregate] IEventStream<XAccount> account,
        [WriteAggregate] IEventStream<Inventory> inventory)
    {
        if (command.Number > inventory.Aggregate.Quantity
            || (command.Number * inventory.Aggregate.UnitPrice) > account.Aggregate.Balance)
        {
            // Do Nothing!
            return new UpdatedAggregate<XAccount>();
        }

        account.AppendOne(new ItemPurchased(command.InventoryId, command.Number, inventory.Aggregate.UnitPrice));
        inventory.AppendOne(new Drawdown(command.Number));

        return new UpdatedAggregate<XAccount>();
    }
}
```

::: info
Wolverine can't (yet) handle a signature with multiple event streams of the same aggregate type and `UpdatedAggregate`.
:::

## Reading the Latest Version of an Aggregate

::: info
This is using Marten's [FetchLatest](https://martendb.io/events/projections/read-aggregates.html#fetchlatest) API and is limited to single stream projections.
:::

If you want to inject the current state of an event sourced aggregate as a parameter into an HTTP endpoint method, use the `[ReadAggregate]` attribute like this:

```cs
[WolverineGet("/orders/latest/{id}")]
public static Order GetLatest(Guid id, [ReadAggregate] Order order) => order;
```

If the aggregate doesn't exist, the HTTP request will stop with a 404 status code. The aggregate/stream identity is found with the same rules as the `[Entity]` or `[Aggregate]` attributes:

1. You can specify a particular request body property name or route argument
2. Look for a request body property or route argument named "EntityTypeId"
3. Look for a request body property or route argument named "Id" or "id"

### Compiled Query Resource Writer Policy

Marten integration comes with an `IResourceWriterPolicy` policy that handles compiled queries as return types. Register it in `WolverineHttpOptions` like this:

```cs
opts.UseMartenCompiledQueryResultPolicy();
```

If you now return a compiled query from an Endpoint, the result will get directly streamed to the client as JSON, short-circuiting JSON deserialization.

```cs
[WolverineGet("/invoices/approved")]
public static ApprovedInvoicedCompiledQuery GetApproved()
{
    return new ApprovedInvoicedCompiledQuery();
}
```

```cs
public class ApprovedInvoicedCompiledQuery : ICompiledListQuery<Invoice>
{
    public Expression<Func<IMartenQueryable<Invoice>, IEnumerable<Invoice>>> QueryIs()
    {
        return q => q.Where(x => x.Approved);
    }
}
```

--- --- url: /guide/http/sagas.md ---

# Integration with Sagas

Http endpoints can start [Wolverine sagas](/guide/durability/sagas) by just using a return value for a `Saga` value.
Let's say that we have a stateful saga type for making online reservations like this:

```cs
public class Reservation : Saga
{
    public string? Id { get; set; }

    // Apply the CompleteReservation to the saga
    public void Handle(BookReservation book, ILogger logger)
    {
        logger.LogInformation("Completing Reservation {Id}", book.Id);

        // That's it, we're done. Delete the saga state after the message is done.
        MarkCompleted();
    }

    // Delete this Reservation if it has not already been deleted to enforce a "timeout"
    // condition
    public void Handle(ReservationTimeout timeout, ILogger logger)
    {
        logger.LogInformation("Applying timeout to Reservation {Id}", timeout.Id);

        // That's it, we're done. Delete the saga state after the message is done.
        MarkCompleted();
    }
}
```

To start the `Reservation` saga, you could use an HTTP endpoint method like this one:

```cs
[WolverinePost("/reservation")]
public static (
    // The first return value would be written out as the HTTP response body
    ReservationBooked,

    // Because this subclasses from Saga, Wolverine will persist this entity
    // with saga persistence
    Reservation,

    // Other return values that trigger no special handling will be treated
    // as cascading messages
    ReservationTimeout) Post(StartReservation start)
{
    return (new ReservationBooked(start.ReservationId, DateTimeOffset.UtcNow),
        new Reservation { Id = start.ReservationId },
        new ReservationTimeout(start.ReservationId));
}
```

Remember in Wolverine.HTTP that the *first* return value of an endpoint is assumed to be the response body by Wolverine, so if you are wanting to start a new saga from an HTTP endpoint, the `Saga` return value has to be a secondary value in a tuple to opt into the saga mechanics. Alternatively, if all you want to do is *create* a new saga, but nothing else, you can return the `Saga` type *and* force Wolverine to use the return value as a new `Saga` as shown in the snippet below.
Please note that when creating a `Saga` entity in this manner, if it has a static `Start()` method, it will not be invoked. Other message handlers in the `Saga` will behave as usual.

```cs
[WolverinePost("/reservation2")]
// This directs Wolverine to disregard the Reservation return value
// as the response body, and allow Wolverine to use the Reservation
// return as a new saga
[EmptyResponse]
public static Reservation Post2(StartReservation start)
{
    return new Reservation { Id = start.ReservationId };
}
```

--- --- url: /guide/messaging/transports/azureservicebus/interoperability.md ---

# Interoperability

::: tip
Also see the more generic [Wolverine Guide on Interoperability](/tutorials/interop)
:::

Hey, it's a complicated world and Wolverine is a relative newcomer, so it's somewhat likely you'll find yourself needing to make a Wolverine application talk via Azure Service Bus to a non-Wolverine application. Not to worry (too much), Wolverine has you covered with the ability to customize Wolverine to Azure Service Bus mapping.
You can create interoperability with non-Wolverine applications by writing a custom `IAzureServiceBusEnvelopeMapper` as shown in the following sample:

```cs
public class CustomAzureServiceBusMapper : IAzureServiceBusEnvelopeMapper
{
    public void MapEnvelopeToOutgoing(Envelope envelope, ServiceBusMessage outgoing)
    {
        outgoing.Body = new BinaryData(envelope.Data);
        if (envelope.DeliverWithin != null)
        {
            outgoing.TimeToLive = envelope.DeliverWithin.Value;
        }
    }

    public void MapIncomingToEnvelope(Envelope envelope, ServiceBusReceivedMessage incoming)
    {
        envelope.Data = incoming.Body.ToArray();

        // You will have to help Wolverine out by either telling Wolverine
        // what the message type is, or by reading the actual message object,
        // or by telling Wolverine separately what the default message type
        // is for a listening endpoint
        envelope.MessageType = typeof(Message1).ToMessageTypeName();
    }
}
```

To apply that mapper to specific endpoints, use this syntax on any type of Azure Service Bus endpoint:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAzureServiceBus("some connection string")
            .UseConventionalRouting()
            .ConfigureListeners(l => l.InteropWith(new CustomAzureServiceBusMapper()))
            .ConfigureSenders(s => s.InteropWith(new CustomAzureServiceBusMapper()));
    }).StartAsync();
```

--- --- url: /guide/messaging/transports/gcp-pubsub/interoperability.md ---

# Interoperability

::: tip
Also see the more generic [Wolverine Guide on Interoperability](/tutorials/interop)
:::

Hey, it's a complicated world and Wolverine is a relative newcomer, so it's somewhat likely you'll find yourself needing to make a Wolverine application talk via GCP Pub/Sub to a non-Wolverine application. Not to worry (too much), Wolverine has you covered with the ability to customize Wolverine to GCP Pub/Sub mapping.
You can create interoperability with non-Wolverine applications by writing a custom `IPubsubEnvelopeMapper` as shown in the following sample:

```cs
public class CustomPubsubMapper : EnvelopeMapper<PubsubMessage, PubsubMessage>, IPubsubEnvelopeMapper
{
    public CustomPubsubMapper(PubsubEndpoint endpoint) : base(endpoint)
    {
    }

    public void MapOutgoingToMessage(OutgoingMessageBatch outgoing, PubsubMessage message)
    {
        message.Data = ByteString.CopyFrom(outgoing.Data);
    }

    protected override void writeOutgoingHeader(PubsubMessage outgoing, string key, string value)
    {
        outgoing.Attributes[key] = value;
    }

    protected override void writeIncomingHeaders(PubsubMessage incoming, Envelope envelope)
    {
        if (incoming.Attributes is null) return;

        foreach (var pair in incoming.Attributes) envelope.Headers[pair.Key] = pair.Value;
    }

    protected override bool tryReadIncomingHeader(PubsubMessage incoming, string key, out string? value)
    {
        if (incoming.Attributes.TryGetValue(key, out var header))
        {
            value = header;
            return true;
        }

        value = null;
        return false;
    }
}
```

To apply that mapper to specific endpoints, use this syntax on any type of GCP Pub/Sub endpoint:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UsePubsub("your-project-id")
            .UseConventionalRouting()
            .ConfigureListeners(l => l.UseInterop((e, _) => new CustomPubsubMapper(e)))
            .ConfigureSenders(s => s.UseInterop((e, _) => new CustomPubsubMapper(e)));
    }).StartAsync();
```

--- --- url: /guide/messaging/transports/rabbitmq/interoperability.md ---

# Interoperability

::: tip
Also see the more generic [Wolverine Guide on Interoperability](/tutorials/interop)
:::

Hey, it's a complicated world and Wolverine is a relative newcomer, so it's somewhat likely you'll find yourself needing to make a Wolverine application talk via Rabbit MQ to a non-Wolverine application.
Not to worry (too much), Wolverine has you covered with the ability to customize Wolverine to Rabbit MQ mapping and some built in recipes for interoperability with commonly used .NET messaging frameworks.

## Receiving Raw Data

::: tip
Wolverine will be able to publish JSON to non-Wolverine applications out of the box with no further configuration
:::

A lot of Wolverine functionality (request/reply, message correlation) relies on message metadata sent through Rabbit MQ headers. Sometimes though, you'll simply need Wolverine to receive data from external systems that certainly aren't speaking Wolverine's header protocol. In the simplest common scenario, you need Wolverine to be able to process JSON data (JSON is Wolverine's default data format) being published from another system. If you can make the assumption that Wolverine will only be receiving one type of message at a particular queue, and that the data will be valid JSON that can be deserialized to that single message type, you can simply tell Wolverine what the default message type is for that queue like this:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    var rabbitMqConnectionString = builder.Configuration.GetConnectionString("rabbit");

    opts.UseRabbitMq(rabbitMqConnectionString);

    opts.ListenToRabbitQueue("emails")
        // Tell Wolverine to assume that all messages
        // received at this queue are the SendEmail
        // message type
        .DefaultIncomingMessage<SendEmail>();
});

using var host = builder.Build();
await host.StartAsync();
```

With this setting, there are **no other required headers** for Wolverine to process incoming messages. However, Wolverine will be unable to send responses back to the sender and may have a limited ability to create correlated tracking between the upstream non-Wolverine system and your Wolverine system.
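With that configuration, an upstream system only has to publish a JSON body that deserializes cleanly to the configured message type. Assuming a hypothetical `SendEmail` record with `To` and `Subject` properties (the real shape depends entirely on your own message type), a valid raw payload for the "emails" queue would be as simple as:

```json
{
  "To": "someone@example.com",
  "Subject": "Hello from a non-Wolverine system"
}
```

No Wolverine-specific headers are needed on the Rabbit MQ message for this case, which is exactly the point of `DefaultIncomingMessage`.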
## Roll Your Own Interoperability

For interoperability, Wolverine needs to map data elements from the Rabbit MQ client `IBasicProperties` model to Wolverine's internal `Envelope` model. If you want a more advanced interoperability model that actually tries to map message metadata, you can implement Wolverine's `IRabbitMqEnvelopeMapper` as shown in this sample:

```cs
public class SpecialMapper : IRabbitMqEnvelopeMapper
{
    public void MapEnvelopeToOutgoing(Envelope envelope, IBasicProperties outgoing)
    {
        // All of this is default behavior, but this sample does show
        // what's possible here
        outgoing.CorrelationId = envelope.CorrelationId;
        outgoing.MessageId = envelope.Id.ToString();
        outgoing.ContentType = "application/json";

        if (envelope.DeliverBy.HasValue)
        {
            var ttl = Convert.ToInt32(envelope.DeliverBy.Value.Subtract(DateTimeOffset.Now).TotalMilliseconds);
            outgoing.Expiration = ttl.ToString();
        }

        if (envelope.TenantId.IsNotEmpty())
        {
            outgoing.Headers ??= new Dictionary<string, object>();
            outgoing.Headers["tenant-id"] = envelope.TenantId;
        }
    }

    public void MapIncomingToEnvelope(Envelope envelope, IReadOnlyBasicProperties incoming)
    {
        envelope.CorrelationId = incoming.CorrelationId;
        envelope.ContentType = "application/json";
        if (Guid.TryParse(incoming.MessageId, out var id))
        {
            envelope.Id = id;
        }
        else
        {
            envelope.Id = Guid.NewGuid();
        }

        if (incoming.Headers != null && incoming.Headers.TryGetValue("tenant-id", out var tenantId))
        {
            // Watch this in real life, some systems will send header values as
            // byte arrays
            envelope.TenantId = (string)tenantId;
        }
    }
}
```

And register that special mapper like this:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    var rabbitMqConnectionString = builder.Configuration.GetConnectionString("rabbit");

    opts.UseRabbitMq(rabbitMqConnectionString);

    opts.ListenToRabbitQueue("emails")
        // Apply your custom interoperability strategy here
        .UseInterop(new SpecialMapper())

        // You may still want to define the default incoming
        // message as the message type name may not be sent
        // by the upstream system
        .DefaultIncomingMessage<SendEmail>();
});

using var host = builder.Build();
await host.StartAsync();
```

## Publishing to Wolverine through the Rabbit MQ Console

Some users like to use the Rabbit MQ management application to send messages to a running Wolverine application for exploratory testing. To do so with an out of the box Wolverine integration (i.e., you haven't opted out of JSON serialization), put your message as JSON in the `Payload` field and specify that the `type` property (**not header, property**) equals the [Wolverine message type name](/guide/messages.html#message-type-name-or-alias) for the message type, which by default would be the full .NET name.

## Interoperability with NServiceBus

::: warning
You may need to override Wolverine's Rabbit MQ dead letter queue settings to avoid Wolverine and NServiceBus declaring queues with different settings and stomping all over each other. The Wolverine team blames NServiceBus for this one :-)
:::

Wolverine is the new kid on the block, and it's quite likely that many folks will already be using NServiceBus for messaging. Fortunately, Wolverine has some ability to exchange messages with NServiceBus applications, so both tools can live and work together. At this point, the interoperability is only built and tested for the [Rabbit MQ transport](./transports/rabbitmq.md).
Here's a sample:

```cs
Wolverine = await Host.CreateDefaultBuilder().UseWolverine(opts =>
{
    opts.UseRabbitMq()
        .AutoProvision().AutoPurgeOnStartup()
        .BindExchange("wolverine").ToQueue("wolverine")
        .BindExchange("nsb").ToQueue("nsb")
        .BindExchange("NServiceBusRabbitMqService:ResponseMessage").ToQueue("wolverine");

    opts.PublishAllMessages().ToRabbitExchange("nsb")
        // Tell Wolverine to make this endpoint send messages out in a format
        // for NServiceBus
        .UseNServiceBusInterop();

    opts.ListenToRabbitQueue("wolverine")
        .UseNServiceBusInterop()
        .UseForReplies();

    // This facilitates messaging from NServiceBus (or MassTransit) sending as interface
    // types, whereas Wolverine only wants to deal with concrete types
    opts.Policies.RegisterInteropMessageAssembly(typeof(IInterfaceMessage).Assembly);
}).StartAsync();
```

## Interoperability with Mass Transit

Wolverine can interoperate bi-directionally with [MassTransit](https://masstransit-project.com/) using [RabbitMQ](http://www.rabbitmq.com/). At this point, the interoperability is **only** functional if MassTransit is using its standard "envelope" serialization approach (i.e., **not** using raw JSON serialization).
::: warning
At this point, if an endpoint is set up for interoperability with MassTransit, reserve that endpoint for traffic with MassTransit, and don't try to use that endpoint for Wolverine to Wolverine traffic
:::

The configuration to do this is shown below:

```cs
Wolverine = await Host.CreateDefaultBuilder().UseWolverine(opts =>
{
    opts.ApplicationAssembly = GetType().Assembly;

    opts.UseRabbitMq()
        .CustomizeDeadLetterQueueing(new DeadLetterQueue("errors", DeadLetterQueueMode.InteropFriendly))
        .AutoProvision().AutoPurgeOnStartup()
        .BindExchange("wolverine").ToQueue("wolverine")
        .BindExchange("masstransit").ToQueue("masstransit");

    opts.PublishAllMessages().ToRabbitExchange("masstransit")
        // Tell Wolverine to make this endpoint send messages out in a format
        // for MassTransit
        .UseMassTransitInterop();

    opts.ListenToRabbitQueue("wolverine")
        // Tell Wolverine to make this endpoint interoperable with MassTransit
        .UseMassTransitInterop(mt =>
        {
            // optionally customize the inner JSON serialization
        })
        .DefaultIncomingMessage().UseForReplies();
}).StartAsync();
```

--- --- url: /guide/messaging/transports/sqs/interoperability.md ---

# Interoperability

::: tip
Also see the more generic [Wolverine Guide on Interoperability](/tutorials/interop)
:::

Hey, it's a complicated world and Wolverine is a relative newcomer, so it's somewhat likely you'll find yourself needing to make a Wolverine application talk via AWS SQS to a non-Wolverine application. Not to worry (too much), Wolverine has you covered with the ability to customize Wolverine to Amazon SQS mapping.
## Receive Raw JSON

If you need to receive raw JSON from an upstream system *and* you can expect only one message type for the current queue, you can do that with this option:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSqsTransport();

        opts.ListenToSqsQueue("incoming").ReceiveRawJsonMessage(
            // Specify the single message type for this queue
            typeof(Message1),

            // Optionally customize System.Text.Json configuration
            o => { o.PropertyNamingPolicy = JsonNamingPolicy.CamelCase; });
    }).StartAsync();
```

Likewise, to send raw JSON to external systems, you have this option:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSqsTransport();

        opts.PublishAllMessages().ToSqsQueue("outgoing").SendRawJsonMessage(
            // Specify the single message type for this queue
            typeof(Message1),

            // Optionally customize System.Text.Json configuration
            o => { o.PropertyNamingPolicy = JsonNamingPolicy.CamelCase; });
    }).StartAsync();
```

## Advanced Interoperability

For any kind of advanced interoperability between Wolverine and any other kind of application communicating with your Wolverine application using SQS, you can build custom implementations of the `ISqsEnvelopeMapper` like this one:

```cs
public class CustomSqsMapper : ISqsEnvelopeMapper
{
    public string BuildMessageBody(Envelope envelope)
    {
        // Serialized data from the Wolverine message
        return Encoding.Default.GetString(envelope.Data);
    }

    // Specify header values for the SQS message from the Wolverine envelope
    public IEnumerable<KeyValuePair<string, MessageAttributeValue>> ToAttributes(Envelope envelope)
    {
        if (envelope.TenantId.IsNotEmpty())
        {
            yield return new KeyValuePair<string, MessageAttributeValue>("tenant-id",
                new MessageAttributeValue { StringValue = envelope.TenantId });
        }
    }

    public void ReadEnvelopeData(Envelope envelope, string messageBody, IDictionary<string, MessageAttributeValue> attributes)
    {
        envelope.Data = Encoding.Default.GetBytes(messageBody);

        if (attributes.TryGetValue("tenant-id", out var att))
        {
            envelope.TenantId = att.StringValue;
        }
    }
}
```

And apply this to any or all of your SQS endpoints with the configuration fluent interface as shown in this sample:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSqsTransport()
            .UseConventionalRouting()
            .DisableAllNativeDeadLetterQueues()
            .ConfigureListeners(l => l.InteropWith(new CustomSqsMapper()))
            .ConfigureSenders(s => s.InteropWith(new CustomSqsMapper()));
    }).StartAsync();
```

## Receive messages from Amazon SNS

By default, Amazon SNS wraps their messages in a structured format that includes metadata. You can either turn this feature off in SNS or let Wolverine handle the mapping like this:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSqsTransport();

        opts.ListenToSqsQueue("incoming")
            // Interops with SNS structured metadata
            .ReceiveSnsTopicMessage();
    }).StartAsync();
```

It's possible for the original message that was sent to SNS to be in a different format. You can also specify a custom mapper to deal with the format of the original message as shown here:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSqsTransport();

        opts.ListenToSqsQueue("incoming")
            // Interops with SNS structured metadata
            .ReceiveSnsTopicMessage(
                // Sets inner mapper for original message
                new RawJsonSqsEnvelopeMapper(typeof(Message1), new JsonSerializerOptions()));
    }).StartAsync();
```

--- --- url: /tutorials/interop.md ---

# Interoperability with Non Wolverine Systems

::: warning
We greatly expanded the interoperability options in Wolverine for 5.0, but some of the integrations may not have been widely used in real applications outside of testing by the time you try them, especially the MassTransit or NServiceBus integrations for transports besides Rabbit MQ, or CloudEvents with any transport.
Please feel free to post issues to GitHub or use the Discord server to report any issues.
:::

It's a complicated world, Wolverine is a relative newcomer in the asynchronous messaging space in the .NET ecosystem, and who knows what other systems on completely different technical platforms you might have going on. As Wolverine has gained adoption, and as a prerequisite for other folks to even consider adopting Wolverine, we've had to improve Wolverine's ability to exchange messages with non-Wolverine systems. We hope this guide will answer any questions you might have about how to leverage interoperability with Wolverine and non-Wolverine systems.

As is typical for messaging tools, Wolverine has an internal ["envelope wrapper"](https://www.enterpriseintegrationpatterns.com/patterns/messaging/EnvelopeWrapper.html) structure called `Envelope` that holds the .NET message object and/or the binary representation of the message and all known metadata about the message like:

* Correlation information
* The [message type name for Wolverine](/guide/messages.html#message-type-name-or-alias)
* The number of attempts in case of failures
* When a message was originally sent
* The content type of any serialized data
* Topic name, group id, and deduplication id for transports that can use that information
* Information about expected replies and a `ReplyUri` that tells Wolverine where to send any responses to the current message
* Other headers

Here's a little sample of how an `Envelope` might be used internally by Wolverine:

```cs
var message = new ApproveInvoice("1234");

// I'm really creating an outgoing message here
var envelope = new Envelope(message);

// This information is assigned internally,
// but it's good to know that it exists
envelope.CorrelationId = "AAA";

// This would refer to whatever Wolverine message
// started a set of related activity
envelope.ConversationId = Guid.NewGuid();

// For both outgoing and incoming messages,
// this identifies how the message data
// is structured
envelope.ContentType = "application/json";

// When using multi-tenancy, this is used to track
// what tenant a message applies to
envelope.TenantId = "222";

// Not every broker cares about this of course
envelope.GroupId = "BBB";
```

As you can probably imagine, Wolverine uses this structure all throughout its internals to handle, send, track, and otherwise coordinate message processing. When using Wolverine with external transport brokers like Kafka, Pulsar, Google Pubsub, or Rabbit MQ, Wolverine goes through a bi-directional mapping from whatever each broker's own representation of a "message" is to Wolverine's own `Envelope` structure. Likewise, when Wolverine sends messages through an external messaging broker, it's having to map its `Envelope` to the transport's outgoing message structure as shown below:

![Envelope Mapping](/envelope-mappers.png)

As you can probably surmise from the diagram, there's an important abstraction in Wolverine called an "envelope mapper" that does the work of translating Wolverine's `Envelope` structure to and from each message broker's own model for messages.
These abstractions are a little bit different for each external broker, and Wolverine provides some built in mappers for common interoperability scenarios:

| Transport | Envelope Mapper Name | Built In Interop |
|-----------|----------------------|------------------|
| [Rabbit MQ](/guide/messaging/transports/rabbitmq/) | [IRabbitMqEnvelopeMapper](/guide/messaging/transports/rabbitmq/interoperability) | MassTransit, NServiceBus, CloudEvents, Raw Json |
| [Azure Service Bus](/guide/messaging/transports/azureservicebus/) | [IAzureServiceBusEnvelopeMapper](/guide/messaging/transports/azureservicebus/interoperability) | MassTransit, NServiceBus, CloudEvents, Raw Json |
| [Amazon SQS](/guide/messaging/transports/sqs/) | [ISqsEnvelopeMapper](/guide/messaging/transports/sqs/interoperability) | MassTransit, NServiceBus, CloudEvents, Raw Json |
| [Amazon SNS](/guide/messaging/transports/sns) | [ISnsEnvelopeMapper](/guide/messaging/transports/sns.html#interoperability) | MassTransit, NServiceBus, CloudEvents, Raw Json |
| [Kafka](/guide/messaging/transports/kafka) | [IKafkaEnvelopeMapper](/guide/messaging/transports/kafka.html#interoperability) | CloudEvents, Raw Json |
| [Apache Pulsar](/guide/messaging/transports/pulsar) | [IPulsarEnvelopeMapper](/guide/messaging/transports/pulsar.html#interoperability) | CloudEvents |
| [MQTT](/guide/messaging/transports/mqtt) | [IMqttEnvelopeMapper](/guide/messaging/transports/mqtt.html#interoperability) | CloudEvents |
| [Redis](/guide/messaging/transports/redis) | [IRedisEnvelopeMapper](/guide/messaging/transports/redis.html#interoperability) | CloudEvents |

## Writing a Custom Envelope Mapper

Let's say that you're needing to interact with an upstream system that publishes messages to Wolverine through an external message broker in a format that's completely different than what
Wolverine itself uses or any built in envelope mapping recipe -- which is actually quite common. When you map incoming transport messages to Wolverine's `Envelope`, **at a bare minimum**, Wolverine needs to know the binary data that Wolverine will later try to deserialize to a .NET type in its own execution pipeline (`Envelope.Data`) and how to read that binary data into a .NET message object.

When Wolverine tries to handle an incoming `Envelope` in its execution pipeline, it will:

1. Start some Open Telemetry span tracking using the metadata from the incoming `Envelope` to create traceability between the upstream publisher and the current message execution. You don't *have* to support this in your custom mapper, but you'd ideally *like* to have this.
2. Check if the `Envelope` has expired based on its `DeliverBy` property, and discard the `Envelope` if so
3. Try to choose a [message serializer](https://wolverinefx.net/guide/messages.html), first using `Envelope.Serializer` if that was set directly, then the matching serializer based on `Envelope.ContentType` if that exists, and finally falling through to the default serializer for the application (System.Text.Json by default)

As is hopefully clear from that series of steps above, when you are writing to the incoming `Envelope` in a custom mapper, you have to set the binary data for the incoming message, you'd ideally like to set the correlation information on `Envelope` to reflect the incoming data, and you need to either set at least `Envelope.MessageType` so Wolverine knows what message type to try to deserialize to, or set a specific `IMessageSerializer` on `Envelope.Serializer` that Wolverine assumes will "know" how to build out the right type and maybe even infer more valuable metadata onto the `Envelope` from the raw binary data (the MassTransit and CloudEvents interoperability works this way).
In this first sample, I'm going to write a simplistic mapper for Kafka that assumes everything coming into an endpoint is JSON and a specific type:

```cs
// Simplistic envelope mapper that expects every message to be of
// type "TMessage" and serialized as JSON that works perfectly well w/ our
// application's default JSON serialization
public class OurKafkaJsonMapper<TMessage> : IKafkaEnvelopeMapper
{
    // Wolverine needs to know the message type name
    private readonly string _messageTypeName = typeof(TMessage).ToMessageTypeName();

    // Map the Wolverine Envelope structure to the outgoing Kafka structure
    public void MapEnvelopeToOutgoing(Envelope envelope, Message<string, byte[]> outgoing)
    {
        // We'll come back to this later...
        throw new NotSupportedException();
    }

    // Map the incoming message from Kafka to the incoming Wolverine envelope
    public void MapIncomingToEnvelope(Envelope envelope, Message<string, byte[]> incoming)
    {
        // We're making an assumption here that only one type of message
        // is coming in on this particular Kafka topic, so we're telling
        // Wolverine what the message type is according to Wolverine's own
        // message naming scheme
        envelope.MessageType = _messageTypeName;

        // Tell Wolverine to use JSON serialization for the message
        // data
        envelope.ContentType = "application/json";

        // Put the raw binary data right on the Envelope where
        // Wolverine "knows" how to get at it later
        envelope.Data = incoming.Value;
    }
}
```

snippet source | anchor

Which is essentially how the built in "Raw JSON" mapper works in the external transport mappers. In the envelope mapper above we can assume that the actual message data is something that a straightforward serializer can deal with directly, and we really just need to set a few headers. In some cases you might just have to do a little bit different mapping of header information to `Envelope` properties than Wolverine's built in protocol.
For most transports (Amazon SQS and SNS are the exceptions), you can just modify the "header name to Envelope" mappings with something like this example from Azure Service Bus:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // One way or another, you're probably pulling the Azure Service Bus
    // connection string out of configuration
    var azureServiceBusConnectionString = builder
        .Configuration
        .GetConnectionString("azure-service-bus");

    // Connect to the broker in the simplest possible way
    opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision();

    opts.ListenToAzureServiceBusQueue("incoming")
        .UseInterop((queue, mapper) =>
        {
            // Not sure how useful this would be, but we can start from
            // the baseline Wolverine mapping and just override a few mappings
            mapper.MapPropertyToHeader(x => x.ContentType, "OtherTool.ContentType");
            mapper.MapPropertyToHeader(x => x.CorrelationId, "OtherTool.CorrelationId");
            // and more

            // or a little uglier where you might be mapping and transforming data between
            // the transport's model and the Wolverine Envelope
            mapper.MapProperty(x => x.ReplyUri,
                (e, msg) => e.ReplyUri = new Uri($"asb://queue/{msg.ReplyTo}"),
                (e, msg) => msg.ReplyTo = "response");
        });
});
```

snippet source | anchor

That code isn't necessarily for the faint of heart, but it will sometimes be an easier recipe than trying to write a custom mapper from scratch.
The NServiceBus interoperability for everything but the Amazon SQS/SNS transports uses this approach:

```cs
public void UseNServiceBusInterop()
{
    // We haven't tried to address this yet, but NSB can stick in some characters
    // that STJ chokes on, but good ol' Newtonsoft handles just fine
    DefaultSerializer = new NewtonsoftSerializer(new JsonSerializerSettings());

    customizeMapping((m, _) =>
    {
        m.MapPropertyToHeader(x => x.ConversationId, "NServiceBus.ConversationId");
        m.MapPropertyToHeader(x => x.SentAt, "NServiceBus.TimeSent");
        m.MapPropertyToHeader(x => x.CorrelationId!, "NServiceBus.CorrelationId");

        var replyAddress = new Lazy<string>(() =>
        {
            var replyEndpoint = (RabbitMqEndpoint)_parent.ReplyEndpoint()!;
            return replyEndpoint.RoutingKey();
        });

        void WriteReplyToAddress(Envelope e, IBasicProperties props)
        {
            props.Headers["NServiceBus.ReplyToAddress"] = replyAddress.Value;
        }

        void ReadReplyUri(Envelope e, IReadOnlyBasicProperties props)
        {
            if (props.Headers.TryGetValue("NServiceBus.ReplyToAddress", out var raw))
            {
                var queueName = (raw is byte[] b ? Encoding.Default.GetString(b) : raw.ToString())!;
                e.ReplyUri = new Uri($"{_parent.Protocol}://queue/{queueName}");
            }
        }

        m.MapProperty(x => x.ReplyUri!, ReadReplyUri, WriteReplyToAddress);
    });
}
```

snippet source | anchor

Finally, here's another example that works quite differently where the mapper sets a serializer directly on the `Envelope`:

```cs
// This guy is the envelope mapper for interoperating
// with MassTransit
internal class MassTransitMapper : ISqsEnvelopeMapper
{
    private readonly IMassTransitInteropEndpoint _endpoint;
    private MassTransitJsonSerializer _serializer;

    public MassTransitMapper(IMassTransitInteropEndpoint endpoint)
    {
        _endpoint = endpoint;
        _serializer = new MassTransitJsonSerializer(endpoint);
    }

    public MassTransitJsonSerializer Serializer => _serializer;

    public string BuildMessageBody(Envelope envelope)
    {
        return Encoding.UTF8.GetString(_serializer.Write(envelope));
    }

    public IEnumerable<KeyValuePair<string, string>> ToAttributes(Envelope envelope)
    {
        yield break;
    }

    public void ReadEnvelopeData(Envelope envelope, string messageBody, IDictionary<string, string> attributes)
    {
        // TODO -- this could be more efficient of course
        envelope.Data = Encoding.UTF8.GetBytes(messageBody);

        // This is the really important part
        // of the mapping
        envelope.Serializer = _serializer;
    }
}
```

snippet source | anchor

In the case above, the `MassTransitJsonSerializer` uses a two step process that first deserializes a JSON document containing metadata about the message along with embedded JSON for the actual message, then figures out the proper message type to deserialize the inner JSON, and *finally* sends the real message and all the expected correlation metadata about the message on to Wolverine's execution pipeline in such a way that Wolverine can create traceability between MassTransit on the other side and Wolverine.
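To make that two step unwrapping a little more concrete, here's a small standalone sketch using plain `System.Text.Json`. The wrapper shape, property names, and the `ApproveInvoice` type below are purely illustrative and are *not* MassTransit's exact wire format:

```csharp
using System;
using System.Text.Json;

// Hypothetical outer envelope document; the property names here are
// illustrative only, not MassTransit's real serialization contract
var wrapper = """
{
    "messageId": "0af9c0a2-7e5c-4d11-9c57-2c2c7b1f3b10",
    "messageType": ["urn:message:MyApp:ApproveInvoice"],
    "message": { "invoiceId": "1234" }
}
""";

// Step 1: parse the outer document to read the metadata, including
// the type information needed to resolve the .NET message type
using var outer = JsonDocument.Parse(wrapper);
var messageTypeName = outer.RootElement.GetProperty("messageType")[0].GetString();

// Step 2: deserialize only the embedded "message" element once the
// .NET message type has been resolved from that metadata
var command = outer.RootElement.GetProperty("message")
    .Deserialize<ApproveInvoice>(new JsonSerializerOptions(JsonSerializerDefaults.Web));

Console.WriteLine($"{messageTypeName}: {command!.InvoiceId}");

public record ApproveInvoice(string InvoiceId);
```

The real integration does this inside the serializer assigned to `Envelope.Serializer` rather than in the mapper itself, which is what lets Wolverine defer the work until the message is actually handled.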
## Interop with MassTransit

AWS SQS, Azure Service Bus, or Rabbit MQ can interoperate with MassTransit by opting into this setting on an endpoint by endpoint basis as shown in this sample with Rabbit MQ:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // *A* way to configure Rabbit MQ using their Uri schema
        // documented here: https://www.rabbitmq.com/uri-spec.html
        opts.UseRabbitMq(new Uri("amqp://localhost"));

        // Set up a listener for a queue
        opts.ListenToRabbitQueue("incoming1")

            // There is a limitation here in that you will also
            // have to tell Wolverine what the message type is
            // because it cannot today figure out what the Wolverine
            // message type in the current application is from
            // MassTransit's metadata
            .DefaultIncomingMessage<Message1>()

            .UseMassTransitInterop(
                // This is optional, but just letting you know it's there
                interop =>
                {
                    interop.UseSystemTextJsonForSerialization(stj =>
                    {
                        // Don't worry all of this is optional, but
                        // just making sure you know that you can configure
                        // JSON serialization to work seamlessly with whatever
                        // the application on the other end is doing
                    });
                });
    }).StartAsync();
```

snippet source | anchor

Here are some details that you will need to know:

* While Wolverine *can* send message type information to MassTransit, Wolverine is not (yet) able to glean the message type from MassTransit metadata, so you will have to hard code the incoming message type for a particular Wolverine endpoint that is receiving messages from a MassTransit application
* Wolverine is able to do request/reply semantics with MassTransit, but there might be hiccups using Wolverine's automatic reply queues just because of differing naming conventions or reserved characters leaking through.
* You probably want to use `RegisterInteropMessageAssembly(Assembly)` for any assemblies of DTO message types shared between MassTransit and your Wolverine application to help Wolverine map from MassTransit publishing by an interface to Wolverine only handling concrete types

## Interop with NServiceBus

NServiceBus has a wire protocol that is much more similar to Wolverine's, and the interoperability works a little more cleanly -- with the exception, once again, of Amazon SQS and SNS. For the transports that support NServiceBus, opt into the interoperability on an endpoint by endpoint basis with this syntax:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // One way or another, you're probably pulling the Azure Service Bus
    // connection string out of configuration
    var azureServiceBusConnectionString = builder
        .Configuration
        .GetConnectionString("azure-service-bus");

    // Connect to the broker in the simplest possible way
    opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision();

    opts.ListenToAzureServiceBusQueue("incoming")
        .UseNServiceBusInterop();

    // This facilitates messaging from NServiceBus (or MassTransit) sending as interface
    // types, whereas Wolverine only wants to deal with concrete types
    opts.Policies.RegisterInteropMessageAssembly(typeof(IInterfaceMessage).Assembly);
});
```

snippet source | anchor

And some details that you will need to know:

* Wolverine is able to detect the message type from the standard NServiceBus headers.
You *might* need to utilize the [message type aliasing](/guide/messages.html#message-type-name-or-alias) to match the NServiceBus name for a message type
* You probably want to use `RegisterInteropMessageAssembly(Assembly)` for any assemblies of DTO message types shared between NServiceBus and your Wolverine application to help Wolverine map from NServiceBus publishing by an interface to Wolverine only handling concrete types
* Wolverine does support request/reply interactions with NServiceBus. Wolverine is able to interpret and also translate to NServiceBus's version of Wolverine's `Envelope.ReplyUri`

## Interop with CloudEvents

We're honestly not sure how pervasively the [CloudEvents specification](https://cloudevents.io/) is really used outside of Microsoft's [Dapr](https://dapr.io/), but there have been enough mentions of it from the Wolverine community to justify its adoption. CloudEvents works by publishing messages in its own standardized JSON [envelope wrapper](https://www.enterpriseintegrationpatterns.com/patterns/messaging/EnvelopeWrapper.html). The Wolverine to CloudEvents interoperability is mapping between Wolverine's `Envelope` and the CloudEvents JSON payload, with the actual message data being embedded in the CloudEvents JSON.
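For reference, a CloudEvents payload generally looks something like the following. This is a hand-written illustration using the CloudEvents 1.0 attribute names (`specversion`, `id`, `source`, `type`, `datacontenttype`, `data`) with made-up values; the `type` value is what would line up with a Wolverine message type alias:

```json
{
  "specversion": "1.0",
  "type": "approve-invoice",
  "source": "/orders-service",
  "id": "6f6dd9f5-0d1a-4c2e-9a31-8b2f4a1c7e55",
  "time": "2025-01-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "invoiceId": "1234"
  }
}
```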
For the transports that support CloudEvents, you need to opt into the CloudEvents interoperability on an endpoint by endpoint basis like this:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // *A* way to configure Rabbit MQ using their Uri schema
        // documented here: https://www.rabbitmq.com/uri-spec.html
        opts.UseRabbitMq(new Uri("amqp://localhost"));

        // Set up a listener for a queue
        opts.ListenToRabbitQueue("incoming1")

            // Just note that you *can* override the STJ serialization
            // settings for messages coming in with the CloudEvents
            // wrapper
            .InteropWithCloudEvents(new JsonSerializerOptions());
    }).StartAsync();
```

snippet source | anchor

With CloudEvents interoperability:

* Basic correlation and causation is mapped for Open Telemetry style traceability
* Wolverine is again depending on [message type aliases](/guide/messages.html#message-type-name-or-alias) to "know" what message type the CloudEvents envelopes are referring to, and you might very well have to explicitly register message type aliases to bridge the gap between CloudEvents and your Wolverine application.

---
--- url: /guide/handlers/side-effects.md ---

# Isolating Side Effects from Handlers

::: tip
For easier unit testing, it's often valuable to separate the responsibility of "deciding" what to do from the actual "doing." The side effect facility in Wolverine is an example of this strategy.
:::

::: info
Unlike [cascading messages](/guide/handlers/cascading), "side effects" are processed inline with the originating message and within the same logical transaction.
:::

At times, you may wish to make Wolverine message handlers (or HTTP endpoints) be [pure functions](https://en.wikipedia.org/wiki/Pure_function) as a way of making the handler code itself easier to test or even just to understand. All the same, your application will almost certainly be interacting with the outside world of databases, file systems, and external infrastructure of all types.
Not to worry though, Wolverine has some facility to allow you to declare the *[side effects](https://en.wikipedia.org/wiki/Side_effect_\(computer_science\))* as return values from your handler. To make this concrete, let's say that we're building a message handler that will take in some textual content and an id, and then try to write that text to a file at a certain path. In our case, we want to be able to easily unit test the logic that "decides" what content and what file path a message should be written to without ever having any usage of the actual file system (which is notoriously irritating to use in tests).

First off, I'm going to create a new "side effect" type for writing a file like this:

```cs
// This has to be public btw
public record WriteFile(string Path, string Contents)
{
    public Task WriteAsync()
    {
        return File.WriteAllTextAsync(Path, Contents);
    }
}
```

snippet source | anchor

```cs
// ISideEffect is a Wolverine marker interface
public class WriteFile : ISideEffect
{
    public string Path { get; }
    public string Contents { get; }

    public WriteFile(string path, string contents)
    {
        Path = path;
        Contents = contents;
    }

    // Wolverine will call this method.
    public Task ExecuteAsync(PathSettings settings)
    {
        if (!Directory.Exists(settings.Directory))
        {
            Directory.CreateDirectory(settings.Directory);
        }

        return File.WriteAllTextAsync(Path, Contents);
    }
}
```

snippet source | anchor

And the matching message type, message handler, and a settings class for configuration:

```cs
// An options class
public class PathSettings
{
    public string Directory { get; set; }
        = Environment.CurrentDirectory.AppendPath("files");
}

public record RecordText(Guid Id, string Text);

public class RecordTextHandler
{
    // Notice that the concrete WriteFile is the return type in the method signature
    // and not the ISideEffect interface
    public WriteFile Handle(RecordText command)
    {
        return new WriteFile(command.Id + ".txt", command.Text);
    }
}
```

snippet source | anchor

At runtime, Wolverine is generating this code to handle the `RecordText` message:

```csharp
public class RecordTextHandler597515455 : Wolverine.Runtime.Handlers.MessageHandler
{
    public override System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
    {
        var recordTextHandler = new CoreTests.Acceptance.RecordTextHandler();
        var recordText = (CoreTests.Acceptance.RecordText)context.Envelope.Message;
        var pathSettings = new CoreTests.Acceptance.PathSettings();
        var outgoing1 = recordTextHandler.Handle(recordText);

        // Placed by Wolverine's ISideEffect policy
        return outgoing1.ExecuteAsync(pathSettings);
    }
}
```

To explain what is happening up above, when Wolverine sees that any return value from a message handler implements the `Wolverine.ISideEffect` interface, Wolverine knows that that value should have a method named either `Execute()` or `ExecuteAsync()` that should be executed instead of treating the return value as a cascaded message.
The method discovery is completely by method name, and it's perfectly legal to use arguments for any of the same types available to the actual message handler like:

* Service dependencies from the application's IoC container
* The actual message
* Any objects created by middleware
* `CancellationToken`
* Message metadata from `Envelope`

You can find more usages of side effect return values in the [Marten side effect operations](/guide/durability/marten/operations). Please note that it's not valid to return `ISideEffect` as the return type of your method. Wolverine will throw an exception asking you to return the concrete type (or at least an abstract or interface type that has the `Execute` or `ExecuteAsync` method).

## Storage Side Effects

::: info
Wolverine may not be a functional programming toolset per se, but it's at least "FP-adjacent." The storage side effects explained in this section are arguably side effect monads from functional programming where the goal is to keep the behavioral logic function "pure" so that it can be easily tested and reasoned about without any of the actual persistence infrastructure being involved. The actual "side effect" object will be part of invoking the actual persistence tooling to make writes to the underlying database.
:::

It's more than likely that your application using Wolverine will be using some kind of persistence tooling that you use to load and persist entity objects. Wolverine has first class support for designating entity values for persistence as part of its philosophy of utilizing [pure functions](https://en.wikipedia.org/wiki/Pure_function) for the behavioral part of message handlers or HTTP endpoint methods -- and this is advantageous because it allows you to write behavioral code in your message handlers or HTTP endpoints that is easy to unit test and reason about without having to employ high ceremony layering approaches.
::: info
Returning any kind of `IStorageAction` type or the `UnitOfWork` type from a handler method or HTTP endpoint method will automatically apply transactional middleware around that handler or endpoint regardless of whether auto transactions are configured.
:::

As a quick, concrete example, let's say that you have a message handler that conditionally creates a new `Item` if the request doesn't contain any profanity (it's late and I'm struggling to come up with sample use cases). With the storage side effect approach, you could code that like this:

```cs
public record CreateItem(Guid Id, string Name);

public static class CreateItemHandler
{
    // It's always a struggle coming up with sample use cases
    public static IStorageAction<Item> Handle(
        CreateItem command,
        IProfanityDetector detector)
    {
        // First see if the name is valid
        if (detector.HasProfanity(command.Name))
        {
            // and if not, do nothing
            return Storage.Nothing<Item>();
        }

        return Storage.Insert(new Item
        {
            Id = command.Id,
            Name = command.Name
        });
    }
}
```

snippet source | anchor

In the handler above, if we return `Wolverine.Persistence.IStorageAction<T>`, that's recognized by Wolverine as a side effect that means an action should be taken to persist or delete an entity by the underlying persistence mechanism.
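Because the handler only *describes* the storage action, it can be unit tested with no database and no Wolverine runtime. Here's a minimal sketch assuming xUnit and Shouldly; `StubDetector` is a hand-rolled stand-in for `IProfanityDetector`, and the concrete `Insert<Item>` and `Nothing<Item>` type names are assumptions inferred from the `Storage` factory methods shown above:

```csharp
// Hypothetical stub for the IProfanityDetector dependency
public class StubDetector : IProfanityDetector
{
    public bool HasProfanity(string text) => text.Contains("wolverines are bad");
}

public class CreateItemHandlerTests
{
    [Fact]
    public void inserts_the_item_for_a_clean_name()
    {
        // Just calling a pure function and inspecting the "decision"
        var action = CreateItemHandler.Handle(
            new CreateItem(Guid.NewGuid(), "Perfectly Nice Name"),
            new StubDetector());

        // Assuming Storage.Insert() returns an Insert<Item> action
        action.ShouldBeOfType<Insert<Item>>();
    }

    [Fact]
    public void does_nothing_for_a_profane_name()
    {
        var action = CreateItemHandler.Handle(
            new CreateItem(Guid.NewGuid(), "wolverines are bad"),
            new StubDetector());

        // Assuming Storage.Nothing<Item>() returns a Nothing<Item> action
        action.ShouldBeOfType<Nothing<Item>>();
    }
}
```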
Assuming that your application is using an EF Core service named `ItemDbContext` to persist the `Item` entities, the storage action side effect workflow at runtime is something like this:

```mermaid
sequenceDiagram
    note right of MessageHandler: MessageHandler is generated by Wolverine
    MessageHandler->>CreateItemHandler:Handle()
    CreateItemHandler-->>MessageHandler: storage action side effect
    note left of EfCoreStorageActionApplier: This is Wolverine relaying the side effect to the DbContext
    MessageHandler->>EfCoreStorageActionApplier:Apply(ItemDbContext, storage action side effect)
    MessageHandler->>ItemDbContext:SaveChangesAsync()
```

Wolverine itself is generating the necessary code to take the side effect object you returned and apply that to the right persistence tool for the wrapped entity. In all cases, if you're ever curious or having any trouble understanding what Wolverine is doing with your side effect return types, look at the [pre-generated message handler code](/guide/codegen).

As a convenience, you can create these side effect return values by using the static factory methods on the `Wolverine.Persistence.Storage` class, or just directly build a return value like:

```csharp
return new Insert<Item>(new Item());
```

This storage side effect model can support these operations:

1. `Insert` -- some persistence tools will use their "upsert" functionality here
2. `Update`
3. `Store` -- which means "upsert" for the persistence tools like Marten or RavenDb that natively support upserts. For EF Core, this is an `Update`
4. `Delete` -- delete that entity
5. `Nothing` -- do absolutely nothing, but at least you don't have to return a null

In your method signatures, you can:

* Return `IStorageAction<T>` which allows your handler or HTTP endpoint method to have some logic about whether the wrapped entity should be inserted, updated, deleted, or do absolutely nothing depending on business rules
* Return a specific `Delete` or `Insert` or other storage action types
* Use any of these types in a tuple return value just like any other type of side effect value
* Return null values, in which case Wolverine is smart enough to do nothing

::: info
As of now, this usage is supported by Wolverine's [Marten](/guide/durability/marten/), [EF Core](/guide/durability/efcore), and [RavenDb](/guide/durability/ravendb) integrations. Do note that not every persistence integration supports the `Store()` ["upsert"](https://en.wiktionary.org/wiki/upsert) capability (EF Core does not).
:::

If you want to return a variable number of storage actions from a message handler, you'll want to use the `Wolverine.Persistence.UnitOfWork<T>` type as a return type as shown below:

```cs
public record StoreMany(string[] Adds);

public static class StoreManyHandler
{
    public static UnitOfWork<Todo> Handle(StoreMany command)
    {
        var uow = new UnitOfWork<Todo>();
        foreach (var add in command.Adds)
        {
            uow.Insert(new Todo { Id = add });
        }

        return uow;
    }
}
```

snippet source | anchor

The `UnitOfWork<T>` is really just a `List<IStorageAction<T>>` that can relay zero to many storage actions to your underlying persistence tooling.

---
--- url: /guide/http/json.md ---

# JSON Serialization

::: warning
At this point WolverineFx.Http **only** supports `System.Text.Json` as the default for the HTTP endpoints, with the JSON settings coming from the application's Minimal API configuration.
:::

::: tip
You can tell Wolverine to ignore all return values as the request body by decorating either the endpoint method or the whole endpoint class with `[EmptyResponse]`
:::

As explained up above, the "request" type to a Wolverine endpoint is the first argument that is:

1. Concrete
2. Not one of the value types that Wolverine considers for route or query string values
3. *Not* marked with `[FromServices]` from ASP.Net Core

If a parameter like this exists, that will be the request type, and will come at runtime from deserializing the HTTP request body as JSON. Likewise, any resource type besides strings will be written to the HTTP response body as serialized JSON.

In this sample endpoint, both the request and resource types are dealt with by JSON serialization. Here's the test from the actual Wolverine codebase:

```cs
[Fact]
public async Task post_json_happy_path()
{
    // This test is using Alba to run an end to end HTTP request
    // and interrogate the results
    var response = await Scenario(x =>
    {
        x.Post.Json(new Question { One = 3, Two = 4 }).ToUrl("/question");
        x.WithRequestHeader("accept", "application/json");
    });

    var result = await response.ReadAsJsonAsync();
    result.Product.ShouldBe(12);
    result.Sum.ShouldBe(7);
}
```

snippet source | anchor

## Configuring System.Text.Json

Wolverine depends on the `IOptions<JsonOptions>` value registered in your application container for System.Text.Json configuration.
But, because there are multiple `JsonOptions` types in the AspNetCore world and it's way too easy to pick the wrong one and get confused and angry about why your configuration isn't impacting Wolverine, there's this extension method helper that will do the right thing behind the scenes:

```cs
var builder = WebApplication.CreateBuilder();
builder.Host.UseWolverine();

builder.Services.ConfigureSystemTextJsonForWolverineOrMinimalApi(o =>
{
    // Do whatever you want here to customize the JSON
    // serialization
    o.SerializerOptions.WriteIndented = true;
});

var app = builder.Build();

app.MapWolverineEndpoints();

return await app.RunJasperFxCommands(args);
```

snippet source | anchor

## Using Newtonsoft.Json

::: tip
Newtonsoft.Json is still much more battle hardened than System.Text.Json, and you may need to drop back to Newtonsoft.Json for various scenarios. This feature was added specifically at the request of F# developers.
:::

To opt into using Newtonsoft.Json for the JSON serialization of *HTTP endpoints*, you have this option within the call to the `MapWolverineEndpoints()` configuration:

```cs
var builder = WebApplication.CreateBuilder([]);

builder.Services.AddScoped();
builder.Services.AddMarten(Servers.PostgresConnectionString)
    .IntegrateWithWolverine();

builder.Host.UseWolverine(opts => { opts.Discovery.IncludeAssembly(GetType().Assembly); });

builder.Services.AddWolverineHttp();

await using var host = await AlbaHost.For(builder, app =>
{
    app.MapWolverineEndpoints(opts =>
    {
        // Opt into using Newtonsoft.Json for JSON serialization just with Wolverine.HTTP routes
        // Configuring the JSON serialization is optional
        opts.UseNewtonsoftJsonForSerialization(settings => settings.TypeNameHandling = TypeNameHandling.All);
    });
});
```

snippet source | anchor

---
--- url: /tutorials/leader-election.md ---

# Leader Election and Agents

![Who's in charge?](/leader-election.webp)

Wolverine has a couple of important features that enable Wolverine to distribute stateful, background work
by assigning running agents to certain running nodes within an application cluster. To do so, Wolverine has a built in [leader election](https://en.wikipedia.org/wiki/Leader_election) feature so that it can make one single node run a "leadership agent" that continuously ensures that all known and supported agents are running within the system on a single node. Here's an illustration of that work distribution:

![Work Distribution across Nodes](/leader-election-diagram.png)

Within Wolverine itself, there are a couple of types of "agents" that Wolverine distributes:

1. The ["durability agents"](/guide/durability/) that poll against message stores for any stranded inbox or outbox messages that might need to be recovered and pushed along. Wolverine runs exactly one agent for each message store in the system, and distributes these across the cluster
2. "Exclusive listeners" within Wolverine when you direct Wolverine to only listen to a queue, topic, or message subscription on a single node. This happens when you use the [strictly ordered listening](/guide/messaging/listeners.html#strictly-ordered-listeners) option.
3. In conjunction with [Marten](https://martendb.io), the [Wolverine managed projection and subscription distribution](/guide/durability/marten/distribution) uses Wolverine's agent assignment capability to make sure each projection or subscription is running on exactly one node.

## Enabling Leader Election

Leader election is on by default in Wolverine **if** you have any type of message persistence configured for your application and some mechanism for cross node communication. First though, let's talk about message persistence.
It could be by PostgreSQL: ```cs var builder = WebApplication.CreateBuilder(args); var connectionString = builder.Configuration.GetConnectionString("postgres"); builder.Host.UseWolverine(opts => { // Setting up Postgresql-backed message storage // This requires a reference to Wolverine.Postgresql opts.PersistMessagesWithPostgresql(connectionString); // Other Wolverine configuration }); // This is rebuilding the persistent storage database schema on startup // and also clearing any persisted envelope state builder.Host.UseResourceSetupOnStartup(); var app = builder.Build(); // Other ASP.Net Core configuration... // Using JasperFx opens up command line utilities for managing // the message storage return await app.RunJasperFxCommands(args); ``` snippet source | anchor or by SQL Server: ```cs var builder = WebApplication.CreateBuilder(args); var connectionString = builder.Configuration.GetConnectionString("sqlserver"); builder.Host.UseWolverine(opts => { // Setting up Sql Server-backed message storage // This requires a reference to Wolverine.SqlServer opts.PersistMessagesWithSqlServer(connectionString); // Other Wolverine configuration }); // This is rebuilding the persistent storage database schema on startup // and also clearing any persisted envelope state builder.Host.UseResourceSetupOnStartup(); var app = builder.Build(); // Other ASP.Net Core configuration... // Using JasperFx opens up command line utilities for managing // the message storage return await app.RunJasperFxCommands(args); ``` snippet source | anchor or through the Marten integration: ```cs // Adding Marten builder.Services.AddMarten(opts => { var connectionString = builder.Configuration.GetConnectionString("Marten"); opts.Connection(connectionString); opts.DatabaseSchemaName = "orders"; }) // Adding the Wolverine integration for Marten. 
.IntegrateWithWolverine();
```
snippet source | anchor

or by RavenDb:

```cs
var builder = Host.CreateApplicationBuilder();

// You'll need a reference to RavenDB.DependencyInjection
// for this one
builder.Services.AddRavenDbDocStore(raven =>
{
    // configure your RavenDb connection here
});

builder.UseWolverine(opts =>
{
    // That's it, nothing more to see here
    opts.UseRavenDbPersistence();

    // The RavenDb integration supports basic transactional
    // middleware just fine
    opts.Policies.AutoApplyTransactions();
});

// continue with your bootstrapping...
```
snippet source | anchor

Next, we need to have some kind of mechanism for cross node communication within Wolverine in the form of control queues for each node. When Wolverine bootstraps, it uses the message persistence to save information about the new node, including a `Uri` for a control endpoint where other Wolverine nodes should send messages to "control" agent assignments. If you're using any of the message persistence options above, there's a fallback mechanism using the associated databases to act as a simplistic message queue between nodes. For better results though, some of the transports in Wolverine can instead use a lightweight, non-durable queue for each node. At the time this guide was written, the [Rabbit MQ transport](/guide/messaging/transports/rabbitmq/) and the [Azure Service Bus transport](/guide/messaging/transports/azureservicebus/) support this feature.
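For example, with the Rabbit MQ transport you can opt into dedicated control queues through the `EnableWolverineControlQueues()` option. This is a minimal sketch; the choice of PostgreSQL persistence and the connection string are placeholder assumptions:

```cs
var builder = Host.CreateApplicationBuilder();

builder.UseWolverine(opts =>
{
    // Some kind of message persistence is still required
    // for the node and agent tracking
    opts.PersistMessagesWithPostgresql("your postgres connection string");

    // Direct Wolverine to use temporary, non-durable Rabbit MQ queues
    // for the node to node communication instead of falling back
    // to database polling
    opts.UseRabbitMq().EnableWolverineControlQueues();
});
```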
## Disabling Leader Election If you want to disable leader election and all the cross node traffic, or maybe if you just want to optimize automated testing scenarios by making a newly launched process automatically start up all possible agents immediately, you can use the `DurabilityMode.Solo` setting as shown below: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.Services.AddMarten("some connection string") // This adds quite a bit of middleware for // Marten .IntegrateWithWolverine(); // You want this maybe! opts.Policies.AutoApplyTransactions(); if (builder.Environment.IsDevelopment()) { // But wait! Optimize Wolverine for usage as // if there would never be more than one node running opts.Durability.Mode = DurabilityMode.Solo; } }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor For testing, you also have this helper: ```cs // This is bootstrapping the actual application using // its implied Program.Main() set up // For non-Alba users, this is using IWebHostBuilder Host = await AlbaHost.For(x => { x.ConfigureServices(services => { // Override the Wolverine configuration in the application // to run the application in "solo" mode for faster // testing cold starts services.RunWolverineInSoloMode(); // And just for completion, disable all Wolverine external // messaging transports services.DisableAllExternalWolverineTransports(); }); }); ``` snippet source | anchor Likewise, any other `DurabilityMode` setting than `Balanced` (the default) will disable leader election. 
## Writing Your Own Agent Family

To write your own family of "sticky" agents and use Wolverine to distribute them across an application cluster, you'll first need to make implementations of this interface:

```cs
/// <summary>
/// Models a constantly running background process within a Wolverine
/// node cluster
/// </summary>
public interface IAgent : IHostedService // Standard .NET interface for background services
{
    /// <summary>
    /// Unique identification for this agent within the Wolverine system
    /// </summary>
    Uri Uri { get; }

    // Not really used for anything real *yet*, but
    // hopefully becomes something useful for CritterWatch
    // health monitoring
    AgentStatus Status { get; }
}
```
snippet source | anchor

```cs
/// <summary>
/// Models a constantly running background process within a Wolverine
/// node cluster
/// </summary>
public interface IAgent : IHostedService // Standard .NET interface for background services
{
    /// <summary>
    /// Unique identification for this agent within the Wolverine system
    /// </summary>
    Uri Uri { get; }

    // Not really used for anything real *yet*, but
    // hopefully becomes something useful for CritterWatch
    // health monitoring
    AgentStatus Status { get; }
}

public class CompositeAgent : IAgent
{
    private readonly List<IAgent> _agents;

    public Uri Uri { get; }

    public CompositeAgent(Uri uri, IEnumerable<IAgent> agents)
    {
        Uri = uri;
        _agents = agents.ToList();
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        foreach (var agent in _agents)
        {
            await agent.StartAsync(cancellationToken);
        }

        Status = AgentStatus.Running;
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        foreach (var agent in _agents)
        {
            await agent.StopAsync(cancellationToken);
        }

        Status = AgentStatus.Stopped;
    }

    public AgentStatus Status { get; private set; } = AgentStatus.Stopped;
}
```
snippet source | anchor

Note that you could use [BackgroundService](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-9.0\&tabs=visual-studio) as a base class.
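As a sketch of that approach, here is a hypothetical agent built on `BackgroundService`; everything below other than `IAgent`, `AgentStatus`, and `BackgroundService` itself is an illustrative name:

```cs
using Microsoft.Extensions.Hosting;
using Wolverine.Runtime.Agents;

// Hypothetical agent that does some recurring work on a timer loop.
// BackgroundService already implements IHostedService, so only the
// Uri and Status members of IAgent are left to fill in
public class PingAgent : BackgroundService, IAgent
{
    public PingAgent(Uri uri)
    {
        Uri = uri;
    }

    public Uri Uri { get; }
    public AgentStatus Status { get; private set; } = AgentStatus.Stopped;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        Status = AgentStatus.Running;
        try
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                // Do the recurring background work here...
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
            }
        }
        catch (OperationCanceledException)
        {
            // Normal shutdown, nothing to do
        }
        finally
        {
            Status = AgentStatus.Stopped;
        }
    }
}
```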
The `Uri` property just needs to be unique and match up with our next service interface. Wolverine uses that `Uri` as a unique identifier to track where and whether the known agents are executing. The next service is the actual distributor. To plug into Wolverine, you need to build an implementation of this service:

```cs
/// <summary>
/// Pluggable model for managing the assignment and execution of stateful, "sticky"
/// background agents on the various nodes of a running Wolverine cluster
/// </summary>
public interface IAgentFamily
{
    /// <summary>
    /// Uri scheme for this family of agents
    /// </summary>
    string Scheme { get; }

    /// <summary>
    /// List of all the possible agents by their identity for this family of agents
    /// </summary>
    ValueTask<IReadOnlyList<Uri>> AllKnownAgentsAsync();

    /// <summary>
    /// Create or resolve the agent for this family
    /// </summary>
    ValueTask<IAgent> BuildAgentAsync(Uri uri, IWolverineRuntime wolverineRuntime);

    /// <summary>
    /// All supported agent uris by this node instance
    /// </summary>
    ValueTask<IReadOnlyList<Uri>> SupportedAgentsAsync();

    /// <summary>
    /// Assign agents to the currently running nodes when new nodes are detected or existing
    /// nodes are deactivated
    /// </summary>
    ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments);
}
```
snippet source | anchor

In this case, you can plug custom `IAgentFamily` strategies into Wolverine by just registering a concrete service in your DI container against that `IAgentFamily` interface (`services.AddSingleton<IAgentFamily, YourAgentFamily>();`). Wolverine does a simple `IServiceProvider.GetServices<IAgentFamily>()` during its bootstrapping to find them. As you can probably guess, the `Scheme` should be unique, and the `Uri` structure needs to be unique across all of your agents.
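To make that concrete, here is a minimal, illustrative `IAgentFamily`. The `feed` scheme, the feed names, and the `FeedAgent` class are all hypothetical stand-ins; only `IAgentFamily`, `IAgent`, `AgentStatus`, `IWolverineRuntime`, and `AssignmentGrid` come from Wolverine:

```cs
using Wolverine.Runtime;
using Wolverine.Runtime.Agents;

// Hypothetical "sticky" agent that watches a single named feed
public class FeedAgent : IAgent
{
    public FeedAgent(Uri uri) => Uri = uri;

    public Uri Uri { get; }
    public AgentStatus Status { get; private set; } = AgentStatus.Stopped;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Start watching the feed here...
        Status = AgentStatus.Running;
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        Status = AgentStatus.Stopped;
        return Task.CompletedTask;
    }
}

// Illustrative family that exposes one agent per configured feed name
public class FeedAgentFamily : IAgentFamily
{
    private readonly string[] _feedNames = { "orders", "invoices" };

    public string Scheme => "feed";

    public ValueTask<IReadOnlyList<Uri>> AllKnownAgentsAsync()
    {
        IReadOnlyList<Uri> uris = _feedNames
            .Select(name => new Uri($"feed://{name}"))
            .ToList();

        return new ValueTask<IReadOnlyList<Uri>>(uris);
    }

    public ValueTask<IAgent> BuildAgentAsync(Uri uri, IWolverineRuntime wolverineRuntime)
    {
        // Build the running agent for this identity
        return new ValueTask<IAgent>(new FeedAgent(uri));
    }

    public ValueTask<IReadOnlyList<Uri>> SupportedAgentsAsync()
    {
        // This node is able to run any of the known agents
        return AllKnownAgentsAsync();
    }

    public ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments)
    {
        // Spread the agents evenly across the running nodes
        assignments.DistributeEvenly(Scheme);
        return ValueTask.CompletedTask;
    }
}
```

With that registered in your DI container as an `IAgentFamily`, the leader node would spread the two hypothetical feed agents across whatever nodes are running.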
`EvaluateAssignmentsAsync()` is your hook to create distribution strategies, and a simple “just distribute these things evenly across my cluster” strategy can be as little code as this example from Wolverine itself:

```csharp
public ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments)
{
    assignments.DistributeEvenly(Scheme);
    return ValueTask.CompletedTask;
}
```

If you go looking for it, the equivalent in Wolverine’s distribution of Marten projections and subscriptions is a tiny bit more complicated in that it uses knowledge of node capabilities to support blue/green semantics to only distribute work to the servers that “know” how to use particular agents (like version 3 of a projection that doesn’t exist on “blue” nodes):

```csharp
public ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments)
{
    assignments.DistributeEvenlyWithBlueGreenSemantics(SchemeName);
    return new ValueTask();
}
```

The `AssignmentGrid` tells you the current state of your application in terms of which node is the leader, what all the currently running nodes are, and which agents are running on which nodes. Beyond the even distribution, the `AssignmentGrid` has fine-grained API methods to start, stop, or reassign individual agents to specific running nodes.

To wrap this up, I’m trying to guess at the questions you might have and see if I can cover all the bases:

* **Is some kind of persistence necessary?** Yes, absolutely. Wolverine has to have some way to “know” what nodes are running and which agents are really running on each node.
* **How does Wolverine do health checks for each node?** If you look in the wolverine\_nodes table when using PostgreSQL or Sql Server, you’ll see a heartbeat column with a timestamp. Each Wolverine application is running a polling operation that updates its heartbeat timestamp and also checks that there is a known leader node.
In normal shutdown, Wolverine tries to gracefully mark the current node as offline and send a message to the current leader node if there is one telling the leader that the node is shutting down. In real world usage though, Kubernetes or who knows what is frequently killing processes without a clean shutdown. In that case, the leader node will be able to detect stale nodes that are offline, eject them from the node persistence, and redistribute agents.

* **Can Wolverine switch over the leadership role?** Yes, and that should be relatively quick. Plus Wolverine would keep trying to start a leader election if none is found. Still, it’s an imperfect world where things can go wrong, and there will 100% be the ability to either kickstart or assign the leader role from the forthcoming CritterWatch user interface.
* **How does the leadership election work?** Crudely and relatively effectively. All of the storage mechanics today have some kind of sequential node number assignment for all newly persisted nodes. In a kind of simplified “Bully Algorithm,” Wolverine will always try to send “try assume leadership” messages to the node with the lowest sequential node number, which will always be the longest running node. When a node does try to take leadership, it uses whatever kind of global, advisory lock function the current persistence uses to get sole access to write the leader node assignment to itself, but will back out if the current node detects from storage that the leadership is already running on another active node.

## Singular Agent

::: info
`SingularAgent` is trying to assign itself to the "first" node that is not the leader, but will choose the leader if there is only one node. `SingularAgent` will not reassign itself to other nodes as long as it is running anywhere. If you need more sophisticated assignment logic, you will need to write a custom `IAgentFamily` and register that in your DI container.
:::

What if all you really want is a single `IAgent` for some kind of background process, and that agent should only ever be running on one single node? Wolverine has the `SingularAgent` base class just for that scenario. See this sample from our tests:

```cs
using JasperFx.Core;
using Wolverine.Runtime.Agents;

namespace Wolverine.ComplianceTests;

public class SimpleSingularAgent : SingularAgent
{
    private CancellationTokenSource _cancellation = new();
    private Timer _timer;

    // The scheme argument is meant to be descriptive and
    // your agent will have the Uri {scheme}:// in all diagnostics
    // and node assignment storage
    public SimpleSingularAgent() : base("simple")
    {
    }

    // This template method should be used to start up your background service
    protected override Task startAsync(CancellationToken cancellationToken)
    {
        _cancellation = new();
        _timer = new Timer(execute, null, 1.Seconds(), 5.Seconds());
        return Task.CompletedTask;
    }

    private void execute(object? state)
    {
        // Do something...
    }

    // This template method should be used to cleanly stop your background service
    protected override Task stopAsync(CancellationToken cancellationToken)
    {
        _timer.SafeDispose();
        return Task.CompletedTask;
    }
}
```
snippet source | anchor

To add that to your Wolverine system, we've added this convenience method:

```cs
// Little extension method helper on IServiceCollection to register your
// SingularAgent
opts.Services.AddSingularAgent();
```
snippet source | anchor

In the end, what you need is an `IAgentFamily` that can assign a singular `IAgent` to one and only one node within your system. `SingularAgent` just makes that a little bit simpler.
--- --- url: /guide/messaging/transports/gcp-pubsub/listening.md --- # Listening Setting up Wolverine listeners and GCP Pub/Sub subscriptions for GCP Pub/Sub topics is shown below: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UsePubsub("your-project-id"); opts.ListenToPubsubTopic("incoming1"); // Listen to an existing subscription opts.ListenToPubsubSubscription("subscription1", x => { // Other configuration... }); opts.ListenToPubsubTopic("incoming2") // You can optimize the throughput by running multiple listeners // in parallel .ListenerCount(5) .ConfigurePubsubSubscription(options => { // Optionally configure the subscription itself options.DeadLetterPolicy = new DeadLetterPolicy { DeadLetterTopic = "errors", MaxDeliveryAttempts = 5 }; options.AckDeadlineSeconds = 60; options.RetryPolicy = new RetryPolicy { MinimumBackoff = Duration.FromTimeSpan(TimeSpan.FromSeconds(1)), MaximumBackoff = Duration.FromTimeSpan(TimeSpan.FromSeconds(10)) }; }); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/rabbitmq/listening.md --- # Listening ## Listening Options Wolverine's Rabbit MQ integration comes with quite a few options to fine tune listening performance as shown below: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // *A* way to configure Rabbit MQ using their Uri schema // documented here: https://www.rabbitmq.com/uri-spec.html opts.UseRabbitMq(new Uri("amqp://localhost")); // Set up a listener for a queue opts.ListenToRabbitQueue("incoming1") .PreFetchCount(100) .ListenerCount(5) // use 5 parallel listeners .CircuitBreaker(cb => { cb.PauseTime = 1.Minutes(); // 10% failures will cause the listener to pause cb.FailurePercentageThreshold = 10; }) .UseDurableInbox(); // Set up a listener for a queue, but also // fine-tune the queue characteristics if Wolverine // will be governing the queue setup opts.ListenToRabbitQueue("incoming2", q => { q.PurgeOnStartup = true; 
q.TimeToLive(5.Minutes()); }); }).StartAsync(); ``` snippet source | anchor To optimize and tune the message processing, you may want to read more about the [Rabbit MQ prefetch count and prefetch size concepts](https://www.cloudamqp.com/blog/how-to-optimize-the-rabbitmq-prefetch-count.html). ## Listen to a Queue Setting up a listener to a specific Rabbit MQ queue is shown below: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // *A* way to configure Rabbit MQ using their Uri schema // documented here: https://www.rabbitmq.com/uri-spec.html opts.UseRabbitMq(new Uri("amqp://localhost")); // Set up a listener for a queue opts.ListenToRabbitQueue("incoming1") .PreFetchCount(100) .ListenerCount(5) // use 5 parallel listeners .CircuitBreaker(cb => { cb.PauseTime = 1.Minutes(); // 10% failures will cause the listener to pause cb.FailurePercentageThreshold = 10; }) .UseDurableInbox(); // Set up a listener for a queue, but also // fine-tune the queue characteristics if Wolverine // will be governing the queue setup opts.ListenToRabbitQueue("incoming2", q => { q.PurgeOnStartup = true; q.TimeToLive(5.Minutes()); }); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/sqs/listening.md --- # Listening Setting up a Wolverine listener for an SQS queue is shown below: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransport() // Let Wolverine create missing queues as necessary .AutoProvision() // Optionally purge all queues on application startup. 
// Warning though, this is potentially slow .AutoPurgeOnStartup(); opts.ListenToSqsQueue("incoming", queue => { queue.Configuration.Attributes[QueueAttributeName.DelaySeconds] = "5"; queue.Configuration.Attributes[QueueAttributeName.MessageRetentionPeriod] = 4.Days().TotalSeconds.ToString(); }) // You can optimize the throughput by running multiple listeners // in parallel .ListenerCount(5); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/listeners.md --- # Listening Endpoints ::: tip Unlike some other .NET messaging frameworks, Wolverine does not require specific message handlers to be registered at a certain listening endpoint like a Rabbit MQ queue or Kafka topic. ::: A vital piece of Wolverine is defining or configuring endpoints where Wolverine "listens" for incoming messages to pass to the Wolverine message handlers. Examples of endpoints supported by Wolverine that can listen for messages include: * TCP endpoints with Wolverine's built in socket based transport * Rabbit MQ queues * Azure Service Bus subscriptions or queues * Kafka topics * Pulsar topics * AWS SQS queues Listening endpoints with Wolverine come in three flavors as shown below: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // The Rabbit MQ transport supports all three types of listeners opts.UseRabbitMq(); // The durable mode requires some sort of envelope storage opts.PersistMessagesWithPostgresql("some connection string"); opts.ListenToRabbitQueue("inline") // Process inline, default is with one listener .ProcessInline() // But, you can use multiple, parallel listeners .ListenerCount(5); opts.ListenToRabbitQueue("buffered") // Buffer the messages in memory for increased throughput .BufferedInMemory(new BufferingLimits(1000, 500)); opts.ListenToRabbitQueue("durable") // Opt into durable inbox mechanics .UseDurableInbox(new BufferingLimits(1000, 500)); }).StartAsync(); ``` snippet source | anchor ## Inline Endpoints With `Inline` 
endpoints, the basic processing of messages is:

1. A message is received by the listener
2. The listener passes the message directly to Wolverine for handling
3. Depending on whether the message execution succeeds or fails, the message is either "ack-ed" or "nack-ed" to the underlying transport broker

Use the `Inline` mode if you care about message ordering, or if you want guaranteed delivery without having to use any kind of message persistence. To improve throughput, you can direct Wolverine to use a number of parallel listeners, but the default is just 1 per listening endpoint.

## Buffered Endpoints

::: tip
Use `Buffered` endpoints where throughput is more important than delivery guarantees
:::

With `Buffered` endpoints, the basic processing of messages is:

1. A message -- or batch of messages for transports like AWS SQS or Azure Service Bus that support batching -- arrives from the listener and is immediately "ack-ed" to the message broker
2. The message is placed into an in memory queue where it will be handled

With `Buffered` endpoints, you can:

* Specify the maximum number of parallel messages that can be handled at once
* Specify buffering limits on the maximum number of messages that can be held in memory to enforce back pressure rules that will stop and restart message listening when the number of in memory messages goes down to an acceptable level

Requeue error actions just put the failed message back at the end of the in memory queue.

## Durable Endpoints

`Durable` endpoints essentially work the same as `Buffered` endpoints, but utilize Wolverine's [transactional inbox support](/guide/durability) for guaranteed delivery and processing. With `Durable` endpoints, the basic processing of messages is:

1. A message -- or batch of messages for transports like AWS SQS or Azure Service Bus that support batching -- arrives from the listener and is immediately "ack-ed" to the message broker
2. Each message -- or message batch -- is persisted to Wolverine's message storage
3. The message is placed into an in memory queue where it will be handled one at a time
4. When a message is successfully handled or moved to a dead letter queue, the message in the database is marked as "Handled"

The durable inbox keeps handled messages in the database for just a little while (5 minutes is the default) to use for some built in idempotency on message id for incoming messages.

## Internal Architecture

If you're curious, here's a diagram of the types involved in listening to messages from a single `Endpoint`. Just know that `Endpoint` only models the configuration of the listener in most transport types:

```mermaid
classDiagram

class Endpoint
class IListener
class ListeningAgent
class IReceiver

Endpoint-->IListener: Builds
ListeningAgent-->IListener: Stops or starts
ListeningAgent-->BackPressureAgent: potentially stops or restarts the listening
ListeningAgent-->Restarter: helps restart a paused listener
ListeningAgent-->IReceiver: delegates messages for execution
ListeningAgent-->CircuitBreaker: potentially stops the listening
```

* `Endpoint` is a configuration element that models how the listener should behave
* `IListener` is a specific service built by the `Endpoint` that does the actual work of listening to messages incoming from the messaging transport like a Rabbit MQ broker, and passes that information to Wolverine's message handlers
* `ListeningAgent` is a controller within Wolverine that governs the listener lifecycle including pauses and restarts depending on load or error conditions

## Strictly Ordered Listeners

In the case where you need messages from a single endpoint to be processed in strict, global order across the entire application, you have the `ListenWithStrictOrdering()` option:

```cs
var host = await Host.CreateDefaultBuilder().UseWolverine(opts =>
{
    opts.UseRabbitMq().EnableWolverineControlQueues();
    opts.PersistMessagesWithPostgresql(Servers.PostgresConnectionString, "listeners");

    opts.ListenToRabbitQueue("ordered")

        // This option is available on all types of Wolverine
        // endpoints that can be configured to be a listener
        .ListenWithStrictOrdering();
}).StartAsync();
```
snippet source | anchor

This option does a couple of things:

* Ensures that Wolverine will *only* listen for messages on this endpoint on a single running node
* Sets any local execution of the listener's internal, local queue to be strictly sequential and only process messages with a single thread

## Disabling All External Listeners

In some cases, you may want to disable all message processing for messages received from external transports like Rabbit MQ or AWS SQS. To do that, simply set:

```cs
.UseWolverine(opts =>
{
    // This will disable all message listening to
    // external message brokers
    opts.DisableAllExternalListeners = true;

    opts.DisableConventionalDiscovery();

    // This could never, ever work
    opts.UseRabbitMq().AutoProvision();
    opts.ListenToRabbitQueue("incoming");
}).StartAsync();
```
snippet source | anchor

The original use case for this flag was a command line tool that needed to publish messages to a system through Rabbit MQ then exit. Having that process also trying to process messages received from Rabbit MQ kept the command line tool from quitting quickly as Wolverine had to "drain" ongoing work. For that kind of tool, we recommend this setting.

---

--- url: /guide/messaging/transports/azureservicebus/listening.md ---

# Listening for Messages

::: warning
The Azure Service Bus transport uses batching to both send and receive messages. As such, the listeners or senders can only be configured to use buffered or durable mechanics. I.e., there is no current option for inline senders or listeners.
::: You can configure explicit queue listening with this syntax: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // One way or another, you're probably pulling the Azure Service Bus // connection string out of configuration var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); // Connect to the broker in the simplest possible way opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision(); opts.ListenToAzureServiceBusQueue("incoming") // Customize how many messages to retrieve at one time .MaximumMessagesToReceive(100) // Customize how long the listener will wait for more messages before // processing a batch .MaximumWaitTime(3.Seconds()) // Add a circuit breaker for systematic failures .CircuitBreaker() // And all the normal Wolverine options you'd expect .BufferedInMemory(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor ## Conventional Listener Configuration In the case of listening to a large number of queues, it may be beneficial to apply configuration to all the Azure Service Bus listeners like this: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // One way or another, you're probably pulling the Azure Service Bus // connection string out of configuration var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); // Connect to the broker in the simplest possible way opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision() // Apply default configuration to all Azure Service Bus listeners // This can be overridden explicitly by any configuration for specific // listening endpoints .ConfigureListeners(listener => { listener.UseDurableInbox(new BufferingLimits(500, 100)); }); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor Note that any of these settings would be overridden by 
specific configuration to a specific endpoint. --- --- url: /guide/durability/managing.md --- # Managing Message Storage ::: info Wolverine will automatically check for the existence of necessary database tables and functions to support the configured message storage, and will also apply any necessary database changes to comply with the configuration automatically. ::: Wolverine uses the [Oakton "Stateful Resource"](https://jasperfx.github.io/oakton/guide/host/resources.html) model for managing infrastructure configuration at development or even deployment time for configured items like the database-backed message storage or message broker queues. ## Disable Automatic Storage Migration To disable the automatic storage migration, just flip this flag: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Disable automatic database migrations for message // storage opts.AutoBuildMessageStorageOnStartup = AutoCreate.None; }).StartAsync(); ``` snippet source | anchor ## Programmatic Management Especially in automated tests, you may want to programmatically rebuild or clear out all persisted messages. 
Here's a sample of the functionality in Wolverine to do just that:

```cs
// IHost would be your application in a testing harness
public static async Task testing_setup_or_teardown(IHost host)
{
    // Programmatically apply any outstanding message store
    // database changes
    await host.SetupResources();

    // Teardown the database message storage
    await host.TeardownResources();

    // Clear out any database message storage
    // also tries to clear out any messages held
    // by message brokers connected to your Wolverine app
    await host.ResetResourceState();

    var store = host.Services.GetRequiredService<IMessageStore>();

    // Rebuild the database schema objects
    // and delete existing message data
    // This is good for testing
    await store.Admin.RebuildAsync();

    // Remove all persisted messages
    await store.Admin.ClearAllAsync();
}
```
snippet source | anchor

## Building Storage on Startup

To have any missing database schema objects built as needed on application startup, just add this option:

```cs
// This is rebuilding the persistent storage database schema on startup
builder.Host.UseResourceSetupOnStartup();
```
snippet source | anchor

## Command Line Management

Assuming that you are using [Oakton](https://jasperfx.github.io/oakton) as your command line parser in your Wolverine application as shown in this last line of a .NET 6/7 `Program` code file:

```cs
// Opt into using JasperFx for command parsing
await app.RunJasperFxCommands(args);
```
snippet source | anchor

And you're using the message persistence from either the `WolverineFx.SqlServer` or `WolverineFx.Postgresql` or `WolverineFx.Marten` Nugets installed in your application, you will have some extended command line options that you can discover from typing `dotnet run -- help` at the command line at the root of your project:

```bash
The available commands are:

  Alias      Description
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  check-env  Execute all environment checks against the application
  codegen    Utilities for working with JasperFx.CodeGeneration and JasperFx.RuntimeCompiler
  db-apply   Applies all outstanding changes to the database(s) based on the current configuration
  db-assert  Assert that the existing database(s) matches the current configuration
  db-dump    Dumps the entire DDL for the configured Marten database
  db-patch   Evaluates the current configuration against the database and writes a patch and drop file if there are any differences
  describe   Writes out a description of your running application to either the console or a file
  help       List all the available commands
  resources  Check, setup, or teardown stateful resources of this system
  run        Start and run this .Net application
  storage    Administer the envelope storage
```

There's admittedly some duplication here with different options coming from [Oakton](https://jasperfx.github.io/oakton) itself, the [Weasel.CommandLine](https://github.com/JasperFx/weasel) library, and the `storage` command from Wolverine itself.
To build out the schema objects for [message persistence](/guide/durability/), you can use this command to apply any outstanding database changes necessary to bring the database schema to the Wolverine configuration:

```bash
dotnet run -- db-apply
```

> NOTE: See the [Exporting SQL Scripts](#exporting-sql-scripts) section down the page for details of applying migrations when integrating with Marten.

You can also use this option -- but just know that this will also clear out any existing message data:

```bash
dotnet run -- storage rebuild
```

or this option which will also attempt to create Marten database objects or any known Wolverine transport objects like Rabbit MQ / Azure Service Bus / AWS SQS queues:

```bash
dotnet run -- resources setup
```

## Clearing Node Ownership

::: warning
Don't use this option in production if any nodes are currently running
:::

If you ever have a node crash and need to force any persisted, incoming or outgoing messages to be picked up by another node (this should be automatic anyway, but locks might persist and Wolverine might take a bit to recognize that a node has crashed), you can release the ownership of messages of all persisted nodes by:

```bash
dotnet run -- storage release
```

## Deleting Message Data

At any time you can clear out any existing persisted message data with:

```bash
dotnet run -- storage clear
```

## Exporting SQL Scripts

If you just want to export the SQL to create the necessary database objects, you can use:

```bash
dotnet run -- db-dump export.sql
```

where `export.sql` should be a file name.
### Marten integration

When integrating with Marten, scripts must be generated separately for both Marten and Wolverine resources. Resources are separated into databases and can be listed as below:

```bash
dotnet run -- db-list

# ┌────────────────────────────────────────┬───────────────────────────┐
# │ DatabaseUri                            │ SubjectUri                │
# ├────────────────────────────────────────┼───────────────────────────┤
# │ postgresql://localhost/postgres/orders │ marten://store/           │
# │ postgresql://localhost/postgres        │ wolverine://messages/main │
# └────────────────────────────────────────┴───────────────────────────┘
```

Once you've identified the database, pass the `-d` parameter with the `SubjectUri` from the output above to the `db-dump` command:

```bash
dotnet run -- db-dump -d marten://store/ export_marten.sql
dotnet run -- db-dump -d wolverine://messages/main export_wolverine.sql
```

## Disabling All Persistence

Let's say that you want to use the command line tooling to generate OpenAPI documentation, but do so without Wolverine being able to connect to any external databases or transports (you'll have to disable both for this to work). You can now do that with the options shown below as part of an [Alba](https://jasperfx.github.io/alba) test:

```cs
using var host = await AlbaHost.For(builder =>
{
    builder.ConfigureServices(services =>
    {
        // You probably have to do both
        services.DisableAllExternalWolverineTransports();
        services.DisableAllWolverineMessagePersistence();
    });
});
```

snippet source | anchor

---

--- url: /guide/durability/marten/inbox.md ---

# Marten as Inbox

On the flip side of using Wolverine's "outbox" support for outgoing messages, you can also choose to use the same message persistence for incoming messages such that incoming messages are first persisted to the application's underlying Postgresql database before being processed.
While you *could* use this with external message brokers like Rabbit MQ, it's more likely this will be valuable for Wolverine's [local queues](/guide/messaging/transports/local). Back to the sample Marten + Wolverine integration from this page: ```cs var builder = WebApplication.CreateBuilder(args); builder.Host.ApplyJasperFxExtensions(); builder.Services.AddMarten(opts => { opts.Connection(Servers.PostgresConnectionString); opts.DatabaseSchemaName = "chaos2"; }) .IntegrateWithWolverine(); builder.Host.UseWolverine(opts => { opts.Policies.OnAnyException().RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds()); opts.Services.AddScoped(); opts.Policies.DisableConventionalLocalRouting(); opts.UseRabbitMq().AutoProvision(); opts.Policies.UseDurableInboxOnAllListeners(); opts.Policies.UseDurableOutboxOnAllSendingEndpoints(); opts.ListenToRabbitQueue("chaos2"); opts.PublishAllMessages().ToRabbitQueue("chaos2"); opts.Policies.AutoApplyTransactions(); }); ``` snippet source | anchor ```cs var builder = WebApplication.CreateBuilder(args); builder.Host.ApplyJasperFxExtensions(); builder.Services.AddMarten(opts => { var connectionString = builder .Configuration .GetConnectionString("postgres"); opts.Connection(connectionString); opts.DatabaseSchemaName = "orders"; }) // Optionally add Marten/Postgresql integration // with Wolverine's outbox .IntegrateWithWolverine(); // You can also place the Wolverine database objects // into a different database schema, in this case // named "wolverine_messages" //.IntegrateWithWolverine("wolverine_messages"); builder.Host.UseWolverine(opts => { // I've added persistent inbox // behavior to the "important" // local queue opts.LocalQueue("important") .UseDurableInbox(); }); ``` snippet source | anchor But this time, focus on the Wolverine configuration of the local queue named "important." 
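On the receiving side, messages routed to that queue are processed by ordinary Wolverine handlers; the durability is transparent to the handler code. Here's a minimal sketch, where `ImportantMessage` and its handler are made-up names for illustration only:

```cs
public record ImportantMessage(Guid Id);

public class ImportantMessageHandler
{
    // Because the "important" local queue uses UseDurableInbox(), the
    // envelope for this message is persisted to Postgresql before this
    // method runs, and deleted only after it completes successfully
    public void Handle(ImportantMessage message)
    {
        Console.WriteLine($"Processed message {message.Id}");
    }
}
```

If the process crashes before `Handle()` completes, the persisted envelope lets the message be recovered and retried rather than lost.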
By marking this local queue as persistent, any messages sent to this queue in memory are first persisted to the underlying Postgresql database, and deleted when the message is successfully processed. This allows Wolverine to grant a stronger delivery guarantee to local messages and even allow messages to be processed if the current application node fails before the message is processed. ::: tip There are some vague plans to add a little more efficient integration between Wolverine and ASP.Net Core Minimal API, but we're not there yet. ::: Or finally, it's less code to opt into Wolverine's outbox by delegating to the [command bus](/guide/in-memory-bus) functionality as in this sample [Minimal API](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/minimal-apis?view=aspnetcore-6.0) usage: ```cs // Delegate directly to Wolverine commands -- More efficient recipe coming later... app.MapPost("/orders/create2", (CreateOrder command, IMessageBus bus) => bus.InvokeAsync(command)); ``` snippet source | anchor --- --- url: /guide/durability/marten/sagas.md --- # Marten as Saga Storage Marten is an easy option for [persistent sagas](/guide/durability/sagas) with Wolverine. Yet again, to opt into using Marten as your saga storage mechanism in Wolverine, you just need to add the `IntegrateWithWolverine()` option to your Marten configuration as shown in the Marten Integration [Getting Started](/guide/durability/marten/#getting-started) section. When using the Wolverine + Marten integration, your stateful saga classes should be valid Marten document types that inherit from Wolverine's `Saga` type, which generally means being a public class with a valid Marten [identity member](https://martendb.io/documents/identity.html). Remember that your handler methods in Wolverine can accept "method injected" dependencies from your underlying IoC container. See the [Saga with Marten sample project](https://github.com/JasperFx/wolverine/tree/main/src/Samples/OrderSagaSample). 
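As a quick illustration, a Marten-persisted saga is just a document class inheriting from Wolverine's `Saga` type with a valid Marten identity member. The `OrderSaga` below and its messages are made-up examples, not taken from the sample project:

```cs
public class OrderSaga : Saga
{
    // A valid Marten identity member
    public Guid Id { get; set; }

    // Incremented by Marten's numeric revisioning for
    // optimistic concurrency protection
    public int Version { get; set; }

    // Starts a new saga when an OrderPlaced message arrives
    public void Start(OrderPlaced placed) => Id = placed.OrderId;

    public void Handle(OrderShipped shipped)
    {
        // Flags this saga document for deletion once the message completes
        MarkCompleted();
    }
}

public record OrderPlaced(Guid OrderId);
public record OrderShipped(Guid OrderId);
```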
## Optimistic Concurrency

Marten will automatically apply numeric revisioning to Wolverine `Saga` storage, and will increment the `Version` while handling `Saga` commands to use Marten's native optimistic concurrency protection.

---

--- url: /guide/durability/marten/outbox.md ---

# Marten as Transactional Outbox

::: tip
Wolverine's outbox will hold all outgoing messages until after the database transaction succeeds, but only messages being delivered to endpoints explicitly configured to be persistent will be stored in the database. While this may add complexity, it does give you fine-grained support to mix and match fire-and-forget messaging with messages that require durable persistence.
:::

One of the most important features in all of Wolverine is the [persistent outbox](https://microservices.io/patterns/data/transactional-outbox.html) support and its easy integration into Marten. If you're already familiar with the concept of an "outbox" (or "inbox"), skip to the sample code below.

Here's a common problem when using any kind of messaging strategy: inside the handling for a single web request, you need to make some immediate writes to the backing database for the application, then send a corresponding message out through your asynchronous messaging infrastructure.
Easy enough, but here are a few ways that could go wrong if you're not careful:

* The message is received and processed before the initial database writes are committed, and you get erroneous results because of that (I've seen this happen)
* The database transaction fails, but the message was still sent out, and you get inconsistency in the system
* The database transaction succeeds, but the message infrastructure fails somehow, so you get inconsistency in the system

You could attempt to use some sort of [two phase commit](https://martinfowler.com/articles/patterns-of-distributed-systems/two-phase-commit.html) between your database and the messaging infrastructure, but that has historically been problematic. This is where the "outbox" pattern comes into play to guarantee that the outgoing message and database transaction both succeed or fail together, and that the message is only sent out after the database transaction has succeeded.

Imagine a simple example where a Wolverine handler is receiving a `CreateOrder` command that will spawn a brand new Marten `Order` document and also publish an `OrderCreated` event through Wolverine messaging.
Using the outbox, that handler **in explicit, long hand form** is this:

```cs
public static async Task Handle(
    CreateOrder command,
    IDocumentSession session,
    IMartenOutbox outbox,
    CancellationToken cancellation)
{
    var order = new Order { Description = command.Description };

    // Register the new document with Marten
    session.Store(order);

    // Hold on though, this message isn't actually sent
    // until the Marten session is committed
    await outbox.SendAsync(new OrderCreated(order.Id));

    // This makes the database commit, *then* flushes the
    // previously registered messages to Wolverine's sending
    // agents
    await session.SaveChangesAsync(cancellation);
}
```

snippet source | anchor

In the code above, the `OrderCreated` message is registered with the Wolverine `IMessageContext` for the current message, but nothing more than that is actually happening at that point. When `IDocumentSession.SaveChangesAsync()` is called, Marten is persisting the new `Order` document **and** creating database records for the outgoing `OrderCreated` message in the same transaction (and even in the same batched database command for maximum efficiency). After the database transaction succeeds, the pending messages are automatically sent to Wolverine's sending agents.

Now, let's play "what if:"

* What if the messaging broker is down? As long as the messages are persisted, Wolverine will continue trying to send the persisted outgoing messages until the messaging broker is back up and available.
* What if the application magically dies after the database transaction but before the messages are sent through the messaging broker? Wolverine will still be able to send these persisted messages from either another running application node or after the application is restarted.

The point here is that Wolverine is doing store and forward mechanics with the outgoing messages, and these messages will eventually be sent to the messaging infrastructure (unless they hit a designated expiration that you've defined).
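For contrast with the explicit form above, here's a sketch of how the same handler can collapse when the `AutoApplyTransactions()` policy shown in earlier samples is enabled. This is a sketch under that assumption: the middleware handles `SaveChangesAsync()`, and the returned `OrderCreated` is a cascading message that only flows through the outbox after the commit succeeds:

```cs
public static OrderCreated Handle(CreateOrder command, IDocumentSession session)
{
    var order = new Order { Description = command.Description };

    // Marten persists the Order when the transactional
    // middleware commits the session after this method returns
    session.Store(order);

    // Cascading message: published through the outbox
    // only after the transaction commits
    return new OrderCreated(order.Id);
}
```

Note how the handler becomes a synchronous method with no direct reference to the outbox at all, which also makes it trivial to unit test.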
In the section below on transactional middleware, we'll see a shorthand way to simplify the code sample above and remove some repetitive ceremony.

## Outbox with ASP.Net Core

The Wolverine outbox is also usable from within ASP.Net Core (really any code) controller or Minimal API handler code. Within an MVC controller, the `CreateOrder` handling code would be:

```cs
public class CreateOrderController : ControllerBase
{
    [HttpPost("/orders/create2")]
    public async Task Create(
        [FromBody] CreateOrder command,
        [FromServices] IDocumentSession session,
        [FromServices] IMartenOutbox outbox)
    {
        var order = new Order { Description = command.Description };

        // Register the new document with Marten
        session.Store(order);

        // Don't worry, this message doesn't go out until
        // after the Marten transaction succeeds
        await outbox.PublishAsync(new OrderCreated(order.Id));

        // Commit the Marten transaction
        await session.SaveChangesAsync();
    }
}
```

snippet source | anchor

From a Minimal API, that could be this:

```cs
app.MapPost("/orders/create3", async (CreateOrder command, IDocumentSession session, IMartenOutbox outbox) =>
{
    var order = new Order { Description = command.Description };

    // Register the new document with Marten
    session.Store(order);

    // Don't worry, this message doesn't go out until
    // after the Marten transaction succeeds
    await outbox.PublishAsync(new OrderCreated(order.Id));

    // Commit the Marten transaction
    await session.SaveChangesAsync();
});
```

snippet source | anchor

---

--- url: /guide/durability/marten.md ---

# Marten Integration

::: info
There is also some HTTP specific integration for Marten with Wolverine. See [Integration with Marten](/guide/http/marten) for more information.
:::

[Marten](https://martendb.io) and Wolverine are sibling projects under the [JasperFx organization](https://github.com/JasperFx), and as such, have quite a bit of synergy when used together.
At this point, adding the `WolverineFx.Marten` Nuget dependency to your application adds the capability to combine Marten and Wolverine to:

* Simplify persistent handler coding with transactional middleware
* Use Marten and Postgresql as a persistent inbox or outbox with Wolverine messaging
* Support persistent sagas within Wolverine applications
* Effectively use Wolverine and Marten together for a [Decider](https://thinkbeforecoding.com/post/2021/12/17/functional-event-sourcing-decider) function workflow with event sourcing
* Selectively publish events captured by Marten through Wolverine messaging
* Process events captured by Marten with Wolverine message handlers, via either [subscriptions](./subscriptions) or the older [event forwarding](./event-forwarding)
* Publish messages raised by [Marten projection "side effects"](https://martendb.io/events/projections/aggregate-projections.html#raising-events-messages-or-other-operations-in-aggregation-projections) through Wolverine messaging

::: warning
Just a heads up, it is possible to publish messages from Marten projection "side effects" to Wolverine, even within `Inline` projections, **but** if you want to have Wolverine messages published from a Marten `IInitialData`, you'll need to wrap that within its own `IHostedService` service that is registered **after** Wolverine in your IoC container service registrations.
:::

## Getting Started

To use the Wolverine integration with Marten, just install the `WolverineFx.Marten` Nuget into your application.
Assuming that you've [configured Marten](https://martendb.io/configuration/) in your application (and Wolverine itself!), you next need to add the Wolverine integration to Marten as shown in this sample application bootstrapping: ```cs var builder = WebApplication.CreateBuilder(args); builder.Host.ApplyJasperFxExtensions(); builder.Services.AddMarten(opts => { opts.Connection(Servers.PostgresConnectionString); opts.DatabaseSchemaName = "chaos2"; }) .IntegrateWithWolverine(); builder.Host.UseWolverine(opts => { opts.Policies.OnAnyException().RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds()); opts.Services.AddScoped(); opts.Policies.DisableConventionalLocalRouting(); opts.UseRabbitMq().AutoProvision(); opts.Policies.UseDurableInboxOnAllListeners(); opts.Policies.UseDurableOutboxOnAllSendingEndpoints(); opts.ListenToRabbitQueue("chaos2"); opts.PublishAllMessages().ToRabbitQueue("chaos2"); opts.Policies.AutoApplyTransactions(); }); ``` snippet source | anchor ```cs var builder = WebApplication.CreateBuilder(args); builder.Host.ApplyJasperFxExtensions(); builder.Services.AddMarten(opts => { var connectionString = builder .Configuration .GetConnectionString("postgres"); opts.Connection(connectionString); opts.DatabaseSchemaName = "orders"; }) // Optionally add Marten/Postgresql integration // with Wolverine's outbox .IntegrateWithWolverine(); // You can also place the Wolverine database objects // into a different database schema, in this case // named "wolverine_messages" //.IntegrateWithWolverine("wolverine_messages"); builder.Host.UseWolverine(opts => { // I've added persistent inbox // behavior to the "important" // local queue opts.LocalQueue("important") .UseDurableInbox(); }); ``` snippet source | anchor For more information, see [durable messaging](/guide/durability/) and the [sample Marten + Wolverine project](https://github.com/JasperFx/wolverine/tree/main/src/Samples/WebApiWithMarten). 
Using the `IntegrateWithWolverine()` extension method behind your call to `AddMarten()` will:

* Register the necessary [inbox and outbox](/guide/durability/) database tables with [Marten's database schema management](https://martendb.io/schema/migrations.html)
* Add Wolverine's "DurabilityAgent" to your .NET application for the inbox and outbox
* Make Marten the active [saga storage](/guide/durability/sagas) for Wolverine
* Add transactional middleware using Marten to your Wolverine application

## Entity Attribute Loading

::: info
If your message handler or HTTP endpoint uses more than one declarative attribute for retrieving Marten data, Wolverine 5.0+ is able to utilize [Marten's Batch Querying capability](https://martendb.io/documents/querying/batched-queries.html#batched-queries) for more efficient interaction with the database.
:::

The Marten integration is able to completely support the [Entity attribute usage](/guide/handlers/persistence.html#automatically-loading-entities-to-method-parameters).

## Marten as Outbox

See the [Marten as Outbox](./outbox) page.

## Transactional Middleware

See the [Transactional Middleware](./transactional-middleware) page.

## Marten as Inbox

See the [Marten as Inbox](./inbox) page.

## Saga Storage

See the [Marten as Saga Storage](./sagas) page.

---

--- url: /guide/durability/marten/operations.md ---

# Marten Operation Side Effects

::: tip
You can certainly write your own `IMartenOp` implementations and use them as return values in your Wolverine handlers
:::

::: info
This integration also includes full support for the [storage action side effects](/guide/handlers/side-effects.html#storage-side-effects) model when using Marten with Wolverine.
:::

The `Wolverine.Marten` library includes some helpers for Wolverine [side effects](/guide/handlers/side-effects) using Marten with the `IMartenOp` interface:

```cs
/// <summary>
/// Interface for any kind of Marten related side effect
/// </summary>
public interface IMartenOp : ISideEffect
{
    void Execute(IDocumentSession session);
}
```

snippet source | anchor

The built in side effects can all be used from the `MartenOps` static class, like this HTTP endpoint example:

```cs
[WolverinePost("/invoices/{invoiceId}/pay")]
public static IMartenOp Pay([Document] Invoice invoice)
{
    invoice.Paid = true;
    return MartenOps.Store(invoice);
}
```

snippet source | anchor

There are existing Marten ops for storing, inserting, updating, and deleting a document. There's also a specific helper for starting a new event stream as shown below:

```cs
public static class TodoListEndpoint
{
    [WolverinePost("/api/todo-lists")]
    public static (TodoCreationResponse, IStartStream) CreateTodoList(
        CreateTodoListRequest request
    )
    {
        var listId = CombGuidIdGeneration.NewGuid();
        var result = new TodoListCreated(listId, request.Title);
        var startStream = MartenOps.StartStream(listId, result);

        return (new TodoCreationResponse(listId), startStream);
    }
}
```

snippet source | anchor

The major advantage of using a Marten side effect is that it helps keep your Wolverine handlers or HTTP endpoints as pure functions that can be easily unit tested by measuring the expected return values. Using `IMartenOp` also helps you utilize synchronous methods for your logic, even though at runtime Wolverine itself will be wrapping asynchronous code around your simpler, synchronous code.
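Per the tip at the top of this page, you can also implement `IMartenOp` yourself for operations that the built-in `MartenOps` helpers don't cover. Here's a minimal sketch, where `ArchiveInvoice` and the `InvoiceArchive` document type are hypothetical names invented for illustration:

```cs
// Hypothetical custom side effect: replace an Invoice document
// with an archive record in one Marten unit of work
public record ArchiveInvoice(Invoice Invoice) : IMartenOp
{
    public void Execute(IDocumentSession session)
    {
        // These operations are queued on the session and committed
        // along with the rest of the handler's Marten work
        session.Delete(Invoice);
        session.Store(new InvoiceArchive(Invoice.Id, Invoice.Paid));
    }
}
```

A handler can then return `new ArchiveInvoice(invoice)` exactly like the built-in `MartenOps` return values, keeping the handler itself synchronous and easy to unit test.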
## Returning Multiple Marten Side Effects

Due to (somewhat) popular demand, Wolverine lets you return zero to many `IMartenOp` operations as side effects from a message handler or HTTP endpoint method like so:

```cs
// Just keep in mind that this "example" was rigged up for test coverage
public static IEnumerable<IMartenOp> Handle(AppendManyNamedDocuments command)
{
    var number = 1;
    foreach (var name in command.Names)
    {
        yield return MartenOps.Store(new NamedDocument { Id = name, Number = number++ });
    }
}
```

snippet source | anchor

Wolverine will pick up on any return type that can be cast to `IEnumerable<IMartenOp>`, so for example:

* `IEnumerable<IMartenOp>`
* `IMartenOp[]`
* `List<IMartenOp>`

And you get the point. Wolverine is not (yet) smart enough to know that an array or enumerable of a concrete type implementing `IMartenOp` is a side effect. Like any other "side effect", you could technically return this as the main return type of a method or as part of a tuple.

---

--- url: /guide/messaging/expiration.md ---

# Message Expiration

Some messages you publish or send will be transient, or only be valid for a brief time. In this case you may find it valuable to apply message expiration rules to tell Wolverine to ignore messages that are too old. You won't use this explicitly very often, but this information is ultimately stored on the Wolverine `Envelope` with this property:

```cs
/// <summary>
/// Instruct Wolverine to throw away this message if it is not successfully sent and processed
/// by the time specified
/// </summary>
public DateTimeOffset? DeliverBy
{
    get => _deliverBy;
    set => _deliverBy = value?.ToUniversalTime();
}
```

snippet source | anchor

At runtime, Wolverine will:

1. Wolverine will discard received messages that are past their `DeliverBy` time
2. Wolverine will also discard outgoing messages that are past their `DeliverBy` time
3.
For transports that support this (Rabbit MQ for example), Wolverine will try to pass the `DeliverBy` time into a transport's native message expiration capabilities ## At Message Sending Time On a message by message basis, you can explicitly set the deliver by time either as an absolute time or as a `TimeSpan` past now with this syntax: ```cs public async Task message_expiration(IMessageBus bus) { // Disregard the message if it isn't sent and/or processed within 3 seconds from now await bus.SendAsync(new StatusUpdate("Okay"), new DeliveryOptions { DeliverWithin = 3.Seconds() }); // Disregard the message if it isn't sent and/or processed by 3 PM today // but watch all the potentially harmful time zone issues in your real code that I'm ignoring here! await bus.SendAsync(new StatusUpdate("Okay"), new DeliveryOptions { DeliverBy = DateTime.Today.AddHours(15) }); } ``` snippet source | anchor ## By Subscriber The message expiration can be set as a rule for all messages sent to a specific subscribing endpoint as shown by this sample: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // One way or another, you're probably pulling the Azure Service Bus // connection string out of configuration var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); // Connect to the broker in the simplest possible way opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision(); // Explicitly configure a delivery expiration of 5 seconds // for a specific Azure Service Bus queue opts.PublishMessage().ToAzureServiceBusQueue("transient") // If the messages are transient, it's likely that they should not be // durably stored, so make things lighter in your system .BufferedInMemory() .DeliverWithin(5.Seconds()); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor ## By Message Type At the message type level, you can set message expiration rules with the 
`Wolverine.Attributes.DeliverWithinAttribute` attribute on the message type as in this sample: ```cs // The attribute directs Wolverine to send this message with // a "deliver within 5 seconds, or discard" directive [DeliverWithin(5)] public record AccountUpdated(Guid AccountId, decimal Balance); ``` snippet source | anchor --- --- url: /guide/handlers/discovery.md --- # Message Handler Discovery ::: warning The handler type scanning and discovery is done against an allow list of assemblies rather than running through your entire application's dependency tree. Watch for this if handlers are missing. ::: Wolverine has built in mechanisms for automatically finding message handler methods in your application based on a set of naming conventions or using explicit interface or attribute markers. If you really wanted to, you could also explicitly add handler types programmatically. ## Troubleshooting Handler Discovery It's an imperfect world and sometimes Wolverine isn't finding handler methods for some reason or another -- or seems to be using types and methods you'd rather it didn't. Not to fear, there are some diagnostic tools to help Wolverine explain what's going on. Directly on `WolverineOptions` itself is a diagnostic method named `DescribeHandlerMatch` that will give you a full textual report on why or why not Wolverine is identifying that type as a handler type, then if it is found by Wolverine to be a handler type, also giving you a full report on each public method about why or why not Wolverine considers it to be a valid handler method. ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Surely plenty of other configuration for Wolverine... 
// This *temporary* line of code will write out a full report about why or
// why not Wolverine is finding this handler and its candidate handler messages
Console.WriteLine(opts.DescribeHandlerMatch(typeof(MyMissingMessageHandler)));
}).StartAsync();
```

snippet source | anchor

Even if the report itself isn't exactly clear to you, including this textual report in a Wolverine issue or within the [Critter Stack Discord](https://discord.gg/wBkZGpe3) group will help the Wolverine team assist you much quicker.

## Assembly Discovery

::: tip
The handler discovery uses the type scanning functionality from the `JasperFx.Core` library, which is shared with several other JasperFx projects.
:::

The first issue is: which assemblies will Wolverine look through to find candidate handlers? By default, Wolverine is looking through what it calls the *application assembly*. When you call `IHostBuilder.UseWolverine()` to add Wolverine to an application, Wolverine looks up the call stack to find where the call to that method came from, and uses that to determine the application assembly. If you're using an idiomatic approach to bootstrap your application through `Program.Main(args)`, the application assembly is going to be the application's main assembly that holds the `Program.Main()` entrypoint.

::: tip
We highly recommend you use [WebApplicationFactory](https://learn.microsoft.com/en-us/aspnet/core/test/integration-tests?view=aspnetcore-7.0) or [Alba](https://jasperfx.github.io/alba) (which uses `WebApplicationFactory` behind the covers) to bootstrap your application in integration tests to avoid any problems around Wolverine's application assembly determination.
::: In testing scenarios, if you're bootstrapping the application independently somehow of the application's "official" configuration, you may have to help Wolverine out a little bit and explicitly tell it what the application assembly is: ```cs using var host = Host.CreateDefaultBuilder() .UseWolverine(opts => { // Override the application assembly to help // Wolverine find its handlers // Should not be necessary in most cases opts.ApplicationAssembly = typeof(Program).Assembly; }).StartAsync(); ``` snippet source | anchor To pull in handlers from other assemblies, you can either decorate an assembly with this attribute: ```cs using Wolverine.Attributes; [assembly: WolverineModule] ``` snippet source | anchor Or you can programmatically add additional assemblies to the handler discovery with this syntax: ```cs using var host = Host.CreateDefaultBuilder() .UseWolverine(opts => { // Add as many other assemblies as you need opts.Discovery.IncludeAssembly(typeof(MessageFromOtherAssembly).Assembly); }).StartAsync(); ``` snippet source | anchor ## Handler Type Discovery ::: warning Wolverine does not support any kind of open generic types for message handlers and has no intentions of ever doing so. ::: By default, Wolverine is looking for public, concrete classes that follow any of these rules: * Implements the `Wolverine.IWolverineHandler` interface * Is decorated with the `[Wolverine.WolverineHandler]` attribute * Type name ends with "Handler" * Type name ends with "Consumer" The original intention was to strictly use naming conventions to locate message handlers, but if you prefer a more explicit approach for discovery, feel free to utilize the `IWolverineHandler` interface or `[WolverineHandler]` (you'll have to use the attribute approach for static classes). From the types, by default, Wolverine looks for any public instance method that is: 1. 
Is named `Handle`, `Handles`, `Consume`, `Consumes`, or one of the names from [Wolverine's saga support](/guide/durability/sagas)
2. Is decorated by the `[WolverineHandler]` attribute if you want to use a different, descriptive name

In all cases, Wolverine assumes that the first argument is the incoming message. To make that concrete, here are some valid handler method signatures:

```cs
[WolverineHandler]
public class ValidMessageHandlers
{
    // There's only one argument, so we'll assume that
    // argument is the message
    public void Handle(Message1 something)
    {
    }

    // The parameter named "message" is assumed to be the message type
    public Task ConsumeAsync(Message1 message, IDocumentSession session)
    {
        return session.SaveChangesAsync();
    }

    // In this usage, we're "cascading" a new message of type
    // Message2
    public Task HandleAsync(Message1 message, IDocumentSession session)
    {
        return Task.FromResult(new Message2());
    }

    // In this usage we're "cascading" 0 to many additional
    // messages from the return value
    public IEnumerable Handle(Message3 message)
    {
        yield return new Message1();
        yield return new Message2();
    }

    // It's perfectly valid to have multiple handler methods
    // for a given message type. Each will be called in the
    // sequence they were discovered
    public void Consume(Message1 input, IEmailService emails)
    {
    }

    // You can inject additional services directly into the handler
    // method
    public ValueTask ConsumeAsync(Message3 weirdName, IEmailService service)
    {
        return ValueTask.CompletedTask;
    }

    public interface IEvent
    {
        string CustomerId { get; }
        Guid Id { get; }
    }
}
```

snippet source | anchor

The valid method names are:

1. Handle / HandleAsync
2. Handles / HandlesAsync
3. Consume / ConsumeAsync
4. Consumes / ConsumesAsync

And also specific to sagas:

1. Start / StartAsync
2. Starts / StartsAsync
3. Orchestrate / OrchestrateAsync
4. Orchestrates / OrchestratesAsync
5. StartOrHandle / StartOrHandleAsync
6. StartsOrHandles / StartsOrHandlesAsync
7.
NotFound / NotFoundAsync See [Stateful Sagas](/guide/durability/sagas) for more information. ## Disabling Conventional Discovery ::: warning Note that disabling conventional discovery will *also* disable any customizations you may have made to the conventional handler discovery ::: You can completely turn off any automatic discovery of message handlers through type scanning by using this syntax in your `WolverineOptions`: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // No automatic discovery of handlers opts.Discovery.DisableConventionalDiscovery(); }).StartAsync(); ``` snippet source | anchor ## Replacing the Handler Discovery Rules You can completely replace Wolverine's handler type discovery by first disabling the conventional handler discovery, then adding all new rules like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Turn off Wolverine's built in handler discovery opts.DisableConventionalDiscovery(); // And replace the scanning with your own special discovery: opts.Discovery.CustomizeHandlerDiscovery(q => { q.Includes.WithNameSuffix("Listener"); }); }).StartAsync(); ``` snippet source | anchor ## Explicitly Ignoring Methods You can force Wolverine to disregard a candidate message handler action at either the class or method level by using the `[WolverineIgnore]` attribute like this: ```cs public class NetflixHandler : IMovieSink { public void Listen(MovieAdded added) { } public void Handles(IMovieEvent @event) { } public void Handles(MovieEvent @event) { } public void Consume(MovieAdded added) { } // Only this method will be ignored as // a handler method [WolverineIgnore] public void Handles(MovieAdded added) { } public void Handle(MovieAdded message, IMessageContext context) { } public static Task Handle(MovieRemoved removed) { return Task.CompletedTask; } } // All methods on this class will be ignored // as handler methods even though the class // name matches the discovery 
naming conventions
[WolverineIgnore]
public class BlockbusterHandler
{
    public void Handle(MovieAdded added)
    {
    }
}
```

snippet source | anchor

## Customizing Conventional Discovery

::: warning
Do note that handler finding conventions are additive, meaning that adding additional criteria does not disable the built in handler discovery
:::

The easiest way to use the Wolverine messaging functionality is to just code against the default conventions. However, if you wish to deviate from those naming conventions, you can either supplement the handler discovery or replace it completely with your own conventions. At a minimum, you can disable the built in discovery, add additional type filtering criteria, or register specific handler classes with the code below:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Discovery

            // Turn off the default handler conventions
            // altogether
            .DisableConventionalDiscovery()

            // Include candidate actions by a user supplied
            // type filter
            .CustomizeHandlerDiscovery(x =>
            {
                x.Includes.WithNameSuffix("Worker");
                x.Includes.WithNameSuffix("Listener");
            })

            // Include a specific handler class with a generic argument
            .IncludeType();
    }).StartAsync();
```

snippet source | anchor

---

--- url: /guide/handlers.md ---

# Message Handlers

To be as clear as possible, **there is zero runtime Reflection happening within the Wolverine execution runtime pipeline**. Like all halfway serious frameworks, Wolverine only uses Reflection for configuration and bootstrapping. At actual runtime, Wolverine uses code generation (which might be dynamically compiled at runtime) to create the wrapping code that bridges Wolverine to your application code.

::: tip
Wolverine's guiding philosophy is to remove code ceremony from a developer's day to day coding, but that comes at the cost of using conventions that some developers will decry as "too much magic."
If you actually prefer having explicit interfaces or base classes or required attributes to direct your code, Wolverine will let you do that too, so don't go anywhere just yet! ::: Since the whole purpose of Wolverine is to connect incoming messages to handling code, most of your time as a user of Wolverine is going to be spent writing and testing Wolverine message handlers. Let's just jump right into the simplest possible message handler implementation: ```cs public class MyMessageHandler { public void Handle(MyMessage message) { Console.WriteLine("I got a message!"); } } ``` snippet source | anchor If you've used other messaging, command execution, or so-called "mediator" tools in .NET, you'll surely notice the absence of any kind of required `IHandler` type interface that frameworks typically require in order to make your custom code executable by the framework. Instead, Wolverine intelligently wraps dynamic code around *your* code based on naming conventions to allow *you* to just write plain old .NET code without any framework specific artifacts in your way. Back to the handler code, at the point which you pass a new message into Wolverine like so: ```cs public static async Task publish_command(IMessageBus bus) { await bus.PublishAsync(new MyMessage()); } ``` snippet source | anchor Between the call to `IMessageBus.PublishAsync()` and `MyMessageHandler.Handle(MyMessage)` there's a couple things going on: 1. Wolverine's built in, [automatic handler discovery](/guide/handlers/discovery) has to find the candidate message handler methods and correlate them by message type 2. 
Wolverine's [runtime message processing](/guide/runtime) builds some connective code at runtime to relay the messages passed into `IMessageBus` to the right message handler methods. Before diving into the exact rules for message handlers, here are some valid handler methods: ```cs [WolverineHandler] public class ValidMessageHandlers { // There's only one argument, so we'll assume that // argument is the message public void Handle(Message1 something) { } // The parameter named "message" is assumed to be the message type public Task ConsumeAsync(Message1 message, IDocumentSession session) { return session.SaveChangesAsync(); } // In this usage, we're "cascading" a new message of type // Message2 public Task<Message2> HandleAsync(Message1 message, IDocumentSession session) { return Task.FromResult(new Message2()); } // In this usage we're "cascading" 0 to many additional // messages from the return value public IEnumerable<object> Handle(Message3 message) { yield return new Message1(); yield return new Message2(); } // It's perfectly valid to have multiple handler methods // for a given message type. 
Each will be called in the sequence // they were discovered public void Consume(Message1 input, IEmailService emails) { } // You can inject additional services directly into the handler // method public ValueTask ConsumeAsync(Message3 weirdName, IEmailService service) { return ValueTask.CompletedTask; } public interface IEvent { string CustomerId { get; } Guid Id { get; } } } ``` snippet source | anchor It's also valid to use class instances with constructor arguments for your handlers: ```cs // Wolverine does constructor injection as you're probably // used to with basically every other framework in .NET public class CreateProjectHandler(IProjectRepository Repository) { public async Task HandleAsync(CreateProject message) { await Repository.CreateAsync(new Project(message.Name)); } } ``` snippet source | anchor ## Rules for Message Handlers ::: info The naming conventions in Wolverine are descended from a much earlier tool (FubuTransportation circa 2013, which was in turn meant to replace an even older tool called Rhino Service Bus) and the exact origins of the particular names are lost in the mists of time ::: * Message handlers must be public types with a public constructor. Sorry folks, but the code generation strategy that Wolverine uses requires this. * Likewise, the handler methods must also be public * Yet again, the message type must be public * The first argument of the handler method must be the message type * It's legal to connect multiple handler methods to a single message type. 
Whether that's a good idea or not is up to you and your use case * Handler methods can be either instance methods or static methods * It's legal to accept either an interface or abstract class as a message type, but read the documentation on that below first For naming conventions: * Handler type names should be suffixed with either `Handler` or `Consumer` * Handler method names should be either `Handle()` or `Consume()` Also see [stateful sagas](/guide/durability/sagas) as they have some additional rules. See [return values](./return-values) for much more information about what types can be returned from a handler method and how Wolverine would use those values. ## Multiple Handlers for the Same Message Type ::: tip Pay attention to this section if you are trying to utilize a "Modular Monolith" architecture. ::: ::: info The `Separated` setting is useful even with `Saga` handlers as of Wolverine 5.10, but ignored in previous versions. ::: Let's say that you want to take more than one action on a message type published in or to your application. In this case you'll probably have more than one handler method for the exact same message type. **The original concept for Wolverine was to effectively combine these individual handlers into one logical handler that executes together, and in the same logical transaction (if you use transactional middleware).** This very old decision has turned out to be harmful for folks trying to use Wolverine with newer ideas about "Modular Monolith" architectures or "Event Driven Architecture" approaches where you much more frequently take completely independent actions on the same message. To alleviate this issue of combining handlers, Wolverine first introduced the [Sticky Handler concept](/guide/handlers/sticky) in Wolverine 3.0 where you're able to explicitly separate handlers for the same message type and "stick" them against different listening endpoints or local queues. 
Now though, you can flip this switch in one place to ensure that Wolverine always "separates" handlers for the same message type into completely separate Wolverine message handlers and message subscriptions: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Right here, tell Wolverine to make every handler "sticky" opts.MultipleHandlerBehavior = MultipleHandlerBehavior.Separated; }).StartAsync(); ``` snippet source | anchor This makes a couple of changes. For example, let's say that you have these handlers for the same message type of `MyApp.Orders.OrderCreated`: 1. `MyApp.Module1.OrderCreatedHandler` 2. `MyApp.Module2.OrderCreatedHandler` 3. `MyApp.Module3.OrderCreatedHandler` In the default `ClassicCombineIntoOneLogicalHandler` mode, Wolverine will combine all of those handlers into one logical handler that would be published (using default routing configuration) to a local queue named "MyApp.Orders.OrderCreated". By switching to the `Separated` mode, Wolverine will create three completely separate handlers and local subscriptions named: 1. "MyApp.Module1.OrderCreatedHandler" which only executes the handler with the same full name 2. "MyApp.Module2.OrderCreatedHandler" which only executes the handler with the same full name 3. "MyApp.Module3.OrderCreatedHandler" which only executes the handler with the same full name Likewise, if you were using conventional routing for an external message broker, using the `Separated` mode will create separate listeners for each individual handler type and key the naming off of the handler type. So if you were using the baseline Rabbit MQ conventions and `Separated`, you would end up with three Rabbit MQ queues that each had a "sticky" relationship to one particular handler like so: 1. Listening to a queue named "MyApp.Module1.OrderCreatedHandler" that executes the `MyApp.Module1.OrderCreatedHandler` handler 2. And you get the picture... 
In all cases, if you are using one of the built in message broker conventional routing approaches, Wolverine will create a separate listener for each handler using the handler type to determine the queue / subscription / topic names instead of the message type *if* there is more than one handler for that type. ## Message Handler Parameters ::: info If you're thinking to yourself, hmm, the method injection seems a lot like ASP.NET Core Minimal APIs, Wolverine has been baking an embarrassingly long time and had that implemented years earlier. Just saying. ::: The first argument always has to be the message type, but after that, you can accept: * Additional services from your application's IoC container * `Envelope` from Wolverine to interrogate metadata about the current message * `IMessageContext` or `IMessageBus` from Wolverine scoped to the current message being handled * `CancellationToken` for the current message execution to check for timeouts or system shut down * `DateTime now` or `DateTimeOffset now` for the current time. Don't laugh, I like doing this for testability's sake. Some add ons or middleware add other possibilities as well. ## Handler Lifecycle & Service Dependencies Handler methods can be instance methods on handler classes if it's desirable to scope the handler object to the message: ```cs public class ExampleHandler { public void Handle(Message1 message) { // Do work synchronously } public Task Handle(Message2 message) { // Do work asynchronously return Task.CompletedTask; } } ``` snippet source | anchor When using instance methods, the containing handler type will be scoped to a single message and be disposed afterward. 
In the case of instance methods, it's perfectly legal to use constructor injection to resolve IoC registered dependencies as shown below: ```cs public class ServiceUsingHandler { private readonly IDocumentSession _session; public ServiceUsingHandler(IDocumentSession session) { _session = session; } public Task Handle(InvoiceCreated created) { var invoice = new Invoice { Id = created.InvoiceId }; _session.Store(invoice); return _session.SaveChangesAsync(); } } ``` snippet source | anchor ::: tip Using a static method as your message handler can be a small performance improvement by avoiding the need to create and garbage collect new objects at runtime. ::: As an alternative, you can also use static methods as message handlers: ```cs public static class ExampleHandler { public static void Handle(Message1 message) { // Do work synchronously } public static Task Handle(Message2 message) { // Do work asynchronously return Task.CompletedTask; } } ``` snippet source | anchor The handler classes can be static classes as well. This technique gets much more useful when combined with Wolverine's support for method injection in a following section. ## Method Injection Similar to ASP.NET Core, Wolverine supports the concept of [method injection](https://www.martinfowler.com/articles/injection.html) in handler methods where you can just accept additional arguments that will be passed into your method by Wolverine when a new message is being handled. Below is an example action method that takes in a dependency on an `IDocumentSession` from [Marten](https://jasperfx.github.io/marten/): ```cs public static class MethodInjectionHandler { public static Task Handle(InvoiceCreated message, IDocumentSession session) { var invoice = new Invoice { Id = message.InvoiceId }; session.Store(invoice); return session.SaveChangesAsync(); } } ``` snippet source | anchor So, what can be injected as an argument to your message handler? 1. 
Any service that is registered in your application's IoC container 2. `Envelope` 3. The current time in UTC if you have a parameter like `DateTime now` or `DateTimeOffset now` 4. Services or variables that match a registered code generation strategy. ## Cascading Messages from Actions See [Cascading Messages](/guide/handlers/cascading) for more details on this feature. Just know that a message "cascaded" from a handler is effectively the same thing as calling `IMessageBus.PublishAsync()` and gets handled independently from the originating message. ## "Compound Handlers" ::: info Wolverine's "compound handler" feature where handlers can be built from multiple methods that are called one at a time by Wolverine was heavily inspired by Jim Shore's writing on the "A-Frame Architecture". See Jeremy's post [A-Frame Architecture with Wolverine](https://jeremydmiller.com/2023/07/19/a-frame-architecture-with-wolverine/) for more background on the goals and philosophy behind this approach. ::: It's frequently advantageous to split message handling for a single message up into methods that load any necessary data and the business logic that transforms the current state or decides to take other actions. Wolverine allows you to use the [conventional middleware naming conventions](/guide/handlers/middleware.html#conventional-middleware) on each handler to do exactly this. The goal here is to use separate methods for different concerns like loading data or validating data so that the "main" message handler (or HTTP endpoint method) can be a pure function that is completely focused on domain logic or business workflow logic for easy reasoning and effective unit testing. This is Wolverine's way of creating separation of concerns in a vertical slice without incurring the overhead of typical Onion/Clean/Hexagonal/Ports and Adapters code organization strategies. That's a lot of words, so let's consider the case of a message handler that is used to initiate the shipment of an order. 
That handler will ultimately need to load data for both the order itself and the customer information in order to figure out exactly what to ship out, how to ship it (overnight air? 2 day ground delivery?), and where. Using Wolverine's compound handler feature, that might look like this: ```cs public static class ShipOrderHandler { // This would be called first public static async Task<(Order, Customer)> LoadAsync(ShipOrder command, IDocumentSession session) { var order = await session.LoadAsync<Order>(command.OrderId); if (order == null) { throw new MissingOrderException(command.OrderId); } var customer = await session.LoadAsync<Customer>(command.CustomerId); return (order, customer); } // By making this method completely synchronous and having it just receive the // data it needs to make determinations of what to do next, Wolverine makes this // business logic easy to unit test public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer) { // use the command data, plus the related Order & Customer data to // "decide" what action to take next yield return new MailOvernight(order.Id); } } ``` snippet source | anchor ::: warning You may need to use separate handlers for separate messages if you want to use `Before/After/Validate/Load` methods that target a specific message type. Wolverine is not (yet) smart enough to filter out the application of the implied middleware by message type and may throw exceptions on code compilation in some cases. Again, the easy workaround is to just use separate message handler types for different message types in this case. 
::: The naming conventions for what Wolverine will consider to be either a "before" or "after" method are shown below: | Lifecycle | Method Names | |----------------------------------------------------------|-----------------------------| | Before the Handler(s) | `Before`, `BeforeAsync`, `Load`, `LoadAsync`, `Validate`, `ValidateAsync` | | After the Handler(s) | `After`, `AfterAsync`, `PostProcess`, `PostProcessAsync` | | In `finally` blocks after the Handlers & "After" methods | `Finally`, `FinallyAsync` | The exact name has no impact on functionality, but the idiom is that `Load/LoadAsync` is used to load input data for the main handler method. These methods can be thought of as "setting the table" for whatever the main handler method actually needs to do. `Validate/ValidateAsync` are primarily for validating the incoming command or HTTP request against the current system state to "decide" if the message handling or HTTP request should continue. The choice of method name is really up to you as a description of what that method actually does. You can also mark any public method on a message handler or HTTP endpoint class with the Wolverine `[Before]` or `[After]` attributes so that you can use more specifically descriptive method names. These methods are mostly ordered from top to bottom depending on the order you define them in your handler class -- but Wolverine will reorder the methods when one method produces an input to another method. In a way, think of the compound handler technique in Wolverine as a cousin to [Railway Programming](https://fsharpforfunandprofit.com/rop/). 
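To make the testability claim above concrete, here is a minimal, self-contained sketch of a unit test against the pure "decision" method from the `ShipOrderHandler` example. The `ShipOrder`, `Order`, `Customer`, and `MailOvernight` types here are simplified stand-ins defined inline for the example, not the real application types:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

// Exercise the pure Handle() method directly -- no Wolverine
// infrastructure, no database, and no mocks required
var order = new Order(Guid.NewGuid());
var customer = new Customer(Guid.NewGuid());
var command = new ShipOrder(order.Id, customer.Id);

var messages = ShipOrderHandler.Handle(command, order, customer).ToList();

// The handler "decided" to cascade a single MailOvernight message
Debug.Assert(messages.Single() is MailOvernight mail && mail.OrderId == order.Id);
Console.WriteLine("handler decision verified");

// Simplified stand-in types so the example compiles on its own
public record ShipOrder(Guid OrderId, Guid CustomerId);
public record Order(Guid Id);
public record Customer(Guid Id);
public record MailOvernight(Guid OrderId);

public static class ShipOrderHandler
{
    // Mirrors the synchronous, pure "decision" method shown above
    public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer)
    {
        yield return new MailOvernight(order.Id);
    }
}
```

Because the `Load/LoadAsync` method does all the data access, the test above never has to touch `IDocumentSession` at all.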
## Using the Message Envelope To access the `Envelope` for the current message being handled in your message handler, just accept `Envelope` as a method argument like this: ```cs public class EnvelopeUsingHandler { public void Handle(InvoiceCreated message, Envelope envelope) { var howOldIsThisMessage = DateTimeOffset.Now.Subtract(envelope.SentAt); } } ``` snippet source | anchor ## Using the Current IMessageContext If you want to access or use the current `IMessageContext` for the message being handled to send response messages or maybe to enqueue local commands within the current outbox scope, just take in `IMessageContext` as a method argument like in this example: ```cs using Messages; using Microsoft.Extensions.Logging; using Wolverine; namespace Ponger; public class PingHandler { public ValueTask Handle(Ping ping, ILogger logger, IMessageContext context) { logger.LogInformation("Got Ping #{Number}", ping.Number); return context.RespondToSenderAsync(new Pong { Number = ping.Number }); } } ``` snippet source | anchor ```cs public static class PingHandler { // Simple message handler for the PingMessage message type public static ValueTask Handle( // The first argument is assumed to be the message type PingMessage message, // Wolverine supports method injection similar to ASP.Net Core MVC // In this case though, IMessageContext is scoped to the message // being handled IMessageContext context) { AnsiConsole.MarkupLine($"[blue]Got ping #{message.Number}[/]"); var response = new PongMessage { Number = message.Number }; // This usage will send the response message // back to the original sender. 
Wolverine uses message // headers to embed the reply address for exactly // this use case return context.RespondToSenderAsync(response); } } ``` snippet source | anchor --- --- url: /guide/messaging/subscriptions.md --- # Message Routing When you publish a message using `IMessageBus` or `IMessageContext`, Wolverine uses its concept of subscriptions to know how and where to send the message. Consider this code that publishes a `PingMessage`: ```cs public class SendingExample { public async Task SendPingsAndPongs(IMessageContext bus) { // Publish a message await bus.SendAsync(new PingMessage()); } } ``` snippet source | anchor ## Routing Rules ::: info There are some special message type routing rules for some Wolverine internal messages and for the Marten event forwarding through its `IEvent` wrappers. Because of course there are some oddball exception cases. ::: When sending, publishing, scheduling, or invoking a message type for the first time, Wolverine runs through a series of rules to determine what endpoint(s) subscribe to the message type. Those rules in order of precedence are: 1. Is the message type "forwarded" to another message type? If so, the routing uses the destination type. See [message forwarding](/guide/messages.html#versioned-message-forwarding) for more information. 2. Are there any explicit routing rules that apply to this message type? If so, use *only* the subscriptions discovered from explicit rules (as explained in a following section). 3. Use a local subscription using the conventional local queue routing if the message type has a known message handler within the application. This [conventional routing to local queues can be disabled](/guide/messaging/transports/local.html#disable-conventional-local-routing) or made "additive" so that Wolverine *also* applies other conventional routing. 4. Any registered message routing conventions like the Rabbit MQ or Amazon SQS routing conventions or a user defined routing convention. 
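As a quick illustration of that precedence order, consider this hedged bootstrapping sketch. The `OrderPlaced` and `OrderShipped` message types and the port number are made up for the example:

```csharp
using Microsoft.Extensions.Hosting;
using Wolverine;

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Rule #2: an explicit routing rule exists for OrderPlaced,
        // so *only* this subscription is used for that message type,
        // even if this application also has a local handler for it
        opts.PublishMessage<OrderPlaced>().ToPort(5555);

        // OrderShipped has no explicit rule. If there is a known local
        // handler, rule #3 routes it to the conventional local queue;
        // otherwise any registered broker conventions (rule #4) apply
    }).StartAsync();

// Hypothetical message types for the example
public record OrderPlaced;
public record OrderShipped;
```

When in doubt about which rule actually won for a message type, use the diagnostics described in the next section rather than guessing.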
## Diagnostics There's admittedly a lot of switches and options for message routing, and it's quite possible that the actual behavior could be confusing, especially with unusual configuration usages. Not to worry (too much), because Wolverine gives you a couple of options to preview exactly what the subscriptions are for a given message type that you can use to check your understanding of the Wolverine configuration. Programmatically, this code shows how to "look" into the configured Wolverine subscriptions for a message type: ```cs public static void PreviewRouting(IHost host) { // In test projects, you would probably have access to the IHost for // the running application // First, get access to the Wolverine runtime for the application // It's registered by Wolverine as a singleton in your IoC container var runtime = host.Services.GetRequiredService<IWolverineRuntime>(); var router = runtime.RoutingFor(typeof(MyMessage)); // If using Wolverine 3.6 or later when we added more // ToString() behavior for exactly this reason foreach (var messageRoute in router.Routes) { Debug.WriteLine(messageRoute); } // Otherwise, you might have to do this to "see" where // the routing is going foreach (var route in router.Routes.OfType<MessageRoute>()) { Debug.WriteLine(route.Sender.Destination); } } ``` snippet source | anchor First, you can always use the [command line support](/guide/command-line) to preview Wolverine's known message types by using: ```bash dotnet run -- describe ``` You might have to scroll a little bit, but there is a section that previews message subscriptions by type as a tabular output from that command. ::: tip The command line preview can only show subscriptions for the message types that Wolverine "knows" it will try to send at bootstrapping time. See [Message Discovery](/guide/messages.html#message-discovery) for how to better utilize this preview functionality by "telling" Wolverine what your outgoing message types are. 
::: ## Explicit Subscriptions To route messages to specific endpoints, we can apply static message routing rules by using a routing rule as shown below: ```cs using var host = Host.CreateDefaultBuilder() .UseWolverine(opts => { // Route a single message type opts.PublishMessage() .ToServerAndPort("server", 1111); // Send every possible message to a TCP listener // on this box at port 2222 opts.PublishAllMessages().ToPort(2222); // Or use a more fluent interface style opts.Publish().MessagesFromAssembly(typeof(PingMessage).Assembly) .ToPort(3333); // Complicated rules, I don't think folks will use this much opts.Publish(rule => { // Apply as many message matching // rules as you need // Specific message types rule.Message(); rule.Message(); // Implementing a specific marker interface or common base class rule.MessagesImplementing(); // All types in a certain assembly rule.MessagesFromAssemblyContaining(); // or this rule.MessagesFromAssembly(typeof(PingMessage).Assembly); // or by namespace rule.MessagesFromNamespace("MyMessageLibrary"); rule.MessagesFromNamespaceContaining(); // Express the subscribers rule.ToPort(1111); rule.ToPort(2222); }); // Or you just send all messages to a certain endpoint opts.PublishAllMessages().ToPort(3333); }).StartAsync(); ``` snippet source | anchor Do note that doing the message type filtering by namespace will also include child namespaces. In our own usage we try to rely on either namespace rules or by using shared message assemblies. ## Disabling Local Routing Hey, it's perfectly possible that you want all messages going through external message brokers even when the message types all have known message handlers in the application. 
To do that, simply disable the automatic local message routing like this: ```cs public static async Task disable_queue_routing() { using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // This will disable the conventional local queue // routing that would take precedence over other conventional // routing opts.Policies.DisableConventionalLocalRouting(); // Other routing conventions. Rabbit MQ? SQS? }).StartAsync(); } ``` snippet source | anchor This does allow you to possibly do better load balancing between application nodes. ## Using Both Local Routing and External Broker Conventional Routing You may want *both* the local routing conventions and external routing conventions to apply to the same message type. An early Wolverine user needed to both handle an event message created by their application locally inside that application (through a local queue), and to publish the same event message through external brokers to a different system. You can now make the local routing conventions be "additive" such that the message routing will also use external routing conventions even with local handlers like this: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var rabbitConnectionString = builder .Configuration.GetConnectionString("rabbitmq"); opts.UseRabbitMq(rabbitConnectionString) .AutoProvision() // Using the built in, default Rabbit MQ message routing conventions .UseConventionalRouting(); // Allow Wolverine to *also* apply the Rabbit MQ conventional // routing to message types that this system can handle locally opts.Policies.ConventionalLocalRoutingIsAdditive(); }); ``` snippet source | anchor ## Routing Internals Wolverine has an internal model called `IMessageRoute` that models a subscription for a message type that "knows" how to create the Wolverine `Envelope` for a single outgoing message to a single subscribing endpoint: ```cs /// <summary> /// Contains all the rules for where and how an outgoing message /// should be sent to a single subscriber /// </summary> public interface IMessageRoute { Envelope CreateForSending(object message, DeliveryOptions? options, ISendingAgent localDurableQueue, WolverineRuntime runtime, string? topicName); MessageSubscriptionDescriptor Describe(); } ``` snippet source | anchor This type "knows" about any endpoint or model sending customizations like delivery expiration rules or, in some cases, some user defined logic to determine the topic name for the message at runtime for message broker endpoints that support topic based publishing. At runtime, when you decide to publish a message (and this applies to cascading messages in handlers), the workflow in the Wolverine internals is below: ```mermaid sequenceDiagram Application->>MessageBus:PublishAsync(message) MessageBus->>WolverineRuntime:RoutingFor(message type) WolverineRuntime-->>MessageBus:IMessageRouter MessageBus->>IMessageRouter:RouteForPublish(message, options) loop Every IMessageRoute IMessageRouter->>IMessageRoute:CreateForSending(message, options, ...) end IMessageRoute-->>IMessageRouter:Envelope IMessageRouter-->>MessageBus:Envelopes ``` As to *how* Wolverine determines the message routing, the internals are shown below: ```mermaid classDiagram IMessageRoute MessageRoute..|>IMessageRoute TopicRouting..|>IMessageRoute IMessageRouteSource IMessageRouteSource..>IMessageRoute:Builds ExplicitRouting..>IMessageRouteSource AgentMessages..>IMessageRouteSource MessageRoutingConventions..>IMessageRouteSource MessageRoutingConventions..>IMessageRoutingConvention:0..* IMessageRoutingConvention : DiscoverListeners(runtime, handled message types) IMessageRoutingConvention : Endpoints DiscoverSenders(messageType, runtime) TransformedMessageRouteSource..>IMessageRouteSource TransformedMessageRoute..>IMessageRoute ``` Wolverine has a handful of built in `IMessageRouteSource` implementations in precedence order: 1. 
`TransformedMessageRouteSource` - only really used by the Marten event sourcing support to "know" to forward messages of type `IEvent` to the event type `T` 2. `AgentMessages` - just for Wolverine's own internal `IAgentCommand` commands 3. `ExplicitRouting` - explicitly defined subscription rules from the Wolverine fluent interface (`PublishMessage().ToRabbitQueue("foo")`) 4. `LocalRouting` - if a message type has a known handler type in the system, Wolverine will route the message to any local queues for that message type 5. `MessageRoutingConventions` - this would be any message routing conventions enabled for external transport brokers like Rabbit MQ or Azure Service Bus. This could also be a custom message routing convention. ## Rolling Your own Messaging Convention Let's say you want to use a completely different conventional routing topology than anything Wolverine provides out of the box. You can do that by creating your own implementation of this interface: ```cs /// <summary> /// Plugin for doing any kind of conventional message routing /// </summary> public interface IMessageRoutingConvention { /// <summary> /// Use this to define listening endpoints based on the known message handlers for the application /// </summary> void DiscoverListeners(IWolverineRuntime runtime, IReadOnlyList<Type> handledMessageTypes); /// <summary> /// Create outgoing subscriptions for the application for the given message type /// </summary> IEnumerable<Endpoint> DiscoverSenders(Type messageType, IWolverineRuntime runtime); } ``` snippet source | anchor As a concrete example, the Wolverine team received [this request](https://github.com/JasperFx/wolverine/issues/1130) to conventionally route messages based on the message type name to a [Rabbit MQ exchange and routing key](https://www.rabbitmq.com/tutorials/tutorial-four-dotnet). 
That's not something that Wolverine supports out of the box, but you could build your own simplistic routing convention like this: ```cs public class RouteKeyConvention : IMessageRoutingConvention { private readonly string _exchangeName; public RouteKeyConvention(string exchangeName) { _exchangeName = exchangeName; } public void DiscoverListeners(IWolverineRuntime runtime, IReadOnlyList<Type> handledMessageTypes) { // Not worrying about this at all for this case } public IEnumerable<Endpoint> DiscoverSenders(Type messageType, IWolverineRuntime runtime) { var routingKey = messageType.FullNameInCode().ToLowerInvariant(); var rabbitTransport = runtime.Options.Transports.GetOrCreate<RabbitMqTransport>(); // Find or create the named Rabbit MQ exchange in Wolverine's model var exchange = rabbitTransport.Exchanges[_exchangeName]; // Find or create the named routing key / binding key // in Wolverine's model var routing = exchange.Routings[routingKey]; // Tell Wolverine you want the message type routed to this // endpoint yield return routing; } } ``` snippet source | anchor And register it to your Wolverine application like so: ```cs var builder = Host.CreateApplicationBuilder(); var rabbitConnectionString = builder .Configuration .GetConnectionString("rabbitmq"); builder.UseWolverine(opts => { opts.UseRabbitMq(rabbitConnectionString) .AutoProvision(); var exchangeName = builder .Configuration .GetValue<string>("exchange-name"); opts.RouteWith(new RouteKeyConvention(exchangeName)); }); // actually start the app... ``` snippet source | anchor --- --- url: /guide/handlers/timeout.md --- # Message Timeouts You don't want your Wolverine application to effectively become non-responsive because of a handful of messages that accidentally run in an infinite loop and therefore block all other message execution. To that end, Wolverine lets you enforce configurable execution timeouts. Wolverine does this by setting a timeout on a `CancellationToken` used within the message execution. 
To play nicely with this timeout, you should take in a `CancellationToken` in your asynchronous message handler methods and use that within asynchronous method calls. When a timeout occurs, a `TaskCanceledException` will be thrown. To override the default message timeout of 60 seconds, use this syntax at bootstrapping time: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.DefaultExecutionTimeout = 1.Minutes(); }).StartAsync(); ``` snippet source | anchor To override the message timeout on a message type by message type basis, you can use the `[MessageTimeout]` attribute as shown below: ```cs [MessageTimeout(1)] public async Task Handle(PotentiallySlowMessage message, CancellationToken cancellationToken) ``` snippet source | anchor --- --- url: /guide/messages.md --- # Messages and Serialization The ultimate goal of Wolverine is to allow developers to route messages representing some work to do within the system to the proper handler that can handle that message. Here are some facts about messages in Wolverine: * By role, you can think of messages as either a command you want to execute or as an event raised somewhere in your system that you want to be handled by separate code or in a separate thread * Messages in Wolverine **must be public types** * Unlike other .NET messaging or command handling frameworks, there's no requirement for Wolverine messages to be an interface or require any mandatory interface or framework base classes * Messages have a string identity for the message type that Wolverine will use as identification when storing messages in either durable message storage or within external transports The default serialization option is [System.Text.Json](https://learn.microsoft.com/en-us/dotnet/api/system.text.json?view=net-8.0), as this is now mature, seems to work with just about anything, and sets you up for relatively easy integration with a range of external non-Wolverine applications. 
You also have the option to fall back to Newtonsoft.Json, or to use the higher performance [MemoryPack](/guide/messages.html#memorypack-serialization), [MessagePack](/guide/messages.html#messagepack-serialization), or [Protobuf](/guide/messages.html#protobuf-serialization) integrations with Wolverine. ## Message Type Name or Alias Let's say that you have a basic message structure like this: ```cs public class PersonBorn { public string FirstName { get; set; } public string LastName { get; set; } // This is obviously a contrived example // so just let this go for now;) public int Day { get; set; } public int Month { get; set; } public int Year { get; set; } } ``` snippet source | anchor By default, Wolverine will identify this type by just using the .NET full name like so: ```cs [Fact] public void message_alias_is_fullname_by_default() { new Envelope(new PersonBorn()) .MessageType.ShouldBe(typeof(PersonBorn).FullName); } ``` snippet source | anchor However, if you want to explicitly control the message type because you aren't sharing the DTO types or for some other reason (readability? diagnostics?), you can override the message type alias with an attribute: ```cs [MessageIdentity("person-born")] public class PersonBorn { public string FirstName { get; set; } public string LastName { get; set; } public int Day { get; set; } public int Month { get; set; } public int Year { get; set; } } ``` snippet source | anchor Which now gives you different behavior: ```cs [Fact] public void message_alias_is_overridden_by_attribute() { new Envelope(new PersonBorn()) .MessageType.ShouldBe("person-born"); } ``` snippet source | anchor ## Message Discovery ::: tip Wolverine does not yet support the Async API standard, but the message discovery described in this section is also partially meant to enable that support later.
::: Strictly for diagnostic purposes in Wolverine (like the message routing preview report in `dotnet run -- describe`), you can mark your message types to help Wolverine "discover" outgoing message types that will be published by the application by implementing one of these marker interfaces (all in the main `Wolverine` namespace): ```cs public record CreateIssue(string Name) : IMessage; public record DeleteIssue(Guid Id) : IMessage; public record IssueCreated(Guid Id, string Name) : IMessage; ``` snippet source | anchor ::: tip The marker types shown above may be helpful in transitioning an existing codebase from NServiceBus to Wolverine. ::: You can optionally use an attribute to mark a type as a message: ```cs [WolverineMessage] public record CloseIssue(Guid Id); ``` snippet source | anchor Or lastly, make up your own criteria to find and mark message types within your system as shown below: ```cs opts.Discovery.CustomizeHandlerDiscovery(types => types.Includes.Implements<IDiagnosticsMessage>()); ``` snippet source | anchor Note that only types in assemblies marked with `[assembly: WolverineModule]`, the main application assembly, or an explicitly registered assembly will be discovered. See [Handler Discovery](/guide/handlers/discovery) for more information about the assembly scanning. ## Versioning By default, Wolverine will just assume that any message is "V1" unless marked otherwise. Going back to the original `PersonBorn` message class in previous sections, let's say that you create a new version of that message that is no longer structurally equivalent to the original message: ```cs [MessageIdentity("person-born", Version = 2)] public class PersonBornV2 { public string FirstName { get; set; } public string LastName { get; set; } public DateTime Birthday { get; set; } } ``` snippet source | anchor The `[MessageIdentity("person-born", Version = 2)]` attribute usage tells Wolverine that this class is "Version 2" for the `message-type` = "person-born."
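The identity-plus-version pair maps onto a versioned content type. Here's a small, language-agnostic sketch of that naming convention (the function name is illustrative, not a Wolverine API, and assumes the `application/vnd.{identity}.v{version}+json` pattern):

```python
def versioned_content_type(identity: str, version: int, fmt: str = "json") -> str:
    """Build a vendor media type from a message identity and version."""
    return f"application/vnd.{identity}.v{version}+{fmt}"


# The "person-born" identity at version 2 yields:
# application/vnd.person-born.v2+json
```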
Wolverine will now accept or publish this message using the built-in Json serialization with the content type of `application/vnd.person-born.v2+json`. Any custom serializers should follow some kind of naming convention for content types that identify versioned representations. ## Serialization ::: warning Just in time for 1.0, Wolverine switched to using System.Text.Json as the default serializer instead of Newtonsoft.Json. Fingers crossed! ::: Wolverine needs to be able to serialize and deserialize your message objects when sending messages with external transports like Rabbit MQ or when using the inbox/outbox message storage. To that end, the default serialization is performed with [System.Text.Json](https://docs.microsoft.com/en-us/dotnet/api/system.text.json?view=net-6.0), but you may also opt into using the old, battle-tested Newtonsoft.Json. To opt into using System.Text.Json with non-default settings -- which can give you better performance, but with an increased risk of serialization failures -- use this syntax where `opts` is a `WolverineOptions` object: ```cs opts.UseSystemTextJsonForSerialization(stj => { stj.UnknownTypeHandling = JsonUnknownTypeHandling.JsonNode; }); ``` snippet source | anchor When using Newtonsoft.Json, the default configuration is: ```cs return new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.Auto, PreserveReferencesHandling = PreserveReferencesHandling.Objects }; ``` snippet source | anchor To customize the Newtonsoft.Json serialization, use this option: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseNewtonsoftForSerialization(settings => { settings.ConstructorHandling = ConstructorHandling.AllowNonPublicDefaultConstructor; }); }).StartAsync(); ``` snippet source | anchor ### MessagePack Serialization Wolverine supports the [MessagePack](https://github.com/neuecc/MessagePack-CSharp) serializer for message serialization through the `WolverineFx.MessagePack` Nuget package.
To enable MessagePack serialization through the entire application, use: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Make MessagePack the default serializer throughout this application opts.UseMessagePackSerialization(); }).StartAsync(); ``` snippet source | anchor Likewise, you can use MessagePack on selected endpoints like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Use MessagePack on a local queue opts.LocalQueue("one").UseMessagePackSerialization(); // Use MessagePack on a listening endpoint opts.ListenAtPort(2223).UseMessagePackSerialization(); // Use MessagePack on one subscriber opts.PublishAllMessages().ToPort(2222).UseMessagePackSerialization(); }).StartAsync(); ``` snippet source | anchor ### MemoryPack Serialization Wolverine supports the high performance [MemoryPack](https://github.com/Cysharp/MemoryPack) serializer through the `WolverineFx.MemoryPack` Nuget package. To enable MemoryPack serialization through the entire application, use: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Make MemoryPack the default serializer throughout this application opts.UseMemoryPackSerialization(); }).StartAsync(); ``` snippet source | anchor Likewise, you can use MemoryPack on selected endpoints like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Use MemoryPack on a local queue opts.LocalQueue("one").UseMemoryPackSerialization(); // Use MemoryPack on a listening endpoint opts.ListenAtPort(2223).UseMemoryPackSerialization(); // Use MemoryPack on one subscriber opts.PublishAllMessages().ToPort(2222).UseMemoryPackSerialization(); }).StartAsync(); ``` snippet source | anchor ### Protobuf Serialization Wolverine supports Google's data interchange format [Protobuf](https://github.com/protocolbuffers/protobuf) through the `WolverineFx.Protobuf` Nuget package. 
To enable Protobuf serialization through the entire application, use: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Make Protobuf the default serializer throughout this application opts.UseProtobufSerialization(); }).StartAsync(); ``` snippet source | anchor Likewise, you can use Protobuf on selected endpoints like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Use Protobuf on a local queue opts.LocalQueue("one").UseProtobufSerialization(); // Use Protobuf on a listening endpoint opts.ListenAtPort(2223).UseProtobufSerialization(); // Use Protobuf on one subscriber opts.PublishAllMessages().ToPort(2222).UseProtobufSerialization(); }).StartAsync(); ``` snippet source | anchor ## Versioned Message Forwarding If you make breaking changes to an incoming message in a later version, you can simply handle both versions of that message separately: ```cs public class PersonCreatedHandler { public static void Handle(PersonBorn person) { // do something w/ the message } public static void Handle(PersonBornV2 person) { // do something w/ the message } } ``` snippet source | anchor Or you could use a custom `IMessageDeserializer` to read incoming messages from V1 into the new V2 message type, or you can take advantage of message forwarding so you only need to handle one message type using the `IForwardsTo<T>` interface as shown below: ```cs public class PersonBorn : IForwardsTo<PersonBornV2> { public string FirstName { get; set; } public string LastName { get; set; } public int Day { get; set; } public int Month { get; set; } public int Year { get; set; } public PersonBornV2 Transform() { return new PersonBornV2 { FirstName = FirstName, LastName = LastName, Birthday = new DateTime(Year, Month, Day) }; } } ``` snippet source | anchor Which forwards to the current message type: ```cs [MessageIdentity("person-born", Version = 2)] public class PersonBornV2 { public string FirstName { get; set; } public string LastName {
get; set; } public DateTime Birthday { get; set; } } ``` snippet source | anchor Using this strategy, other systems could still send your system the original `application/vnd.person-born.v1+json` formatted message, and on the receiving end, Wolverine would know to deserialize the Json data into the `PersonBorn` object, then call its `Transform()` method to build out the `PersonBornV2` type that matches up with your message handler. ## "Self Serializing" Messages ::: info This was originally built for an unusual MQTT requirement, but is going to be used extensively by Wolverine internals as a tiny optimization ::: This is admittedly an oddball use case for micro-optimization, but you may embed the serialization logic for a message type right into the message type itself through Wolverine's `ISerializable` interface as shown below: ```cs public class SerializedMessage : ISerializable { public string Name { get; set; } = "Bob Schneider"; public byte[] Write() { return Encoding.Default.GetBytes(Name); } // You'll need at least C# 11 for static methods // on interfaces! public static object Read(byte[] bytes) { var name = Encoding.Default.GetString(bytes); return new SerializedMessage { Name = name }; } } ``` snippet source | anchor Wolverine will see the interface implementation of the message type, and automatically opt into using this "intrinsic" serialization. --- --- url: /guide/messaging/transports.md --- # Messaging Transports ## Building a new Transport In Wolverine parlance, a "transport" refers to one of Wolverine's adapter libraries that enable the usage of an external messaging infrastructure technology like Rabbit MQ or Pulsar. The local queues and [lightweight TCP transport](/tcp) come in the box with Wolverine, but you'll need an add on Nuget to enable any of the other transports. 
### Key Abstractions | Abstraction | Description | |--------------|-------------| | `ITransport` | Manages the connection to the messaging infrastructure like a Rabbit MQ broker and creates all the other objects referenced below | | `Endpoint` | The configuration for a sending or receiving address to your transport identified by a unique Uri scheme. For example, a Rabbit MQ endpoint may refer to a queue or an exchange and binding key. A TCP endpoint will refer to a server name and port number | | `IListener` | A service that helps read messages from the underlying message transport and relays those to Wolverine as Wolverine's `Envelope` structure | | `ISender` | A service that helps put Wolverine `Envelope` structures out into the outgoing messaging infrastructure | To build a new transport, we recommend looking first at the [Wolverine.Pulsar](https://github.com/JasperFx/wolverine/tree/main/src/Wolverine.Pulsar) library for a sample. At a bare minimum, you'll need to implement the services above, and also add some kind of `WolverineOptions.Use[TransportName]()` extension method to configure the connectivity to the messaging infrastructure and add the new transport to your Wolverine application. Also note, you will definitely want to use the [SendingCompliance](https://github.com/JasperFx/wolverine/blob/main/src/TestingSupport/Compliance/SendingCompliance.cs) tests in Wolverine to verify that your new transport meets all Wolverine requirements. --- --- url: /guide/handlers/middleware.md --- # Middleware ::: tip One of the big advantages of Wolverine's middleware model as compared to almost any other .NET application framework is that middleware can be selectively applied to only certain message handlers or HTTP endpoints.
When you craft your middleware, try to take advantage of this to avoid unnecessary runtime logic in middleware (i.e., for example, don't use Reflection or optional IoC service registrations to "decide" if middleware applies to the current HTTP request or message). ::: Wolverine supports the "Russian Doll" model of middleware, similar in concept to ASP.NET Core but very different in implementation. Wolverine's middleware uses runtime code generation and compilation with [JasperFx.CodeGeneration](https://github.com/jasperfx/jasperfx.codegeneration) (which is also used by [Marten](https://martendb.io)). What this means is that "middleware" in Wolverine is code that is woven right into the message and route handlers. The end result is a much more efficient runtime pipeline than most other frameworks that adopt the "Russian Doll" middleware approach that suffer performance issues because of the sheer number of object allocations. It also hopefully means that the exception stack traces from failures in Wolverine message handlers will be far less noisy than competitor tools and Wolverine's own predecessors. ::: tip Wolverine has [performance metrics](/guide/logging) around message execution out of the box, so this whole "stopwatch" sample is unnecessary. But it *was* an easy way to illustrate the middleware approach. ::: As an example, let's say you want to build some custom middleware that is a simple performance timing of either HTTP route execution or message execution. In essence, you want to inject code like this: ```cs var stopwatch = new Stopwatch(); stopwatch.Start(); try { // execute the HTTP request // or message } finally { stopwatch.Stop(); logger.LogInformation("Ran something in " + stopwatch.ElapsedMilliseconds); } ``` snippet source | anchor You've got a couple different options, but the easiest by far is to use Wolverine's conventional middleware approach. 
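To make the "Russian Doll" contrast above concrete, here's a hedged, language-agnostic Python sketch: the classic delegate-chaining style allocates a closure per middleware layer on every build of the pipeline, while Wolverine's generated code is equivalent to writing the flattened version by hand:

```python
def russian_doll(middlewares, handler):
    """Classic delegate chaining: each layer wraps the next in a new closure."""
    pipeline = handler
    for mw in reversed(middlewares):
        pipeline = (lambda m, nxt: lambda msg: m(msg, nxt))(mw, pipeline)
    return pipeline


calls = []  # records execution order for illustration


def stopwatch(msg, next_step):
    # one middleware layer wrapping the inner call
    calls.append("before")
    try:
        return next_step(msg)
    finally:
        calls.append("finally")


def handler(msg):
    calls.append("handle")
    return msg.upper()


wrapped = russian_doll([stopwatch], handler)


# Wolverine instead generates the equivalent *flattened* code up front,
# roughly like this, with no nested delegate dispatch at runtime:
def generated_handler(msg):
    calls.append("before")
    try:
        calls.append("handle")
        return msg.upper()
    finally:
        calls.append("finally")
```

Both produce the same observable behavior; the difference is that the flattened form is woven directly into the handler, which is what keeps allocations down and stack traces clean.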
## Conventional Middleware ::: info Conventional application of middleware is done separately between HTTP endpoints and message handlers. To apply global middleware to HTTP endpoints, see [HTTP endpoint middleware](/guide/http/middleware). ::: As an example middleware using Wolverine's conventional approach, here's the stopwatch functionality from above: ```cs public class StopwatchMiddleware { private readonly Stopwatch _stopwatch = new(); public void Before() { _stopwatch.Start(); } public void Finally(ILogger logger, Envelope envelope) { _stopwatch.Stop(); logger.LogDebug("Envelope {Id} / {MessageType} ran in {Duration} milliseconds", envelope.Id, envelope.MessageType, _stopwatch.ElapsedMilliseconds); } } ``` snippet source | anchor and that can be added to our application at bootstrapping time like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Apply our new middleware to message handlers, but optionally // filter it to only messages from a certain namespace opts.Policies .AddMiddleware<StopwatchMiddleware>(chain => chain.MessageType.IsInNamespace("MyApp.Messages.Important")); }).StartAsync(); ``` snippet source | anchor And just for the sake of completeness, here's another version of the same functionality, but this time using a static class *just* to save on object allocations: ```cs public static class StopwatchMiddleware2 { // The Stopwatch being returned from this method will // be passed back into the later method [MethodImpl(MethodImplOptions.AggressiveInlining)] public static Stopwatch Before() { var stopwatch = new Stopwatch(); stopwatch.Start(); return stopwatch; } [MethodImpl(MethodImplOptions.AggressiveInlining)] public static void Finally(Stopwatch stopwatch, ILogger logger, Envelope envelope) { stopwatch.Stop(); logger.LogDebug("Envelope {Id} / {MessageType} ran in {Duration} milliseconds", envelope.Id, envelope.MessageType, stopwatch.ElapsedMilliseconds); } } ``` snippet source | anchor Alright, let's talk about what's
happening in the code samples above: * You'll notice that I took in `ILogger` instead of any specific `ILogger<T>`. Wolverine is quietly using the `ILogger` for the current handler when it generates the code. * Wolverine places the `Before()` method to be called in front of the actual message handler method * Because there is a `Finally()` method, Wolverine wraps a `try/finally` block around the code running after the `Before()` method and calls `Finally()` within that `finally` block ::: tip Note that the method name matching is case sensitive. ::: Here are the conventions: | Lifecycle | Method Names | |----------------------------------------------------------|-----------------------------| | Before the Handler(s) | `Before`, `BeforeAsync`, `Load`, `LoadAsync`, `Validate`, `ValidateAsync` | | After the Handler(s) | `After`, `AfterAsync`, `PostProcess`, `PostProcessAsync` | | In `finally` blocks after the Handlers & "After" methods | `Finally`, `FinallyAsync` | The generated code for the conventionally applied methods would look like this basic structure: ```cs middleware.Before(); try { // call the actual handler methods middleware.After(); } finally { middleware.Finally(); } ``` snippet source | anchor Here are the rules for these conventional middleware classes: * Can optionally be static classes, and that may be advantageous from a performance standpoint * If the middleware class is not static, Wolverine can inject constructor arguments with the same rules as for [handler methods](/guide/handlers/) * Objects returned from the `Before` / `BeforeAsync` methods can be used as arguments to the inner handler methods or the later "after" or "finally" methods * A middleware class can have any mix of zero to many "befores", "afters", or "finallys." ## Conditionally Stopping the Message Handling A "before" method in middleware can be used to stop further message handling by either directly returning `HandlerContinuation` or returning that value as part of a tuple.
If the value `Stop` is returned, Wolverine will stop all of the further message processing (it's done by generating an `if (continuation == HandlerContinuation.Stop) return;` line of code). Here's an example from the [custom middleware tutorial](/tutorials/middleware) that tries to load a matching `Account` entity referenced by the incoming message and aborts the message processing if it is not found: ```cs // This is *a* way to build middleware in Wolverine by basically just // writing functions/methods. There's a naming convention that // looks for Before/BeforeAsync or After/AfterAsync public static class AccountLookupMiddleware { // The message *has* to be first in the parameter list // Before or BeforeAsync tells Wolverine this method should be called before the actual action public static async Task<(HandlerContinuation, Account?, OutgoingMessages)> LoadAsync( IAccountCommand command, ILogger logger, // This app is using Marten for persistence IDocumentSession session, CancellationToken cancellation) { var messages = new OutgoingMessages(); var account = await session.LoadAsync<Account>(command.AccountId, cancellation); if (account == null) { logger.LogInformation("Unable to find an account for {AccountId}, aborting the requested operation", command.AccountId); messages.RespondToSender(new InvalidAccount(command.AccountId)); return (HandlerContinuation.Stop, null, messages); } // messages would be empty here return (HandlerContinuation.Continue, account, messages); } } ``` snippet source | anchor Notice that the middleware above uses a tuple as the return value so that it can both pass an `Account` entity to the inner handler and return the continuation directing Wolverine to continue or stop the message processing.
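The "maybe stop" check that Wolverine generates can be modeled generically. A hedged Python sketch (all names here are illustrative, not Wolverine APIs) of how a before-method's continuation value short-circuits the rest of the chain:

```python
from enum import Enum


class HandlerContinuation(Enum):
    CONTINUE = "continue"
    STOP = "stop"


def execute(message, before, handler):
    """Run the before-method, then bail out early if it said Stop --
    mirroring the generated `if (continuation == Stop) return;` line."""
    continuation, state = before(message)
    if continuation is HandlerContinuation.STOP:
        return None  # the inner handler never runs
    return handler(message, state)


# Illustrative middleware: only messages with a known account proceed
accounts = {42: "active"}


def load_account(message):
    account = accounts.get(message["account_id"])
    if account is None:
        return HandlerContinuation.STOP, None
    return HandlerContinuation.CONTINUE, account


def handle(message, account):
    return f"processed {message['account_id']} ({account})"
```

Note how the loaded state rides along in the tuple, just as the `Account` entity does in the sample above.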
## Sending Messages From Middleware ::: tip Everything shown here works both for middleware methods on external types that are applied to the message handlers, and for conventional middleware methods written directly on the handler types themselves. ::: ::: warning This will not work for WolverineFx.Http endpoints, but at least there, you'd probably be better served through returning a `ProblemDetails` response or some other error response to the original caller. ::: Wolverine *can* send outgoing messages from middleware. You can use either `IMessageBus` directly as shown below: ```cs public static class MaybeBadThingHandler { public static async Task<HandlerContinuation> ValidateAsync(MaybeBadThing thing, IMessageBus bus) { if (thing.Number > 10) { await bus.PublishAsync(new RejectYourThing(thing.Number)); return HandlerContinuation.Stop; } return HandlerContinuation.Continue; } public static void Handle(MaybeBadThing message) { Debug.WriteLine("Got " + message); } } ``` snippet source | anchor Or by returning `OutgoingMessages` from a middleware method as shown below: ```cs public static class MaybeBadThing2Handler { public static (HandlerContinuation, OutgoingMessages) ValidateAsync(MaybeBadThing2 thing, IMessageBus bus) { if (thing.Number > 10) { return (HandlerContinuation.Stop, [new RejectYourThing(thing.Number)]); } return (HandlerContinuation.Continue, []); } public static void Handle(MaybeBadThing2 message) { Debug.WriteLine("Got " + message); } } ``` snippet source | anchor ## Registering Middleware by Message Type Let's say that some of our message types implement this interface: ```cs public interface IAccountCommand { Guid AccountId { get; } } ``` snippet source | anchor We can apply the `AccountLookupMiddleware` from the section above to only these message types by telling Wolverine to only apply this middleware to any message that implements the `IAccountCommand` interface like this: ```cs builder.Host.UseWolverine(opts => { // This middleware should be applied to all handlers where the // command type implements the IAccountCommand interface that is the //
"detected" message type of the middleware opts.Policies.ForMessagesOfType().AddMiddleware(typeof(AccountLookupMiddleware)); opts.UseFluentValidation(); // Explicit routing for the AccountUpdated // message handling. This has precedence over conventional routing opts.PublishMessage() .ToLocalQueue("signalr") // Throw the message away if it's not successfully // delivered within 10 seconds .DeliverWithin(10.Seconds()) // Not durable .BufferedInMemory(); }); ``` snippet source | anchor Wolverine determines the message type for a middleware class method by assuming that the first argument is the message type, and then looking for actual messages that implement that interface or subclass. ## Applying Middleware Explicitly by Attribute ::: tip You can subclass the `MiddlewareAttribute` class to make more specific middleware applicative attributes for your application. ::: You can apply the middleware types to individual handler methods with the `[Middleware]` attribute as shown below: ```cs public static class SomeHandler { [Middleware(typeof(StopwatchMiddleware))] public static void Handle(PotentiallySlowMessage message) { // do something expensive with the message } } ``` snippet source | anchor Note that this attribute will accept multiple middleware types. Also note that the `[Middleware]` attribute can be placed either on an individual handler method or apply to all handler methods on the same handler class if the attribute is at the class level. ## Custom Code Generation For more advanced usage, you can drop down to the JasperFx.CodeGeneration `Frame` model to directly inject code. 
The first step is to create a JasperFx.CodeGeneration `Frame` class that generates that code around the inner message or HTTP handler: ```cs public class StopwatchFrame : SyncFrame { private readonly IChain _chain; private readonly Variable _stopwatch; private Variable _logger; public StopwatchFrame(IChain chain) { _chain = chain; // This frame creates a Stopwatch, so we // expose that fact to the rest of the generated method // just in case someone else wants that _stopwatch = new Variable(typeof(Stopwatch), "stopwatch", this); } public override void GenerateCode(GeneratedMethod method, ISourceWriter writer) { writer.Write($"var stopwatch = new {typeof(Stopwatch).FullNameInCode()}();"); writer.Write("stopwatch.Start();"); writer.Write("BLOCK:try"); Next?.GenerateCode(method, writer); writer.FinishBlock(); // Write a finally block where you record the stopwatch writer.Write("BLOCK:finally"); writer.Write("stopwatch.Stop();"); writer.Write( $"{_logger.Usage}.Log(Microsoft.Extensions.Logging.LogLevel.Information, \"{_chain.Description} ran in \" + {_stopwatch.Usage}.{nameof(Stopwatch.ElapsedMilliseconds)});"); writer.FinishBlock(); } public override IEnumerable<Variable> FindVariables(IMethodVariables chain) { // This in effect turns into "I need ILogger injected into the // compiled class" _logger = chain.FindVariable(typeof(ILogger)); yield return _logger; } } ``` snippet source | anchor ## Custom Attributes To attach our `StopwatchFrame` as middleware to any route or message handler, we can write a custom attribute based on Wolverine's `ModifyChainAttribute` class as shown below: ```cs public class StopwatchAttribute : ModifyChainAttribute { public override void Modify(IChain chain, GenerationRules rules, IServiceContainer container) { chain.Middleware.Add(new StopwatchFrame(chain)); } } ``` snippet source | anchor This attribute can now be placed either on a specific HTTP route endpoint method or message handler method to **only** apply to that specific action, or it can
be placed on a `Handler` or `Endpoint` class to apply to all methods exported by that type. Here's an example: ```cs public class ClockedEndpoint { [Stopwatch] public string get_clocked() { return "how fast"; } } ``` snippet source | anchor Now, when the application is bootstrapped, this is the code that would be generated to handle the "GET /clocked" route: ```csharp public class Wolverine_Testing_Samples_ClockedEndpoint_get_clocked : Wolverine.Http.Model.RouteHandler { private readonly Microsoft.Extensions.Logging.ILogger _logger; public Wolverine_Testing_Samples_ClockedEndpoint_get_clocked(Microsoft.Extensions.Logging.ILogger logger) { _logger = logger; } public override Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext, System.String[] segments) { var clockedEndpoint = new Wolverine.Testing.Samples.ClockedEndpoint(); var stopwatch = new System.Diagnostics.Stopwatch(); stopwatch.Start(); try { var result_of_get_clocked = clockedEndpoint.get_clocked(); return WriteText(result_of_get_clocked, httpContext.Response); } finally { stopwatch.Stop(); _logger.Log(Microsoft.Extensions.Logging.LogLevel.Information, "Route 'GET: clocked' ran in " + stopwatch.ElapsedMilliseconds); } } } ``` `ModifyChainAttribute` is a generic way to add middleware or post processing frames, but if you need to configure things specific to routes or message handlers, you can also use `ModifyHandlerChainAttribute` for message handlers or `ModifyRouteAttribute` for http routes. ## Policies ::: warning Again, please go easy with this feature and try not to shoot yourself in the foot by getting too aggressive with custom policies ::: You can register user-defined policies that apply to all chains or some subset of chains.
For message handlers, implement this interface: ```cs /// <summary> /// Use to apply your own conventions or policies to message handlers /// </summary> public interface IHandlerPolicy : IWolverinePolicy { /// <summary> /// Called during bootstrapping to alter how the message handlers are configured /// </summary> /// <param name="chains"></param> /// <param name="rules"></param> /// <param name="container">The application's underlying IoC Container</param> void Apply(IReadOnlyList<HandlerChain> chains, GenerationRules rules, IServiceContainer container); } ``` snippet source | anchor Here's a simple sample that registers middleware on each handler chain: ```cs public class WrapWithSimple : IHandlerPolicy { public void Apply(IReadOnlyList<HandlerChain> chains, GenerationRules rules, IServiceContainer container) { foreach (var chain in chains) chain.Middleware.Add(new SimpleWrapper()); } } ``` snippet source | anchor Then register your custom `IHandlerPolicy` with a Wolverine application like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.Policies.Add<WrapWithSimple>(); }).StartAsync(); ``` snippet source | anchor ## Using Configure(chain) Methods ::: warning This feature is experimental, but is meant to provide an easy way to apply middleware or other configuration to specific HTTP endpoints or message handlers without writing custom policies or having to resort to all new attributes. ::: There's one last option for configuring chains by a naming convention.
If you want to configure the chains from just one handler or endpoint class, you can implement a method with one of these signatures: ```csharp public static void Configure(IChain chain) { // gets called for each endpoint or message handling method // on just this class } public static void Configure(RouteChain chain) { // gets called for each endpoint method on this class } public static void Configure(HandlerChain chain) { // gets called for each message handling method // on just this class } ``` Here's an example of this being used from Wolverine's test suite: ```cs public class CustomizedHandler { public void Handle(SpecialMessage message) { // actually handle the SpecialMessage } public static void Configure(HandlerChain chain) { chain.Middleware.Add(new CustomFrame()); // Turning off all execution tracking logging // from Wolverine for just this message type // Error logging will still be enabled on failures chain.SuccessLogLevel = LogLevel.None; chain.ProcessingLogLevel = LogLevel.None; } } ``` snippet source | anchor ## Sending Messages from Middleware Wolverine 5.0 included some improvements to the usage of middleware *external* to the main handler or HTTP endpoint types so that you can send messages through `OutgoingMessages` return types. You can now write a middleware method like this: ```cs public record MaybeBadThing4(int Number); public static class MaybeBadThing4Middleware { public static (OutgoingMessages, HandlerContinuation) Validate(MaybeBadThing4 thing) { if (thing.Number > 10) { return ([new RejectYourThing(thing.Number)], HandlerContinuation.Stop); } return ([], HandlerContinuation.Continue); } } [Middleware(typeof(MaybeBadThing4Middleware))] public static class MaybeBadThing4Handler { public static void Handle(MaybeBadThing4 message) { Debug.WriteLine("Got " + message); } } ``` snippet source | anchor And any objects in the `OutgoingMessages` return value from the middleware method will be sent as cascaded messages.
Wolverine will also apply a "maybe stop" frame from the `HandlerContinuation` value. --- --- url: /guide/migrating-to-wolverine.md --- # Migrating to Wolverine This guide is for developers coming to Wolverine from other .NET messaging and mediator frameworks. Whether you're using MassTransit, NServiceBus, MediatR, Rebus, or Brighter, this document covers the key conceptual differences, practical migration paths, and best practices for adopting Wolverine. ::: tip Wolverine is a unified framework that handles both in-process mediator usage *and* asynchronous messaging with external brokers. If you're currently using MediatR *plus* a separate messaging framework, Wolverine can replace both with a single set of conventions. ::: ::: warning Wolverine does **not** support interfaces or abstract types as message types for the purpose of routing or handler discovery. All message types must be **concrete classes or records**. If your current system publishes messages as interfaces (a common pattern in MassTransit and NServiceBus), you will need to convert these to concrete types. During a gradual migration, use `opts.Policies.RegisterInteropMessageAssembly(assembly)` to help Wolverine map from interface-based messages to concrete types. However, and especially if you are pursuing a [Modular Monolith Architecture](/tutorials/modular-monolith), you can still do ["sticky" assignments](/guide/handlers/sticky) of Wolverine message handlers to specific listener endpoints. ::: ## Wolverine vs "IHandler of T" Frameworks Almost every popular .NET messaging and mediator framework follows the "IHandler of T" pattern -- your handlers must implement a framework interface or inherit from a framework base class. This includes MassTransit's `IConsumer`, NServiceBus's `IHandleMessages`, MediatR's `IRequestHandler`, Rebus's `IHandleMessages`, and Brighter's `RequestHandler`. Wolverine takes a fundamentally different approach: **convention over configuration**.
Your handlers are plain C# methods with no required interfaces, base classes, or attributes. Wolverine infers everything from method signatures.

### The Interface-Based Pattern

Every "IHandler of T" framework follows roughly the same pattern:

```csharp
// MassTransit
public class OrderConsumer : IConsumer<SubmitOrder>
{
    public async Task Consume(ConsumeContext<SubmitOrder> context) { ... }
}

// NServiceBus
public class OrderHandler : IHandleMessages<SubmitOrder>
{
    public async Task Handle(SubmitOrder message, IMessageHandlerContext context) { ... }
}

// MediatR
public class OrderHandler : IRequestHandler<SubmitOrder>
{
    public async Task Handle(SubmitOrder request, CancellationToken ct) { ... }
}

// Rebus
public class OrderHandler : IHandleMessages<SubmitOrder>
{
    public async Task Handle(SubmitOrder message) { ... }
}

// Brighter
public class OrderHandler : RequestHandler<SubmitOrder>
{
    public override SubmitOrder Handle(SubmitOrder command)
    {
        // ... must call base.Handle(command) to continue pipeline
        return base.Handle(command);
    }
}
```

In every case you must:

1. Implement a specific interface or inherit from a specific base class
2. Register that handler with the framework (sometimes automatic, sometimes manual)
3. Inject dependencies through the constructor
4. Use the framework's context object to publish or send additional messages

### The Wolverine Way

::: tip
Unlike some other messaging frameworks, Wolverine does **not** require you to explicitly register message handlers against a specific listener endpoint like a Rabbit MQ queue or an Azure Service Bus subscription.
:::

Wolverine discovers handlers through naming conventions.
The [best practice](/introduction/best-practices) is to write handlers as **pure functions** -- static methods that take in data and return decisions:

```csharp
// No interface, no base class, static method, pure function
public static class SubmitOrderHandler
{
    // First parameter = message type (by convention)
    // Return value = cascading message (published automatically)
    // IDocumentSession = dependency injected as method parameter
    public static OrderSubmitted Handle(SubmitOrder command, IDocumentSession session)
    {
        session.Store(new Order(command.OrderId));
        return new OrderSubmitted(command.OrderId);
    }
}
```

This is possible because Wolverine uses **runtime code generation** to build optimized execution pipelines at startup. Rather than resolving handlers from an IoC container at runtime and invoking them through interface dispatch, Wolverine generates C# code that directly calls your methods, injects dependencies, and handles cascading messages -- all with minimal allocations and clean exception stack traces.

::: tip
It's an imperfect world, and Wolverine's code generation strategy can easily be an issue for production resource utilization, but fear not! Wolverine has [some mechanisms to avoid that problem](/guide/codegen#generating-code-ahead-of-time) easily in real production usage.
:::
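One of those mechanisms is pre-building the generated handler types ahead of time. As a hedged sketch only (see the linked codegen guide for the authoritative options -- the exact namespace of `TypeLoadMode` has moved between JasperFx versions), the `TypeLoadMode` setting controls whether Wolverine generates code at startup or loads pre-generated types:

```csharp
using JasperFx.CodeGeneration; // home of TypeLoadMode in many versions

builder.Host.UseWolverine(opts =>
{
    if (builder.Environment.IsProduction())
    {
        // In production, load pre-generated handler types compiled into
        // the application assembly instead of generating code at startup
        opts.CodeGeneration.TypeLoadMode = TypeLoadMode.Static;
    }
});
```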
A pure function:

* Takes in all its inputs as parameters (the message, loaded entities, injected services)
* Returns its outputs explicitly (cascading messages, side effects, storage operations)
* Has no hidden side effects (no injecting `IMessageBus` deep in the call stack to secretly publish messages)

This matters for **testability**: pure function handlers can be unit tested with zero mocking infrastructure:

```csharp
[Fact]
public void submit_order_publishes_submitted_event()
{
    var result = SubmitOrderHandler.Handle(
        new SubmitOrder("ABC-123"), someSession);

    result.OrderId.ShouldBe("ABC-123");
}
```

Compare this to the typical "IHandler of T" test that requires mocking the framework context, the message bus, repositories, and verifying mock interactions.

### Railway Programming

Wolverine supports a form of [Railway Programming](/tutorials/railway-programming) through its compound handler support. By using `Before`, `Validate`, or `Load` methods alongside the main `Handle` method, you can separate the "sad path" (validation failures, missing data) from the "happy path" (business logic):

```csharp
public static class ShipOrderHandler
{
    // Runs first -- handles the "sad path"
    public static async Task<(HandlerContinuation, Order?)> LoadAsync(
        ShipOrder command, IDocumentSession session)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);
        return order == null
            ? (HandlerContinuation.Stop, null)
            : (HandlerContinuation.Continue, order);
    }

    // Pure function -- only runs on the "happy path"
    public static ShipmentCreated Handle(ShipOrder command, Order order)
    {
        return new ShipmentCreated(order.Id, order.ShippingAddress);
    }
}
```

Returning `HandlerContinuation.Stop` from a `Before`/`Validate`/`Load` method aborts processing before the main handler executes. For HTTP endpoints, you can return `ProblemDetails` instead for RFC 7807 compliant error responses.
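The same zero-mock testability applies to the happy-path half of a compound handler. A sketch, assuming `ShipmentCreated` is a positional record along the lines of `record ShipmentCreated(string OrderId, string Address)` (hypothetical property names, for illustration only):

```csharp
[Fact]
public void ship_order_creates_shipment_for_the_order()
{
    // Build the already-loaded entity directly -- no repository mocks,
    // because LoadAsync() owns the sad path separately
    var order = new Order { Id = "ORD-1", ShippingAddress = "1 Main St" };

    var created = ShipOrderHandler.Handle(new ShipOrder("ORD-1"), order);

    created.OrderId.ShouldBe("ORD-1");
    created.Address.ShouldBe("1 Main St");
}
```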
### Middleware: Runtime Pipeline vs Compile-Time Code Generation

::: info
If you're familiar with NServiceBus's concept of "Behaviors," that concept was originally taken directly from [FubuMVC's `BehaviorGraph` model](https://fubumvc.github.io) that allowed you to attach middleware strategies on a message type by message type basis through a mix of explicit configuration and user defined conventions or policies. Wolverine itself was started as a "next generation, .NET Core" successor to the earlier FubuMVC and its FubuTransportation messaging bus add-on. However, where the NServiceBus team improved the admittedly grotesque inefficiency of FubuMVC through more efficient usage of `Expression` compilation to lambda functions, Wolverine beat the same problems through [its code generation model](/guide/codegen).
:::

In "IHandler of T" frameworks, middleware wraps handler execution at runtime:

| Framework | Middleware Model |
|-----------|-----------------|
| MassTransit | `IFilter<T>` with `next.Send(context)` |
| NServiceBus | `Behavior<TContext>` with `await next()` |
| MediatR | `IPipelineBehavior<TRequest, TResponse>` with `await next()` |
| Rebus | `IIncomingStep` / `IOutgoingStep` with `await next()` |
| Brighter | Attribute-driven decorators, must call `base.Handle()` |

All of these apply middleware to **every** message regardless of whether the middleware is relevant, then use runtime conditional logic to skip irrelevant cases.

Wolverine's [middleware](/guide/handlers/middleware) is fundamentally different.
It uses compile-time code generation -- your middleware methods are woven directly into the generated handler code at startup:

```csharp
public class AuditMiddleware
{
    public static void Before(ILogger logger, Envelope envelope)
    {
        logger.LogInformation("Processing {MessageType}", envelope.MessageType);
    }

    public static void Finally(ILogger logger, Envelope envelope)
    {
        logger.LogInformation("Completed {MessageType}", envelope.MessageType);
    }
}
```

A critical advantage is that Wolverine middleware can be **selectively applied on a message type by message type basis**:

```csharp
// Apply only to messages in a specific namespace
opts.Policies.AddMiddleware<AuditMiddleware>(
    chain => chain.MessageType.IsInNamespace("MyApp.Commands"));

// Apply only to messages implementing a marker interface
// (IAuditedMessage is an illustrative marker type)
opts.Policies.AddMiddleware<AuditMiddleware>(
    chain => chain.MessageType.CanBeCastTo<IAuditedMessage>());

// Apply to a specific message type
opts.Policies.AddMiddleware<AuditMiddleware>(
    chain => chain.MessageType == typeof(ImportantCommand));
```

This means middleware that only applies to certain message types is never even included in the generated code for other handlers. No runtime conditional checks, no wasted allocations, and much cleaner exception stack traces.
### Comparison Table

| Aspect | MassTransit | NServiceBus | MediatR | Rebus | Brighter | Wolverine |
|--------|-------------|-------------|---------|-------|----------|-----------|
| Handler contract | `IConsumer<T>` | `IHandleMessages<T>` | `IRequestHandler<T>` | `IHandleMessages<T>` | `RequestHandler<T>` base class | None (convention) |
| Static handlers | No | No | No | No | No | Yes |
| Method injection | No | No | No | No | No | Yes |
| Pure function style | Difficult | Difficult | Difficult | Difficult | Difficult | First-class |
| Return values as messages | No | No | Response only | No | Pipeline chain | Cascading messages |
| Middleware model | Runtime filters | Runtime behaviors | Runtime pipeline | Runtime steps | Attribute decorators | Compile-time codegen |
| Per-message-type middleware | Via consumer definition | Via pipeline stage | No (all handlers) | No (global) | Yes (per-handler attributes) | Yes (policy filters) |
| In-process mediator | Yes | No | Yes | No | Yes | Yes (`InvokeAsync`) |
| Async messaging | Yes | Yes | No | Yes | Yes (with Darker) | Yes |
| Transactional outbox | Yes (w/ EF Core) | Yes | No | No | Yes | Yes (built-in) |
| Railway programming | No | No | No | No | No | Yes (compound handlers) |
| License | Apache 2.0 | Commercial | Apache 2.0 | MIT | MIT | MIT |

## From MassTransit

If you're migrating from [MassTransit](https://masstransit.io/), Wolverine has built-in [interoperability](/tutorials/interop#interop-with-masstransit) for RabbitMQ, Azure Service Bus, and Amazon SQS/SNS, enabling a gradual migration where both frameworks exchange messages during the transition.
### Handlers

**MassTransit** `IConsumer<T>` with `ConsumeContext<T>`:

```csharp
public class SubmitOrderConsumer : IConsumer<SubmitOrder>
{
    private readonly IOrderRepository _repo;

    public SubmitOrderConsumer(IOrderRepository repo)
    {
        _repo = repo;
    }

    public async Task Consume(ConsumeContext<SubmitOrder> context)
    {
        await _repo.Save(new Order(context.Message.OrderId));
        await context.Publish(new OrderSubmitted { OrderId = context.Message.OrderId });
    }
}
```

**Wolverine** equivalent as a pure function:

```csharp
public static class SubmitOrderHandler
{
    public static (OrderSubmitted, IStorageAction<Order>) Handle(SubmitOrder command)
    {
        var order = new Order(command.OrderId);
        return (
            new OrderSubmitted(command.OrderId), // cascading message
            Storage.Insert(order)                // side effect
        );
    }
}
```

### Error Handling

**MassTransit** layers separate middleware:

```csharp
cfg.ReceiveEndpoint("orders", e =>
{
    e.UseMessageRetry(r => r.Interval(5, TimeSpan.FromSeconds(1)));
    e.UseDelayedRedelivery(r => r.Intervals(
        TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(15)));
    e.UseInMemoryOutbox();
});
```

**Wolverine** uses declarative [error handling policies](/guide/handlers/error-handling):

```csharp
// The exception types shown here are illustrative
opts.Policies.Failures.Handle<TimeoutException>()
    .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds())
    .Then.MoveToErrorQueue();

opts.Policies.Failures.Handle<InvalidOperationException>()
    .ScheduleRetry(5.Minutes(), 15.Minutes(), 30.Minutes());
```

### Sagas

MassTransit state machines (`MassTransitStateMachine<TState>`) require a complete rewrite.
Consumer sagas (`ISaga`/`InitiatedBy<T>`/`Orchestrates<T>`) map more directly:

| MassTransit | Wolverine |
|-------------|-----------|
| `InitiatedBy<T>` | `Start(T)` method |
| `Orchestrates<T>` | `Handle(T)` or `Orchestrate(T)` method |
| `InitiatedByOrOrchestrates<T>` | `StartOrHandle(T)` method |
| `SetCompletedWhenFinalized()` | `MarkCompleted()` |
| `SagaStateMachineInstance` | `Saga` base class, state as properties |

### Send/Publish

| Operation | MassTransit | Wolverine |
|-----------|------------|-----------|
| Command | `Send()` via `ISendEndpointProvider` | `SendAsync()` via `IMessageBus` |
| Event | `Publish()` via `IPublishEndpoint` | `PublishAsync()` via `IMessageBus` |
| In-process | N/A (separate MediatR) | `InvokeAsync()` via `IMessageBus` |
| Request/response | `IRequestClient<T>` | `InvokeAsync<T>()` |

### Configuration

**MassTransit**:

```csharp
services.AddMassTransit(x =>
{
    x.AddConsumer<SubmitOrderConsumer>();
    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host("localhost");
        cfg.ConfigureEndpoints(context);
    });
});
```

**Wolverine**:

```csharp
builder.Host.UseWolverine(opts =>
{
    opts.UseRabbitMq(r => r.HostName = "localhost").AutoProvision();
    opts.ListenToRabbitQueue("orders");
    opts.PublishMessage<OrderSubmitted>().ToRabbitExchange("events");
});
```

### Transport Interoperability

Enable MassTransit interop on an endpoint-by-endpoint basis:

```csharp
opts.ListenToRabbitQueue("incoming")
    .DefaultIncomingMessage<SubmitOrder>()
    .UseMassTransitInterop();

opts.Policies.RegisterInteropMessageAssembly(typeof(SharedMessages).Assembly);
```

Supported transports: RabbitMQ, Azure Service Bus, Amazon SQS/SNS. See the full [interoperability guide](/tutorials/interop#interop-with-masstransit).

### MassTransit Shim Interfaces

Wolverine provides shim interfaces in the `Wolverine.Shims.MassTransit` namespace that mimic MassTransit's core consumer API while delegating to Wolverine's `IMessageBus` and `IMessageContext`. These shims let you keep your existing `IConsumer<T>` handler signatures working under Wolverine during migration.
::: tip
The shim interfaces are included in the core Wolverine NuGet package -- no additional packages are needed. While these shims ease migration, the Wolverine team recommends eventually moving to Wolverine's native convention-based handlers for the best developer experience.
:::

#### Automatic Handler Discovery

Wolverine automatically discovers classes implementing `IConsumer<T>` during its normal [handler discovery](/guide/handlers/discovery) assembly scanning -- no explicit registration is needed. The `ConsumeContext<T>`, `IPublishEndpoint`, and `ISendEndpointProvider` types are automatically resolved in handler methods via Wolverine's built-in code generation.

Just make sure the assembly containing your `IConsumer<T>` implementations is included in Wolverine's discovery. By default, Wolverine scans the application assembly and any assemblies explicitly added via `opts.Discovery.IncludeAssembly()`. See the [handler discovery documentation](/guide/handlers/discovery) for more details on controlling which assemblies are scanned.
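In a modular monolith, that usually means pulling consumer types in from each module's assembly. A minimal sketch (the `OrderingModule` marker type is hypothetical):

```csharp
builder.Host.UseWolverine(opts =>
{
    // Include another module's assembly in Wolverine's handler discovery
    // so its shimmed IConsumer<T> implementations are picked up too
    opts.Discovery.IncludeAssembly(typeof(OrderingModule).Assembly);
});
```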
#### Available Interfaces

| MassTransit Shim | Delegates To | Purpose |
|-----------------|-------------|---------|
| `IConsumer<T>` | `IWolverineHandler` | Consumer/handler discovery marker |
| `ConsumeContext<T>` | `IMessageContext` | Message access, Send/Publish/Respond inside consumers |
| `IPublishEndpoint` | `IMessageBus` | Publish events outside of consumers |
| `ISendEndpointProvider` | `IMessageBus` | Send commands outside of consumers |

#### Using `IConsumer<T>`

The `IConsumer<T>` shim extends `IWolverineHandler`, so implementing it automatically registers your consumer with Wolverine's handler discovery:

```csharp
using Wolverine.Shims.MassTransit;

public class OrderConsumer : IConsumer<SubmitOrder>
{
    public async Task Consume(ConsumeContext<SubmitOrder> context)
    {
        var order = new Order(context.Message.OrderId);

        // ConsumeContext delegates to Wolverine's IMessageContext
        await context.Publish(new OrderSubmitted { OrderId = context.Message.OrderId });
        await context.RespondAsync(new SubmitOrderResponse { Success = true });
    }
}
```

#### Using IPublishEndpoint / ISendEndpointProvider

Inject these interfaces to send and publish messages outside of consumers:

```csharp
using Wolverine.Shims.MassTransit;

public class OrderController : ControllerBase
{
    private readonly ISendEndpointProvider _sender;
    private readonly IPublishEndpoint _publisher;

    public OrderController(ISendEndpointProvider sender, IPublishEndpoint publisher)
    {
        _sender = sender;
        _publisher = publisher;
    }

    [HttpPost]
    public async Task<IActionResult> PlaceOrder(PlaceOrderRequest request)
    {
        await _sender.Send(new SubmitOrder(request.OrderId));
        return Accepted();
    }

    [HttpPost("notify")]
    public async Task<IActionResult> NotifyOrderShipped(string orderId)
    {
        await _publisher.Publish(new OrderShipped { OrderId = orderId });
        return Ok();
    }
}
```

#### ConsumeContext Properties

The `ConsumeContext<T>` shim exposes common MassTransit properties mapped to Wolverine:

| ConsumeContext Property | Wolverine Source |
|------------------------|-----------------|
| `Message` | The message instance |
| `MessageId` | `Envelope.Id` |
| `CorrelationId` | `IMessageContext.CorrelationId` |
| `ConversationId` | `Envelope.ConversationId` |
| `Headers` | `Envelope.Headers` |

### Migration Checklist

**Phase 1: Coexistence**

* \[ ] Add Wolverine and transport NuGet packages alongside MassTransit
* \[ ] Configure `UseWolverine()` in your host setup
* \[ ] Enable `UseMassTransitInterop()` on endpoints exchanging messages
* \[ ] Register shared assemblies: `opts.Policies.RegisterInteropMessageAssembly(assembly)`
* \[ ] Convert interface-based message types to concrete classes or records
* \[ ] Write new handlers in Wolverine while existing MassTransit consumers run

**Phase 2: Handler Migration**

* \[ ] Convert `IConsumer<T>` to Wolverine convention handlers
* \[ ] Replace `ConsumeContext<T>` with method parameter injection
* \[ ] Replace `context.Publish()` with return values (cascading messages)
* \[ ] Refactor toward pure functions
* \[ ] Convert `IFilter<T>` middleware to Wolverine [conventional middleware](/guide/handlers/middleware)
* \[ ] Rewrite retry config to Wolverine error handling policies

**Phase 3: Saga Migration**

* \[ ] Convert consumer sagas to Wolverine `Saga` with `Start`/`Handle` methods
* \[ ] Rewrite state machine sagas as Wolverine saga classes
* \[ ] Configure saga persistence (Marten, EF Core, SQL Server, etc.)

**Phase 4: Cleanup**

* \[ ] Remove MassTransit interop, packages, and registration code
* \[ ] Enable Wolverine's [transactional outbox](/guide/durability/)
* \[ ] Consider [pre-generated types](/guide/codegen) for production performance

## From NServiceBus

If you're migrating from [NServiceBus](https://particular.net/nservicebus), Wolverine has built-in [interoperability](/tutorials/interop#interop-with-nservicebus) for RabbitMQ, Azure Service Bus, and Amazon SQS/SNS. NServiceBus's wire protocol is quite similar to Wolverine's, so interop tends to work cleanly.

::: info
NServiceBus requires a commercial license for production use.
Wolverine is MIT licensed and free for all use.
:::

### Handlers

**NServiceBus** `IHandleMessages<T>`:

```csharp
public class SubmitOrderHandler : IHandleMessages<SubmitOrder>
{
    private readonly IOrderRepository _repo;

    public SubmitOrderHandler(IOrderRepository repo)
    {
        _repo = repo;
    }

    public async Task Handle(SubmitOrder message, IMessageHandlerContext context)
    {
        await _repo.Save(new Order(message.OrderId));
        await context.Publish(new OrderSubmitted { OrderId = message.OrderId });
    }
}
```

**Wolverine** equivalent as a pure function:

```csharp
public static class SubmitOrderHandler
{
    public static OrderSubmitted Handle(SubmitOrder command, IDocumentSession session)
    {
        session.Store(new Order(command.OrderId));
        return new OrderSubmitted(command.OrderId);
    }
}
```

Key migration steps:

* Remove the `IHandleMessages<T>` interface
* Change the `Handle(T message, IMessageHandlerContext context)` signature to `Handle(T message, ...dependencies...)`
* Replace `context.Send()` / `context.Publish()` with return values (cascading messages)
* Consider making handlers static with method injection

### Commands vs Events

NServiceBus enforces a strict distinction between commands (`ICommand`) and events (`IEvent`) with marker interfaces. Commands can only be sent with `Send()`, and events can only be published with `Publish()`.

Wolverine has no such enforcement. Any concrete type can be a message. The distinction between `SendAsync()` (expects at least one subscriber, throws if none) and `PublishAsync()` (silently succeeds with no subscribers) is behavioral, not based on the message type.
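To illustrate that behavioral difference, a minimal sketch (assuming `PlaceOrder` and `OrderPlaced` are your own concrete record types and `bus` is an injected Wolverine `IMessageBus`):

```csharp
public record PlaceOrder(string OrderId);
public record OrderPlaced(string OrderId);

public static class OrderingClient
{
    public static async Task RunAsync(IMessageBus bus)
    {
        // SendAsync requires a known route/subscriber for PlaceOrder
        // and throws if no route exists
        await bus.SendAsync(new PlaceOrder("ABC-123"));

        // PublishAsync is fire-and-forget: with zero subscribers
        // it quietly does nothing
        await bus.PublishAsync(new OrderPlaced("ABC-123"));
    }
}
```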
### Error Handling / Recoverability

**NServiceBus** uses a two-tier retry model:

```csharp
var recoverability = endpointConfiguration.Recoverability();
recoverability.Immediate(i => i.NumberOfRetries(3));
recoverability.Delayed(d =>
{
    d.NumberOfRetries(2);
    d.TimeIncrease(TimeSpan.FromSeconds(15));
});
recoverability.AddUnrecoverableException<InvalidOperationException>();
```

**Wolverine** provides per-exception-type [error handling policies](/guide/handlers/error-handling):

```csharp
// The exception types shown here are illustrative
opts.Policies.Failures.Handle<TimeoutException>()
    .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds())
    .Then.ScheduleRetry(15.Seconds(), 30.Seconds())
    .Then.MoveToErrorQueue();

opts.Policies.Failures.Handle<InvalidOperationException>()
    .MoveToErrorQueue(); // skip all retries
```

Wolverine's approach gives you finer-grained control: different exception types can have entirely different retry strategies, and actions are chainable (retry inline, then schedule, then dead letter).

### Sagas

| NServiceBus | Wolverine |
|-------------|-----------|
| `Saga<TSagaData>` base class | `Saga` base class (no generic parameter) |
| Separate `ContainSagaData` class | State properties directly on saga class |
| `IAmStartedByMessages<T>` | `Start(T)` / `Starts(T)` method |
| `IHandleMessages<T>` on saga | `Handle(T)` / `Orchestrate(T)` method |
| `ConfigureHowToFindSaga()` | Convention: `[SagaIdentity]`, `{SagaType}Id`, `SagaId`, or `Id` |
| `MarkAsComplete()` | `MarkCompleted()` |
| `IHandleTimeouts<T>` | Scheduled messages (use `ScheduleAsync()`) |
| `RequestTimeout()` | Return a `DelayedMessage` from handler |

### Pipeline Behaviors

**NServiceBus** `Behavior<TContext>`:

```csharp
public class LogBehavior : Behavior<IIncomingLogicalMessageContext>
{
    public override async Task Invoke(
        IIncomingLogicalMessageContext context, Func<Task> next)
    {
        Console.WriteLine("Before");
        await next();
        Console.WriteLine("After");
    }
}
```

**Wolverine** middleware with per-message-type filtering:

```csharp
public class LogMiddleware
{
    public static void Before(ILogger logger, Envelope envelope)
    {
        logger.LogInformation("Before {Type}", envelope.MessageType);
    }

    public static void After(ILogger logger, Envelope envelope)
    {
        logger.LogInformation("After {Type}", envelope.MessageType);
    }
}

// Apply only to specific message types
opts.Policies.AddMiddleware<LogMiddleware>(
    chain => chain.MessageType.IsInNamespace("MyApp.ImportantMessages"));
```

NServiceBus behaviors are singletons that run for every message. Wolverine middleware is code-generated per handler chain and can be filtered to only the message types that need it.

### Configuration

**NServiceBus**:

```csharp
var endpointConfiguration = new EndpointConfiguration("Sales");
endpointConfiguration.UseTransport(new RabbitMQTransport(
    RoutingTopology.Conventional(QueueType.Quorum),
    connectionString));
endpointConfiguration.UsePersistence<SqlPersistence>();

var routing = endpointConfiguration.UseTransport(transport);
routing.RouteToEndpoint(typeof(PlaceOrder), "Sales.Orders");
```

**Wolverine**:

```csharp
builder.Host.UseWolverine(opts =>
{
    opts.UseRabbitMq(r => r.HostName = "localhost").AutoProvision();
    opts.PersistMessagesWithPostgresql(connectionString);
    opts.PublishMessage<PlaceOrder>().ToRabbitQueue("sales-orders");
    opts.ListenToRabbitQueue("sales-orders");
});
```

Key differences:

* NServiceBus uses `EndpointConfiguration`; Wolverine uses `UseWolverine()` on the .NET Generic Host
* NServiceBus selects one transport; Wolverine can use multiple transports simultaneously
* NServiceBus routes commands by assembly or type; Wolverine configures explicit routing per message type

### Transport Interoperability

Enable NServiceBus interop on an endpoint-by-endpoint basis:

```csharp
opts.ListenToAzureServiceBusQueue("incoming")
    .UseNServiceBusInterop();

opts.Policies.RegisterInteropMessageAssembly(typeof(SharedMessages).Assembly);
```

Wolverine detects message types from standard NServiceBus headers. You may need [message type aliases](/guide/messages#message-type-name-or-alias) to bridge naming differences. See the full [interoperability guide](/tutorials/interop#interop-with-nservicebus).
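A message type alias can be declared with Wolverine's `[MessageIdentity]` attribute. A hedged sketch only -- the alias string shown is hypothetical, and should match whatever type name the NServiceBus side actually puts on the wire (see the linked alias docs for the authoritative API):

```csharp
using Wolverine.Attributes;

// Aligns Wolverine's message type name with the one used on the wire
// by the NServiceBus side of the conversation
[MessageIdentity("SubmitOrder")]
public record SubmitOrder(string OrderId);
```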
### NServiceBus Shim Interfaces

Wolverine provides shim interfaces in the `Wolverine.Shims.NServiceBus` namespace that mimic the core NServiceBus API surface while delegating to Wolverine's `IMessageBus` and `IMessageContext` under the hood. These shims let you migrate handler code incrementally without rewriting every handler signature at once.

::: tip
The shim interfaces are included in the core Wolverine NuGet package -- no additional packages are needed. While these shims ease migration, the Wolverine team recommends eventually moving to Wolverine's native convention-based handlers and pure function style for the best developer experience.
:::

#### Automatic Handler Discovery

Wolverine automatically discovers classes implementing `IHandleMessages<T>` during its normal [handler discovery](/guide/handlers/discovery) assembly scanning -- no explicit registration is needed. The `IMessageHandlerContext` parameter in `Handle(T message, IMessageHandlerContext context)` is automatically resolved via Wolverine's built-in code generation.

Just make sure the assembly containing your `IHandleMessages<T>` implementations is included in Wolverine's discovery. By default, Wolverine scans the application assembly and any assemblies explicitly added via `opts.Discovery.IncludeAssembly()`. See the [handler discovery documentation](/guide/handlers/discovery) for more details on controlling which assemblies are scanned.

#### DI Registration for Non-Handler Interfaces

If you need to inject NServiceBus shim interfaces (`IMessageSession`, `IEndpointInstance`, `IUniformSession`, `ITransactionalSession`) into services outside of message handlers via constructor injection, register them with:

```csharp
builder.Host.UseWolverine(opts =>
{
    opts.UseNServiceBusShims();

    // Your Wolverine configuration...
});
```

#### Available Interfaces

| NServiceBus Shim | Delegates To | Purpose |
|-----------------|-------------|---------|
| `IMessageSession` | `IMessageBus` | Send/Publish outside of handlers |
| `IEndpointInstance` | `IMessageBus` + `IHost` | Running endpoint with lifecycle |
| `IMessageHandlerContext` | `IMessageContext` | Send/Publish/Reply inside handlers |
| `IUniformSession` | `IMessageBus` | Unified Send/Publish (inside or outside handlers) |
| `ITransactionalSession` | `IMessageBus` | Transactional Send/Publish (Open/Commit are obsolete) |
| `IHandleMessages<T>` | `IWolverineHandler` | Handler discovery marker |

#### Using `IHandleMessages<T>`

The `IHandleMessages<T>` shim extends `IWolverineHandler`, so implementing it automatically registers your handler with Wolverine's handler discovery:

```csharp
using Wolverine.Shims.NServiceBus;

// This handler is discovered by Wolverine via the IWolverineHandler marker
public class OrderHandler : IHandleMessages<PlaceOrder>
{
    public async Task Handle(PlaceOrder message, IMessageHandlerContext context)
    {
        // context.Send, context.Publish, context.Reply all delegate to Wolverine
        await context.Publish(new OrderPlaced(message.OrderId));
        await context.Reply(new PlaceOrderResponse { Success = true });
    }
}
```

#### Using IMessageSession / IEndpointInstance

Inject `IMessageSession` or `IEndpointInstance` to send and publish messages outside of handlers:

```csharp
using Wolverine.Shims.NServiceBus;

public class OrderController : ControllerBase
{
    private readonly IMessageSession _session;

    public OrderController(IMessageSession session) => _session = session;

    [HttpPost]
    public async Task<IActionResult> PlaceOrder(PlaceOrderRequest request)
    {
        await _session.Send(new PlaceOrder(request.OrderId));
        return Accepted();
    }
}
```

#### NServiceBus-Style Options

The shims include `SendOptions`, `PublishOptions`, and `ReplyOptions` classes that map to Wolverine's `DeliveryOptions`:

```csharp
var options = new SendOptions();
options.SetDestination("remote-endpoint");          // routes to a named endpoint
options.SetHeader("tenant-id", "acme");             // adds a header
options.DelayDeliveryWith(TimeSpan.FromMinutes(5)); // schedules delivery

await session.Send(new PlaceOrder("ABC-123"), options);
```

#### ITransactionalSession

`ITransactionalSession` delegates `Send` and `Publish` to `IMessageBus`. The `Open()` and `Commit()` lifecycle methods are marked `[Obsolete]` and throw `NotSupportedException` because Wolverine handles transactional messaging automatically via its built-in [outbox](/guide/durability/):

```csharp
// These methods are obsolete -- just delete the calls
// session.Open();   // throws NotSupportedException
// session.Commit(); // throws NotSupportedException

// Send and Publish work normally
await session.Send(new PlaceOrder("ABC-123"));
await session.Publish(new OrderPlaced("ABC-123"));
```

### Migration Checklist

**Phase 1: Coexistence**

* \[ ] Add Wolverine and transport NuGet packages alongside NServiceBus
* \[ ] Configure `UseWolverine()` in your host setup
* \[ ] Enable `UseNServiceBusInterop()` on endpoints exchanging messages
* \[ ] Register shared assemblies: `opts.Policies.RegisterInteropMessageAssembly(assembly)`
* \[ ] Convert `ICommand`/`IEvent` interface message types to concrete types
* \[ ] Write new handlers in Wolverine while NServiceBus handlers continue running

**Phase 2: Handler Migration**

* \[ ] Remove `IHandleMessages<T>` interfaces from handler classes
* \[ ] Replace `IMessageHandlerContext` with method parameter injection
* \[ ] Replace `context.Send()`/`context.Publish()` with cascading message return values
* \[ ] Refactor toward pure functions and static handlers
* \[ ] Convert pipeline behaviors to Wolverine middleware
* \[ ] Rewrite recoverability config to Wolverine error handling policies

**Phase 3: Saga Migration**

* \[ ] Convert `Saga<TSagaData>` to Wolverine `Saga` base class
* \[ ] Move saga data properties onto the saga class directly
* \[ ] Convert `IAmStartedByMessages<T>` to `Start(T)` methods
* \[ ]
Convert `ConfigureHowToFindSaga()` to convention-based correlation
* \[ ] Replace `IHandleTimeouts<T>` with scheduled messages

**Phase 4: Cleanup**

* \[ ] Remove NServiceBus interop, packages, and configuration
* \[ ] Remove NServiceBus license file
* \[ ] Enable Wolverine's [transactional outbox](/guide/durability/)
* \[ ] Consider [pre-generated types](/guide/codegen) for production

## From MediatR

For a detailed comparison of MediatR and Wolverine, see the dedicated [Wolverine for MediatR Users](/introduction/from-mediatr) guide.

### MediatR Shim Interfaces

Wolverine provides shim interfaces in the `Wolverine.Shims.MediatR` namespace that let you keep your existing MediatR handler signatures working under Wolverine without any code changes. These shims are included in the core Wolverine NuGet package.

::: tip
These shim interfaces are marker types that Wolverine's [handler discovery](/guide/handlers/discovery) recognizes via `IWolverineHandler`. No additional DI registration is needed -- just change your `using` statements from `MediatR` to `Wolverine.Shims.MediatR` and remove the MediatR NuGet packages.
:::

#### Available Interfaces

| MediatR Shim | Purpose |
|-------------|---------|
| `IRequest<T>` | Marker for request messages that return a response of type `T` |
| `IRequest` | Marker for request messages that do not return a response |
| `IRequestHandler<TRequest, TResponse>` | Handler for requests with a response (extends `IWolverineHandler`) |
| `IRequestHandler<TRequest>` | Handler for requests without a response (extends `IWolverineHandler`) |

#### Usage

Simply change the `using` directive from `MediatR` to `Wolverine.Shims.MediatR`:

```csharp
// Before: using MediatR;
using Wolverine.Shims.MediatR;

public record CreateOrder(string OrderId) : IRequest<OrderResult>;

public record OrderResult(string OrderId, string Status);

public class CreateOrderHandler : IRequestHandler<CreateOrder, OrderResult>
{
    public Task<OrderResult> Handle(CreateOrder request, CancellationToken cancellationToken)
    {
        return Task.FromResult(new OrderResult(request.OrderId, "Created"));
    }
}
```

Invoke using Wolverine's `IMessageBus`:

```csharp
// Before: var result = await mediator.Send(new CreateOrder("ABC-123"));
var result = await bus.InvokeAsync<OrderResult>(new CreateOrder("ABC-123"));
```

#### Migration Steps

1. Replace `using MediatR;` with `using Wolverine.Shims.MediatR;` in your handler files
2. Replace `IMediator.Send()` calls with `IMessageBus.InvokeAsync()` at call sites
3. Replace `IMediator.Publish()` calls with `IMessageBus.PublishAsync()`
4. Remove the MediatR NuGet packages
5.
Over time, consider removing the shim interfaces and adopting Wolverine's native convention-based handlers

The key differences in summary:

* **No `IRequest` / `IRequestHandler`** -- Wolverine handlers are discovered by convention
* **No `INotificationHandler`** -- Use Wolverine's [local queues](/guide/messaging/transports/local) with [durable inbox/outbox](/guide/durability/) for reliable background work
* **No `IPipelineBehavior`** -- Use Wolverine [middleware](/guide/handlers/middleware) with per-message-type filtering
* **Return values are cascading messages** -- No need for explicit `IMediator.Send()` to chain work
* **Pure function handlers** -- Static methods, method injection, no mocking needed for unit tests
* **Railway Programming** -- Use [compound handlers](/guide/handlers/#compound-handlers) with `Load`/`Validate` methods for sad-path handling
* **Unified model** -- Wolverine's `InvokeAsync()` replaces MediatR's `Send()`, and the same handler conventions work for both in-process and async messaging

The most common reason to migrate from MediatR is that Wolverine provides both the mediator pattern *and* asynchronous messaging with durable outbox support in one framework, eliminating the need for MediatR + MassTransit or MediatR + NServiceBus.

## From Rebus

[Rebus](https://github.com/rebus-org/Rebus) uses the same `IHandleMessages<T>` interface pattern as NServiceBus:

```csharp
// Rebus
public class OrderHandler : IHandleMessages<PlaceOrder>
{
    public async Task Handle(PlaceOrder message)
    {
        // handle the message
    }
}
```

**Wolverine** equivalent:

```csharp
public static class OrderHandler
{
    public static void Handle(PlaceOrder command)
    {
        // handle the message
    }
}
```

Key differences from Rebus:

* **No interfaces** -- Remove `IHandleMessages<T>`, Wolverine discovers handlers by convention
* **Sagas** -- Rebus uses `Saga<TSagaData>` with `IAmInitiatedBy<T>` and explicit `CorrelateMessages()`.
Wolverine uses convention-based `Start`/`Handle` methods with automatic correlation
* **Error handling** -- Rebus has retry-count + error queue with optional `IFailed<TMessage>` second-level handling. Wolverine has [per-exception-type policies](/guide/handlers/error-handling) with composable actions
* **Middleware** -- Rebus has global pipeline steps (`IIncomingStep`/`IOutgoingStep`). Wolverine has conventional middleware that can be [filtered per message type](/guide/handlers/middleware)
* **No transport interop** -- Unlike MassTransit and NServiceBus, there is no built-in Rebus interoperability in Wolverine. You would need a [custom envelope mapper](/tutorials/interop) or migrate endpoints fully

## From Brighter

[Brighter](https://github.com/BrighterCommand/Brighter) (Paramore.Brighter) uses a base class pattern with an attribute-driven middleware pipeline:

```csharp
// Brighter
public class OrderHandler : RequestHandler<PlaceOrder>
{
    [RequestLogging(step: 1, timing: HandlerTiming.Before)]
    [UseResiliencePipeline(policy: "retry", step: 2)]
    public override PlaceOrder Handle(PlaceOrder command)
    {
        // handle the command
        return base.Handle(command); // MUST call to continue pipeline
    }
}
```

**Wolverine** equivalent:

```csharp
public static class OrderHandler
{
    public static void Handle(PlaceOrder command)
    {
        // handle the command -- no base class, no pipeline chain to call
    }
}

// Middleware applied by policy, not attributes
opts.Policies.AddMiddleware<MyMiddleware>();
```

Key differences from Brighter:

* **No base class** -- No need to inherit from `RequestHandler<T>` or `RequestHandlerAsync<T>`
* **No `base.Handle()` chain** -- Wolverine handles pipeline chaining automatically via code generation; you cannot accidentally break the pipeline by forgetting `base.Handle()`
* **No sync/async split** -- Wolverine supports both sync and async handler methods in the same pipeline.
Brighter requires entirely separate `RequestHandler<T>` vs `RequestHandlerAsync<T>` hierarchies
* **Middleware by policy *and/or* attributes** -- Wolverine applies middleware through policies that can filter by message type, namespace, or any predicate. Brighter uses per-handler `[RequestHandlerAttribute]` decorators with compile-time-constant parameters
* **Error handling** -- Brighter delegates to Polly via `[UseResiliencePipeline]` attributes. Wolverine has [built-in error handling policies](/guide/handlers/error-handling) with retry, schedule, requeue, and dead letter actions

::: info
Wolverine originally used Polly internally, but we felt it was not adding any value in our particular usage and decided to eliminate it, as Polly's widespread adoption means that it's a common "diamond dependency conflict" waiting to happen. Marten continues to use Polly for low level command resiliency.
:::

## Message Routing

Wolverine supports any mix of explicit or conventional [message routing](/guide/messaging/subscriptions) to outbound endpoints (Rabbit MQ exchanges, Azure Service Bus or Kafka topics, for example). What Wolverine generally calls "conventional routing" is sometimes referred to by other tools as "automatic routing." In many cases Wolverine's out of the box conventional routing choices are going to be very similar to MassTransit or NServiceBus's existing routing topology, both to ease interoperability and also because, frankly, we thought their routing rules made perfect sense as is.

## Transport Overview

::: tip
Wolverine's [transactional inbox & outbox support](/guide/durability/) is orthogonal to the message broker or transport integration packages and is available for all of our supported messaging transports, including our local, in process queues option.
:::

If your current framework uses one of these transports, here's how they map to Wolverine:

| Transport | MassTransit Package | NServiceBus Package | Wolverine Package | Interop Support |
|-----------|-------------------|-------------------|-----------------|-----------------|
| RabbitMQ | `MassTransit.RabbitMQ` | `NServiceBus.Transport.RabbitMQ` | `Wolverine.RabbitMQ` | MassTransit, NServiceBus |
| Azure Service Bus | `MassTransit.Azure.ServiceBus.Core` | `NServiceBus.Transport.AzureServiceBus` | `Wolverine.AzureServiceBus` | MassTransit, NServiceBus |
| Amazon SQS | `MassTransit.AmazonSQS` | `NServiceBus.Transport.SQS` | `Wolverine.AmazonSqs` | MassTransit, NServiceBus |
| Amazon SNS | (via SQS) | (via SQS) | `Wolverine.AmazonSns` | MassTransit, NServiceBus |
| Kafka | `MassTransit.Kafka` | N/A | `Wolverine.Kafka` | CloudEvents |
| In-memory | `MassTransit.InMemory` | `LearningTransport` | Built-in [local queues](/guide/messaging/transports/local) | N/A |
| SQL Server | `MassTransit.SqlTransport` | `NServiceBus.Transport.SqlServer` | `Wolverine.SqlServer` | N/A |
| PostgreSQL | `MassTransit.PostgreSql` | `NServiceBus.Transport.PostgreSql` | `Wolverine.Postgresql` | N/A |

See the full list of [Wolverine transports](/guide/messaging/introduction) and the [interoperability tutorial](/tutorials/interop) for configuration details. Note that Wolverine supports a far greater number of messaging options because our community has been awesome at contributing new "transports."

---

--- url: /guide/migration.md ---

# Migration Guide

## Key Changes in 5.0

5.0 had very few breaking changes in the public API, but some in "publinternals" types most users would never touch.
The biggest change in the internals is the replacement of the venerable [TPL DataFlow library](https://learn.microsoft.com/en-us/dotnet/standard/parallel-programming/dataflow-task-parallel-library) with the [System.Threading.Channels library](https://learn.microsoft.com/en-us/dotnet/core/extensions/channels) everywhere that Wolverine uses in-memory queueing. The only change this caused to the public API was the removal of the option for direct configuration of the TPL DataFlow `ExecutionOptions`. Endpoint ordering and parallelization options are otherwise unchanged in the public fluent interface for configuration.

The `IntegrateWithWolverine()` syntax for ["ancillary stores"](/guide/durability/marten/ancillary-stores) changed to a [nested closure](https://martinfowler.com/dslCatalog/nestedClosure.html) syntax to be more consistent with the syntax for the main [Marten](https://martendb.io) store. The [Wolverine managed distribution of Marten projections and subscriptions](/guide/durability/marten/distribution) now applies to the ancillary stores as well.

The new [Partitioned Sequential Messaging](/guide/messaging/partitioning) feature is a potentially huge step forward for building a Wolverine system that can efficiently and resiliently handle concurrent access to sensitive resources.

The [Aggregate Handler Workflow](/guide/durability/marten/event-sourcing) feature with Marten now supports strongly typed identifiers. The declarative data access features with Marten (`[Aggregate]`, `[ReadAggregate]`, `[Entity]`, or `[Document]`) can utilize Marten batch querying for better efficiency when a handler or HTTP endpoint uses more than one declaration for data loading.

There is better control over how [Wolverine generates code with respect to IoC container usage](/guide/codegen.html#wolverine-code-generation-and-ioc). `IServiceContainer` moved to the `JasperFx` namespace. By and large, we've *tried* to replace any API nomenclature using "master" with "main."
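For reference, the nested closure form for an ancillary store mirrors the main store's `IntegrateWithWolverine()` call. This is only a sketch: the `IPlayerStore` marker interface and the `connectionString` variable are placeholder assumptions, and the code is meant to sit inside a `UseWolverine(opts => ...)` block:

```csharp
// Sketch only: IPlayerStore and connectionString are placeholders
opts.Services.AddMartenStore<IPlayerStore>(m =>
    {
        m.Connection(connectionString);
        m.DatabaseSchemaName = "players";
    })
    // 5.0 nested closure syntax, consistent with the main store's call
    .IntegrateWithWolverine(x =>
    {
        // point this ancillary store at the shared "main" envelope storage database
        x.MainConnectionString = connectionString;
    });
```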
## Key Changes in 4.0

* Wolverine dropped all support for .NET 6/7
* The previous dependencies on Oakton, JasperFx.Core, and JasperFx.CodeGeneration were all combined into a single [JasperFx](https://github.com/jasperfx/jasperfx) library. There are shims for any method with "Oakton" in its name, but these are marked as `[Obsolete]`. You can pretty well do a find and replace for "Oakton" to "JasperFx". If your Oakton command classes live in a different project than the runnable application, add this to that project's `Properties/AssemblyInfo.cs` file:

  ```cs
  using JasperFx;

  [assembly: JasperFxAssembly]
  ```

  This attribute replaces the older Oakton assembly attribute:

  ```cs
  using Oakton;

  [assembly: OaktonCommandAssembly]
  ```

* Internally, the full "Critter Stack" is trying to use `Uri` values to identify databases when targeting multiple databases in either a modular monolith approach or with multi-tenancy
* Many of the internal dependencies like the Marten or AWS SQS SDK NuGets were updated
* The signature of the Kafka `IKafkaEnvelopeMapper` changed somewhat to be more efficient in message serialization
* Wolverine now supports [multi-tenancy through separate databases for EF Core](/guide/durability/efcore/multi-tenancy)
* The Open Telemetry span names for executing a message are now the [Wolverine message type name](/guide/messages.html#message-type-name-or-alias)

## Key Changes in 3.0

The 3.0 release did not have any breaking changes to the public API, but does come with some significant internal changes.

### Lamar Removal

::: tip
Lamar is more "forgiving" than the built in `ServiceProvider`. If, after converting to Wolverine 3.0, you receive messages from `ServiceProvider` about not being able to resolve this, that, or the other, just go back to Lamar with the steps in this guide.
:::

The biggest change is that Wolverine is no longer directly coupled to the [Lamar IoC library](https://jasperfx.github.io/lamar), and Wolverine will no longer automatically replace the built in `ServiceProvider` with Lamar. At this point it is theoretically possible to use Wolverine with any IoC library that fully supports the ASP.NET Core DI conformance behavior, but Wolverine has only been tested against the default `ServiceProvider` and Lamar IoC containers. Do be aware if moving to Wolverine 3.0 that Lamar is more forgiving than `ServiceProvider`, so there might be some hiccups if you choose to forgo Lamar. See the [Configuration Guide](/guide/configuration) for more information.

Lamar does still have a little more robust support for the code generation abilities in Wolverine (Wolverine uses the IoC configuration to generate code that inlines dependency creation in a way that's more efficient than an IoC tool at runtime -- when it can).

::: tip
If you have any issues with Wolverine's code generation for your message handlers or HTTP endpoints after upgrading to Wolverine 3.0, please open a GitHub issue with Wolverine, but just know that you can probably fall back to using Lamar as the IoC tool to "fix" those issues with code generation planning.
:::

Wolverine 3.0 can now be bootstrapped with the `HostApplicationBuilder` or any standard .NET bootstrapping mechanism through `IServiceCollection.AddWolverine()`. The limitation of having to use `IHostBuilder` is gone.
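As a minimal sketch of the non-`IHostBuilder` path, bootstrapping through the standard `HostApplicationBuilder` might look like this (the Wolverine NuGet package is assumed to be referenced, and the configuration body is illustrative):

```csharp
// Sketch: bootstrapping Wolverine with HostApplicationBuilder
// instead of the older IHostBuilder requirement
var builder = Host.CreateApplicationBuilder(args);

builder.UseWolverine(opts =>
{
    // normal Wolverine configuration goes here
    opts.Policies.AutoApplyTransactions();
});

using var host = builder.Build();
await host.StartAsync();
```

If you prefer composing purely against `IServiceCollection`, the text above notes that `IServiceCollection.AddWolverine()` is also available for the same purpose.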
### Marten Integration

The Marten/Wolverine `IntegrateWithWolverine()` integration syntax changed from a *lot* of optional arguments to a single call with a nested lambda registration like this:

```cs
services.AddMarten(opts =>
    {
        opts.Connection(Servers.PostgresConnectionString);
        opts.DisableNpgsqlLogging = true;
    })
    .IntegrateWithWolverine(w =>
    {
        w.MessageStorageSchemaName = "public";
        w.TransportSchemaName = "public";
    })
    .ApplyAllDatabaseChangesOnStartup();
```

snippet source | anchor

All Marten/Wolverine integration options are available through this one call now, with the exception of event subscriptions.

### Wolverine.RabbitMq

The RabbitMq transport received a significant overhaul for 3.0.

#### RabbitMq Client v7

The RabbitMq .NET client has been updated to v7, bringing with it an internal rewrite to support async I/O and vastly improved memory usage & throughput. This version also supports OTEL out of the box.

::: warning
RabbitMq v7 is newly released. If you use another RabbitMQ wrapper/bus in your application, hold off on upgrading until it also supports v7.
:::

#### Conventional Routing Improvements

* Queue bindings can now be manually overridden on a per-message basis via `BindToExchange`. This is useful for scenarios where you wish to use conventional naming between different applications, but need other exchange types apart from `FanOut`. This should make conventional routing the default usage in the majority of situations. See [Conventional Routing](/guide/messaging/transports/rabbitmq/conventional-routing) for more information.
* Conventional routing entity creation has been split between the sender and receiving side. Previously the sender would generate all exchange and queue bindings, but now if the sender has no handlers for a specific message, the queues will not be created.

#### General RabbitMQ Improvements

* Added support for Headers exchange
* Queues now apply bindings instead of exchanges.
This is an internal change and shouldn't result in any obvious differences for users.
* The configuration model has expanded flexibility, with Queues now bindable to Exchanges alongside the existing model of Exchanges binding to Queues.
* The previous `BindExchange()` syntax was renamed to `DeclareExchange()` to better reflect Rabbit MQ operations

### Sagas

Wolverine 3.0 added optimistic concurrency support to the stateful `Saga` support. This will potentially cause database migrations for any Marten-backed `Saga` types, as it will now require the numeric version storage.

### Leader Election

The leader election functionality in Wolverine has been largely rewritten and *should* eliminate the issues with poor behavior in clusters, or at local debugging time, where nodes do not gracefully shut down. Internal testing has shown a significant improvement in Wolverine's ability to detect node changes and roll over the leadership election.

### Wolverine.PostgresSql

The PostgreSQL transport option requires you to explicitly set the `transportSchema`, or Wolverine will fall through to using `wolverine_queues` as the schema for the database backed queues. Wolverine will no longer use the envelope storage schema for the queues.

### Wolverine.Http

For [Wolverine.Http usage](/guide/http/), the Wolverine 3.0 usage of the less capable `ServiceProvider` instead of the previously mandated [Lamar](https://jasperfx.github.io/lamar) library necessitates the usage of this API to register necessary services for Wolverine.HTTP in addition to adding the Wolverine endpoints:

```cs
var builder = WebApplication.CreateBuilder(args);

// Add services to the container.

// Necessary services for Wolverine HTTP
// And don't worry, if you forget this, Wolverine
// will assert this is missing on startup :(
builder.Services.AddWolverineHttp();
```

snippet source | anchor

Also for Wolverine.Http users, the `[Document]` attribute behavior in the Marten integration is now "required by default."
### Azure Service Bus

The Azure Service Bus transport will now "sanitize" any queue/subscription names to be all lower case. This may impact your usage of conventional routing. Please report any problems with this to GitHub.

### Messaging

The behavior of `IMessageBus.InvokeAsync<T>(message)` changed in 3.0 such that the `T` response **is not also published as a message** at the same time when the initial message is sent with request/response semantics. Wolverine has gone back and forth on this behavior in its life, but at this point, the Wolverine team thinks that this is the least confusing behavioral rule.

You can selectively override this behavior and tell Wolverine to publish the response as a message no matter what by using the new 3.0 `[AlwaysPublishResponse]` attribute like this:

```cs
public class CreateItemCommandHandler
{
    // Using this attribute will force Wolverine to also publish the ItemCreated event even if
    // this is called by IMessageBus.InvokeAsync()
    [AlwaysPublishResponse]
    public async Task<(ItemCreated, SecondItemCreated)> Handle(CreateItemCommand command, IDocumentSession session)
    {
        var item = new Item { Id = Guid.NewGuid(), Name = command.Name };

        session.Store(item);

        return (new ItemCreated(item.Id, item.Name), new SecondItemCreated(item.Id, item.Name));
    }
}
```

snippet source | anchor

---

--- url: /tutorials/modular-monolith.md ---

# Modular Monoliths

@[youtube](JSnBe7n-CNI)

::: info
Wolverine's mantra is "low code ceremony," and the modular monolith approach comes with a mountain of temptation for a certain kind of software architect to try out a world of potentially harmful, high ceremony coding techniques. The Wolverine team urges you to proceed with caution and allow simplicity to trump architectural theories about coupling between application modules.
:::

Software development is still a young profession, and we are still figuring out the best ways to build systems, which means the pendulum swings a bit back and forth on what the software community thinks is the best way to build large systems. We saw some poor results from the old monolithic applications of yore, as we got codebases with slow build times that made our IDE tools sluggish and were generally just hard to maintain over time. Enter micro-services as an attempt to build software in smaller chunks, where you might be able to mostly work on smaller codebases with quicker builds, faster tests, and a much easier time upgrading technical infrastructure compared to monolithic applications. Of course, there were some massive downsides to the whole distributed development thing, and our industry has become disillusioned with it.

::: tip
We still think that Wolverine (and Marten) with its relentless focus on low ceremony code and strong support for asynchronous messaging makes the "Critter Stack" a great fit for micro-services -- and in some sense, a "modular monolith" can also be the first stage of a system architecture that ends up being micro-services after the best service boundaries are proven out *before* you try to pull modules into a separate service.
:::

While micro-services as a concept might be parked in the [trough of despair](https://tidyfirst.substack.com/p/the-trough-of-despair) for a while, the new thinking is to use a so-called "Modular Monolith" approach that splits the difference between monoliths and micro-services. The general idea is to start inside of a single process, but try to create more vertical decoupling between logical modules in the system as an alternative to both monoliths and micro-services.
![Modular Monolith](/modular-monolith.png)

The hope is that you can more easily reason about the code in a single module at a time compared to a monolith, but without having to tackle the extra deployment and management of micro-services upfront. Borrowing heavily from [Milan Jovanović's writing on Modular Monoliths](https://www.milanjovanovic.tech/blog/what-is-a-modular-monolith), the potential benefits are:

* Easier deployments than micro-services from simply having less to deploy
* Improved performance, assuming that integration between modules is done in process
* Maybe easier debugging by just having one process to deal with, but asynchronous messaging even in process is never going to be the easiest thing in the world
* Hopefully, a relatively easy path to being able to separate modules into separate services later as the logical boundaries become clear. Arguably, some of the worst outcomes of micro-services come from getting the service boundaries wrong upfront and creating very chatty interactions between different services. That can still happen with a modular monolith, but hopefully it's a lot easier to correct the boundaries later. We'll talk a lot more about this in the "Severability" section.
* The ability to adjust transaction boundaries to use native database transactions where it's valuable, instead of only having eventual consistency

Another explicitly stated hope for modular monoliths is that you're able to better iterate between modules to find the most effective boundaries between logical modules *before* severing modules into separate services later when that is beneficial.

## Important Wolverine Settings

Wolverine was admittedly conceived of and optimized for a world where micro-service architecture was the hot topic, and we've had to scramble a little bit as a community lately to make Wolverine more suitable for how users now want to use it for modular monoliths.
To avoid making breaking changes, we've had to put some modular monolith-friendly features behind configuration settings so as not to break existing users. Specifically, Wolverine "classic" has two conceptual problems for modular monoliths with its original model:

1. If you have multiple message handlers for the same message type, Wolverine combines these handlers into one logical message handler and one logical transaction
2. Messages in its [transactional inbox](/guide/durability/#using-the-inbox-for-incoming-messages) are identified by only the message id. That's worked great until folks start wanting to receive the same message from an external broker, but handled separately by different handlers receiving the same message from different queues, subscriptions, or topics depending on the external transport. This is shown below:

![Receiving Same Message 2 or More Times](/receive-message-twice.png)

Both of these behaviors can be changed in your application by setting these two flags shown below:

```cs
var builder = Host.CreateApplicationBuilder();

// It's not important that it's Marten here, just that if you have
// *any* message persistence configured for the transactional inbox/outbox
// support, it's impacted by the MessageIdentity setting
builder.Services.AddMarten(opts =>
    {
        var connectionString = builder.Configuration.GetConnectionString("marten");
        opts.Connection(connectionString);
    })

    // This line of code is adding a PostgreSQL backed transactional inbox/outbox
    // integration using the same database as what Marten is using
    .IntegrateWithWolverine();

builder.UseWolverine(opts =>
{
    // This helps Wolverine to use a unified envelope storage across all
    // modules, which in turn should help Wolverine be more efficient with
    // your database
    opts.Durability.MessageStorageSchemaName = "wolverine";

    // Tell Wolverine that when you have more than one handler for the same
    // message type, they should be executed separately and automatically
    // "stuck" to separate local queues
    opts.MultipleHandlerBehavior = MultipleHandlerBehavior.Separated;

    // *If* you may be using external message brokers in such a way that they
    // are "fanning out" a single message sent from an upstream system into
    // multiple listeners within the same Wolverine process, you'll want to make
    // this setting to tell Wolverine to treat that as completely different messages
    // instead of failing by idempotency checks
    opts.Durability.MessageIdentity = MessageIdentity.IdAndDestination;

    // Not 100% necessary for "modular monoliths", but this makes the Wolverine durable
    // inbox/outbox feature a lot easier to use and DRYs up your message handlers
    opts.Policies.AutoApplyTransactions();
});
```

snippet source | anchor

See [Message Identity](/guide/durability/#message-identity) and [Multiple Handlers for the Same Message Type](/guide/handlers/#multiple-handlers-for-the-same-message-type) for more detail.

The `MultipleHandlerBehavior.Separated` setting is meant for an increasingly common scenario shown below, where you want to take completely separate actions on an event message, published by an upstream handler, in separate logical modules:

![Publishing a message to multiple local subscribers](/publish-event-to-multiple-handlers.png)

By using the `MultipleHandlerBehavior.Separated` setting, we're directing Wolverine to track any `OrderPlaced` event message completely separately for each handler. By default, this means publishing the event message to two completely separate [local, in process queues](/guide/messaging/transports/local) inside the Wolverine application.
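To make the scenario concrete, a hedged sketch of two module handlers for the same event follows; the `MyApp.Fulfillment` and `MyApp.Notifications` namespaces and the handler bodies are hypothetical:

```csharp
// Hypothetical: two modules each handle the same OrderPlaced event.
// With MultipleHandlerBehavior.Separated, each handler gets its own
// local queue, its own transaction, and its own retry loop.
namespace MyApp.Fulfillment
{
    public static class OrderPlacedHandler
    {
        public static void Handle(OrderPlaced @event)
        {
            // start the fulfillment workflow for this order
        }
    }
}

namespace MyApp.Notifications
{
    public static class OrderPlacedHandler
    {
        public static void Handle(OrderPlaced @event)
        {
            // email an order confirmation to the customer
        }
    }
}
```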
By publishing to separate queues, you also get:

* Independent transactions for each handler -- assuming that you're using Wolverine transactional middleware anyway
* Separate retry loops and potentially different [error handling policies](/guide/handlers/error-handling) for the message to each handler
* The ability to mix and match [durable vs lighter weight "fire and forget"](/guide/runtime.html#endpoint-types) (`Buffered` in Wolverine parlance) semantics for different handlers
* Granular tracing and logging on the handlers

::: tip
When using `MultipleHandlerBehavior.Separated`, Wolverine automatically fans out messages arriving from external broker endpoints (RabbitMQ, Azure Service Bus, Kafka, etc.) to all the separated local handler queues. This means you don't need any special routing configuration -- a single message received from an external queue will be forwarded to each local handler queue automatically, so every separated handler processes its own copy of the message independently.
:::

## Splitting Your System into Separate Assemblies

::: info
The technical leader of Wolverine has a decades-old loathing of the Onion Architecture and now the current Clean Architecture fad. While it's perfectly possible to spread a Wolverine application out over separate assemblies, we'd urge you to keep your project structure as simple as possible and not automatically incur extra code ceremony by trying to use separate projects just to enforce coupling rules.
:::

To be honest, the Wolverine team would recommend just keeping your modules segregated in separate namespaces until the initial system gets subjectively big enough that you'd want them separated. Do note that Wolverine identifies message types by default by the message type's full type name (\[.NET namespace].\[type name]).
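For illustration, a hedged sketch of pinning a message type name with Wolverine's `[MessageIdentity]` attribute, so the wire-level name stays stable even if the type later moves namespaces (the `OrderPlaced` type and the alias string are arbitrary examples):

```csharp
// Illustrative: pin the message type name so moving OrderPlaced
// to a different namespace later won't break in-flight messages
[MessageIdentity("order-placed")]
public record OrderPlaced(string OrderId);
```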
You can always override that explicitly through the [`[MessageIdentity]`](/guide/messages.html#message-type-name-or-alias) attribute, but you might try *not* to have to move message types around in the namespace structure. The only real impact is on messages that are in flight in either external message queues or message persistence, so it does no harm to change namespaces if you are only in development and have not yet deployed to production.

For handler or HTTP endpoint discovery, you can tell Wolverine to look in additional assemblies. See [Assembly Discovery](/guide/handlers/discovery.html#assembly-discovery) for more information.

As for [pre-generated code](/guide/codegen) with Wolverine, the least friction and most idiomatic approach is to just have all Wolverine-generated code placed in the entry assembly. That can be overridden, if you have to, by setting the "Application Assembly" as shown in the [Assembly Discovery](/guide/handlers/discovery.html#assembly-discovery) section of the documentation.

## In Process vs External Messaging

::: tip
Just to be clear, we pretty well never recommend calling `IMessageBus.InvokeAsync()` inline in any message handler to another message handler. For the most part, we think you can build much more robust and resilient systems by leveraging asynchronous messaging. Using [Wolverine as a "Mediator"](/tutorials/mediator) in MVC controllers, Minimal API functions, or maybe Hot Chocolate mutations is an exception case that we fully support. We think this advice applies to any mediator tool and the pattern in general as well.
Asynchronous messaging will help you keep your modules decoupled, and often leads to much more resilient systems as your modules aren't "temporally" coupled and you utilize [retry or other error handling policies](/guide/handlers/error-handling) independently on downstream queues. You can communicate with any mix of in process messaging and messaging through external messaging brokers like Rabbit MQ or Azure Service Bus. Let's start with just using local, in process queueing with Wolverine between your modules as shown below: ![Communicating through local queues](/modular-monolith-local-queues.png) Now, let's say that you want to publish an `OrderPlaced` event message from the successful processing of a `PlaceOrder` command in a message handler something like this: ```csharp public static OrderPlaced Handle(PlaceOrder command) { // actually do stuff to place a new order... // Returning this from the method will "cascade" this // object as a message. Essentially just publishing // this as a message to any active subscribers in the // Wolverine system return new OrderPlaced(command.OrderId); } ``` and assuming that there's *at least one* known message handler in your application for the `OrderPlaced` event: ```csharp public static class OrderPlacedHandler { public static void Handle(OrderPlaced @event) => Debug.WriteLine("got a new order " + @event.OrderId); } ``` then Wolverine -- by default -- will happily publish `OrderPlaced` through [a local queue](/guide/messaging/transports/local) named after the full type name of the `OrderPlaced` event. 
You can even make these local queues durable by having them effectively backed by your application's Wolverine message storage (the transactional inbox to be precise), with a couple different approaches shown below:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.UseDurableLocalQueues();

        // or
        opts.LocalQueue("important").UseDurableInbox();

        // or conventionally, make the local queues for messages in a certain namespace
        // be durable
        opts.Policies.ConfigureConventionalLocalRouting().CustomizeQueues((type, queue) =>
        {
            if (type.IsInNamespace("MyApp.Commands.Durable"))
            {
                queue.UseDurableInbox();
            }
        });
    }).StartAsync();
```

snippet source | anchor

Using local queues for communication is a simple way to get started, requires less deployment overhead in general, and is potentially faster than using external message brokers due to the in process communication.

::: info
If you are using durable local queues, Wolverine still serializes the message to put it in the durable transactional inbox storage, but the actual message object is used as is when it's passed into the local queue.
:::

Alternatively, you could instead choose to do all intra-module communication through external message brokers as shown below:

![Communicating through external brokers](/modular-monolith-communication-external-broker.png)

Picking Azure Service Bus for our sample, you could use conventional message routing to publish all messages in your system through Azure Service Bus queues like this:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // Turn *off* the conventional local routing so that
    // the messages that this application handles still go
    // through the external Azure Service Bus broker
    opts.Policies.DisableConventionalLocalRouting();

    // One way or another, you're probably pulling the Azure Service Bus
    // connection string out of configuration
    var azureServiceBusConnectionString = builder
        .Configuration
        .GetConnectionString("azure-service-bus");

    // Connect to the broker in the simplest possible way
    opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision()
        .UseConventionalRouting();
});

using var host = builder.Build();
await host.StartAsync();
```

snippet source | anchor

By using external queues instead of local queues, you are:

* Potentially getting smoother load balanced workloads between running nodes of a clustered application
* Reducing memory pressure in your applications, especially if there's any risk of a queue getting backed up and growing large in memory

And of course, Wolverine has a wealth of ways to customize message routing for sequencing, grouping, and parallelization, as well as allowing you to mix and match local and external broker messaging, or durable and non-durable messaging, all within the same application. See the recently updated documentation on [Message Routing in Wolverine](/guide/messaging/subscriptions) to learn more.
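Whichever transport mix you land on, explicit routing lets individual modules opt into durability independently. A hedged sketch, assuming a hypothetical "billing" local queue name and the illustrative `OrderPlaced` event type:

```csharp
// Sketch: route one event type to a named, durable local queue so the
// billing module's copy of the message survives process restarts.
// The "billing" queue name and OrderPlaced type are placeholders.
builder.UseWolverine(opts =>
{
    opts.PublishMessage<OrderPlaced>()
        .ToLocalQueue("billing")
        .UseDurableInbox();
});
```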
## Eventual Consistency between Modules

We (the Wolverine team) are loath to recommend using [eventual consistency](https://en.wikipedia.org/wiki/Eventual_consistency) between modules if you don't have to. It's always going to be technically simpler to just make all the related changes in a single database transaction. It'll definitely be easier to test and troubleshoot problems if you don't use eventual consistency. Not to mention the challenges with user interfaces getting the right updates and possibly dealing with stale data.

**To be clear though, we strongly recommend using asynchronous communication between modules** and recommend against using `IMessageBus.InvokeAsync()` inline in most cases to synchronously interact with any other module from a message handler. We think your most common decision is:

* Would it be easier in the end to combine functionality into one larger module to utilize transactional integrity and avoid the need for eventual consistency through asynchronous messaging?
* Or is there a real justification for publishing event messages to other modules to take action later?

Assuming that you do opt for eventual consistency, Wolverine makes that quite simple. Just make sure that you are using [durable endpoints](/guide/durability) for communication between any two or more actions that are involved in the implied eventual consistency transactional boundary, so that the work does not get lost even in the face of transient errors or unexpected system shutdowns.

::: tip
Look, MediatR is an almost dominant tool in the .NET ecosystem right now, but it doesn't come with any kind of built in transactional inbox/outbox support that you need to make asynchronous message passing resilient. See [MediatR to Wolverine](/introduction/from-mediatr) for information about switching to Wolverine from MediatR.
:::

## Test Automation Support

::: info
As a community, we'll most assuredly need to add more convenient API signatures to the tracked sessions specifically to deal with the new usages coming out of modular monolith strategies, but we're first waiting for feedback from real projects on what would be helpful before doing that.
:::

Wolverine's [Tracked Sessions](/guide/testing.html#integration-testing-with-tracked-sessions) feature is purpose-built for test automation support when you want to write tests that might span the activity of more than one message being handled. Consider the case of testing the handling of a `PlaceOrder` command that in turn publishes an `OrderPlaced` event message that is handled by one or more other handlers within your modular monolith system. If you want to write a **reliable** test that spans the activities of all of these messages, you can utilize Wolverine's "tracked sessions" like this:

```cs
// Personally, I prefer to reuse the IHost between tests and
// do something to clear off any dirty state, but other folks
// will spin up an IHost per test to maybe get better test parallelization
public static async Task run_end_to_end(IHost host)
{
    var placeOrder = new PlaceOrder("111", "222", 1000);

    // This would be the "act" part of your arrange/act/assert
    // test structure
    var tracked = await host.InvokeMessageAndWaitAsync(placeOrder);

    // proceed to test the outcome of handling the original command *and*
    // any subsequent domain events that are published from the original
    // command handler
}
```

snippet source | anchor

In the code sample above, the `InvokeMessageAndWaitAsync()` method puts the Wolverine runtime into a "tracked" mode where it's able to "know" when all in flight work is complete and allow your integration testing to be reliable by waiting until all cascaded messages are also complete (and yes, it works recursively).
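To make the assertion step above concrete, the tracked session returned by `InvokeMessageAndWaitAsync()` records all message activity during the session. Here's a minimal sketch, assuming a hypothetical `OrderPlaced` event record carrying an `OrderId` property and Shouldly-style assertions:

```cs
public static async Task assert_on_tracked_activity(IHost host)
{
    var tracked = await host.InvokeMessageAndWaitAsync(new PlaceOrder("111", "222", 1000));

    // Find the single OrderPlaced event that was published
    // as a cascading message from the PlaceOrder handler
    var orderPlaced = tracked.Sent.SingleMessage<OrderPlaced>();
    orderPlaced.OrderId.ShouldBe("111");

    // Or verify that the cascaded event was actually
    // executed by a handler before the session completed
    tracked.Executed.SingleMessage<OrderPlaced>().ShouldNotBeNull();
}
```

Because the tracked session doesn't complete until all cascaded messages have been handled, these assertions run only after the entire chain of activity has finished.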
One of the challenges of testing asynchronous code is not doing the *assert* phase of the test until the *act* part is really complete, and "tracked sessions" are Wolverine's answer to that problem.

There are more options you may need to use with modular monoliths; this version of tracking activity also includes any outstanding work from messages that are sent to external brokers:

```cs
public static async Task run_end_to_end_with_external_transports(IHost host)
{
    var placeOrder = new PlaceOrder("111", "222", 1000);

    // This would be the "act" part of your arrange/act/assert
    // test structure
    var tracked = await host
        .TrackActivity()

        // Direct Wolverine to also track activity coming and going from
        // external brokers
        .IncludeExternalTransports()

        // You'll sadly need to do this sometimes
        .Timeout(30.Seconds())

        // You *might* have to do this as well to make
        // your tests more reliable in the face of async messaging
        .WaitForMessageToBeReceivedAt<OrderPlaced>(host)

        .InvokeMessageAndWaitAsync(placeOrder);

    // proceed to test the outcome of handling the original command *and*
    // any subsequent domain events that are published from the original
    // command handler
}
```

snippet source | anchor

And to test the invocation of an event message to a specific handler, we can still do that by sending the message to a specific local queue:

```cs
public static async Task test_specific_handler(IHost host)
{
    // We're not thrilled with this usage and it's possible there's
    // syntactic sugar additions to the API soon
    await host.ExecuteAndWaitAsync(
        c => c.EndpointFor("local queue name").SendAsync(new OrderPlaced("111")).AsTask());
}
```

snippet source | anchor

## With EF Core

::: tip
The Wolverine transactional middleware cannot utilize more than one `DbContext` type in a single handler. You can certainly use multiple `DbContext` types in one handler, just with explicit code.
:::

For EF Core usage, we would recommend using separate `DbContext` types for different modules that all target a separate database schema, but still land in the same physical database. This may change soon, but for right now, Wolverine only supports transactional inbox/outbox usage with a single database with EF Core.

To maintain "severability" of modules to separate services later, you probably want to avoid making foreign key relationships in your database between tables owned by different modules. And of course, by and large, use only one `DbContext` type in the code for a single module. Or, more accurately, each `DbContext` type should belong to exactly one module.

## With Marten

[Marten](https://martendb.io) plays pretty well with modular monoliths. For the most part, you can happily just stick all your documents in the same database schema and use the same `IDocumentStore` if you want while still being able to migrate some of those documents later if you choose to sever some modules over into a separate service.

With event sourcing though, all the events for different aggregate types or stream types go into the same events table. While it's not impossible to separate the events through database scripts if you want to move a module into a separate service later, it's probably going to be easier if you use [Marten's separate document store](https://martendb.io/configuration/hostbuilder.html#working-with-multiple-marten-databases) feature. Wolverine has [direct support for Marten's separate or "ancillary" stores](/guide/durability/marten/ancillary-stores) that still enables the usage of all Wolverine + Marten integrations.

Also note that the Wolverine + Marten "Critter Stack" combination is a great fit for "Event Driven Architecture" approaches where you depend on reliably publishing event messages to interested listeners in your application -- which is essentially how a lot of folks want to build their modular monoliths.
See the introduction to [event subscriptions from Marten](/tutorials/cqrs-with-marten.html#publishing-or-handling-events).

Do note that if you are using multiple document stores with Marten for different modules, but all the stores target the exact same physical PostgreSQL database as shown in this diagram below:

![Modules using the same physical database](/modules-hitting-same-database.png)

you can help Wolverine be a little more efficient by sharing the same transactional inbox/outbox storage across all modules with this setting:

```cs
// THIS IS IMPORTANT FOR MODULAR MONOLITH USAGE!
// This helps Wolverine out to always utilize the same envelope storage
// for all modules for more efficient usage of resources
opts.Durability.MessageStorageSchemaName = "wolverine";
```

snippet source | anchor

By setting any value for `WolverineOptions.Durability.MessageStorageSchemaName`, Wolverine will use that value for the database schema of the message storage tables, and be able to share the inbox/outbox processing across all the modules.

## Observability

If you're going to opt into asynchronous message passing between modules within your application, or really any kind of asynchronous messaging within a Wolverine application, we very strongly recommend using some sort of [OpenTelemetry](https://opentelemetry.io/) (Otel) compatible monitoring tool (I would think that every monitoring tool supports Otel by now). Wolverine emits Otel activity spans for all message processing as well as just about any kind of relevant event within a Wolverine application. See [the Wolverine Otel support](/guide/logging.html#open-telemetry) for more information.
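As a minimal sketch of that wiring, assuming the `OpenTelemetry.Extensions.Hosting` and `OpenTelemetry.Exporter.OpenTelemetryProtocol` packages, "Wolverine" as the activity source name, and a hypothetical service name:

```cs
var builder = Host.CreateApplicationBuilder();

builder.Services.AddOpenTelemetry()
    // "my-wolverine-app" is just a placeholder service name
    .ConfigureResource(r => r.AddService("my-wolverine-app"))
    .WithTracing(tracing =>
    {
        // Subscribe to Wolverine's activity source so message
        // handling spans flow to your Otel-compatible backend
        tracing.AddSource("Wolverine");

        // Export via OTLP to whatever collector you run
        tracing.AddOtlpExporter();
    });
```

From there, any Otel-compatible backend (Jaeger, Honeycomb, Application Insights, etc.) will show the spans Wolverine emits for message sending, receiving, and handler execution.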
---

--- url: /guide/http/multi-tenancy.md ---

# Multi-Tenancy and ASP.Net Core

::: warning
Neither Wolverine.HTTP nor Wolverine message handling uses the shared, scoped IoC/DI container from an ASP.Net Core request, and any common mechanism for multi-tenancy inside of HTTP requests that relies on IoC trickery will probably not work -- with the possible exception of `IHttpContextAccessor` using `AsyncLocal`
:::

::: info
"Real" multi-tenancy support for Wolverine.HTTP was added in Wolverine 1.7.0.
:::

## Tenant Id Detection

::: warning
Wolverine's multi-tenancy support is admittedly built with [Marten's multi-tenancy support](https://martendb.io/documents/multi-tenancy.html) in mind, and part of that is assuming that tenants are identified with a `string`.
:::

::: tip
Wolverine has no direct or special security integration, but should be usable with (we think) any existing [ASP.Net Core authentication and authorization support](https://learn.microsoft.com/en-us/aspnet/core/security/authorization/claims?view=aspnetcore-7.0) including the `[Authorize]` attribute usage that declares required claims.
:::

The first part of any multi-tenancy approach in HTTP services is to just detect which tenant should be active within the current request. Wolverine.HTTP refers to this as "tenant id detection". Out of the box, Wolverine comes with some simple recipes that can be mixed and matched as shown below:

```cs
var builder = WebApplication.CreateBuilder();

var connectionString = builder.Configuration.GetConnectionString("postgres");

builder.Services
    .AddMarten(connectionString)
    .IntegrateWithWolverine();

builder.Host.UseWolverine(opts =>
{
    opts.Policies.AutoApplyTransactions();
});

var app = builder.Build();

// Configure the WolverineHttpOptions
app.MapWolverineEndpoints(opts =>
{
    // The tenancy detection is fall-through, so the first strategy
    // that finds anything wins!
// Use the value of a named request header opts.TenantId.IsRequestHeaderValue("tenant"); // Detect the tenant id from an expected claim in the // current request's ClaimsPrincipal opts.TenantId.IsClaimTypeNamed("tenant"); // Use a query string value for the key 'tenant' opts.TenantId.IsQueryStringValue("tenant"); // Use a named route argument for the tenant id opts.TenantId.IsRouteArgumentNamed("tenant"); // Use the *first* sub domain name of the request Url // Note that this is very naive opts.TenantId.IsSubDomainName(); // If the tenant id cannot be detected otherwise, fallback // to a designated tenant id opts.TenantId.DefaultIs("default_tenant"); }); return await app.RunJasperFxCommands(args); ``` snippet source | anchor All of the options are configured on `WolverineHttpOptions.TenantId`. ::: tip Wolverine does not yet have direct support for multi-tenancy with Entity Framework Core, but that's something we're interested in building into Wolverine's feature set. [You can track or comment on that work here](https://github.com/JasperFx/wolverine/issues/556). ::: When Wolverine is actively detecting the tenant id, it's first setting the detected value on the active `MessageContext.TenantId` property, so any messages sent out during the execution of the HTTP request will also be tagged with this tenant id. In the case of the [Marten integration with Wolverine](/guide/durability/marten/), Wolverine is able to use the tenant id to create the proper `IDocumentSession`. As an example, consider the [MultiTenantedTodoService ](https://github.com/JasperFx/wolverine/tree/main/src/Samples/MultiTenantedTodoService/MultiTenantedTodoService) sample in the Wolverine codebase. 
That service first sets up multi-tenancy in Marten with a separate database per tenant like so:

```cs
// Adding Marten for persistence
builder.Services.AddMarten(m =>
{
    // Not necessary to do this for the runtime, but does help the codegen
    // and diagnostics
    m.Schema.For<Todo>();

    // With multi-tenancy through a database per tenant
    m.MultiTenantedDatabases(tenancy =>
    {
        // You would probably be pulling the connection strings out of configuration,
        // but it's late in the afternoon and I'm being lazy building out this sample!
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant1;Username=postgres;password=postgres", "tenant1");
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant2;Username=postgres;password=postgres", "tenant2");
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant3;Username=postgres;password=postgres", "tenant3");
    });

    m.DatabaseSchemaName = "mttodo";
})
.IntegrateWithWolverine(x => x.MainDatabaseConnectionString = connectionString);
```

snippet source | anchor

Then configures Wolverine itself like:

```cs
// Wolverine usage is required for WolverineFx.Http
builder.Host.UseWolverine(opts =>
{
    // This middleware will apply to the HTTP
    // endpoints as well
    opts.Policies.AutoApplyTransactions();

    // Setting up the outbox on all locally handled
    // background tasks
    opts.Policies.UseDurableLocalQueues();
});
```

snippet source | anchor

Lastly, the Wolverine.HTTP setup to add the tenant id detection:

```cs
// Let's add in Wolverine HTTP endpoints to the routing tree
app.MapWolverineEndpoints(opts =>
{
    // Letting Wolverine HTTP automatically detect the tenant id!
opts.TenantId.IsRouteArgumentNamed("tenant");

    // Assert that the tenant id was successfully detected,
    // or pull the rip cord on the request and return a
    // 400 w/ ProblemDetails
    opts.TenantId.AssertExists();

    opts.WarmUpRoutes = RouteWarmup.Eager;
});
```

snippet source | anchor

In the code sample above, I'm choosing to make the "tenant" a mandatory route argument on each HTTP endpoint, then relying on that for the tenant id detection. As discussed in a later section, this application is also enforcing that all routes must have a non-null tenant.

::: warning
Wolverine is not yet doing anything to validate your tenant id, so that will need to be done explicitly in your own code.
:::

Inside of this "Todo" web service, there's an endpoint that just allows users to access the data for all the `Todo` items persisted in the current tenant's database like so:

```cs
// The "tenant" route argument would be the route
[WolverineGet("/todoitems/{tenant}")]
public static Task<IReadOnlyList<Todo>> Get(string tenant, IQuerySession session)
{
    return session.Query<Todo>().ToListAsync();
}
```

snippet source | anchor

At runtime, Wolverine is now generating this code around that endpoint method:

```csharp
public class GET_todoitems_tenant : Wolverine.Http.HttpHandler
{
    private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
    private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
    private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;

    public GET_todoitems_tenant(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory) : base(wolverineHttpOptions)
    {
        _wolverineHttpOptions = wolverineHttpOptions;
        _wolverineRuntime = wolverineRuntime;
        _outboxedSessionFactory = outboxedSessionFactory;
    }

    public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
    {
        var messageContext =
new Wolverine.Runtime.MessageContext(_wolverineRuntime); // Tenant Id detection // 1. Tenant Id is route argument named 'tenant' var tenantId = await TryDetectTenantId(httpContext); messageContext.TenantId = tenantId; if (string.IsNullOrEmpty(tenantId)) { await WriteTenantIdNotFound(httpContext); return; } // Building the Marten session using the detected tenant id await using var querySession = _outboxedSessionFactory.QuerySession(messageContext, tenantId); var tenant = (string)httpContext.GetRouteValue("tenant"); // The actual HTTP request handler execution var todoIReadOnlyList_response = await MultiTenantedTodoWebService.TodoEndpoints.Get(tenant, querySession).ConfigureAwait(false); // Writing the response body to JSON because this was the first 'return variable' in the method signature await WriteJsonAsync(httpContext, todoIReadOnlyList_response); } } ``` ## Referencing the Tenant Id in Endpoint Methods See [Referencing the TenantId](/guide/handlers/multi-tenancy.html#referencing-the-tenantid) on using Wolverine's `TenantId` type. ## Requiring Tenant Id -- or Not! You can direct Wolverine.HTTP to verify that there is a non-null, non-empty tenant id on all requests with this syntax: ```cs app.MapWolverineEndpoints(opts => { // Configure your tenant id detection... // Require tenant id some how, some way... opts.TenantId.AssertExists(); }); ``` snippet source | anchor At runtime, this is going to return a status code of 400 with a [ProblemDetails](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.problemdetails?view=aspnetcore-7.0) specification response stating that the tenant id was missing. 
But of course, you will frequently have *some* endpoints within your system that do **not** use any kind of multi-tenancy, so you can completely opt out of all tenant id detection and assertions through the `[NotTenanted]` attribute as shown here in the tests:

```cs
// Mark this endpoint as not using any kind of multi-tenancy
[WolverineGet("/nottenanted"), NotTenanted]
public static string NoTenantNoProblem()
{
    return "hey";
}
```

snippet source | anchor

The above usage completely disables all tenant id detection and validation. For an endpoint that *might* be tenanted, or might be validly used across all tenants depending on client needs, you can keep the tenant id detection while disabling the tenant id assertion on missing values with the `[MaybeTenanted]` attribute shown below in test code:

```cs
// Mark this endpoint as "maybe" having a tenant id
[WolverineGet("/maybe"), MaybeTenanted]
public static string MaybeTenanted(IMessageBus bus)
{
    return bus.TenantId ?? "none";
}
```

snippet source | anchor

## Custom Tenant Detection Strategy

The built in tenant id detection strategies are all very simplistic, and it's quite possible that you will have more complex needs. Maybe you need to do some database lookups. Maybe you need to interpret the values and partially parse route parameters.
Wolverine still has you covered by allowing you to create custom implementations of its `Wolverine.Http.Runtime.MultiTenancy.ITenantDetection` interface:

```cs
/// <summary>
/// Used to create new strategies to detect the tenant id from an HttpContext
/// for the current request
/// </summary>
public interface ITenantDetection
{
    /// <summary>
    /// This method can return the actual tenant id or null to represent "not found"
    /// </summary>
    /// <param name="context"></param>
    /// <returns></returns>
    public ValueTask<string?> DetectTenant(HttpContext context);
}
```

snippet source | anchor

As an example, the route argument detection implementation looks like this:

```cs
internal class ArgumentDetection : ITenantDetection, ISynchronousTenantDetection
{
    private readonly string _argumentName;

    public ArgumentDetection(string argumentName)
    {
        _argumentName = argumentName;
    }

    public ValueTask<string?> DetectTenant(HttpContext httpContext)
        => new(DetectTenantSynchronously(httpContext));

    public override string ToString()
    {
        return $"Tenant Id is route argument named '{_argumentName}'";
    }

    public string? DetectTenantSynchronously(HttpContext context)
    {
        return context.Request.RouteValues.TryGetValue(_argumentName, out var value) ?
value?.ToString() : null;
    }
}
```

snippet source | anchor

::: tip
When you implement your custom strategy, the `ToString()` output will be a hopefully descriptive comment in the generated HTTP endpoint code as a diagnostic
:::

To add a custom tenant id detection strategy, you can use one of two options:

```cs
app.MapWolverineEndpoints(opts =>
{
    // If your strategy does not need any IoC service
    // dependencies, just add it directly
    opts.TenantId.DetectWith(new MyCustomTenantDetection());

    // In this case, your detection type will be built by
    // the underlying IoC container for your application
    // No other registration is necessary here for the strategy
    // itself
    opts.TenantId.DetectWith<MyCustomTenantDetection>();
});
```

snippet source | anchor

Just note that if you are having the IoC container for your Wolverine application resolve your custom `ITenantDetection` strategy, it's going to be effectively `Singleton`-scoped. Wolverine depends on using [Lamar](https://jasperfx.github.io/lamar) as the underlying IoC container, and Lamar does not require prior registrations to directly resolve a concrete type as long as it can select a public constructor with dependencies that it "knows" how to resolve in turn.

## Delegating to Wolverine as "Mediator"

To utilize multi-tenancy with Wolverine.HTTP today *and* play nicely with Wolverine's transactional inbox/outbox at the same time, you will have to use Wolverine as a mediator but also pass the tenant id as an argument as shown in this sample project:

```cs
// While this is still valid....
[WolverineDelete("/todoitems/{tenant}/longhand")]
public static async Task Delete(
    string tenant,
    DeleteTodo command,
    IMessageBus bus)
{
    // Invoke inline for the specified tenant
    await bus.InvokeForTenantAsync(tenant, command);
}

// Wolverine.HTTP 1.7 added multi-tenancy support so
// this short hand works without the extra jump through
// "Wolverine as Mediator"
[WolverineDelete("/todoitems/{tenant}")]
public static void Delete(
    DeleteTodo command,
    IDocumentSession session)
{
    // Just mark this document as deleted,
    // and Wolverine middleware takes care of the rest
    // including the multi-tenancy detection now
    session.Delete<Todo>(command.Id);
}
```

snippet source | anchor

and with an expected result:

```cs
[WolverinePost("/todoitems/{tenant}")]
public static CreationResponse Create(
    // Only need this to express the location of the newly created
    // Todo object
    string tenant,
    CreateTodo command,
    IDocumentSession session)
{
    var todo = new Todo { Name = command.Name };

    // Marten itself sets the Todo.Id identity
    // in this call
    session.Store(todo);

    // New syntax in Wolverine.HTTP 1.7
    // Helps Wolverine
    return CreationResponse.For(new TodoCreated(todo.Id), $"/todoitems/{tenant}/{todo.Id}");
}
```

snippet source | anchor

See [Multi-Tenancy with Wolverine](/guide/handlers/multi-tenancy) for a little more information.

## Tenant Id Detection for Marten Without Wolverine

Okay, here's an oddball case that absolutely came up for our users. Let's say that you need to do the tenant id detection for Marten directly within HTTP requests without using Wolverine otherwise -- like a recent Marten user needed to do with [Hot Chocolate](https://chillicream.com/docs/hotchocolate/v13) endpoints.
Using the `WolverineFx.Http.Marten` Nuget, there's a helper to replace Marten's `ISessionFactory` with a multi-tenanted version like this:

```cs
builder.Services.AddMartenTenancyDetection(tenantId =>
{
    tenantId.IsQueryStringValue("tenant");
    tenantId.DefaultIs("default-tenant");
});
```

snippet source | anchor

```cs
builder.Services.AddMartenTenancyDetection(tenantId =>
{
    tenantId.IsQueryStringValue("tenant");
    tenantId.DefaultIs("default-tenant");
}, (c, session) =>
{
    session.CorrelationId = c.TraceIdentifier;
});
```

snippet source | anchor

---

--- url: /guide/durability/marten/multi-tenancy.md ---

# Multi-Tenancy and Marten

::: info
This functionality was a very late addition just in time for Wolverine 1.0.
:::

Wolverine.Marten fully supports Marten's multi-tenancy features, both ["conjoined" multi-tenanted documents](https://martendb.io/documents/multi-tenancy.html) and full blown [multi-tenancy through separate databases](https://martendb.io/configuration/multitenancy.html). Some important facts to know:

* Wolverine.Marten's transactional middleware is able to respect the [tenant id from Wolverine](/guide/handlers/multi-tenancy) in resolving an `IDocumentSession`
* If using a database per tenant(s) strategy with Marten, Wolverine.Marten is able to create separate message storage tables in each tenant PostgreSQL database
* With the strategy above though, you'll need a "master" PostgreSQL database for tenant neutral operations as well
* The 1.0 durability agent is happily able to work against both the master and all of the tenant databases for reliable messaging

## Database per Tenant

::: info
All of these samples are taken from the [MultiTenantedTodoWebService sample project](https://github.com/JasperFx/wolverine/tree/main/src/Samples/MultiTenantedTodoService/MultiTenantedTodoService).
:::

To get started using Wolverine with Marten's database per tenant strategy, configure Marten multi-tenancy as you normally would, but you also need to specify a "master" database
connection string for Wolverine as well as shown below:

```cs
// Adding Marten for persistence
builder.Services.AddMarten(m =>
{
    // Not necessary to do this for the runtime, but does help the codegen
    // and diagnostics
    m.Schema.For<Todo>();

    // With multi-tenancy through a database per tenant
    m.MultiTenantedDatabases(tenancy =>
    {
        // You would probably be pulling the connection strings out of configuration,
        // but it's late in the afternoon and I'm being lazy building out this sample!
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant1;Username=postgres;password=postgres", "tenant1");
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant2;Username=postgres;password=postgres", "tenant2");
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant3;Username=postgres;password=postgres", "tenant3");
    });

    m.DatabaseSchemaName = "mttodo";
})
.IntegrateWithWolverine(x => x.MainDatabaseConnectionString = connectionString);
```

snippet source | anchor

And you'll probably want this as well to make sure the message storage is in all the databases upfront:

```cs
builder.Services.AddResourceSetupOnStartup();
```

snippet source | anchor

Lastly, this is the Wolverine setup:

```cs
// Wolverine usage is required for WolverineFx.Http
builder.Host.UseWolverine(opts =>
{
    // This middleware will apply to the HTTP
    // endpoints as well
    opts.Policies.AutoApplyTransactions();

    // Setting up the outbox on all locally handled
    // background tasks
    opts.Policies.UseDurableLocalQueues();
});
```

snippet source | anchor

From there, you should be completely ready to use Marten + Wolverine with usages like this:

```cs
// While this is still valid....
[WolverineDelete("/todoitems/{tenant}/longhand")]
public static async Task Delete(
    string tenant,
    DeleteTodo command,
    IMessageBus bus)
{
    // Invoke inline for the specified tenant
    await bus.InvokeForTenantAsync(tenant, command);
}

// Wolverine.HTTP 1.7 added multi-tenancy support so
// this short hand works without the extra jump through
// "Wolverine as Mediator"
[WolverineDelete("/todoitems/{tenant}")]
public static void Delete(
    DeleteTodo command,
    IDocumentSession session)
{
    // Just mark this document as deleted,
    // and Wolverine middleware takes care of the rest
    // including the multi-tenancy detection now
    session.Delete<Todo>(command.Id);
}
```

snippet source | anchor

## Conjoined Multi-Tenancy

First, let's try just "conjoined" multi-tenancy where there's still just one database for Marten. From the tests, here's a simple Marten persisted document that requires the "conjoined" tenancy model, and a command/handler combination for inserting new documents with Marten:

```cs
// Implementing Marten's ITenanted interface
// also makes Marten treat this document type as
// having "conjoined" multi-tenancy
public class TenantedDocument : ITenanted
{
    public Guid Id { get; init; }

    public string TenantId { get; set; }
    public string Location { get; set; }
}

// A command to create a new document that's multi-tenanted
public record CreateTenantDocument(Guid Id, string Location);

// A message handler to create a new document.
// Notice there's absolutely NO code related to a tenant id, but yet it's
// fully respecting multi-tenancy here in a second
public static class CreateTenantDocumentHandler
{
    public static IMartenOp Handle(CreateTenantDocument command)
    {
        return MartenOps.Insert(new TenantedDocument{ Id = command.Id, Location = command.Location });
    }
}
```

snippet source | anchor

For completeness, here's the Wolverine and Marten bootstrapping:

```cs
_host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Services.AddMarten(Servers.PostgresConnectionString)
            .IntegrateWithWolverine()
            .UseLightweightSessions();

        opts.Policies.AutoApplyTransactions();
    }).StartAsync();
```

snippet source | anchor

and after that, the calls to `InvokeForTenantAsync()` "just work" as you can see if you squint hard enough reading this test:

```cs
[Fact]
public async Task execute_with_tenancy()
{
    var id = Guid.NewGuid();

    await _host.ExecuteAndWaitAsync(c =>
        c.InvokeForTenantAsync("one", new CreateTenantDocument(id, "Andor")));

    await _host.ExecuteAndWaitAsync(c =>
        c.InvokeForTenantAsync("two", new CreateTenantDocument(id, "Tear")));

    await _host.ExecuteAndWaitAsync(c =>
        c.InvokeForTenantAsync("three", new CreateTenantDocument(id, "Illian")));

    var store = _host.Services.GetRequiredService<IDocumentStore>();

    // Check the first tenant
    using (var session = store.LightweightSession("one"))
    {
        var document = await session.LoadAsync<TenantedDocument>(id);
        document.Location.ShouldBe("Andor");
    }

    // Check the second tenant
    using (var session = store.LightweightSession("two"))
    {
        var document = await session.LoadAsync<TenantedDocument>(id);
        document.Location.ShouldBe("Tear");
    }

    // Check the third tenant
    using (var session = store.LightweightSession("three"))
    {
        var document = await session.LoadAsync<TenantedDocument>(id);
        document.Location.ShouldBe("Illian");
    }
}
```

snippet source | anchor

---

--- url: /guide/messaging/transports/azureservicebus/multi-tenancy.md ---

# Multi-Tenancy with Azure Service Bus

Let's take a trip to the world of IoT where you might very well build a
single cloud hosted service that needs to communicate via Azure Service Bus with devices at your customers' sites. You'd preferably like to keep traffic separate so that one customer never accidentally receives information from another customer. In this case, Wolverine now lets you register a separate Azure Service Bus namespace -- or a separate connection string -- for each tenant.

::: info
Definitely see [Multi-Tenancy with Wolverine](/guide/handlers/multi-tenancy) for more information about how Wolverine tracks the tenant id across messages.
:::

Let's just jump straight into a simple example of the configuration:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // One way or another, you're probably pulling the Azure Service Bus
    // connection string out of configuration
    var azureServiceBusConnectionString = builder
        .Configuration
        .GetConnectionString("azure-service-bus");

    // Connect to the broker in the simplest possible way
    opts.UseAzureServiceBus(azureServiceBusConnectionString)

        // This is the default, if there is no tenant id on an outgoing message,
        // use the default broker
        .TenantIdBehavior(TenantedIdBehavior.FallbackToDefault)

        // Or tell Wolverine instead to just quietly ignore messages sent
        // to unrecognized tenant ids
        .TenantIdBehavior(TenantedIdBehavior.IgnoreUnknownTenants)

        // Or be draconian and make Wolverine assert and throw an exception
        // if an outgoing message does not have a tenant id
        .TenantIdBehavior(TenantedIdBehavior.TenantIdRequired)

        // Add new tenants by registering the tenant id and a separate fully qualified namespace
        // to a different Azure Service Bus connection
        .AddTenantByNamespace("one", builder.Configuration.GetValue<string>("asb_ns_one"))
        .AddTenantByNamespace("two", builder.Configuration.GetValue<string>("asb_ns_two"))
        .AddTenantByNamespace("three", builder.Configuration.GetValue<string>("asb_ns_three"))

        // OR, instead, add tenants by registering the tenant id and a separate connection string
        // to a
// different Azure Service Bus connection
        .AddTenantByConnectionString("four", builder.Configuration.GetConnectionString("asb_four"))
        .AddTenantByConnectionString("five", builder.Configuration.GetConnectionString("asb_five"))
        .AddTenantByConnectionString("six", builder.Configuration.GetConnectionString("asb_six"));

    // This Wolverine application would be listening to a queue
    // named "incoming" on all Azure Service Bus connections, including the default
    opts.ListenToAzureServiceBusQueue("incoming");

    // This Wolverine application would listen to a single queue
    // at the default connection regardless of tenant
    opts.ListenToAzureServiceBusQueue("incoming_global")
        .GlobalListener();

    // Likewise, you can override the queue, subscription, and topic behavior
    // to be "global" for all tenants with this syntax:
    opts.PublishMessage<Message1>()
        .ToAzureServiceBusQueue("message1")
        .GlobalSender();

    opts.PublishMessage<Message2>()
        .ToAzureServiceBusTopic("message2")
        .GlobalSender();
});
```

snippet source | anchor

::: warning
Wolverine has no way of creating new Azure Service Bus namespaces for you
:::

In the code sample above, I'm setting up the Azure Service Bus transport to "know" that there are multiple tenants with separate Azure Service Bus fully qualified namespaces.

::: tip
Note that Wolverine uses the credentials specified for the default Azure Service Bus connection for all tenant specific connections
:::

At runtime, if we send a message like so:

```cs
public static async Task send_message_to_specific_tenant(IMessageBus bus)
{
    // Send a message tagged to a specific tenant id
    await bus.PublishAsync(new Message1(), new DeliveryOptions { TenantId = "two" });
}
```

snippet source | anchor

In the case above, in the Wolverine internals, it:

1. Routes the message to an Azure Service Bus queue named "outgoing"
2.
Within the sender for that queue, Wolverine sees that `TenantId == "two"`, so it sends the message to the "outgoing" queue on the Azure Service Bus connection that we specified for the "two" tenant id. Likewise, see the listening set up against the "incoming" queue above. At runtime, this Wolverine application will be listening to a queue named "incoming" on the default Azure Service Bus namespace and a separate queue named "incoming" on the separate fully qualified namespaces for the known tenants. When a message is received at any of these queues, it's tagged with the `TenantId` that's appropriate for each separate tenant-specific listening endpoint. That helps Wolverine also track tenant specific operations (with Marten maybe?) and tracks the tenant id across any outgoing messages or responses as well. --- --- url: /guide/durability/efcore/multi-tenancy.md --- # Multi-Tenancy with EF Core Wolverine has first class support for using a single EF Core `DbContext` type that potentially uses different databases for different clients within your system, and this includes every single bit of EF Core capabilities with Wolverine: * Wolverine will manage a separate transactional inbox & outbox for each tenant database and any main database * The transactional middleware is multi-tenant aware for EF Core * Wolverine's [Tenant id detection for HTTP](/guide/http/multi-tenancy.html#tenant-id-detection) is supported by the EF Core integration * The [storage actions](/guide/durability/efcore/operations) and `[Entity]` attribute support for EF Core will respect the multi-tenancy Alright, let's get into a first concrete sample. In this simplest usage, I'm assuming that there are only three separate tenant databases, and each database will only hold data for a single tenant. 
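Before the concrete configuration samples, it may help to picture what this registration gives you at runtime: a lookup from tenant id to a dedicated database connection, with non-tenanted work falling back to the "main" database. Here is a minimal, language-agnostic sketch of that idea (plain Python with invented names; this is a conceptual model, not the Wolverine API):

```python
# Conceptual sketch of static tenant registration: a plain lookup from
# tenant id to connection string, with a fallback to the main database
# for non-tenanted operations. Names here are invented for illustration.
class StaticTenantRegistry:
    def __init__(self, main_connection):
        self.main = main_connection
        self.tenants = {}

    def register(self, tenant_id, connection_string):
        self.tenants[tenant_id] = connection_string

    def connection_for(self, tenant_id):
        # No tenant id on the current operation? Use the main database
        if tenant_id is None:
            return self.main
        # Unknown tenant ids fail loudly rather than silently
        # crossing tenant boundaries
        if tenant_id not in self.tenants:
            raise KeyError(f"Unknown tenant id '{tenant_id}'")
        return self.tenants[tenant_id]
```

Wolverine's `RegisterStaticTenants` is essentially this registry, plus a transactional inbox/outbox and durability agent spun up per registered database.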
To use EF Core with [multi-tenanted PostgreSQL](/guide/durability/postgresql.html#multi-tenancy) storage, we can use this: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // First, you do have to have a "main" PostgreSQL database for messaging persistence // that will store information about running nodes, agents, and non-tenanted operations opts.PersistMessagesWithPostgresql(configuration.GetConnectionString("main")) // Add known tenants at bootstrapping time .RegisterStaticTenants(tenants => { // Add connection strings for the expected tenant ids tenants.Register("tenant1", configuration.GetConnectionString("tenant1")); tenants.Register("tenant2", configuration.GetConnectionString("tenant2")); tenants.Register("tenant3", configuration.GetConnectionString("tenant3")); }); opts.Services.AddDbContextWithWolverineManagedMultiTenancy((builder, connectionString, _) => { builder.UseNpgsql(connectionString.Value, b => b.MigrationsAssembly("MultiTenantedEfCoreWithPostgreSQL")); }, AutoCreate.CreateOrUpdate); }); ``` snippet source | anchor And instead with [multi-tenanted SQL Server](/guide/durability/sqlserver.html#multi-tenancy) storage: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // First, you do have to have a "main" SQL Server database for messaging persistence // that will store information about running nodes, agents, and non-tenanted operations opts.PersistMessagesWithSqlServer(configuration.GetConnectionString("main")) // Add known tenants at bootstrapping time .RegisterStaticTenants(tenants => { // Add connection strings for the expected tenant ids tenants.Register("tenant1", configuration.GetConnectionString("tenant1")); tenants.Register("tenant2", configuration.GetConnectionString("tenant2")); tenants.Register("tenant3", configuration.GetConnectionString("tenant3")); }); // Just to show that you *can*
use more than one DbContext opts.Services.AddDbContextWithWolverineManagedMultiTenancy((builder, connectionString, _) => { // You might have to set the migration assembly builder.UseSqlServer(connectionString.Value, b => b.MigrationsAssembly("MultiTenantedEfCoreWithSqlServer")); }, AutoCreate.CreateOrUpdate); opts.Services.AddDbContextWithWolverineManagedMultiTenancy((builder, connectionString, _) => { builder.UseSqlServer(connectionString.Value, b => b.MigrationsAssembly("MultiTenantedEfCoreWithSqlServer")); }, AutoCreate.CreateOrUpdate); }); ``` snippet source | anchor Note in both samples how I'm registering the `DbContext` types. There's a fluent interface first to register the multi-tenanted database storage, then a call to register a `DbContext` with multi-tenancy. You'll have to supply Wolverine with a lambda to configure the `DbContextOptionsBuilder` for the individual `DbContext` object. At runtime, Wolverine will be passing in the right connection string for the active tenant id. There are also other overloads to configure based on a `DbDataSource` if using PostgreSQL, or to also take in a `TenantId` value type that will give you the active tenant id if you need to use that for setting EF Core query filters like [this example from the Microsoft documentation](https://learn.microsoft.com/en-us/ef/core/miscellaneous/multitenancy#an-example-solution-single-database). ## Combine with Marten It's perfectly possible to combine [Marten](https://martendb.io) and its multi-tenancy support for separate databases per tenant with EF Core targeting those very same databases. Maybe you're using Marten for event sourcing, then using EF Core for flat table projections.
Regardless, you simply allow Marten to manage the multi-tenancy and the relationship between tenant ids and the various databases, and the Wolverine EF Core integration can more or less ride on Marten's coat tails: ```cs opts.Services.AddMarten(m => { m.MultiTenantedDatabases(x => { x.AddSingleTenantDatabase(tenant1ConnectionString, "red"); x.AddSingleTenantDatabase(tenant2ConnectionString, "blue"); x.AddSingleTenantDatabase(tenant3ConnectionString, "green"); }); }).IntegrateWithWolverine(x => { x.MainDatabaseConnectionString = Servers.PostgresConnectionString; }); opts.Services.AddDbContextWithWolverineManagedMultiTenancyByDbDataSource((builder, dataSource, _) => { builder.UseNpgsql(dataSource, b => b.MigrationsAssembly("MultiTenantedEfCoreWithPostgreSQL")); }, AutoCreate.CreateOrUpdate); ``` snippet source | anchor ## Outside of Handlers or Endpoints It's a complex world full of legacy systems and existing codebases, and it's quite possible you're going to want to publish messages to Wolverine from outside of Wolverine HTTP endpoints or Wolverine message handlers where there is no clean transactional middleware approach to just do the outbox and multi-tenancy mechanics for you. Not to worry, you can still leverage Wolverine's EF Core integration with both multi-tenancy and the Wolverine outbox sending with the `IDbContextOutboxFactory` service. 
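The contract that factory hands you can be pictured simply: stage outgoing messages alongside the unit of work, commit the database transaction, and only then release the messages to the bus. A rough sketch of that shape (Python with invented names; a conceptual model of the outbox, not the actual Wolverine API):

```python
# Rough sketch of the outbox contract: published messages are only staged
# until the database unit of work commits successfully.
class OutboxSketch:
    def __init__(self, db, bus, tenant_id):
        self.db = db
        self.bus = bus
        self.tenant_id = tenant_id
        self._pending = []

    def publish(self, message):
        # Staged only -- nothing goes over the wire yet
        self._pending.append(message)

    def save_changes_and_flush(self):
        # 1. Commit entity changes *and* persisted message envelopes together
        self.db.commit()
        # 2. Only after a successful commit are the staged messages sent
        for message in self._pending:
            self.bus.send(message, tenant_id=self.tenant_id)
        self._pending.clear()
```

The key guarantee is the ordering: if the commit throws, nothing was ever sent, so the entity change and the outgoing message succeed or fail as a unit.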
Let's say that you have a relatively simple multi-tenancy setup with SQL Server and EF Core `DbContext` services like this: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // First, you do have to have a "main" SQL Server database for messaging persistence // that will store information about running nodes, agents, and non-tenanted operations opts.PersistMessagesWithSqlServer(configuration.GetConnectionString("main")) // Add known tenants at bootstrapping time .RegisterStaticTenants(tenants => { // Add connection strings for the expected tenant ids tenants.Register("tenant1", configuration.GetConnectionString("tenant1")); tenants.Register("tenant2", configuration.GetConnectionString("tenant2")); tenants.Register("tenant3", configuration.GetConnectionString("tenant3")); }); // Just to show that you *can* use more than one DbContext opts.Services.AddDbContextWithWolverineManagedMultiTenancy((builder, connectionString, _) => { // You might have to set the migration assembly builder.UseSqlServer(connectionString.Value, b => b.MigrationsAssembly("MultiTenantedEfCoreWithSqlServer")); }, AutoCreate.CreateOrUpdate); opts.Services.AddDbContextWithWolverineManagedMultiTenancy((builder, connectionString, _) => { builder.UseSqlServer(connectionString.Value, b => b.MigrationsAssembly("MultiTenantedEfCoreWithSqlServer")); }, AutoCreate.CreateOrUpdate); }); ``` snippet source | anchor Then you can *still* use those EF Core `DbContext` services with Wolverine messaging including the Wolverine outbox like this sample code: ```cs public class MyMessageHandler { private readonly IDbContextOutboxFactory _factory; public MyMessageHandler(IDbContextOutboxFactory factory) { _factory = factory; } public async Task HandleAsync(CreateItem command, TenantId tenantId, CancellationToken cancellationToken) { // Get an EF Core DbContext wrapped in a Wolverine IDbContextOutbox // for message sending wrapped in a
transaction spanning the DbContext and Wolverine var outbox = await _factory.CreateForTenantAsync(tenantId.Value, cancellationToken); var item = new Item { Name = command.Name, Id = CombGuidIdGeneration.NewGuid() }; outbox.DbContext.Items.Add(item); // Don't worry, this message doesn't *actually* get sent until // the transaction succeeds await outbox.PublishAsync(new ItemCreated { Id = item.Id }); // Save and commit the unit of work with the outgoing message, // then "flush" the outgoing messages through Wolverine await outbox.SaveChangesAndFlushMessagesAsync(cancellationToken); } } ``` snippet source | anchor The important thing to note above is just that this pattern and service will work with any .NET code, not just within Wolverine handlers or HTTP endpoints. This is most likely your primary mechanism for incrementally transforming an existing AspNetCore system that isn't already using Wolverine.HTTP. --- --- url: /guide/messaging/transports/rabbitmq/multi-tenancy.md --- # Multi-Tenancy with Rabbit MQ Let's take a trip to the world of IoT where you might very well build a single cloud hosted service that needs to communicate via Rabbit MQ with devices at your customers' sites. You'd preferably like to keep traffic separate so that one customer never accidentally receives information from another customer. In this case, Wolverine now lets you register separate Rabbit MQ brokers -- or at least separate virtual hosts within a single Rabbit MQ broker -- for each tenant. ::: info Definitely see [Multi-Tenancy with Wolverine](/guide/handlers/multi-tenancy) for more information about how Wolverine tracks the tenant id across messages. ::: Let's just jump straight into a simple example of the configuration: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // At this point, you still have to have a *default* broker connection to be used for // messaging.
opts.UseRabbitMq(new Uri(builder.Configuration.GetConnectionString("main"))) // This will be respected across *all* the tenant specific // virtual hosts and separate broker connections .AutoProvision() // This is the default, if there is no tenant id on an outgoing message, // use the default broker .TenantIdBehavior(TenantedIdBehavior.FallbackToDefault) // Or tell Wolverine instead to just quietly ignore messages sent // to unrecognized tenant ids .TenantIdBehavior(TenantedIdBehavior.IgnoreUnknownTenants) // Or be draconian and make Wolverine assert and throw an exception // if an outgoing message does not have a tenant id .TenantIdBehavior(TenantedIdBehavior.TenantIdRequired) // Add specific tenants for separate virtual host names // on the same broker as the default connection .AddTenant("one", "vh1") .AddTenant("two", "vh2") .AddTenant("three", "vh3") // Or, you can add a broker connection to something completely // different for a tenant .AddTenant("four", new Uri(builder.Configuration.GetConnectionString("rabbit_four"))); // This Wolverine application would be listening to a queue // named "incoming" on all virtual hosts and/or tenant specific message // brokers opts.ListenToRabbitQueue("incoming"); opts.ListenToRabbitQueue("incoming_global") // This opts this queue out from being per-tenant, such that // there will only be the single "incoming_global" queue for the default // broker connection .GlobalListener(); // More on this in the docs.... opts.PublishMessage() .ToRabbitQueue("outgoing").GlobalSender(); }); ``` snippet source | anchor ::: warning Wolverine has no way of creating new virtual hosts in Rabbit MQ for you. You will have to do that manually through either the Rabbit MQ admin site, the Rabbit MQ HTTP API, or the Rabbit MQ command line. ::: In the code sample above, I'm setting up Rabbit MQ to "know" that there are four specific tenants identified as "one", "two", "three", and "four".
I've also told Wolverine how to connect to Rabbit MQ separately for each known tenant id. At runtime, if we send a message like so: ```cs public static async Task send_message_to_specific_tenant(IMessageBus bus) { // Send a message tagged to a specific tenant id await bus.PublishAsync(new Message1(), new DeliveryOptions { TenantId = "two" }); } ``` snippet source | anchor In the case above, in the Wolverine internals, it: 1. Routes the message to a Rabbit MQ queue named "outgoing" 2. Within the sender for that queue, Wolverine sees that `TenantId == "two"`, so it sends the message to the "outgoing" queue on the "vh2" virtual host. Likewise, see the listening set up against the "incoming" queue above. At runtime, this Wolverine application will be listening to a queue named "incoming" on the default Rabbit MQ broker and a separate queue named "incoming" on the separate virtual hosts or brokers for the known tenants. When a message is received at any of these queues, it's tagged with the `TenantId` that's appropriate for each separate tenant-specific listening endpoint. That helps Wolverine also track tenant specific operations (with Marten maybe?) and tracks the tenant id across any outgoing messages or responses as well. If you do not want to use the default virtual host, "/", you should use the `UseRabbitMq` overload with detailed configuration, so that the initial host is set up as the one you wish to use: ```cs opts.UseRabbitMq(rabbit => { rabbit.HostName = builder.Configuration["rabbitmq_host"]; rabbit.VirtualHost = builder.Configuration["rabbitmq_virtual_host"]; }); ``` --- --- url: /guide/handlers/multi-tenancy.md --- # Multi-Tenancy with Wolverine Wolverine has first class support for multi-tenancy by tracking the tenant id as message metadata.
When invoking a message inline, you can execute that message for a specific tenant with this syntax: ```cs public static async Task invoking_by_tenant(IMessageBus bus) { // Invoke inline await bus.InvokeForTenantAsync("tenant1", new CreateTodo("Release Wolverine 1.0")); // Invoke with an expected result (request/response) var created = await bus.InvokeForTenantAsync<TodoCreated>("tenant2", new CreateTodo("Update the Documentation")); } ``` snippet source | anchor When using this syntax, any [cascaded messages](/guide/handlers/cascading) will also be tagged with the same tenant id. This functionality is valid with both messages executed locally and messages that are executed remotely depending on the routing rules for that particular message. To publish a message for a particular tenant id and ultimately pass the tenant id on to the message handler, use the `DeliveryOptions` approach: ```cs public static async Task publish_by_tenant(IMessageBus bus) { await bus.PublishAsync(new CreateTodo("Fix that last broken test"), new DeliveryOptions { TenantId = "tenant3" }); } ``` snippet source | anchor ## Cascading Messages As a convenience, you can embed tenant id information into outgoing cascading messages with these helpers: ```cs public static IEnumerable<object> Handle(IncomingMessage message) { yield return new Message1().WithTenantId("one"); yield return new Message2().WithTenantId("one"); yield return new Message3().WithDeliveryOptions(new DeliveryOptions { ScheduleDelay = 5.Minutes(), TenantId = "two" }); // Long hand yield return new Message4().WithDeliveryOptions(new DeliveryOptions { TenantId = "one" }); } ``` snippet source | anchor ## Referencing the TenantId Let's say that you want to reference the current tenant id in your Wolverine message handler or Wolverine HTTP endpoint, but you don't want to inject the Wolverine `IMessageContext` or `Envelope` into your methods, but instead would like an easy way to just "push" the current tenant id into your handler methods.
Maybe this is for ease of writing unit tests, or conditional logic, or some other reason. To that end, you can inject the `Wolverine.Persistence.TenantId` into any Wolverine message handler or HTTP endpoint method to get easy access to the tenant id. There's really nothing to it other than just pulling that type in as a parameter argument to a message handler: ```cs public static class SomeCommandHandler { // Wolverine is keying off the type, the parameter name // doesn't really matter public static void Handle(SomeCommand command, TenantId tenantId) { Debug.WriteLine($"I got a command {command} for tenant {tenantId.Value}"); } } ``` snippet source | anchor In tests, you can create that `TenantId` value just by: ```csharp var tenantId = new TenantId("tenant1"); ``` and then just pass the value into the method under test. --- --- url: /guide/durability/mysql.md --- # MySQL Integration ::: info Wolverine can use the MySQL durability options with Entity Framework Core as a higher level persistence framework ::: Wolverine supports a MySQL/MariaDB backed message persistence strategy and even a MySQL backed messaging transport option.
To get started, add the `WolverineFx.MySql` dependency to your application: ```bash dotnet add package WolverineFx.MySql ``` ## Message Persistence To enable MySQL to serve as Wolverine's [transactional inbox and outbox](./), you just need to use the `WolverineOptions.PersistMessagesWithMySql()` extension method as shown below in a sample: ```cs var builder = WebApplication.CreateBuilder(args); var connectionString = builder.Configuration.GetConnectionString("mysql"); builder.Host.UseWolverine(opts => { // Setting up MySQL-backed message storage // This requires a reference to Wolverine.MySql opts.PersistMessagesWithMySql(connectionString); // Other Wolverine configuration }); // This is rebuilding the persistent storage database schema on startup // and also clearing any persisted envelope state builder.Host.UseResourceSetupOnStartup(); var app = builder.Build(); // Other ASP.Net Core configuration... // Using JasperFx opens up command line utilities for managing // the message storage return await app.RunJasperFxCommands(args); ``` ## MySQL Messaging Transport ::: info All MySQL queues are built into a *wolverine\_queues* schema at this point. ::: The `WolverineFx.MySql` Nuget also contains a simple messaging transport that was mostly meant to be usable for teams who want asynchronous queueing without introducing more specialized infrastructure. 
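Mechanically, a table-backed queue like this boils down to senders inserting rows and listeners polling for and claiming a batch of unclaimed rows. A toy model of that mechanic (Python; names invented for illustration -- Wolverine's real implementation claims rows inside a database transaction so competing nodes never receive the same message twice):

```python
# Toy model of a table-backed message queue: send() inserts a row,
# poll() claims and returns a batch of unclaimed rows in insertion order.
class TableQueueSketch:
    def __init__(self, name):
        self.name = name
        self._next_id = 1
        self.rows = []  # stand-in for the queue table

    def send(self, body):
        self.rows.append({"id": self._next_id, "body": body, "claimed": False})
        self._next_id += 1

    def poll(self, batch_size):
        # Take up to batch_size unclaimed rows and mark them claimed,
        # mimicking MaximumMessagesToReceive() on a listener
        batch = [row for row in self.rows if not row["claimed"]][:batch_size]
        for row in batch:
            row["claimed"] = True
        return [row["body"] for row in batch]
```

The polling interval and batch size knobs in the configuration below map directly onto how often and how greedily that `poll` step runs.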
To enable this transport in your code, use this option which *also* activates MySQL backed message persistence: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var connectionString = builder.Configuration.GetConnectionString("mysql"); opts.UseMySqlPersistenceAndTransport( connectionString, // This argument is the database schema for the envelope storage // If separate logical services are targeting the same physical database, // you should use a separate schema name for each logical application // to make basically *everything* run smoother "myapp", // This schema name is for the actual MySQL queue tables. If using // the MySQL transport between two logical applications, make sure // to use the same transportSchema! transportSchema:"queues") // Tell Wolverine to build out all necessary queue or scheduled message // tables on demand as needed .AutoProvision() // Optional that may be helpful in testing, but probably bad // in production! .AutoPurgeOnStartup(); // Use this extension method to create subscriber rules opts.PublishAllMessages().ToMySqlQueue("outbound"); // Use this to set up queue listeners opts.ListenToMySqlQueue("inbound") .CircuitBreaker(cb => { // fine tune the circuit breaker // policies here }) // Optionally specify how many messages to // fetch into the listener at any one time .MaximumMessagesToReceive(50); }); using var host = builder.Build(); await host.StartAsync(); ``` The MySQL transport is strictly queue-based at this point. The queues are configured as durable by default, meaning that they are utilizing the transactional inbox and outbox. The MySQL queues can also be buffered: ```cs opts.ListenToMySqlQueue("sender").BufferedInMemory(); ``` Using this option just means that the MySQL queues can be used for both sending or receiving with no integration with the transactional inbox or outbox. 
This is a little more performant, but less safe, as messages could be lost if held in memory when the application shuts down unexpectedly. ### Polling Wolverine has a number of internal polling operations, and any MySQL queues will be polled on a configured interval. The default polling interval is set in the `DurabilitySettings` class and can be configured at runtime as below: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Health check message queue/dequeue opts.Durability.HealthCheckPollingTime = TimeSpan.FromSeconds(10); // Node reassignment checks opts.Durability.NodeReassignmentPollingTime = TimeSpan.FromSeconds(5); // User queue poll frequency opts.Durability.ScheduledJobPollingTime = TimeSpan.FromSeconds(5); }); ``` ::: info Control queue Wolverine has an internal control queue (`dbcontrol`) used for internal operations. This queue is hardcoded to poll every second and should not be changed to ensure the stability of the application. ::: ## Multi-Tenancy As of Wolverine 5.x, you can use multi-tenancy through separate databases per tenant with MySQL. To utilize Wolverine managed multi-tenancy, you have a couple main options.
The simplest is just using a statically configured set of tenant id to database connections like so: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // First, you do have to have a "main" MySQL database for messaging persistence // that will store information about running nodes, agents, and non-tenanted operations opts.PersistMessagesWithMySql(configuration.GetConnectionString("main")) // Add known tenants at bootstrapping time .RegisterStaticTenants(tenants => { // Add connection strings for the expected tenant ids tenants.Register("tenant1", configuration.GetConnectionString("tenant1")); tenants.Register("tenant2", configuration.GetConnectionString("tenant2")); tenants.Register("tenant3", configuration.GetConnectionString("tenant3")); }); opts.Services.AddDbContextWithWolverineManagedMultiTenancy((builder, connectionString, _) => { builder.UseMySql(connectionString.Value, ServerVersion.AutoDetect(connectionString.Value), b => b.MigrationsAssembly("MultiTenantedEfCoreWithMySql")); }, AutoCreate.CreateOrUpdate); }); ``` Since the underlying [MySqlConnector library](https://mysqlconnector.net/) supports the `MySqlDataSource` concept, and you might need to use this for a variety of reasons, you can also directly configure `MySqlDataSource` objects for each tenant. This one might be a little more involved, but let's start by saying that you might be using Aspire to configure MySQL and both the main and tenant databases. In this usage, Aspire will register `MySqlDataSource` services as `Singleton` scoped in your IoC container.
We can build an `IWolverineExtension` that utilizes the IoC container to register Wolverine like so: ```cs public class OurFancyMySQLMultiTenancy : IWolverineExtension { private readonly IServiceProvider _provider; public OurFancyMySQLMultiTenancy(IServiceProvider provider) { _provider = provider; } public void Configure(WolverineOptions options) { options.PersistMessagesWithMySql(_provider.GetRequiredService<MySqlDataSource>()) .RegisterStaticTenantsByDataSource(tenants => { tenants.Register("tenant1", _provider.GetRequiredKeyedService<MySqlDataSource>("tenant1")); tenants.Register("tenant2", _provider.GetRequiredKeyedService<MySqlDataSource>("tenant2")); tenants.Register("tenant3", _provider.GetRequiredKeyedService<MySqlDataSource>("tenant3")); }); } } ``` And add that to the greater application like so: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine() .ConfigureServices(services => { services.AddSingleton<IWolverineExtension, OurFancyMySQLMultiTenancy>(); }).StartAsync(); ``` ::: warning Wolverine is not able to dynamically tear down tenants yet. That's long planned, and honestly probably only happens when an outside company sponsors that work. ::: If you need to be able to add new tenants at runtime, or just have more tenants than is comfortable living in static configuration, or plenty of other reasons I could think of, you can also use Wolverine's "master table tenancy" approach where tenant id to database connection string information is kept in a separate database table. Here's a possible usage of that model: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // You need a main database no matter what that will hold information about the Wolverine system itself // and.. opts.PersistMessagesWithMySql(configuration.GetConnectionString("wolverine")) // ...also a table holding the tenant id to connection string information .UseMasterTableTenancy(seed => { // These registrations are 100% just to seed data for local development // Maybe you want to omit this during production?
// Or do something programmatic by looping through data in the IConfiguration? seed.Register("tenant1", configuration.GetConnectionString("tenant1")); seed.Register("tenant2", configuration.GetConnectionString("tenant2")); seed.Register("tenant3", configuration.GetConnectionString("tenant3")); }); }); ``` Here's some more important background on the multi-tenancy support: * Wolverine is spinning up a completely separate "durability agent" across the application to recover stranded messages in the transactional inbox and outbox, and that's done automatically for you * The lightweight saga support for MySQL absolutely works with this model of multi-tenancy * Wolverine is able to manage all of its database tables including the tenant table itself (`wolverine_tenants`) across both the main database and all the tenant databases including schema migrations * Wolverine's transactional middleware is aware of the multi-tenancy and can connect to the correct database based on the `IMessageContext.TenantId` or utilize the tenant id detection in Wolverine.HTTP as well * You can "plug in" a custom implementation of `ITenantSource` to manage tenant id to connection string assignments in whatever way works for your deployed system ## Lightweight Saga Usage See the details on [Lightweight Saga Storage](/guide/durability/sagas.html#lightweight-saga-storage) for more information. MySQL saga storage uses the native `JSON` column type for saga state and supports optimistic concurrency with version tracking. ## MySQL-Specific Considerations ### Advisory Locks Wolverine uses MySQL's `GET_LOCK()` and `RELEASE_LOCK()` functions for distributed locking. These locks are session-scoped and automatically released when the connection is closed. Lock names follow the pattern `wolverine_{lockId}`.
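To make that naming pattern concrete, the SQL in play looks roughly like the statements built below (a sketch only; the exact statements and timeouts Wolverine issues may differ):

```python
# Sketch of MySQL advisory locking as described above. GET_LOCK returns
# 1 when the lock is acquired, 0 on timeout, and NULL on error; the lock
# belongs to the session until released or the connection closes.
def lock_name(lock_id):
    # The wolverine_{lockId} naming pattern
    return f"wolverine_{lock_id}"

def try_lock_sql(lock_id, timeout_seconds=0):
    return f"SELECT GET_LOCK('{lock_name(lock_id)}', {timeout_seconds})"

def release_lock_sql(lock_id):
    return f"SELECT RELEASE_LOCK('{lock_name(lock_id)}')"
```

Because the locks are session-scoped, a node that crashes mid-operation releases its locks automatically when its connection drops, which is what makes this a workable distributed-locking primitive.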
### Data Types The MySQL persistence uses the following data type mappings: | Purpose | MySQL Type | |---------|------------| | Message body | `LONGBLOB` | | Saga state | `JSON` | | Timestamps | `DATETIME(6)` | | GUIDs | `CHAR(36)` | ### Compatibility The MySQL persistence is compatible with: * MySQL 8.0+ * MariaDB 10.5+ The implementation uses the [MySqlConnector](https://mysqlconnector.net/) driver via Weasel.MySql. --- --- url: /guide/messaging/transports/mysql.md --- # MySQL Transport See the [MySQL Transport](/guide/durability/mysql#mysql-messaging-transport) documentation in the [MySQL Integration](/guide/durability/mysql) topic. --- --- url: /guide/messaging/transports/azureservicebus/object-management.md --- # Object Management ::: warning If you are using Wolverine to initialize and build Azure Service Bus subscriptions, then it is in control of all filters. Any filter built outside of Wolverine will be deleted by Wolverine when it tries to initialize the application. The "fix" is just to have Wolverine know exactly which filters you want. 
::: When using the Azure Service Bus transport, Wolverine is able to use the stateful resource model where all missing queues, topics, and subscriptions would be built at application start up time with this option applied: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAzureServiceBus("some connection string"); // Make sure that all known resources like // the Azure Service Bus queues, topics, and subscriptions // configured for this application exist at application start up opts.Services.AddResourceSetupOnStartup(); }).StartAsync(); ``` snippet source | anchor You can also direct Wolverine to build out Azure Service Bus objects on demand as needed with: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAzureServiceBus("some connection string") // Wolverine will build missing queues, topics, and subscriptions // as necessary at runtime .AutoProvision(); }).StartAsync(); ``` snippet source | anchor You can also opt to auto-purge all queues (there's also an option to do this queue by queue) at application start up with: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAzureServiceBus("some connection string") .AutoPurgeOnStartup(); }).StartAsync(); ``` snippet source | anchor ## Identifier Prefixing for Shared Brokers Because Azure Service Bus is a centralized broker model, you may need to share a single namespace between multiple developers or development environments. You can use `PrefixIdentifiers()` to automatically prepend a prefix to every queue, topic, and subscription name created by Wolverine, isolating the cloud resources for each environment: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAzureServiceBus("some connection string") .AutoProvision() // Prefix all queue, topic, and subscription names with "dev-john."
.PrefixIdentifiers("dev-john"); // A queue named "orders" becomes "dev-john.orders" opts.ListenToAzureServiceBusQueue("orders"); }).StartAsync(); ``` You can also use `PrefixIdentifiersWithMachineName()` as a convenience to use the current machine name as the prefix: ```csharp opts.UseAzureServiceBus("some connection string") .AutoProvision() .PrefixIdentifiersWithMachineName(); ``` The default delimiter between the prefix and the original name is `.` for Azure Service Bus (e.g., `dev-john.orders`). ## Configuring Queues If Wolverine is provisioning the queues for you, you can use one of these options shown below to directly control exactly how the Azure Service Bus queue will be configured: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // One way or another, you're probably pulling the Azure Service Bus // connection string out of configuration var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); // Connect to the broker in the simplest possible way opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision() // Alter how a queue should be provisioned by Wolverine .ConfigureQueue("outgoing", options => { options.AutoDeleteOnIdle = 5.Minutes(); }); // Or do the same thing when creating a listener opts.ListenToAzureServiceBusQueue("incoming") .ConfigureQueue(options => { options.MaxDeliveryCount = 5; }); // Or as part of a subscription opts.PublishAllMessages() .ToAzureServiceBusQueue("outgoing") .ConfigureQueue(options => { options.LockDuration = 3.Seconds(); }) // You may need to change the maximum number of messages // in message batches depending on the size of your messages // if you hit maximum data constraints .MessageBatchSize(50); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor --- --- url: /guide/http/metadata.md --- # OpenAPI Metadata As much as possible, Wolverine is trying to glean 
[OpenAPI](https://www.openapis.org/) ([Swashbuckle](https://learn.microsoft.com/en-us/aspnet/core/tutorials/getting-started-with-swashbuckle?view=aspnetcore-7.0&tabs=visual-studio) / Swagger) metadata from the method signature of the HTTP endpoint methods instead of forcing developers to add repetitive boilerplate code. There are a handful of predictable rules about metadata for Wolverine endpoints: * `application/json` is assumed for any request body type or any response body type * `text/plain` is the content type for any endpoint that returns a string as the response body * `200` and `500` are always assumed as valid status codes by default * `404` is also part of the metadata in most cases That aside, there are plenty of ways to modify the OpenAPI metadata for Wolverine endpoints for whatever you need. First off, all the attributes from ASP.Net Core that you use for MVC controller methods happily work on Wolverine endpoints: ```cs public class SignupEndpoint { // The first couple attributes are ASP.Net Core // attributes that add OpenAPI metadata to this endpoint [Tags("Users")] [ProducesResponseType(204)] [WolverinePost("/users/sign-up")] public static IResult SignUp(SignUpRequest request) { return Results.NoContent(); } } ``` snippet source | anchor Or if you prefer the fluent interface from Minimal API, that's actually supported as well for either individual endpoints or by policy directly on the `HttpChain` model: ```cs public static void Configure(HttpChain chain) { // This sample is from Wolverine itself on endpoints where all you do is forward // a request directly to a Wolverine messaging endpoint for later processing chain.Metadata.Add(builder => { // Adding metadata builder.Metadata.Add(new WolverineProducesResponseTypeMetadata { StatusCode = 202, Type = null }); }); // This is run after all other metadata has been applied, even after the wolverine built-in metadata // So use this if you want to change or remove some metadata 
chain.Metadata.Finally(builder => { builder.RemoveStatusCodeResponse(200); }); } ``` snippet source | anchor ## Swashbuckle and Wolverine [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore) is the de facto OpenAPI tooling for ASP.Net Core applications. It's also very MVC Core-centric in its assumptions about how to generate OpenAPI metadata to describe endpoints. If you need to (or just want to), you can do quite a bit to control exactly how Swashbuckle works against Wolverine endpoints by using a custom `IOperationFilter` of your making that can use Wolverine's own `HttpChain` model for finer-grained control. Here's a sample from the Wolverine testing code that just uses Wolverine's own model to determine the OpenAPI operation id: ```cs // This class is NOT distributed in any kind of Nuget today, but feel very free // to copy this code into your own as it is at least tested through Wolverine's // CI test suite public class WolverineOperationFilter : IOperationFilter // IOperationFilter is from Swashbuckle itself { public void Apply(OpenApiOperation operation, OperationFilterContext context) { if (context.ApiDescription.ActionDescriptor is WolverineActionDescriptor action) { operation.OperationId = action.Chain.OperationId; } } } ``` snippet source | anchor And that would be registered with Swashbuckle inside of your `Program.Main()` method like so: ```cs builder.Services.AddSwaggerGen(x => { x.OperationFilter<WolverineOperationFilter>(); x.ResolveConflictingActions(apiDescriptions => apiDescriptions.First()); }); ``` snippet source | anchor ## Operation Id ::: warning You will have to use the custom `WolverineOperationFilter` in the previous section to relay Wolverine's operation id determination to Swashbuckle. We have not (yet) been able to relay that information to Swashbuckle otherwise. ::: By default, Wolverine.HTTP is trying to mimic MVC Core's logic for determining the OpenAPI `operationId`, which is *endpoint class name*.*method name*. 
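That default convention is simple enough to sketch. Here is a minimal, language-agnostic illustration of the naming rule just stated — this is not Wolverine's actual implementation, just the stated convention:

```python
def default_operation_id(endpoint_class: str, method_name: str) -> str:
    # Mimics the MVC Core style convention described above:
    # the OpenAPI operationId is "EndpointClassName.MethodName"
    return f"{endpoint_class}.{method_name}"

print(default_operation_id("SignupEndpoint", "SignUp"))  # SignupEndpoint.SignUp
```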
You can also override the operation id through the normal routing attribute through an optional property as shown below (from the Wolverine.HTTP test code): ```cs // Override the operation id within the generated OpenAPI // metadata [WolverineGet("/fake/hello/async", OperationId = "OverriddenId")] public Task<string> SayHelloAsync() { return Task.FromResult("Hello"); } ``` snippet source | anchor ## IHttpAware or IEndpointMetadataProvider Models Wolverine honors the ASP.Net Core [IEndpointMetadataProvider](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.metadata.iendpointmetadataprovider?view=aspnetcore-7.0) interface on resource types to add or modify endpoint metadata. If you want Wolverine to automatically apply metadata (and HTTP runtime behavior) based on the resource type of an HTTP endpoint, you can have your response type implement the `IHttpAware` interface from Wolverine. As an example, consider the `CreationResponse` type in Wolverine: ```cs /// Base class for resource types that denote some kind of resource being created /// in the system. 
Wolverine specific, and more efficient, version of Created from ASP.Net Core /// public record CreationResponse([StringSyntax("Route")]string Url) : IHttpAware { public static void PopulateMetadata(MethodInfo method, EndpointBuilder builder) { builder.RemoveStatusCodeResponse(200); var create = new MethodCall(method.DeclaringType!, method).Creates.FirstOrDefault()?.VariableType; var metadata = new WolverineProducesResponseTypeMetadata { Type = create, StatusCode = 201 }; builder.Metadata.Add(metadata); } void IHttpAware.Apply(HttpContext context) { context.Response.Headers.Location = Url; context.Response.StatusCode = 201; } public static CreationResponse<T> For<T>(T value, string url) => new CreationResponse<T>(url, value); } ``` snippet source | anchor Any endpoint that returns `CreationResponse` or a subclass will automatically expose a status code of `201` for successful processing to denote resource creation instead of the generic `200`. Same goes for the built-in `AcceptResponse` type, but returning `202` status. Your own custom implementations of the `IHttpAware` interface would apply the metadata declarations at configuration time so that those customizations would be part of the exported Swashbuckle documentation of the system. As of Wolverine 3.4, Wolverine will also apply OpenAPI metadata from any value created by compound handler middleware or other middleware that implements the `IEndpointMetadataProvider` interface -- which many `IResult` implementations from within ASP.Net Core middleware do. Consider this example from the tests: ```cs public class ValidatedCompoundEndpoint2 { public static User? Load(BlockUser2 cmd) { return cmd.UserId.IsNotEmpty() ? new User(cmd.UserId) : null; } // This method would be called, and if the NotFound value is // not null, will stop the rest of the processing // Likewise, Wolverine will use the NotFound type to add // OpenAPI metadata public static NotFound? Validate(User? 
user) { if (user == null) return (NotFound?)Results.NotFound(user); return null; } [WolverineDelete("/optional/result")] public static string Handle(BlockUser2 cmd, User user) { return "Ok - user blocked"; } } ``` snippet source | anchor ## With Microsoft.Extensions.ApiDescription.Server Just a heads up, if you are trying to use [Microsoft.Extensions.ApiDescription.Server](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/openapi/aspnetcore-openapi?view=aspnetcore-9.0&tabs=net-cli%2Cvisual-studio-code#generate-openapi-documents-at-build-time) and you get an `ObjectDisposedException` error on compilation against the `IServiceProvider`, follow these steps to fix: 1. Remove `Microsoft.Extensions.ApiDescription.Server` altogether 2. Just run `dotnet run` to see why your application isn't able to start correctly, and fix *that* problem 3. Add `Microsoft.Extensions.ApiDescription.Server` back For whatever reason, the source generator for OpenAPI tries to start the entire application, including Wolverine's `IHostedService`, and the whole thing blows up with that very unhelpful message if anything is wrong with the application. Chances are good that one of the things preventing a successful startup is that Marten and Wolverine will, by default, begin performing their usual tasks immediately upon startup. This entails connecting to the database, as well as to any external messaging providers you may be using. Since those connections are probably not going to be possible in your build environment, they will need to be disabled while the OpenAPI generation is being done. 
Microsoft's recommendation for detecting whether the application is running for the purpose of document generation is to use this code: ```cs var generatingOpenApi = Assembly.GetEntryAssembly()?.GetName().Name == "GetDocument.Insider"; ``` If this mode is detected, all connections can be disabled like so: ```cs builder.Services.DisableAllExternalWolverineTransports(); builder.Services.DisableAllWolverineMessagePersistence(); ``` Note that a syntactically valid connection string still needs to be provided to Marten, but it does not need to represent a real DB; a minimal placeholder is sufficient. Also, if you are using the async daemon, you'll want to use the `DaemonMode.Disabled` mode. ```cs if(generatingOpenApi) { builder.Services .AddMarten(ConfigureMarten("Server=.;Database=Foo")) .AddAsyncDaemon(DaemonMode.Disabled) .UseLightweightSessions(); } else { // usual Marten config } ``` ## With NSwag Be aware that if you want to use NSwag to generate a .NET/Typescript client for Wolverine.HTTP endpoints, you will need to add this line before `return await app.RunJasperFxCommands(args);`: ```cs args = args.Where(arg => !arg.StartsWith("--applicationName")).ToArray(); ``` See the full NSwag demo at https://github.com/JasperFx/wolverine/tree/main/src/Http/NSwagDemonstrator --- --- url: /guide/durability/oracle.md --- # Oracle Integration ::: info Wolverine can use the Oracle durability options with Entity Framework Core as a higher level persistence framework ::: Wolverine supports an Oracle backed message persistence strategy and even an Oracle backed messaging transport option. 
To get started, add the `WolverineFx.Oracle` dependency to your application: ```bash dotnet add package WolverineFx.Oracle ``` ## Message Persistence To enable Oracle to serve as Wolverine's [transactional inbox and outbox](./), you just need to use the `WolverineOptions.PersistMessagesWithOracle()` extension method as shown below in a sample: ```cs var builder = WebApplication.CreateBuilder(args); var connectionString = builder.Configuration.GetConnectionString("oracle"); builder.Host.UseWolverine(opts => { // Setting up Oracle-backed message storage // This requires a reference to Wolverine.Oracle opts.PersistMessagesWithOracle(connectionString); // Other Wolverine configuration }); // This is rebuilding the persistent storage database schema on startup // and also clearing any persisted envelope state builder.Host.UseResourceSetupOnStartup(); var app = builder.Build(); // Other ASP.Net Core configuration... // Using JasperFx opens up command line utilities for managing // the message storage return await app.RunJasperFxCommands(args); ``` ## Oracle Messaging Transport ::: info All Oracle queues are built into a *WOLVERINE\_QUEUES* schema by default. ::: The `WolverineFx.Oracle` Nuget also contains a simple messaging transport that was mostly meant to be usable for teams who want asynchronous queueing without introducing more specialized infrastructure. 
To enable this transport in your code, use the `EnableMessageTransport()` option which also requires Oracle backed message persistence: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var connectionString = builder.Configuration.GetConnectionString("oracle"); opts.PersistMessagesWithOracle( connectionString, // This argument is the database schema for the envelope storage // If separate logical services are targeting the same physical database, // you should use a separate schema name for each logical application // to make basically *everything* run smoother "MYAPP") // Enable the Oracle messaging transport .EnableMessageTransport(transport => { // Configure the schema name for transport queue tables transport.TransportSchemaName("QUEUES"); // Tell Wolverine to build out all necessary queue or scheduled message // tables on demand as needed transport.AutoProvision(); // Optional, and may be helpful in testing, but probably bad // in production! transport.AutoPurgeOnStartup(); }); // Use this extension method to create subscriber rules opts.PublishAllMessages().ToOracleQueue("outbound"); // Use this to set up queue listeners opts.ListenToOracleQueue("inbound") .CircuitBreaker(cb => { // fine tune the circuit breaker // policies here }) // Optionally specify how many messages to // fetch into the listener at any one time .MaximumMessagesToReceive(50); }); using var host = builder.Build(); await host.StartAsync(); ``` The Oracle transport is strictly queue-based at this point. The queues are configured as durable by default, meaning that they are utilizing the transactional inbox and outbox. The Oracle queues can also be buffered: ```cs opts.ListenToOracleQueue("sender").BufferedInMemory(); ``` Using this option just means that the Oracle queues can be used for both sending and receiving with no integration with the transactional inbox or outbox. 
This is a little more performant, but less safe as messages could be lost if held in memory when the application shuts down unexpectedly. ### Polling Wolverine has a number of internal polling operations, and any Oracle queues will be polled on a configured interval. The default polling interval is set in the `DurabilitySettings` class and can be configured at runtime as below: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Health check message queue/dequeue opts.Durability.HealthCheckPollingTime = TimeSpan.FromSeconds(10); // Node reassignment checks opts.Durability.NodeReassignmentPollingTime = TimeSpan.FromSeconds(5); // User queue poll frequency opts.Durability.ScheduledJobPollingTime = TimeSpan.FromSeconds(5); }); ``` ::: info Control queue Wolverine has an internal control queue (`dbcontrol`) used for internal operations. This queue is hardcoded to poll every second and should not be changed to ensure the stability of the application. ::: ## Multi-Tenancy As of Wolverine 5.x, you can use multi-tenancy through separate databases per tenant with Oracle. To utilize Wolverine-managed multi-tenancy, you have a couple of main options. 
The simplest is just using a statically configured set of tenant id to database connection mappings like so: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // First, you do have to have a "main" Oracle database for messaging persistence // that will store information about running nodes, agents, and non-tenanted operations opts.PersistMessagesWithOracle(configuration.GetConnectionString("main")) // Add known tenants at bootstrapping time .RegisterStaticTenants(tenants => { // Add connection strings for the expected tenant ids tenants.Register("tenant1", configuration.GetConnectionString("tenant1")); tenants.Register("tenant2", configuration.GetConnectionString("tenant2")); tenants.Register("tenant3", configuration.GetConnectionString("tenant3")); }); }); ``` If you need to add new tenants at runtime, have more tenants than can comfortably live in static configuration, or for plenty of other reasons, you can also use Wolverine's "master table tenancy" approach where tenant id to database connection string information is kept in a separate database table. Here's a possible usage of that model: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // You need a main database no matter what that will hold information about the Wolverine system itself // and.. opts.PersistMessagesWithOracle(configuration.GetConnectionString("wolverine")) // ...also a table holding the tenant id to connection string information .UseMasterTableTenancy(seed => { // These registrations are 100% just to seed data for local development // Maybe you want to omit this during production? // Or do something programmatic by looping through data in the IConfiguration? 
seed.Register("tenant1", configuration.GetConnectionString("tenant1")); seed.Register("tenant2", configuration.GetConnectionString("tenant2")); seed.Register("tenant3", configuration.GetConnectionString("tenant3")); }); }); ``` Here's some more important background on the multi-tenancy support: * Wolverine is spinning up a completely separate "durability agent" across the application to recover stranded messages in the transactional inbox and outbox, and that's done automatically for you * The lightweight saga support for Oracle absolutely works with this model of multi-tenancy * Wolverine is able to manage all of its database tables including the tenant table itself (`wolverine_tenants`) across both the main database and all the tenant databases including schema migrations * Wolverine's transactional middleware is aware of the multi-tenancy and can connect to the correct database based on the `IMessageContext.TenantId` or utilize the tenant id detection in Wolverine.HTTP as well * You can "plug in" a custom implementation of `ITenantSource` to manage tenant id to connection string assignments in whatever way works for your deployed system ::: warning Wolverine is not able to dynamically tear down tenants yet. That's long planned, and honestly probably only happens when an outside company sponsors that work. ::: ## Lightweight Saga Usage See the details on [Lightweight Saga Storage](/guide/durability/sagas.html#lightweight-saga-storage) for more information. ## Oracle-Specific Considerations ### Schema Names Oracle schema names are always stored in upper case by Wolverine. The default schema name for envelope storage is `WOLVERINE`, and the default schema name for transport queues is `WOLVERINE_QUEUES`. ### Advisory Locks Wolverine uses Oracle's `DBMS_LOCK` package for distributed locking to coordinate scheduled message processing across nodes. Lock names are derived from a deterministic hash of the schema name. 
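Deriving a deterministic numeric lock key from a schema name, as described above, can be illustrated like this. The CRC32-based hash here is purely illustrative and is not Wolverine's actual algorithm; the point is that the same schema name always produces the same lock key on every node:

```python
import zlib

def advisory_lock_key(schema_name: str) -> int:
    # Oracle schema names are stored upper case, so normalize casing
    # before hashing to get a stable, deterministic key on every node
    return zlib.crc32(schema_name.upper().encode("utf-8"))

# The same schema always yields the same lock key, regardless of input casing,
# so every node competing for the lock computes an identical lock name
print(advisory_lock_key("wolverine") == advisory_lock_key("WOLVERINE"))  # True
```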
### Data Types The Oracle persistence uses the following data type mappings: | Purpose | Oracle Type | |---------|------------| | Message body | `BLOB` | | GUIDs | `RAW(16)` | | Timestamps | `TIMESTAMP WITH TIME ZONE` | | String identifiers | `NVARCHAR2` | ### Compatibility The Oracle persistence requires: * Oracle Database 19c or later * The [Oracle.ManagedDataAccess.Core](https://www.nuget.org/packages/Oracle.ManagedDataAccess.Core) driver, used via Weasel.Oracle --- --- url: /guide/messaging/partitioning.md --- # Partitioned Sequential Messaging ::: tip Concurrency can be hard, especially anytime there is any element of a system like the storage for an entity or event stream or saga that is sensitive to simultaneous writes. I won't tell you *not* to worry about this because you absolutely should be concerned with concurrency, but fortunately Wolverine has [some helpful functionality to help you manage concurrency in your system](/tutorials/concurrency). ::: "Partitioned Sequential Messaging" is a feature in Wolverine that tries to guarantee sequential processing *within* groups of messages related to some sort of business domain entity within your system while also allowing work to be processed in parallel for better throughput *between* groups of messages. At this point, Wolverine supports this feature for: 1. Purely local processing within the current process 2. "Partitioning" the publishing of messages to external transports like Rabbit MQ or Amazon SQS over a range of queues where we have built specific support for this feature 3. "Partitioning" the processing of messages received from any external transport within a single process ## How It Works Let's jump right to a concrete example. Let's say you're building an order management system, so you're processing plenty of command messages against a single `Order`. 
You also expect -- or already know from testing or production issues -- that in normal operation you can expect your system to receive messages simultaneously that impact the same `Order` and that when that happens your system either throws up from concurrent writes to the same entity or event stream or even worse, you possibly get incorrect or incomplete system state when changes from one command are overwritten by changes from another command against the same `Order`. With all of that being said, let's utilize Wolverine's "Partitioned Sequential Messaging" feature to alleviate the concurrent access to any single `Order`, while hopefully allowing work against different `Order` entities to happily proceed in parallel. First though, just to make this easy, let's make a little marker interface for our internal message types that will make it easy for Wolverine to know which `Order` a given command relates to: ```cs public interface IOrderCommand { public string OrderId { get; } } public record ApproveOrder(string OrderId) : IOrderCommand; public record CancelOrder(string OrderId) : IOrderCommand; ``` snippet source | anchor If we were only running our system on a single node so we only care about a single process, we can do this: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.MessagePartitioning // First, we're going to tell Wolverine how to determine the // message group id .ByMessage<IOrderCommand>(x => x.OrderId) // Next we're setting up a publishing rule to local queues .PublishToPartitionedLocalMessaging("orders", 4, topology => { topology.MessagesImplementing<IOrderCommand>(); // this feature exists topology.MaxDegreeOfParallelism = PartitionSlots.Five; // Just showing you how to make additional Wolverine configuration // for all the local queues built from this usage topology.ConfigureQueues(queue => { queue.TelemetryEnabled(true); }); }); }); ``` snippet source | anchor So let's talk about what we set up in the code above. 
First, we've taught Wolverine how to determine the group id of any message that implements the `IOrderCommand` interface. Next we've told Wolverine to publish any message implementing our `IOrderCommand` interface to one of four [local queues](/guide/messaging/transports/local) named "orders1", "orders2", "orders3", and "orders4." At runtime, when you publish an `IOrderCommand` within the system, Wolverine will determine the group id of the new message through the `IOrderCommand.OrderId` rule we created (it does get written to `Envelope.GroupId`). Once Wolverine has that `GroupId`, it needs to determine which of the "orders#" queues to send the message to, and the easiest way to explain this is really just to show the internal code: ```cs /// Uses a combination of message grouping id rules and a deterministic hash /// to predictably assign envelopes to a slot to help "shard" message publishing. public static int SlotForSending(this Envelope envelope, int numberOfSlots, MessagePartitioningRules rules) { // This is where Wolverine determines the GroupId for the message // Note that you can also explicitly set the GroupId var groupId = rules.DetermineGroupId(envelope); // Pick one at random if we can't determine a group id, and has to be zero based if (groupId == null) return Random.Shared.Next(1, numberOfSlots) - 1; // Deterministically choose a slot based on the GroupId, but try // to more or less evenly distribute groups to the different // slots return Math.Abs(groupId.GetDeterministicHashCode() % numberOfSlots); } ``` snippet source | anchor The code above manages publishing between the "orders1", "orders2", "orders3", and "orders4" queues. Inside of each of the local queues Wolverine is also using yet another round of grouped message segregation with a slightly different sorting mechanism to sort messages by their group id into separate, strictly ordered Channels. 
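The `SlotForSending()` logic above hinges on a *deterministic* string hash, which is why it calls `GetDeterministicHashCode()` rather than .NET's built-in `GetHashCode()` (randomized per process). A language-agnostic sketch of the same idea, using an FNV-1a hash as an illustrative stand-in for Wolverine's actual hash function:

```python
def deterministic_hash(group_id: str) -> int:
    # FNV-1a: a simple, stable 32-bit string hash (illustrative stand-in
    # for Wolverine's GetDeterministicHashCode)
    h = 2166136261
    for byte in group_id.encode("utf-8"):
        h = ((h ^ byte) * 16777619) & 0xFFFFFFFF
    return h

def slot_for_sending(group_id: str, number_of_slots: int) -> int:
    # The same group id always lands in the same zero-based slot,
    # so messages for one Order are always published to the same queue
    return deterministic_hash(group_id) % number_of_slots

# Messages for the same Order always go to the same "orders#" queue
assert slot_for_sending("order-42", 4) == slot_for_sending("order-42", 4)
# Different group ids spread roughly evenly across the available slots
print(sorted({slot_for_sending(f"order-{n}", 4) for n in range(100)}))
```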
The `PartitionSlots` enum controls the number of parallel channels processing messages within a single listener. ::: info From our early testing, we quickly found out that the second level of partitioning within listeners only distributed messages relatively evenly when you had an odd number of slots within the listener, so we opted for an enum to limit the values here rather than trying to assert on invalid even numbers. ::: The end result is that you do create some parallelism between message processing while guaranteeing that messages from within a single group id will be executed sequentially. In the end, you really need just 2-3 things: 1. Some way for Wolverine to determine the group id of a message, assuming you aren't explicitly passing that to Wolverine 2. Potentially a publishing rule for partitioned sending 3. Potentially a rule on each listening endpoint to use partitioned handling ## Inferred Grouping for Event Streams or Sagas There are some built-in message group id rules that you can opt into as shown below: ```cs // Telling Wolverine how to assign a GroupId to a message, that we'll use // to predictably sort into "slots" in the processing opts.MessagePartitioning // This tells Wolverine to use the Saga identity as the group id for any message // that impacts a Saga or the stream id of any command that is part of the "aggregate handler workflow" // integration with Marten .UseInferredMessageGrouping() .PublishToPartitionedLocalMessaging("letters", 4, topology => { topology.MessagesImplementing(); topology.MaxDegreeOfParallelism = PartitionSlots.Five; topology.ConfigureQueues(queue => { queue.BufferedInMemory(); }); }); ``` snippet source | anchor The built-in rules *at this point* include: * Using the Saga identity of a message that is handled by a [Stateful Saga](/guide/durability/sagas) * Using the stream/aggregate id of messages that are part of the [Aggregate Handler Workflow](/guide/durability/marten/event-sourcing) integration with Marten ## 
Specifying Grouping Rules Internally, Wolverine is using a list of implementations of this interface: ```cs /// Strategy for determining the GroupId of a message public interface IGroupingRule { bool TryFindIdentity(Envelope envelope, out string groupId); } ``` snippet source | anchor Definitely note that these rules are fall-through, and the order in which you declare the rules is important. Also note that when you call into this syntax below, it's combinatorial (just meaning that you don't start over if you call into it multiple times): ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.MessagePartitioning // Use saga identity or aggregate handler workflow identity // from messages as the group id .UseInferredMessageGrouping() // First, we're going to tell Wolverine how to determine the // message group id for any message type that can be // cast to this interface. Also works for concrete types too .ByMessage<IOrderCommand>(x => x.OrderId) // Use the Envelope.TenantId as the message group id // this could be valuable to partition work by tenant .ByTenantId() // Use a custom rule implementing IGroupingRule with explicit code to determine // the group id .ByRule(new MySpecialGroupingRule()); }); ``` snippet source | anchor ## Grouping by Property Name If your message contracts are auto-generated (e.g. from `.proto` files) and you cannot add a marker interface, you can use the `ByPropertyNamed()` rule to look for a property by name on any message type. This is a built-in `IGroupingRule` that inspects the incoming message type at runtime for a property matching one of the specified names and uses its value as the `GroupId`. The first matching property name wins, and property values of any type are converted to `string` via `ToString()`. Null property values result in `string.Empty`. If no matching property is found on a message type, the rule falls through to the next rule in the chain. 
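The fall-through behavior just described can be sketched language-agnostically. This is an illustration of the documented rules (first matching property wins, values are stringified, null becomes an empty string, no match falls through), not Wolverine's actual implementation:

```python
def group_id_by_property(message: object, *property_names: str):
    # First matching property name wins
    for name in property_names:
        if hasattr(message, name):
            value = getattr(message, name)
            # Null property values result in an empty string
            return "" if value is None else str(value)
    # No matching property: fall through to the next rule in the chain
    return None

# Hypothetical auto-generated message type with a StreamId property
class OrderShipped:
    def __init__(self, stream_id):
        self.StreamId = stream_id

print(group_id_by_property(OrderShipped("abc"), "StreamId", "Id"))  # abc
print(group_id_by_property(object(), "StreamId", "Id"))  # None
```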
The property accessor is compiled via `LambdaBuilder` and memoized per message type for performance. ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.MessagePartitioning // Look for a property named "StreamId" or "Id" on the message type // and use its value as the GroupId for partitioned processing. // The first matching property name wins. // This is particularly useful when message types are auto-generated // (e.g. from .proto files) and cannot implement a marker interface. .ByPropertyNamed("StreamId", "Id"); }); ``` snippet source | anchor ## Explicit Group Ids ::: tip Any explicitly specified group id will take precedence over the grouping rules in the previous section ::: You can also explicitly specify a group id for a message when you send or publish it through `IMessageBus` like this: ```cs public static async Task SendMessageToGroup(IMessageBus bus) { await bus.PublishAsync( new ApproveInvoice("AAA"), new() { GroupId = "agroup" }); } ``` snippet source | anchor If you are using [cascaded messages](/guide/handlers/cascading) from your message handlers, there's an extension method helper just as a convenience like this: ```cs public static IEnumerable<object> Handle(ApproveInvoice command) { yield return new PayInvoice(command.Id).WithGroupId("aaa"); } ``` snippet source | anchor ## Partitioned Publishing Locally ::: tip You will also need to set up message grouping rules for the message partitioning to function ::: If you need to use the partitioned sequential messaging just within a single process, the `PublishToPartitionedLocalMessaging()` method shown below will set up both a publishing rule for multiple local queues and partitioned processing for those local queues. 
```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.MessagePartitioning // First, we're going to tell Wolverine how to determine the // message group id .ByMessage<IOrderCommand>(x => x.OrderId) // Next we're setting up a publishing rule to local queues .PublishToPartitionedLocalMessaging("orders", 4, topology => { topology.MessagesImplementing<IOrderCommand>(); // this feature exists topology.MaxDegreeOfParallelism = PartitionSlots.Five; // Just showing you how to make additional Wolverine configuration // for all the local queues built from this usage topology.ConfigureQueues(queue => { queue.TelemetryEnabled(true); }); }); }); ``` snippet source | anchor ## Partitioned Processing at any Endpoint You can add partitioned processing to any listening endpoint like this: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.UseRabbitMq(); // You still need rules for determining the message group id // of incoming messages! opts.MessagePartitioning .ByMessage<IOrderCommand>(x => x.OrderId); // We're going to listen opts.ListenToRabbitQueue("incoming") // To really keep our system from processing Order related // messages for the same order id concurrently, we'll // make it so that only one node actively processes messages // from this queue .ExclusiveNodeWithParallelism() // We're going to partition the message processing internally // based on the message group id while allowing up to 7 parallel // messages to be executed at once .PartitionProcessingByGroupId(PartitionSlots.Seven); }); ``` snippet source | anchor ## Partitioned Publishing to External Transports ::: info Wolverine supports the Azure Service Bus concept of [session identifiers](/guide/messaging/transports/azureservicebus/session-identifiers) that effectively provides the same benefits as this feature. 
::: ::: tip Even if your system is not messaging to any other systems, using this mechanism will help distribute work across an application cluster while guaranteeing that messages within a group id are processed sequentially and still allowing for parallelism between message groups. ::: At this point Wolverine has direct support for partitioned routing to Rabbit MQ or Amazon SQS. Note that in both of the following examples, Wolverine is both setting up publishing rules out to these queues, and also configuring listeners for the queues. Beyond that, Wolverine is making each queue be "exclusive," meaning that only one node within a cluster is actively listening and processing messages from each partitioned queue at any one time. For Rabbit MQ: ```cs // opts is the WolverineOptions from within an Add/UseWolverine() call // Telling Wolverine how to assign a GroupId to a message, that we'll use // to predictably sort into "slots" in the processing opts.MessagePartitioning.ByMessage(x => x.Id.ToString()); // This is creating Rabbit MQ queues named "letters1" etc. opts.MessagePartitioning.PublishToShardedRabbitQueues("letters", 4, topology => { topology.MessagesImplementing(); topology.MaxDegreeOfParallelism = PartitionSlots.Five; topology.ConfigureSender(x => { // just to show that you can do this... 
x.DeliverWithin(5.Minutes()); }); topology.ConfigureListening(x => x.BufferedInMemory()); }); ``` snippet source | anchor And for Amazon SQS: ```cs // Telling Wolverine how to assign a GroupId to a message, that we'll use // to predictably sort into "slots" in the processing opts.MessagePartitioning.ByMessage(x => x.Id.ToString()); opts.MessagePartitioning.PublishToShardedAmazonSqsQueues("letters", 4, topology => { topology.MessagesImplementing(); topology.MaxDegreeOfParallelism = PartitionSlots.Five; topology.ConfigureListening(x => x.BufferedInMemory().MessageBatchSize(10)); }); ``` snippet source | anchor ## Propagating GroupId to PartitionKey When using Kafka (or any transport that uses `PartitionKey`), you may want cascaded messages from a handler to automatically inherit the originating message's `GroupId` as their `PartitionKey`. This ensures that cascaded messages land on the same Kafka partition as the originating message without manually specifying `DeliveryOptions` on every outgoing message. This is especially useful when you have a chain of message handlers where the first message arrives at a Kafka topic with a consumer group id, and you want all downstream cascaded messages to be published to the same partition. ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Automatically propagate the originating message's GroupId // to the PartitionKey of all cascaded outgoing messages. // This is particularly useful with Kafka where you want // cascaded messages to land on the same partition as the // originating message without manually specifying // DeliveryOptions on every outgoing message. opts.Policies.PropagateGroupIdToPartitionKey(); }); ``` snippet source | anchor ::: tip The rule will not override an explicitly set `PartitionKey` on an outgoing envelope. If you set `PartitionKey` via `DeliveryOptions`, that value takes precedence. 
:::

## Partitioning Messages Received from External Systems

::: warning
Brute force, no points for style, explicit coding ahead!
:::

If you are receiving messages from an external source that are vulnerable to concurrent access problems when executed, but you either do not want to make the external system publish the group ids or have no ability to make the upstream system care about your own internal group id details, you can simply relay the received messages back out to a partitioned message topology owned by your system. Using Amazon SQS as our transport, let's say that we're receiving messages from the external system at one queue like this:

Hey folks, more coming soon. Hopefully before Wolverine 5.0. Watch this issue: https://github.com/JasperFx/wolverine/issues/1728

--- --- url: /guide/handlers/persistence.md ---

# Persistence Helpers

Philosophically, Wolverine is trying to enable you to write message handlers or HTTP endpoint methods with low ceremony code that's easy to test and easy to reason about. To that end, Wolverine has quite a few tricks to utilize your persistence tooling from your handler or HTTP endpoint code without having to directly couple your behavioral code to persistence infrastructure:

* The [storage action side effect model](/guide/handlers/side-effects.html#storage-side-effects) for pure function handlers that involve database "writes"
* The [aggregate handler workflow](/guide/durability/marten/event-sourcing) with Marten for highly testable CQRS + Event Sourcing systems
* Specific [integration with Marten and Wolverine.HTTP](/guide/http/marten)

## Automatically Loading Entities to Method Parameters

A common need when building Wolverine message handlers or HTTP endpoints is to load an entity object based on an identity value in either the message itself, the HTTP request body, or an HTTP route argument.
In these cases, you'll generally pluck the correct value out of the message or route arguments, then call into an EF Core `DbContext` or a Marten/RavenDb `IDocumentSession` to load the entity for you before proceeding with your work. Since this usage is so common, Wolverine has the `[Wolverine.Persistence.Entity]` attribute to do just that for you and have the right entity "pushed" into your message handler. Here's a simple example of a message handler that's also a valid Wolverine.HTTP endpoint using this attribute. First though, the message type and/or HTTP request body:

```cs
public record RenameTodo(string Id, string Name);
```

snippet source | anchor

and the handler & endpoint code handling that message type:

```cs
// Use "Id" as the default member
[WolverinePost("/api/todo/update")]
public static Update Handle(
    // The first argument is always the incoming message
    RenameTodo command,

    // By using this attribute, we're telling Wolverine
    // to load the Todo entity from the configured
    // persistence of the app using a member on the
    // incoming message type
    [Entity] Todo2 todo)
{
    // Do your actual business logic
    todo.Name = command.Name;

    // Tell Wolverine that you want this entity
    // updated in persistence
    return Storage.Update(todo);
}
```

snippet source | anchor

In the code above, the `Todo2` argument would be filled by trying to load that `Todo2` entity from persistence using the value of `RenameTodo.Id`. If you were using Marten as your persistence mechanism, this would be using `IDocumentSession.LoadAsync(id)` to load the entity, with the RavenDb usage being similar. If you were using EF Core and had a `Todo2DbContext` service registered in your system, it would be using `Todo2DbContext.FindAsync(id)`.
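For comparison, the `[Entity]` attribute is roughly shorthand for loading the entity yourself, for example with a compound handler's "load" method. Here's a minimal sketch of the equivalent explicit code, assuming Marten as the persistence tooling (the handler shape is illustrative, not the code Wolverine actually generates):

```cs
public static class RenameTodoHandler
{
    // Explicitly load the entity before the main Handle method runs.
    // Marten's LoadAsync<T>() is the same call the [Entity] attribute
    // relies on for Marten persistence
    public static Task<Todo2?> LoadAsync(RenameTodo command, IDocumentSession session)
        => session.LoadAsync<Todo2>(command.Id);

    // The loaded entity is "pushed" into the main method, which
    // stays a pure function that's easy to unit test
    public static Update Handle(RenameTodo command, Todo2 todo)
    {
        todo.Name = command.Name;
        return Storage.Update(todo);
    }
}
```

The attribute saves you from writing that `LoadAsync()` method by hand and also gives you the standardized "not found" behavior described below.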
By default, Wolverine assumes that any parameter marked with `[Entity]` is required, so if the `Todo2` entity was not found in the database, then:

* As a message handler, it will just log that the entity could not be found and otherwise exit cleanly without doing any further processing
* As an HTTP endpoint, the handler would write out a status code of 404 (not found) and otherwise exit

If you need or want any other kind of failure handling on the entity not being found, you'll need to use explicit code instead, maybe with a `LoadAsync()` "before" method to still keep your main handler or endpoint method a *pure function*. If you genuinely don't need the `[Entity]` value to be required, you can do this instead:

```cs
[WolverinePost("/api/todo/maybecomplete")]
public static IStorageAction<Todo2> Handle(MaybeCompleteTodo command, [Entity(Required = false)] Todo2? todo)
{
    if (todo == null) return Storage.Nothing<Todo2>();

    todo.IsComplete = true;
    return Storage.Update(todo);
}
```

snippet source | anchor

So far, all of the examples have depended on a fallback of looking for a case-insensitive "id" match on the message members for message handlers, or first on the route arguments, then the request input members, for HTTP endpoints. Wolverine will also look for "[Entity Type Name]Id", so in the case of `Todo2`, it would also look for a more specific `Todo2Id` member or route argument for the identity value. You can of course override this by just telling Wolverine what member name or route argument name should have the identity like this:

```cs
// Okay, I still used "id", but it *could* be something different here!
[WolverineGet("/api/todo/{id}")]
public static Todo2 Get([Entity("id")] Todo2 todo) => todo;
```

snippet source | anchor

If there's any conflict over whether the identity should come from the route arguments or the request body, you can set the `EntityAttribute.ValueSource` property to one of these values:

```cs
public enum ValueSource
{
    /// <summary>
    /// This value can be sourced by any mechanism that matches the name. This is the default.
    /// </summary>
    Anything,

    /// <summary>
    /// The value should be sourced by a property or field on the message type or HTTP request type
    /// </summary>
    InputMember,

    /// <summary>
    /// The value should be sourced by a route argument of an HTTP request
    /// </summary>
    RouteValue,

    /// <summary>
    /// The value should be sourced by a query string parameter of an HTTP request
    /// </summary>
    FromQueryString
}
```

snippet source | anchor

## Global Entity Defaults

If you want consistent entity-missing behavior across your entire application without having to set `OnMissing` or `MaybeSoftDeleted` on every single `[Entity]`, `[Document]`, `[Aggregate]`, `[ReadAggregate]`, or `[WriteAggregate]` attribute, you can configure global defaults through `WolverineOptions.EntityDefaults`:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Set global defaults for all entity-loading attributes
        opts.EntityDefaults.OnMissing = OnMissing.ProblemDetailsWith404;
        opts.EntityDefaults.MaybeSoftDeleted = false;
    }).StartAsync();
```

With the configuration above, every `[Entity]` parameter that does not explicitly set `OnMissing` will use `ProblemDetailsWith404` instead of the built-in `Simple404` default. Likewise, every `[Entity]` parameter that does not explicitly set `MaybeSoftDeleted` will treat soft-deleted entities as missing.
You can still override the global default on any individual attribute: ```cs public static class MyHandler { // This handler uses the global default for OnMissing public static MyResult Handle(MyCommand command, [Entity] MyEntity entity) { // ... } // This handler explicitly overrides to ThrowException regardless of the global default public static MyResult Handle(MyOtherCommand command, [Entity(OnMissing = OnMissing.ThrowException)] MyEntity entity) { // ... } } ``` The resolution order is: **Explicit attribute value > Global default > Built-in default** (`Simple404` / `true`). Some other facts to know about `[Entity]` usage: * Supported by the Marten, EF Core, and RavenDb integration * For EF Core usage, Wolverine has to be able to figure out which `DbContext` type persists the entity type of the parameter * In all cases, Wolverine is trying to "know" what the identity type for the entity type is (`Guid`? `int`? Something else?) from the underlying persistence tooling and use that to help parse route arguments as needed * `[Entity]` cannot support any kind of composite key or identity * `[Entity]` can be used for both HTTP endpoints and message handler methods * `[Entity]` can be used for `Before` / `Validate` methods in compound handlers * If an `[Entity]` attribute is used in the main handler or endpoint method, you can still resolve the same entity type as a parameter to a `Before` method without needing to use the attribute again ::: tip As with other kinds of Wolverine "magic", lean on the [pre-generated code](/guide/codegen) to let Wolverine explain what it's doing with your method signatures. ::: --- --- url: /tutorials/ping-pong.md --- # Ping/Pong Messaging with TCP To show off some of the messaging, let's just build [a very simple "Ping/Pong" example](https://github.com/JasperFx/wolverine/tree/main/src/Samples/PingPong) that will exchange messages between two small .NET processes. 
![Pinger and Ponger](/ping-pong.png)

First off, I'm going to build out a very small shared library just to hold the messages we're going to exchange:

```cs
public class Ping
{
    public int Number { get; set; }
}

public class Pong
{
    public int Number { get; set; }
}
```

snippet source | anchor

And next, I'll start a small *Pinger* service with the `dotnet new worker` template. There's just three pieces of code, starting with the bootstrapping code:

```cs
using Messages;
using JasperFx;
using Pinger;
using Wolverine;
using Wolverine.Transports.Tcp;

return await Host.CreateDefaultBuilder(args)
    .UseWolverine(opts =>
    {
        // Using Wolverine's built in TCP transport
        // listen to incoming messages at port 5580
        opts.ListenAtPort(5580);

        // route all Ping messages to port 5581
        opts.PublishMessage<Ping>().ToPort(5581);

        // Registering the hosted service here, but could do
        // that with a separate call to IHostBuilder.ConfigureServices()
        opts.Services.AddHostedService<Worker>();
    })
    .RunJasperFxCommands(args);
```

snippet source | anchor

and the `Worker` class that's just going to publish a new `Ping` message once a second:

```cs
using Messages;
using Wolverine;

namespace Pinger;

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;
    private readonly IServiceProvider _serviceProvider;

    public Worker(ILogger<Worker> logger, IServiceProvider serviceProvider)
    {
        _logger = logger;
        _serviceProvider = serviceProvider;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var pingNumber = 1;

        await using var scope = _serviceProvider.CreateAsyncScope();
        var bus = scope.ServiceProvider.GetRequiredService<IMessageBus>();

        while (!stoppingToken.IsCancellationRequested)
        {
            await Task.Delay(1000, stoppingToken);
            _logger.LogInformation("Sending Ping #{Number}", pingNumber);
            await bus.PublishAsync(new Ping { Number = pingNumber });
            pingNumber++;
        }
    }
}
```

snippet source | anchor

and lastly a message handler for any `Pong` messages coming back from the `Ponger` we'll build next:

```cs
using
Messages; namespace Pinger; public class PongHandler { public void Handle(Pong pong, ILogger logger) { logger.LogInformation("Received Pong #{Number}", pong.Number); } } ``` snippet source | anchor Okay then, next let's move on to building the `Ponger` application. This time I'll use `dotnet new console` to start the new project, then add references to our *Messages* library and Wolverine itself. For the bootstrapping, add this code: ```cs using Microsoft.Extensions.Hosting; using JasperFx; using Wolverine; using Wolverine.Transports.Tcp; return await Host.CreateDefaultBuilder(args) .UseWolverine(opts => { opts.ApplicationAssembly = typeof(Program).Assembly; // Using Wolverine's built in TCP transport opts.ListenAtPort(5581); }) .RunJasperFxCommands(args); ``` snippet source | anchor And a message handler for the `Ping` messages that will turn right around and shoot a `Pong` response right back to the original sender: ```cs using Messages; using Microsoft.Extensions.Logging; using Wolverine; namespace Ponger; public class PingHandler { public ValueTask Handle(Ping ping, ILogger logger, IMessageContext context) { logger.LogInformation("Got Ping #{Number}", ping.Number); return context.RespondToSenderAsync(new Pong { Number = ping.Number }); } } ``` snippet source | anchor ```cs public static class PingHandler { // Simple message handler for the PingMessage message type public static ValueTask Handle( // The first argument is assumed to be the message type PingMessage message, // Wolverine supports method injection similar to ASP.Net Core MVC // In this case though, IMessageContext is scoped to the message // being handled IMessageContext context) { AnsiConsole.MarkupLine($"[blue]Got ping #{message.Number}[/]"); var response = new PongMessage { Number = message.Number }; // This usage will send the response message // back to the original sender. 
Wolverine uses message // headers to embed the reply address for exactly // this use case return context.RespondToSenderAsync(response); } } ``` snippet source | anchor If I start up first the *Ponger* service, then the *Pinger* service, I'll see console output like this from *Pinger*: ``` info: Pinger.Worker[0] Sending Ping #11 info: Pinger.PongHandler[0] Received Pong #1 info: Wolverine.Runtime.WolverineRuntime[104] Successfully processed message Pong#01817277-f692-42d5-a3e4-35d9b7d119fb from tcp://localhost:5581/ info: Pinger.PongHandler[0] Received Pong #2 info: Wolverine.Runtime.WolverineRuntime[104] Successfully processed message Pong#01817277-f699-4340-a59d-9616aee61cb8 from tcp://localhost:5581/ info: Pinger.PongHandler[0] Received Pong #3 info: Wolverine.Runtime.WolverineRuntime[104] Successfully processed message Pong#01817277-f699-48ea-988b-9e835bc53020 from tcp://localhost:5581/ info: Pinger.PongHandler[0] ``` and output like this in the *Ponger* process: ``` info: Ponger.PingHandler[0] Got Ping #1 info: Wolverine.Runtime.WolverineRuntime[104] Successfully processed message Ping#01817277-d673-4357-84e3-834c36f3446c from tcp://localhost:5580/ info: Ponger.PingHandler[0] Got Ping #2 info: Wolverine.Runtime.WolverineRuntime[104] Successfully processed message Ping#01817277-da61-4c9d-b381-6cda92038d41 from tcp://localhost:5580/ info: Ponger.PingHandler[0] Got Ping #3 ``` --- --- url: /guide/durability/postgresql.md --- # PostgreSQL Integration ::: info Wolverine can happily use the PostgreSQL durability options with any mix of Entity Framework Core and/or Marten as a higher level persistence framework ::: Wolverine supports a PostgreSQL backed message persistence strategy and even a PostgreSQL backed messaging transport option. 
To get started, add the `WolverineFx.Postgresql` dependency to your application: ```bash dotnet add package WolverineFx.Postgresql ``` ## Message Persistence To enable PostgreSQL to serve as Wolverine's [transactional inbox and outbox](./), you just need to use the `WolverineOptions.PersistMessagesWithPostgresql()` extension method as shown below in a sample: ```cs var builder = WebApplication.CreateBuilder(args); var connectionString = builder.Configuration.GetConnectionString("postgres"); builder.Host.UseWolverine(opts => { // Setting up Postgresql-backed message storage // This requires a reference to Wolverine.Postgresql opts.PersistMessagesWithPostgresql(connectionString); // Other Wolverine configuration }); // This is rebuilding the persistent storage database schema on startup // and also clearing any persisted envelope state builder.Host.UseResourceSetupOnStartup(); var app = builder.Build(); // Other ASP.Net Core configuration... // Using JasperFx opens up command line utilities for managing // the message storage return await app.RunJasperFxCommands(args); ``` snippet source | anchor ## Optimizing the Message Store For PostgreSQL, you can enable PostgreSQL backed partitioning for the inbox table as an optimization. This is not enabled by default just to avoid causing database migrations in a minor point release. Note that this will have some significant benefits for inbox/outbox metrics gathering in the future: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.Durability.EnableInboxPartitioning = true; ``` snippet source | anchor ## PostgreSQL Messaging Transport ::: info All PostgreSQL queues are built into a *wolverine\_queues* schema at this point. ::: The `WolverineFx.PostgreSQL` Nuget also contains a simple messaging transport that was mostly meant to be usable for teams who want asynchronous queueing without introducing more specialized infrastructure. 
To enable this transport in your code, use this option which *also* activates PostgreSQL backed message persistence: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var connectionString = builder.Configuration.GetConnectionString("postgres"); opts.UsePostgresqlPersistenceAndTransport( connectionString, // This argument is the database schema for the envelope storage // If separate logical services are targeting the same physical database, // you should use a separate schema name for each logical application // to make basically *everything* run smoother "myapp", // This schema name is for the actual PostgreSQL queue tables. If using // the PostgreSQL transport between two logical applications, make sure // to use the same transportSchema! transportSchema:"queues") // Tell Wolverine to build out all necessary queue or scheduled message // tables on demand as needed .AutoProvision() // Optional that may be helpful in testing, but probably bad // in production! .AutoPurgeOnStartup(); // Use this extension method to create subscriber rules opts.PublishAllMessages().ToPostgresqlQueue("outbound"); // Use this to set up queue listeners opts.ListenToPostgresqlQueue("inbound") .CircuitBreaker(cb => { // fine tune the circuit breaker // policies here }) // Optionally specify how many messages to // fetch into the listener at any one time .MaximumMessagesToReceive(50); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor The PostgreSQL transport is strictly queue-based at this point. The queues are configured as durable by default, meaning that they are utilizing the transactional inbox and outbox. The PostgreSQL queues can also be buffered: ```cs opts.ListenToPostgresqlQueue("sender").BufferedInMemory(); ``` snippet source | anchor Using this option just means that the PostgreSQL queues can be used for both sending or receiving with no integration with the transactional inbox or outbox. 
This is a little more performant, but less safe, as messages could be lost if held in memory when the application shuts down unexpectedly.

### Polling

Wolverine has a number of internal polling operations, and any PostgreSQL queues will be polled on a configured interval as Wolverine does not use the PostgreSQL `LISTEN/NOTIFY` feature at this time. The default polling interval is set in the `DurabilitySettings` class and can be configured at runtime as below:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // Health check message queue/dequeue
    opts.Durability.HealthCheckPollingTime = TimeSpan.FromSeconds(10);

    // Node reassignment checks
    opts.Durability.NodeReassignmentPollingTime = TimeSpan.FromSeconds(5);

    // User queue poll frequency
    opts.Durability.ScheduledJobPollingTime = TimeSpan.FromSeconds(5);
});
```

::: info Control queue
Wolverine has an internal control queue (`dbcontrol`) used for internal operations. This queue is hardcoded to poll every second and should not be changed to ensure the stability of the application.
:::

## Multi-Tenancy

As of Wolverine 4.0, you have two ways to use multi-tenancy through separate databases per tenant with PostgreSQL:

1. Using [Marten's multi-tenancy support](https://martendb.io/configuration/multitenancy.html) and the `IntegrateWithWolverine()` option
2. Directly configuring PostgreSQL databases with Wolverine managed multi-tenancy

In both cases, if utilizing the PostgreSQL transport with multi-tenancy through separate databases per tenant, the PostgreSQL queues will be built and monitored for each tenant database as well as any main, non-tenanted database. Also, Wolverine is able to utilize completely different message storage for its transactional inbox and outbox for each unique database, including any main database. Wolverine is able to activate additional durability agents for itself for any tenant databases added at runtime for tenancy modes that support dynamic discovery.
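Tenanted handlers find the right database through the tenant id on the current message. As a quick orientation before the configuration options in this section, here's a minimal sketch of executing a message against a specific tenant; the `CreateItem` message type is hypothetical, while `InvokeForTenantAsync` is Wolverine's tenant-targeted invocation API:

```cs
public record CreateItem(Guid Id, string Name);

public static class TenantUsageSample
{
    public static async Task Execute(IMessageBus bus)
    {
        // Runs the CreateItem handler inline, with Wolverine's
        // transactional middleware connecting to the database
        // registered for the "tenant1" tenant id
        await bus.InvokeForTenantAsync("tenant1", new CreateItem(Guid.NewGuid(), "Socks"));
    }
}
```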
To utilize Wolverine managed multi-tenancy, you have a couple main options. The simplest is just using a static configured set of tenant id to database connections like so: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // First, you do have to have a "main" PostgreSQL database for messaging persistence // that will store information about running nodes, agents, and non-tenanted operations opts.PersistMessagesWithPostgresql(configuration.GetConnectionString("main")) // Add known tenants at bootstrapping time .RegisterStaticTenants(tenants => { // Add connection strings for the expected tenant ids tenants.Register("tenant1", configuration.GetConnectionString("tenant1")); tenants.Register("tenant2", configuration.GetConnectionString("tenant2")); tenants.Register("tenant3", configuration.GetConnectionString("tenant3")); }); opts.Services.AddDbContextWithWolverineManagedMultiTenancy((builder, connectionString, _) => { builder.UseNpgsql(connectionString.Value, b => b.MigrationsAssembly("MultiTenantedEfCoreWithPostgreSQL")); }, AutoCreate.CreateOrUpdate); }); ``` snippet source | anchor Since the underlying [Npgsql library](https://www.npgsql.org/) supports the `DbDataSource` concept, and you might need to use this for a variety of reasons, you can also directly configure `NpgsqlDataSource` objects for each tenant. This one might be a little more involved, but let's start by saying that you might be using Aspire to configure PostgreSQL and both the main and tenant databases. In this usage, Aspire will register `NpgsqlDataSource` services as `Singleton` scoped in your IoC container. 
We can build an `IWolverineExtension` that utilizes the IoC container to register Wolverine like so:

```cs
public class OurFancyPostgreSQLMultiTenancy : IWolverineExtension
{
    private readonly IServiceProvider _provider;

    public OurFancyPostgreSQLMultiTenancy(IServiceProvider provider)
    {
        _provider = provider;
    }

    public void Configure(WolverineOptions options)
    {
        options.PersistMessagesWithPostgresql(_provider.GetRequiredService<NpgsqlDataSource>())
            .RegisterStaticTenantsByDataSource(tenants =>
            {
                tenants.Register("tenant1", _provider.GetRequiredKeyedService<NpgsqlDataSource>("tenant1"));
                tenants.Register("tenant2", _provider.GetRequiredKeyedService<NpgsqlDataSource>("tenant2"));
                tenants.Register("tenant3", _provider.GetRequiredKeyedService<NpgsqlDataSource>("tenant3"));
            });
    }
}
```

snippet source | anchor

And add that to the greater application like so:

```cs
var host = Host.CreateDefaultBuilder()
    .UseWolverine()
    .ConfigureServices(services =>
    {
        services.AddSingleton<IWolverineExtension, OurFancyPostgreSQLMultiTenancy>();
    }).StartAsync();
```

snippet source | anchor

::: warning
Neither Marten nor Wolverine is able to dynamically tear down tenants yet. That's long planned, and honestly probably only happens when an outside company sponsors that work.
:::

If you need to add new tenants at runtime, have more tenants than can comfortably live in static configuration, or for plenty of other reasons I could think of, you can also use Wolverine's "master table tenancy" approach where tenant id to database connection string information is kept in a separate database table. Here's a possible usage of that model:

```cs
var builder = Host.CreateApplicationBuilder();
var configuration = builder.Configuration;

builder.UseWolverine(opts =>
{
    // You need a main database no matter what that will hold information about the Wolverine system itself
    // and..
    opts.PersistMessagesWithPostgresql(configuration.GetConnectionString("wolverine"))

        // ...also a table holding the tenant id to connection string information
        .UseMasterTableTenancy(seed =>
        {
            // These registrations are 100% just to seed data for local development
            // Maybe you want to omit this during production?
            // Or do something programmatic by looping through data in the IConfiguration?
            seed.Register("tenant1", configuration.GetConnectionString("tenant1"));
            seed.Register("tenant2", configuration.GetConnectionString("tenant2"));
            seed.Register("tenant3", configuration.GetConnectionString("tenant3"));
        });
});
```

snippet source | anchor

::: info
Wolverine's "master table tenancy" model was unsurprisingly based on Marten's [Master Table Tenancy](https://martendb.io/configuration/multitenancy.html#master-table-tenancy-model) feature and even shares a little bit of supporting code now.
:::

Here's some more important background on the multi-tenancy support:

* Wolverine spins up a completely separate "durability agent" across the application to recover stranded messages in the transactional inbox and outbox, and that's done automatically for you
* The lightweight saga support for PostgreSQL absolutely works with this model of multi-tenancy
* Wolverine is able to manage all of its database tables including the tenant table itself (`wolverine_tenants`) across both the main database and all the tenant databases, including schema migrations
* Wolverine's transactional middleware is aware of the multi-tenancy and can connect to the correct database based on the `IMessageContext.TenantId` or utilize the tenant id detection in Wolverine.HTTP as well
* You can "plug in" a custom implementation of `ITenantSource` to manage tenant id to connection string assignments in whatever way works for your deployed system

## Lightweight Saga Usage

See the details on [Lightweight Saga Storage](/guide/durability/sagas.html#lightweight-saga-storage) for more information.
## Integration with Marten

The PostgreSQL message persistence and transport is automatically included with the `AddMarten().IntegrateWithWolverine()` configuration syntax.

--- --- url: /guide/messaging/transports/postgresql.md ---

# PostgreSQL Transport

See the [PostgreSQL Transport](/guide/durability/postgresql#postgresql-messaging-transport) documentation in the [PostgreSQL Integration](/guide/durability/postgresql) topic.

--- --- url: /guide/durability/marten/distribution.md ---

# Projection/Subscription Distribution

When Wolverine is combined with Marten into the full "Critter Stack" combination, and you're using asynchronous projections or any event subscriptions with Marten, you can achieve potentially greater scalability for your system by letting Wolverine distribute the background work of these asynchronous event workers evenly across a running cluster as shown below:

```cs
opts.Services.AddMarten(m =>
    {
        m.DisableNpgsqlLogging = true;
        m.Connection(Servers.PostgresConnectionString);
        m.DatabaseSchemaName = "csp";

        m.Projections.Add(ProjectionLifecycle.Async);
        m.Projections.Add(ProjectionLifecycle.Async);
        m.Projections.Add(ProjectionLifecycle.Async);
    })
    .IntegrateWithWolverine(m =>
    {
        // This makes Wolverine distribute the registered projections
        // and event subscriptions evenly across a running application
        // cluster
        m.UseWolverineManagedEventSubscriptionDistribution = true;
    });
```

snippet source | anchor

::: tip
This option replaces the Marten `AddAsyncDaemon(DaemonMode.HotCold)` option and should not be used in combination with Marten's own load distribution.
:::

With this option, Wolverine is going to ensure that every single known asynchronous [event projection](https://martendb.io/events/projections/) and every [event subscription](https://martendb.io/events/subscriptions.html) is running on exactly one running node within your application cluster.
Moreover, Wolverine will purposely stop and restart projections or subscriptions in order to spread the running load across your entire cluster of running nodes. In the case of using multi-tenancy through separate databases per tenant with Marten, this Wolverine "agent distribution" will assign the work by tenant databases, meaning that all the running projections and subscriptions for a single tenant database will always be running on a single application node. This was done with the theory that this affinity would hopefully reduce the overall number of database connections used. If a node is taken offline, Wolverine will detect that the node is no longer accessible and try to start the missing projection/subscription agents on another active node. *If you run your application on only a single server, Wolverine will of course run all projections and subscriptions on just that one server.*

Some other facts about this integration:

* Wolverine's agent distribution does indeed work with per-tenant database multi-tenancy
* Wolverine does automatic health checking at the running node level so that it can fail over assigned agents
* Wolverine can detect when new nodes come online and redistribute work
* Wolverine is able to support blue/green deployment and only run projections or subscriptions on active nodes where a capability is present.
This just means that you can add all new projections or subscriptions, or even just new versions of a projection or subscription, on some application nodes in order to try ["blue/green deployment."](https://en.wikipedia.org/wiki/Blue%E2%80%93green_deployment)

* This capability does depend on Wolverine's built-in [leadership election](https://en.wikipedia.org/wiki/Leader_election) -- which fortunately got a *lot* better in Wolverine 3.0

## Uri Structure

The `Uri` structure for event subscriptions or projections is:

```
event-subscriptions://[event store type]/[event store name]/[database server].[database name]/[relative path of the shard]
```

For an example from the tests: `event-subscriptions://marten/main/localhost.postgres/day/all` where:

* "marten" means that it's a [Marten](https://martendb.io) based event store (we are planning on at least a SQL Server backed event store some day besides Marten)
* "main" refers to this projection being in the main `DocumentStore` Marten store that is added from `IServiceCollection.AddMarten()`. Otherwise this value would be the type name of an ancillary store type in all lower case
* "localhost" is the database server
* "postgres" is the name of the database
* "day/all" refers to a projection with the `ShardName` of "Day:All"

## Requirements

This functionality requires Wolverine to both track running nodes and to send messages between running nodes within your clustered Wolverine service. One way or another, Wolverine needs some kind of "control queue" mechanism for this internal messaging. Not to worry though, because Wolverine will utilize a very basic "database control queue" specifically for this if you are using the `AddMarten().IntegrateWithWolverine()` integration or any database backed message persistence as a default if you are not using any kind of external messaging broker that supports Wolverine control queues.
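As a note on configuration, the `Balanced` durability mode this feature requires (see the requirements listed below) is set through ordinary options configuration. A minimal sketch:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Balanced is the default durability mode, but being
        // explicit here: projection/subscription distribution
        // requires it, so don't override it with Solo in production
        opts.Durability.Mode = DurabilityMode.Balanced;
    }).StartAsync();
```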
At the time of this writing, the Rabbit MQ and Azure Service Bus transport options both create a "control queue" for each executing Wolverine node that Wolverine can use for this communication in a more efficient way than the database backed control queue mechanism.

Other requirements:

* You cannot disable external transports with `StubAllExternalTransports()`
* `WolverineOptions.Durability.Mode` must be `Balanced`

If you are seeing any issues with timeouts due to the Wolverine load distribution, you can try:

1. Pre-generate any Marten types to speed up the "cold start" time
2. Use the `WolverineOptions.Durability.Mode = Solo` setting at development time
3. Use an external broker for faster communication between nodes

## With Ancillary Marten Stores

Wolverine can also distribute projections and subscriptions running in [ancillary stores](/guide/durability/marten/ancillary-stores). In this case, you do have to enable the Wolverine managed distribution on the main Marten store registration, but that setting applies to all known ancillary stores.
```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.Durability.HealthCheckPollingTime = 1.Seconds(); opts.Durability.CheckAssignmentPeriod = 1.Seconds(); opts.UseMessagePackSerialization(); opts.UseSharedMemoryQueueing(); opts.Services.AddMarten(m => { m.DisableNpgsqlLogging = true; m.Connection(Servers.PostgresConnectionString); m.DatabaseSchemaName = "csp2"; m.Projections.Add(ProjectionLifecycle.Async); m.Projections.Add(ProjectionLifecycle.Async); m.Projections.Add(ProjectionLifecycle.Async); }) .IntegrateWithWolverine(m => { // This makes Wolverine distribute the registered projections // and event subscriptions evenly across a running application // cluster m.UseWolverineManagedEventSubscriptionDistribution = true; }); opts.Services.AddSingleton(new OutputLoggerProvider(_output)); opts.Services.AddMartenStore(m => { m.DisableNpgsqlLogging = true; m.Connection(Servers.PostgresConnectionString); m.DatabaseSchemaName = "csp3"; m.Projections.Add(ProjectionLifecycle.Async); m.Projections.Add(ProjectionLifecycle.Async); m.Projections.Add(ProjectionLifecycle.Async); }).IntegrateWithWolverine(); opts.CodeGeneration.TypeLoadMode = TypeLoadMode.Auto; }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/azureservicebus/publishing.md --- # Publishing You can configure explicit subscription rules to Azure Service Bus queues with this usage: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // One way or another, you're probably pulling the Azure Service Bus // connection string out of configuration var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); // Connect to the broker in the simplest possible way opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision(); // Explicitly configure sending messages to a specific queue opts.PublishAllMessages().ToAzureServiceBusQueue("outgoing") // All the normal Wolverine 
// options you'd expect
.BufferedInMemory();
});

using var host = builder.Build();
await host.StartAsync();
```
snippet source | anchor

## Conventional Subscriber Configuration

In the case of publishing to a large number of queues, it may be beneficial to apply configuration to all the Azure Service Bus subscribers like this:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // One way or another, you're probably pulling the Azure Service Bus
    // connection string out of configuration
    var azureServiceBusConnectionString = builder
        .Configuration
        .GetConnectionString("azure-service-bus");

    // Connect to the broker in the simplest possible way
    opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision()

        // Apply default configuration to all Azure Service Bus subscribers
        // This can be overridden explicitly by any configuration for specific
        // sending/subscribing endpoints
        .ConfigureSenders(sender => sender.UseDurableOutbox());
});

using var host = builder.Build();
await host.StartAsync();
```
snippet source | anchor

Note that any of these settings would be overridden by specific configuration to a specific endpoint.
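For instance, here is a sketch of keeping the durable outbox as the conventional default while opting one specific queue out of it. The queue name `low-priority-audit` is hypothetical; the APIs are the same ones used in the samples above:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    var azureServiceBusConnectionString = builder
        .Configuration
        .GetConnectionString("azure-service-bus");

    opts.UseAzureServiceBus(azureServiceBusConnectionString).AutoProvision()

        // Conventional default: every Azure Service Bus sender
        // uses the durable outbox
        .ConfigureSenders(sender => sender.UseDurableOutbox());

    // Explicit endpoint configuration wins out over the convention,
    // so this one queue sends buffered in memory instead
    opts.PublishAllMessages().ToAzureServiceBusQueue("low-priority-audit")
        .BufferedInMemory();
});
```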
--- --- url: /guide/messaging/transports/gcp-pubsub/publishing.md --- # Publishing Configuring Wolverine subscriptions through GCP Pub/Sub topics is done with the `ToPubsubTopic()` extension method shown in the example below: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UsePubsub("your-project-id"); opts .PublishMessage() .ToPubsubTopic("outbound1"); opts .PublishMessage() .ToPubsubTopic("outbound2") .ConfigurePubsubTopic(options => { options.MessageRetentionDuration = Duration.FromTimeSpan(TimeSpan.FromMinutes(10)); }); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/rabbitmq/publishing.md --- # Publishing ## Publish Directly to a Queue In simple use cases, if you want to direct Wolverine to publish messages to a specific queue without worrying about an exchange or binding, you have this syntax: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Connect to an unsecured, local Rabbit MQ broker // at port 5672 opts.UseRabbitMq(); opts.PublishAllMessages().ToRabbitQueue("outgoing") .UseDurableOutbox(); // fine-tune the queue characteristics if Wolverine // will be governing the queue setup opts.PublishAllMessages().ToRabbitQueue("special", queue => { queue.IsExclusive = true; }); }).StartAsync(); ``` snippet source | anchor ## Publish to an Exchange To publish messages to a Rabbit MQ exchange with optional declaration of the exchange, queue, and binding objects, use this syntax: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Connect to an unsecured, local Rabbit MQ broker // at port 5672 opts.UseRabbitMq(); opts.PublishAllMessages().ToRabbitExchange("exchange1"); // fine-tune the exchange characteristics if Wolverine // will be governing the queue setup opts.PublishAllMessages().ToRabbitExchange("exchange2", e => { // Default is Fanout, so overriding that here e.ExchangeType = ExchangeType.Direct; // If you want, you can also 
// create bindings here too
e.BindQueue("queue1", "exchange2ToQueue1");
});
}).StartAsync();
```
snippet source | anchor

## Publish to a Routing Key

To publish messages directly to a known binding or routing key (and, confusingly enough, this actually works with queue names as well), use this syntax:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq(rabbit => { rabbit.HostName = "localhost"; })

            // I'm declaring an exchange, a queue, and the binding
            // key that we're referencing below.
            // This is NOT MANDATORY, but rather just allows Wolverine to
            // control the Rabbit MQ object lifecycle
            .DeclareExchange("exchange1", ex => { ex.BindQueue("queue1", "key1"); })

            // This will direct Wolverine to create any missing Rabbit MQ exchanges,
            // queues, or binding keys declared in the application at application
            // start up time
            .AutoProvision();

        opts.PublishAllMessages().ToRabbitExchange("exchange1");
    }).StartAsync();
```
snippet source | anchor

---

--- url: /guide/messaging/transports/sqs/publishing.md ---

# Publishing

Configuring subscriptions through Amazon SQS queues is done with the `ToSqsQueue()` extension method shown in the example below:

```cs
var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSqsTransport();

        opts.PublishMessage()
            .ToSqsQueue("outbound1")

            // Increase the outgoing message throughput, but at the cost
            // of strict ordering
            .MessageBatchMaxDegreeOfParallelism(Environment.ProcessorCount);

        opts.PublishMessage()
            .ToSqsQueue("outbound2").ConfigureQueueCreation(request =>
            {
                request.Attributes[QueueAttributeName.MaximumMessageSize] = "1024";
            });
    }).StartAsync();
```
snippet source | anchor

---

--- url: /guide/durability/efcore/domain-events.md ---

# Publishing Domain Events

::: info
This section is all about using the traditional .NET "Domain Events" approach commonly used with EF Core, but piping the domain events raised through Wolverine messaging.
:::

Wolverine's integration with EF Core also includes support for the typical "Domain Events" publishing that folks like to do with EF Core `DbContext` classes and some sort of `DomainEntity` [layer supertype](https://martinfowler.com/eaaCatalog/layerSupertype.html). See Jeremy's post [“Classic” .NET Domain Events with Wolverine and EF Core](https://jeremydmiller.com/2025/12/04/classic-net-domain-events-with-wolverine-and-ef-core/) for much more background.

Jumping right into an example, let's say that you like to use a layer supertype in your domain model that gives your `Entity` types a chance to "raise" domain events like this one:

```cs
// Of course, if you're into DDD, you'll probably
// use many more marker interfaces than I do here,
// but you do you and I'll do me in throwaway sample code
public abstract class Entity
{
    public List<object> Events { get; } = new();

    public void Publish(object @event)
    {
        Events.Add(@event);
    }
}
```
snippet source | anchor

Now, let's say we're building some kind of software project planning software (as if the world doesn't have enough "Jira but different" applications) where we'll have an entity like this one:

```cs
public class BacklogItem : Entity
{
    public Guid Id { get; private set; }
    public string Description { get; private set; }
    public virtual Sprint Sprint { get; private set; }
    public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;

    public void CommitTo(Sprint sprint)
    {
        Sprint = sprint;
        Publish(new BackLotItemCommitted(Id, sprint.Id));
    }
}
```
snippet source | anchor

Let's utilize this a little bit within a Wolverine handler, first with explicit code:

```cs
public static class CommitToSprintHandler
{
    public static void Handle(
        CommitToSprint command,

        // There's a naming convention here about how
        // Wolverine "knows" the id for the BacklogItem
        // from the incoming command
        [Entity] BacklogItem item,
        [Entity] Sprint sprint
    )
    {
        // This method would cause an event to be published within
        // the BacklogItem object here
        // that we need to gather up and relay to Wolverine later
        item.CommitTo(sprint);

        // Wolverine's transactional middleware handles
        // everything around SaveChangesAsync() and transactions
    }
}
```
snippet source | anchor

Now, let's add some Wolverine configuration to just make this pattern work:

```csharp
builder.Host.UseWolverine(opts =>
{
    // Setting up Sql Server-backed message storage
    // This requires a reference to Wolverine.SqlServer
    opts.PersistMessagesWithSqlServer(connectionString, "wolverine");

    // Set up Entity Framework Core as the support
    // for Wolverine's transactional middleware
    opts.UseEntityFrameworkCoreTransactions();

    // THIS IS A NEW API IN Wolverine 5.6!
    opts.PublishDomainEventsFromEntityFrameworkCore(x => x.Events);

    // Enrolling all local queues into the
    // durable inbox/outbox processing
    opts.Policies.UseDurableLocalQueues();
});
```

In the Wolverine configuration above, the EF Core transactional middleware now "knows" how to scrape out possible domain events from the active `DbContext.ChangeTracker` and publish them through Wolverine. Moreover, the [EF Core transactional middleware](/guide/durability/efcore/transactional-middleware) is doing all the operation ordering for you so that the events are enqueued as outgoing messages as part of the transaction and potentially persisted to the transactional inbox or outbox (depending on configuration) before the transaction is committed.

::: tip
To make this as clear as possible, this approach is completely reliant on the EF Core transactional middleware.
:::

Also note that this domain event "scraping" is supported and tested with the `IDbContextOutbox` service if you want to use this in application code outside of Wolverine message handlers or HTTP endpoints. If I were building a system that embeds domain event publishing directly in domain model entity classes, I would prefer this approach.
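Outside of a message handler, that `IDbContextOutbox` usage might look something like this rough sketch. The `OrdersDbContext` type, its `DbSet` properties, and the `CommitmentService` wrapper are all hypothetical stand-ins; only the `IDbContextOutbox<T>` service and `SaveChangesAndFlushMessagesAsync()` come from the Wolverine EF Core integration:

```csharp
// A rough sketch only: OrdersDbContext and its DbSets are hypothetical
public class CommitmentService
{
    private readonly IDbContextOutbox<OrdersDbContext> _outbox;

    public CommitmentService(IDbContextOutbox<OrdersDbContext> outbox)
    {
        _outbox = outbox;
    }

    public async Task CommitAsync(Guid itemId, Guid sprintId)
    {
        var item = await _outbox.DbContext.BacklogItems.FindAsync(itemId);
        var sprint = await _outbox.DbContext.Sprints.FindAsync(sprintId);

        // Raises the domain event on the entity itself
        item!.CommitTo(sprint!);

        // Saves the changes, scrapes the accumulated domain events out of
        // the ChangeTracker, and flushes the resulting messages through
        // the Wolverine outbox
        await _outbox.SaveChangesAndFlushMessagesAsync();
    }
}
```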
But, let’s talk about another option that will not require any changes to Wolverine…

## Relay Events from Entity to Wolverine Cascading Messages

In this approach, which I’m granting that some people won’t like at all, we’ll simply pipe the event messages from the domain entity right to Wolverine and utilize Wolverine’s [cascading message](/guide/handlers/cascading) feature. This time I’m going to change the `BacklogItem` entity class to something like this:

```csharp
public class BacklogItem
{
    public Guid Id { get; private set; }
    public string Description { get; private set; }
    public virtual Sprint Sprint { get; private set; }
    public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;

    // The exact return type isn't hugely important here
    public object[] CommitTo(Sprint sprint)
    {
        Sprint = sprint;
        return [new BackLotItemCommitted(Id, sprint.Id)];
    }
}
```

With the handler signature:

```csharp
public static class CommitToSprintHandler
{
    public static object[] Handle(
        CommitToSprint command,

        // There's a naming convention here about how
        // Wolverine "knows" the id for the BacklogItem
        // from the incoming command
        [Entity] BacklogItem item,
        [Entity] Sprint sprint
    )
    {
        return item.CommitTo(sprint);
    }
}
```

The approach above lets you make the handler a single pure function which is always great for unit testing, eliminates the need to do any customization of the `DbContext` type, makes it unnecessary to bother with any kind of `IEventPublisher` interface, and lets you keep the logic for what event messages should be raised completely in your domain model entity types. I’d also argue that this approach makes it more clear to later developers that “hey, additional messages may be published as part of handling the `CommitToSprint` command” and I think that’s invaluable.
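Because the handler is now a pure function, it can be exercised with no mocks, no `DbContext`, and no Wolverine runtime at all. A self-contained sketch with trimmed stand-ins for the sample types (the real ones live in your domain model):

```csharp
using System;

// Trimmed, self-contained stand-ins so this "test" runs with
// no EF Core or Wolverine references at all
var item = new BacklogItem();
var sprint = new Sprint();

var messages = CommitToSprintHandler.Handle(
    new CommitToSprint(item.Id, sprint.Id), item, sprint);

Console.WriteLine(messages.Length);                     // prints 1
Console.WriteLine(messages[0] is BackLotItemCommitted); // prints True

public record CommitToSprint(Guid ItemId, Guid SprintId);
public record BackLotItemCommitted(Guid ItemId, Guid SprintId);

public class Sprint
{
    public Guid Id { get; } = Guid.NewGuid();
}

public class BacklogItem
{
    public Guid Id { get; } = Guid.NewGuid();
    public Sprint? Sprint { get; private set; }

    // The entity decides which messages get raised...
    public object[] CommitTo(Sprint sprint)
    {
        Sprint = sprint;
        return [new BackLotItemCommitted(Id, sprint.Id)];
    }
}

public static class CommitToSprintHandler
{
    // ...and the handler just relays them as cascading messages
    public static object[] Handle(CommitToSprint command, BacklogItem item, Sprint sprint)
        => item.CommitTo(sprint);
}
```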
I’ll harp on this more later, but I think the traditional, MediatR-flavored approach to domain events from the first example at the top makes application code harder to reason about and therefore more buggy over time.

## Embedding IEventPublisher into the Entities

Lastly, let’s move to what I think is my least favorite approach, one that I will from this moment be recommending against for any JasperFx clients, but is now completely supported by Wolverine. Let’s use an `IEventPublisher` interface like this:

```csharp
// Just assume that this little abstraction
// eventually relays the event messages to Wolverine
// or whatever messaging tool you're using
public interface IEventPublisher
{
    void Publish<T>(T @event) where T : IDomainEvent;
}

// Using a Nullo just so you don't have potential
// NullReferenceExceptions
public class NulloEventPublisher : IEventPublisher
{
    public void Publish<T>(T @event) where T : IDomainEvent
    {
        // Do nothing.
    }
}

public abstract class Entity
{
    public IEventPublisher Publisher { get; set; } = new NulloEventPublisher();
}

public class BacklogItem : Entity
{
    public Guid Id { get; private set; } = Guid.CreateVersion7();
    public string Description { get; private set; }

    // ZOMG, I forgot how annoying ORMs are. Use a document database
    // and stop worrying about making things virtual just for lazy loading
    public virtual Sprint Sprint { get; private set; }

    public void CommitTo(Sprint sprint)
    {
        Sprint = sprint;
        Publisher.Publish(new BackLotItemCommitted(Id, sprint.Id));
    }
}
```

Now, on to a Wolverine implementation for this pattern. You’ll need to do just a couple of things.
First, add this line of configuration to Wolverine, and note there are no generic arguments here:

```csharp
// This will set you up to scrape out domain events in the
// EF Core transactional middleware using a special service
// I'm just about to explain
opts.PublishDomainEventsFromEntityFrameworkCore();
```

Now, build a real implementation of that `IEventPublisher` interface above:

```csharp
public class EventPublisher(OutgoingDomainEvents Events) : IEventPublisher
{
    public void Publish<T>(T e) where T : IDomainEvent
    {
        Events.Add(e);
    }
}
```

`OutgoingDomainEvents` is a service from the WolverineFx.EntityFrameworkCore Nuget that is registered as Scoped by the usage of the EF Core transactional middleware. Next, register your custom `IEventPublisher` with the Scoped lifecycle:

```csharp
opts.Services.AddScoped<IEventPublisher, EventPublisher>();
```

How do you wire up `IEventPublisher` to the domain entities getting loaded out of your EF Core `DbContext`? Frankly, I don’t want to know. Maybe a repository abstraction around your `DbContext` types? Dunno. I hate that kind of thing in code, but I perfectly trust *you* to do that and to not make me see that code.

What’s important is that within a message handler or HTTP endpoint, if you resolve the `IEventPublisher` through DI and use the EF Core transactional middleware, the domain events published to that interface will be piped correctly into Wolverine’s active messaging context. Likewise, if you are using `IDbContextOutbox`, the domain events published to `IEventPublisher` will be correctly piped to Wolverine if you:

1. Pull both `IEventPublisher` and `IDbContextOutbox` from the same scoped service provider (nested container in Lamar / StructureMap parlance)
2. Call `IDbContextOutbox.SaveChangesAndFlushMessagesAsync()`
So, we’re going to have to do some sleight of hand to keep your domain entities synchronous. Last note: in unit testing, you might use a stand-in “Spy” like this:

---

--- url: /guide/messaging/transports/rabbitmq/object-management.md ---

# Queue, Topic, and Binding Management

::: tip
Wolverine assumes that exchanges should be "fanout" unless explicitly configured otherwise
:::

Reusing a code sample from up above, the `AutoProvision()` declaration will direct Wolverine to create any missing Rabbit MQ [exchanges, queues, or bindings](https://www.rabbitmq.com/tutorials/amqp-concepts.html) declared in the application configuration at application bootstrapping time.

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq(rabbit => { rabbit.HostName = "localhost"; })

            // I'm declaring an exchange, a queue, and the binding
            // key that we're referencing below.
            // This is NOT MANDATORY, but rather just allows Wolverine to
            // control the Rabbit MQ object lifecycle
            .DeclareExchange("exchange1", ex => { ex.BindQueue("queue1", "key1"); })

            // This will direct Wolverine to create any missing Rabbit MQ exchanges,
            // queues, or binding keys declared in the application at application
            // start up time
            .AutoProvision();

        opts.PublishAllMessages().ToRabbitExchange("exchange1");
    }).StartAsync();
```
snippet source | anchor

At development time -- or occasionally in production systems -- you may want to have the messaging queues purged of any old messages at application startup time.
Wolverine supports that with Rabbit MQ using the `AutoPurgeOnStartup()` declaration:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq()
            .AutoPurgeOnStartup();
    }).StartAsync();
```
snippet source | anchor

Or you can be more selective and only have certain queues of volatile messages purged at startup as shown below:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq()
            .DeclareQueue("queue1")
            .DeclareQueue("queue2", q => q.PurgeOnStartup = true);
    }).StartAsync();
```
snippet source | anchor

Wolverine's Rabbit MQ integration also supports the [Oakton stateful resource](https://jasperfx.github.io/oakton/guide/host/resources.html) model, so you can make a generic declaration to auto-provision the Rabbit MQ objects at startup time (as well as any other stateful Wolverine resources like envelope storage) with the Oakton declarations as shown in the setup below that uses the `AddResourceSetupOnStartup()` declaration:

```cs
var builder = WebApplication.CreateBuilder(args);
builder.Host.ApplyJasperFxExtensions();

builder.Host.UseWolverine(opts =>
{
    // I'm setting this up to publish to the same process
    // just to see things work
    opts.PublishAllMessages()
        .ToRabbitExchange("issue_events", exchange => exchange.BindQueue("issue_events"))
        .UseDurableOutbox();

    opts.ListenToRabbitQueue("issue_events").UseDurableInbox();

    opts.UseRabbitMq(factory =>
    {
        // Just connecting with defaults, but showing
        // how you *could* customize the connection to Rabbit MQ
        factory.HostName = "localhost";
        factory.Port = 5672;
    })

        // Even when calling AddResourceSetupOnStartup(), we still
        // need to AutoProvision to ensure any declared queues, exchanges, or
        // bindings with the Rabbit MQ broker to be built as part of bootstrapping time
        .AutoProvision();
});

// This is actually important, this directs
// the app to build out all declared Postgresql and
// Rabbit MQ objects on start up if they do not already
// exist
builder.Services.AddResourceSetupOnStartup();

// Just pumping out a bunch of messages so we can see
// statistics
builder.Services.AddHostedService();

builder.Services.AddMarten(opts =>
{
    // I think you would most likely pull the connection string from
    // configuration like this:
    // var martenConnectionString = builder.Configuration.GetConnectionString("marten");
    // opts.Connection(martenConnectionString);
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "issues";

    // Just letting Marten know there's a document type
    // so we can see the tables and functions created on startup
    opts.RegisterDocumentType();

    // I'm putting the inbox/outbox tables into a separate "issue_service" schema
}).IntegrateWithWolverine(x => x.MessageStorageSchemaName = "issue_service");

var app = builder.Build();

app.MapGet("/", () => "Hello World!");

// Actually important to return the exit code here!
return await app.RunJasperFxCommands(args);
```
snippet source | anchor

Note that this stateful resource model is also available at the command line for deploy time management.

## Runtime Declaration

From a user request, there are some extension methods in the WolverineFx.RabbitMQ Nuget off of `IWolverineRuntime` that enable you to declare new exchanges, queues, and bindings at runtime, and also to "unbind" a queue from an exchange.
That syntax is shown below:

```cs
// _host is an IHost
var runtime = _host.Services.GetRequiredService<IWolverineRuntime>();

// Declare new Exchanges, Queues, and Bindings at runtime
runtime.ModifyRabbitMqObjects(o =>
{
    var queue = o.DeclareQueue(queueName);
    var exchange = o.DeclareExchange(exchangeName);
    queue.BindExchange(exchange.ExchangeName, bindingKey);
});

// Unbind a queue from an exchange
runtime.UnBindRabbitMqQueue(queueName, exchangeName, bindingKey);
```
snippet source | anchor

## Quorum Queues or Streams

Wolverine can utilize [Rabbit MQ Quorum Queues](https://www.rabbitmq.com/docs/quorum-queues) or [Rabbit MQ Streams](https://www.rabbitmq.com/docs/streams), but ["Classic" queues](https://www.rabbitmq.com/docs/classic-queues) are the default. The only real difference as far as Wolverine is concerned is how the queues are declared to Rabbit MQ itself. Wolverine's internals are largely not impacted otherwise. Here are your options for configuring one or many queues as opting into being a "Quorum Queue" or a "Stream":

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    opts
        .UseRabbitMq(builder.Configuration.GetConnectionString("rabbit"))

        // You can configure the queue type for declaration with this
        // usage as well
        .DeclareQueue("stream", q => q.QueueType = QueueType.stream)

        // Use quorum queues by default as a policy
        .UseQuorumQueues()

        // Or instead use streams
        .UseStreamsAsQueues();

    opts.ListenToRabbitQueue("quorum1")

        // Override the queue type in declarations for a
        // single queue, and the explicit configuration will win
        // out over any policy or convention
        .QueueType(QueueType.quorum);
});
```
snippet source | anchor

There are just a few things to know:

* Wolverine's internal reply or control queues will still be declared as "classic" so they can be non-durable
* Streams cannot be purged, and Wolverine ignores the `AutoPurgeOnStartup()` setting for streams

## Inside of Wolverine Extensions

If you need to declare Rabbit MQ queues,
exchanges, or bindings within a [Wolverine extension](/guide/extensions), you can quickly access and make additions to the Rabbit MQ integration with your Wolverine application like so: ```cs public class MyModuleExtension : IWolverineExtension { public void Configure(WolverineOptions options) { options.ConfigureRabbitMq() // Make any Rabbit Mq configuration or declare // additional Rabbit Mq options through the normal // syntax .DeclareExchange("my-module") .DeclareQueue("my-queue"); } } ``` snippet source | anchor ## Identifier Prefixing for Shared Brokers Because Rabbit MQ is a centralized broker model, you may need to share a single broker between multiple developers or development environments. To isolate the broker objects (queues, exchanges, bindings) created by each environment, you can use the `PrefixIdentifiers()` method to automatically prepend a prefix to every queue and exchange name created by Wolverine: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRabbitMq() .AutoProvision() // Prefix all queue and exchange names with "dev-john-" .PrefixIdentifiers("dev-john"); // A queue named "orders" becomes "dev-john-orders" opts.ListenToRabbitQueue("orders"); }).StartAsync(); ``` You can also use `PrefixIdentifiersWithMachineName()` as a convenience to use the current machine name as the prefix, which is often a good default for local development: ```csharp opts.UseRabbitMq() .AutoProvision() .PrefixIdentifiersWithMachineName(); ``` The default delimiter between the prefix and the original name is `-` for Rabbit MQ (e.g., `dev-john-orders`). --- --- url: /tutorials/railway-programming.md --- # Railway Programming with Wolverine (Kind Of) ::: tip I'm sure a grizzled, experienced developer in your life has already told you this many times, but throwing and catching `Exceptions` in .NET code is pretty expensive in terms of performance. 
:::

[Railway Programming](https://fsharpforfunandprofit.com/rop/) is an idea that came out of the F# community as a way to handle "sad path" cases without resorting to throwing .NET `Exceptions` for flow control. It works by chaining together functions in such a way that it's relatively easy to abort a workflow if preliminary steps are invalid. As with just about anything in software development, Railway Programming can be abused or just not be terribly ideal in certain areas. Also see [Against Railway-Oriented Programming](https://fsharpforfunandprofit.com/posts/against-railway-oriented-programming/) from its originator about where it's not a great fit.

Most .NET implementations of Railway Programming that this author has seen involve using some kind of custom `Result` type that denotes in a standard way if the processing should continue or stop. [Andrew Lock](https://andrewlock.net/about/) wrote about doing this in his series [Working with the result pattern](https://andrewlock.net/working-with-the-result-pattern-part-1-replacing-exceptions-as-control-flow/).

::: warning
Some teams have tried to do Railway Programming by using a mediator library where each message handler returns some kind of custom `Result` value, then try to chain complex workflows by calling a separate message handler for each step. The Wolverine team **very strongly recommends against this approach** as it creates a lot of code ceremony and flat out noise code while detracting from your ability to reason about the code in your system. That approach can very easily create severe performance problems by being "chatty" in its interactions with backing databases and generally making it hard for teams to even see the relationship between system inputs and what database calls are being made.
:::

Wolverine has some direct support for a quasi-Railway Programming approach by moving validation or data loading steps prior to the main message handler or HTTP endpoint logic. Let's jump into a quick sample that works with either message handlers or HTTP endpoints using the built in [HandlerContinuation](/guide/handlers/middleware.html#conditionally-stopping-the-message-handling) enum:

```csharp
public static class ShipOrderHandler
{
    // This would be called first
    public static async Task<(HandlerContinuation, Order?, Customer?)> LoadAsync(ShipOrder command, IDocumentSession session)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);
        if (order == null)
        {
            return (HandlerContinuation.Stop, null, null);
        }

        var customer = await session.LoadAsync<Customer>(command.CustomerId);

        return (HandlerContinuation.Continue, order, customer);
    }

    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next
        yield return new MailOvernight(order.Id);
    }
}
```

By naming convention (but you can override the method naming with attributes as you see fit), Wolverine will try to generate code that calls methods named `Before/Validate/Load(Async)` before the main message handler method or the HTTP endpoint method. You can use this [compound handler](/guide/handlers/#compound-handlers) approach for set up work like loading data required by the business logic in the main method, or, as in this case, for validation logic that can stop further processing based on failed validation, data requirements, or system state. Some Wolverine users like using these methods to keep the main methods relatively simple and focused on the "happy path" business logic in pure functions that are easier to unit test in isolation.
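The continuation doesn't have to be returned as part of a tuple either. Here is a minimal sketch of a `Before` filter standing alone; the `CancelOrder` command and its guard condition are hypothetical, while `HandlerContinuation` is the same Wolverine enum used above:

```csharp
public record CancelOrder(Guid OrderId);

public static class CancelOrderHandler
{
    // Runs first by naming convention. Returning Stop quietly
    // halts all further processing of this message
    public static HandlerContinuation Before(CancelOrder command)
        => command.OrderId == Guid.Empty
            ? HandlerContinuation.Stop
            : HandlerContinuation.Continue;

    public static void Handle(CancelOrder command)
    {
        // the "happy path" work goes here
    }
}
```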
By returning a `HandlerContinuation` value either by itself or as part of a tuple returned by a `Before`, `Validate`, or `LoadAsync` method, you can direct Wolverine to stop all other processing. You have more specialized ways of doing that in HTTP endpoints by using the `ProblemDetails` specification to stop processing like this example that uses a `Validate()` method to potentially stop processing with a descriptive 400 and error message: ```cs public record CategoriseIncident( IncidentCategory Category, Guid CategorisedBy, int Version ); public static class CategoriseIncidentEndpoint { // This is Wolverine's form of "Railway Programming" // Wolverine will execute this before the main endpoint, // and stop all processing if the ProblemDetails is *not* // "NoProblems" public static ProblemDetails Validate(Incident incident) { return incident.Status == IncidentStatus.Closed ? new ProblemDetails { Detail = "Incident is already closed" } // All good, keep going! : WolverineContinue.NoProblems; } // This tells Wolverine that the first "return value" is NOT the response // body [EmptyResponse] [WolverinePost("/api/incidents/{incidentId:guid}/category")] public static IncidentCategorised Post( // the actual command CategoriseIncident command, // Wolverine is generating code to look up the Incident aggregate // data for the event stream with this id [Aggregate("incidentId")] Incident incident) { // This is a simple case where we're just appending a single event to // the stream. return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy); } } ``` snippet source | anchor The value `WolverineContinue.NoProblems` tells Wolverine that everything is good, full speed ahead. Anything else will write the `ProblemDetails` value out to the response, return a 400 status code (or whatever you decide to use), and stop processing. Returning a `ProblemDetails` object hopefully makes these filter methods easy to unit test themselves. 
You can also use the AspNetCore `IResult` as another formally supported "result" type in these filter methods like this shown below: ```cs public static class ExamineFirstHandler { public static bool DidContinue { get; set; } public static IResult Before([Entity] Todo2 todo) { return todo != null ? WolverineContinue.Result() : Results.Empty; } [WolverinePost("/api/todo/examinefirst")] public static void Handle(ExamineFirst command) => DidContinue = true; } ``` snippet source | anchor In this case, the "special" value `WolverineContinue.Result()` tells Wolverine to keep going, otherwise, Wolverine will execute the `IResult` returned from one of these filter methods and stop all other processing for the HTTP request. --- --- url: /guide/handlers/rate-limiting.md --- # Rate Limiting Wolverine can enforce distributed rate limits for message handlers by re-queuing and pausing the listener when limits are exceeded. This is intended for external API usage limits that must be respected across multiple worker nodes. ## Message Type Rate Limits Use `RateLimit` on a message type policy to set a default limit and optional time-of-day overrides: ```cs using Wolverine; using Wolverine.RateLimiting; opts.Policies.ForMessagesOfType() .RateLimit(RateLimit.PerMinute(900), schedule => { schedule.TimeZone = TimeZoneInfo.Utc; schedule.AddWindow(new TimeOnly(8, 0), new TimeOnly(17, 0), RateLimit.PerMinute(400)); }); ``` The middleware enforces the limit before handler execution. If the limit is exceeded, Wolverine re-schedules the message and pauses the listener for the computed delay. ## Endpoint Rate Limits You can also rate limit an entire listening endpoint: ```cs using Wolverine; using Wolverine.RateLimiting; opts.RateLimitEndpoint(new Uri("rabbitmq://queue/critical"), RateLimit.PerMinute(400)); ``` Endpoint limits take precedence over message type limits when both are configured. ## Distributed Store Rate limiting relies on a shared store. 
By default, Wolverine registers an in-memory store for tests and local development. For production, register a shared store implementation. ### SQL Server ```cs using Wolverine; using Wolverine.SqlServer; opts.PersistMessagesWithSqlServer(connectionString) .UseSqlServerRateLimiting(); ``` This uses the Wolverine message storage schema by default (same schema as the inbox/outbox tables). ## Scheduling Requirements Rate limiting re-schedules messages through Wolverine's scheduling pipeline. For external listeners, Wolverine requires durable inboxes to ensure rescheduled messages are persisted correctly. ```cs opts.ListenToRabbitQueue("critical").UseDurableInbox(); // or: opts.Policies.UseDurableInboxOnAllListeners(); ``` --- --- url: /guide/durability/ravendb.md --- # RavenDb Integration Wolverine supports a [RavenDb](https://ravendb.net/) backed message persistence strategy option as well as RavenDb-backed transactional middleware and saga persistence. To get started, add the `WolverineFx.RavenDb` dependency to your application: ```bash dotnet add package WolverineFx.RavenDb ``` and in your application, tell Wolverine to use RavenDb for message persistence: ```cs var builder = Host.CreateApplicationBuilder(); // You'll need a reference to RavenDB.DependencyInjection // for this one builder.Services.AddRavenDbDocStore(raven => { // configure your RavenDb connection here }); builder.UseWolverine(opts => { // That's it, nothing more to see here opts.UseRavenDbPersistence(); // The RavenDb integration supports basic transactional // middleware just fine opts.Policies.AutoApplyTransactions(); }); // continue with your bootstrapping... ``` snippet source | anchor Also see [RavenDb's own documentation](https://ravendb.net/docs/article-page/6.0/csharp/start/guides/aws-lambda/existing-project) for bootstrapping RavenDb inside of a .NET application. 
## Message Persistence

The [durable inbox and outbox](/guide/durability/) options in Wolverine are completely supported with RavenDb as the persistence mechanism. This includes scheduled execution (and retries), dead letter queue storage using the `DeadLetterMessage` collection, and the ability to replay designated messages in the dead letter queue storage.

## Saga Persistence

The RavenDb integration can serve as a [Wolverine Saga persistence mechanism](/guide/durability/sagas). The only limitation is that your `Saga` types can *only* use strings as the identity for the `Saga`.

```cs
public class Order : Saga
{
    // Just use this for the identity
    // of RavenDb backed sagas
    public string Id { get; set; }

    // Handle and Start methods...
}
```
snippet source | anchor

There's nothing else to do. If the RavenDb integration is applied to your Wolverine application, it will kick in for saga persistence as long as your `Saga` type has a string identity property.

## Transactional Middleware

::: warning
The RavenDb transactional middleware **only** supports the RavenDb `IAsyncDocumentSession` service
:::

The normal configuration options for transactional middleware in Wolverine apply to the RavenDb backend, so either mark handlers explicitly with `[Transactional]` like so:

```cs
public class CreateDocCommandHandler
{
    [Transactional]
    public async Task Handle(CreateDocCommand message, IAsyncDocumentSession session)
    {
        await session.StoreAsync(new FakeDoc { Id = message.Id });
    }
}
```
snippet source | anchor

Or opt into the middleware conventionally (the approach folks tend to use most often):

```csharp
builder.UseWolverine(opts =>
{
    // That's it, nothing more to see here
    opts.UseRavenDbPersistence();

    // The RavenDb integration supports basic transactional
    // middleware just fine
    opts.Policies.AutoApplyTransactions();
});
```

and the transactional middleware will kick in on any message handler or HTTP endpoint that uses the RavenDb `IAsyncDocumentSession` like this handler signature:
```cs
public class AlternativeCreateDocCommandHandler
{
    // Auto transactions would kick in just because of the dependency
    // on IAsyncDocumentSession
    public async Task Handle(CreateDocCommand message, IAsyncDocumentSession session)
    {
        await session.StoreAsync(new FakeDoc { Id = message.Id });
    }
}
```
snippet source | anchor

The transactional middleware will also be applied for any usage of the `RavenOps` [side effects](/guide/handlers/side-effects) model for Wolverine's RavenDb integration:

```cs
public record RecordTeam(string Team, int Year);

public static class RecordTeamHandler
{
    public static IRavenDbOp Handle(RecordTeam command)
    {
        return RavenOps.Store(new Team { Id = command.Team, YearFounded = command.Year });
    }
}
```
snippet source | anchor

## System Control Queues

The RavenDb integration for Wolverine does not yet come with a built-in database control queue mechanism, so you will need to add that from external messaging brokers as in this example using Azure Service Bus:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // One way or another, you're probably pulling the Azure Service Bus
    // connection string out of configuration
    var azureServiceBusConnectionString = builder
        .Configuration
        .GetConnectionString("azure-service-bus")!;

    // Connect to the broker in the simplest possible way
    opts.UseAzureServiceBus(azureServiceBusConnectionString)
        .AutoProvision()

        // This enables Wolverine to use temporary Azure Service Bus
        // queues created at runtime for communication between
        // Wolverine nodes
        .EnableWolverineControlQueues();
});
```
snippet source | anchor

For local development, there is also an option to let Wolverine just use its TCP transport as a control endpoint with this configuration option:

```csharp
WolverineOptions.UseTcpForControlEndpoint();
```

With this option, Wolverine looks for an unused port and assigns that port as the control listener for the node being bootstrapped.
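Putting that together with the RavenDb bootstrapping shown earlier, a local development setup might look like the sketch below. Combining the calls this way is an assumption about usage, not a verbatim sample; the Azure Service Bus control queues above would be the production path:

```cs
var builder = Host.CreateApplicationBuilder();

// You'll need a reference to RavenDB.DependencyInjection for this one
builder.Services.AddRavenDbDocStore(raven =>
{
    // configure your RavenDb connection here
});

builder.UseWolverine(opts =>
{
    opts.UseRavenDbPersistence();
    opts.Policies.AutoApplyTransactions();

    // Local development only: let Wolverine find an unused port
    // and use its TCP transport as the node control endpoint
    opts.UseTcpForControlEndpoint();
});
```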
## RavenOps Side Effects

The `RavenOps` static class can be used as a convenience for RavenDb integration with Wolverine:

```cs
/// <summary>
/// Side effect helper class for Wolverine's integration with RavenDb
/// </summary>
public static class RavenOps
{
    /// <summary>
    /// Store a new RavenDb document
    /// </summary>
    public static IRavenDbOp Store<T>(T document) => new StoreDoc<T>(document);

    /// <summary>
    /// Delete this document in RavenDb
    /// </summary>
    public static IRavenDbOp DeleteDocument(object document) => new DeleteByDoc(document);

    /// <summary>
    /// Delete a RavenDb document by its id
    /// </summary>
    public static IRavenDbOp DeleteById(string id) => new DeleteById(id);
}
```
snippet source | anchor

See the Wolverine [side effects](/guide/handlers/side-effects) model for more information. This integration also includes full support for the [storage action side effects](/guide/handlers/side-effects.html#storage-side-effects) model when using RavenDb with Wolverine.

## Entity Attribute Loading

The RavenDb integration is able to completely support the [Entity attribute usage](/guide/handlers/persistence.html#automatically-loading-entities-to-method-parameters).

---

--- url: /guide/messaging/transports/sqs/message-attributes.md ---

# Receiving SQS Message Attributes

Here’s the deal: Amazon SQS won’t just give you the user-defined message attributes for free; you have to explicitly ask for them in the receive request. Up until now, Wolverine never set that field, which meant any custom attributes from upstream were effectively invisible. Now you can opt in to request those attributes. This is **interop-only**: Wolverine will ask SQS for the attributes if you configure it, but it’s still up to your own `ISqsEnvelopeMapper` to decide what to do with them.

::: tip
Built-in mappers (`DefaultSqsEnvelopeMapper`, `RawJsonSqsEnvelopeMapper`) don’t touch message attributes. If you need them, you’ll need your own mapper.
:::

## Opting in

You can request *all* user-defined attributes:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSqsTransport()
            .ConfigureSenders(s => s.InteropWith(new CustomSqsMapper()));

        opts.ListenToSqsQueue("incoming", queue =>
        {
            // Ask SQS for all user-defined attributes
            queue.MessageAttributeNames = new List<string> { "All" };
        });
    }).StartAsync();
```
snippet source | anchor

Or just the attributes you actually care about:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSqsTransport()
            .ConfigureSenders(s => s.InteropWith(new CustomSqsMapper()));

        opts.ListenToSqsQueue("incoming", queue =>
        {
            // Ask only for specific attributes
            queue.MessageAttributeNames = new List<string> { "wolverineId", "jasperId" };
        });
    }).StartAsync();
```
snippet source | anchor

Once you’ve opted in, those attributes are available in the dictionary passed to `ISqsEnvelopeMapper.ReadEnvelopeData`. From there, you can stash them in `Envelope.Headers`, set correlation IDs, or just ignore them.

## Things to know

* If `MessageAttributeNames` is `null` or empty, nothing changes (this is the default).
* `"All"` asks SQS for every user-defined attribute.
* Pulling in lots of attributes increases your payload size. Use this only when you need it.
* This affects **receiving only**. Sending attributes is still a job for your custom mapper.
* System attributes (`MessageSystemAttributeNames`) are a different story and are not part of this feature.

::: info
That’s it. If you’ve already got a custom mapper, you can now wire in SQS attributes directly without having to bend over backwards with the AWS SDK.
:::

---

--- url: /guide/handlers/return-values.md ---

# Return Values

The valid return values for Wolverine handlers are:

| Return Type | Description |
|-------------|-------------|
| `void` | Synchronous methods because hey, who wants to litter their code with `Task.CompletedTask` every which way? |
| `Task` | If you need to do asynchronous work |
| `ValueTask` | If you need to *maybe* do asynchronous work with other people's APIs |
| `IEnumerable<object>` | Publishes 0 to many cascading messages |
| `IAsyncEnumerable<object>` | Asynchronous method that will lead to 0 to many cascading messages |
| Implements `ISideEffect` | See [Side Effects](/guide/handlers/side-effects) for more information |
| `OutgoingMessages` | Special collection type that is treated as [cascading messages](/guide/handlers/cascading) |
| Inherits from `Saga` | Creates a new [stateful saga](/guide/durability/sagas) |
| *Your message type* | By returning another type, Wolverine treats the return value as a "cascaded" message to publish |

In all cases above, if the handler method is asynchronous using either `Task<T>` or `ValueTask<T>`, the `T` is the return value, with the same behavior as the synchronous `T` would have. Wolverine also supports [Tuple](https://learn.microsoft.com/en-us/dotnet/api/system.tuple?view=net-7.0) responses, in which case every single item in a tuple `(T, T1, T2)` is an individual return value that Wolverine treats independently.
Here's an example from the saga support: a message handler that returns both a new `Order` saga to be persisted and a separate `OrderTimeout` to be published as a cascaded message:

```cs
// This method would be called when a StartOrder message arrives
// to start a new Order
public static (Order, OrderTimeout) Start(StartOrder order, ILogger logger)
{
    logger.LogInformation("Got a new order with id {Id}", order.OrderId);

    // creating a timeout message for the saga
    return (new Order{Id = order.OrderId}, new OrderTimeout(order.OrderId));
}
```
snippet source | anchor

## Custom Return Value Handling

It's actually possible to create custom conventions in Wolverine for how different return types are utilized in the generated code Wolverine wraps around message handler methods.

::: info
You can achieve exactly what this sample demonstrates by just implementing the `ISideEffect` interface from `WriteFile` without having to write your own policy or even having to know very much about Wolverine internals to accomplish the isolation of the file writing side effect.
:::

For an example, let's say that you want to isolate the [side effect](https://en.wikipedia.org/wiki/Side_effect_\(computer_science\)) of writing out file contents from your handler methods by returning a custom return value called `WriteFile`:

```cs
// This has to be public btw
public record WriteFile(string Path, string Contents)
{
    public Task WriteAsync()
    {
        return File.WriteAllTextAsync(Path, Contents);
    }
}
```
snippet source | anchor

```cs
// ISideEffect is a Wolverine marker interface
public class WriteFile : ISideEffect
{
    public string Path { get; }
    public string Contents { get; }

    public WriteFile(string path, string contents)
    {
        Path = path;
        Contents = contents;
    }

    // Wolverine will call this method.
    public Task ExecuteAsync(PathSettings settings)
    {
        if (!Directory.Exists(settings.Directory))
        {
            Directory.CreateDirectory(settings.Directory);
        }

        return File.WriteAllTextAsync(Path, Contents);
    }
}
```
snippet source | anchor

And now, let's teach Wolverine to call the `WriteAsync()` method on each `WriteFile` that is returned from a message handler at runtime instead of Wolverine using the default policy of treating it as a cascaded message. To do that, I'm going to write a custom `IChainPolicy` like so:

```cs
internal class WriteFilePolicy : IChainPolicy
{
    // IChain is a Wolverine model to configure the code generation of
    // a message or HTTP handler and the core model for the application
    // of middleware
    public void Apply(IReadOnlyList<IChain> chains, GenerationRules rules, IServiceContainer container)
    {
        var method = ReflectionHelper.GetMethod<WriteFile>(x => x.WriteAsync());

        // Check out every message and/or http handler:
        foreach (var chain in chains)
        {
            var writeFiles = chain.ReturnVariablesOfType<WriteFile>();
            foreach (var writeFile in writeFiles)
            {
                // This is telling Wolverine to handle any return value
                // of WriteFile by calling its WriteAsync() method
                writeFile.UseReturnAction(_ =>
                {
                    // This is important, return a separate MethodCall
                    // object for each individual WriteFile variable
                    return new MethodCall(typeof(WriteFile), method!)
                    {
                        Target = writeFile
                    };
                });
            }
        }
    }
}
```
snippet source | anchor

and lastly, I'll register that policy in my Wolverine application at configuration time:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.Add<WriteFilePolicy>();
    }).StartAsync();
```
snippet source | anchor

---

--- url: /guide/http/routing.md ---

# Routing

::: warning
The route argument to method name matching is case-sensitive.
:::

Wolverine HTTP endpoints need to be decorated with one of the `[WolverineVerb("route")]` attributes that expresses the routing argument path in standard ASP.Net Core syntax (i.e., the same as when using MVC Core or Minimal API).
If a parameter argument to the HTTP handler method *exactly matches* a route argument, Wolverine will treat that as a route argument and pass the route argument value at runtime from ASP.Net Core to your handler method. To make that concrete, consider this simple case from the test suite:

```cs
[WolverineGet("/name/{name}")]
public static string SimpleStringRouteArgument(string name)
{
    return $"Name is {name}";
}
```
snippet source | anchor

In the sample above, the `name` argument will be the value of the route argument at runtime. Here's another example, but this time using a numeric value:

```cs
[WolverineGet("/age/{age}")]
public static string IntRouteArgument(int age)
{
    return $"Age is {age}";
}
```
snippet source | anchor

The following code snippet from `WolverineFx.Http` itself shows the *native .NET* route parameter types that are valid at this time:

```cs
public static readonly Dictionary<Type, string> TypeOutputs = new()
{
    { typeof(bool), "bool" },
    { typeof(byte), "byte" },
    { typeof(sbyte), "sbyte" },
    { typeof(char), "char" },
    { typeof(decimal), "decimal" },
    { typeof(float), "float" },
    { typeof(short), "short" },
    { typeof(int), "int" },
    { typeof(double), "double" },
    { typeof(long), "long" },
    { typeof(ushort), "ushort" },
    { typeof(uint), "uint" },
    { typeof(ulong), "ulong" },
    { typeof(Guid), typeof(Guid).FullName! },
    { typeof(DateTime), typeof(DateTime).FullName! },
    { typeof(DateTimeOffset), typeof(DateTimeOffset).FullName! },
    { typeof(DateOnly), typeof(DateOnly).FullName! }
};
```
snippet source | anchor

::: warning
Wolverine will return a 404 status code if a route parameter cannot be correctly parsed. So passing "ABC" into what is expected to be an integer will result in a 404 response.
:::

## Strong Typed Identifiers

Wolverine.HTTP can support any type as a route argument that implements a `TryParse()` method.
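As a hand-rolled illustration of that pattern, here's a minimal sketch of a custom identifier struct. This exact type is not from the Wolverine docs; it just shows the kind of static `TryParse()` shape that tools like Vogen and StronglyTypedId generate for you:

```cs
public readonly struct CustomerId
{
    public Guid Value { get; }

    public CustomerId(Guid value) => Value = value;

    // The conventional static TryParse signature that makes a type
    // usable as a route argument
    public static bool TryParse(string? text, out CustomerId id)
    {
        if (Guid.TryParse(text, out var guid))
        {
            id = new CustomerId(guid);
            return true;
        }

        id = default;
        return false;
    }
}
```

With that in place, a `CustomerId` parameter whose name matches a route argument would be parsed from the raw route segment, with an unparseable value producing the 404 behavior described above.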
At this point, both the [Vogen](https://github.com/SteveDunn/Vogen) and [StronglyTypedId](https://github.com/andrewlock/StronglyTypedId) tools do this for you, and value types generated by these tools are legal route argument variables for Wolverine.HTTP now. As an example from the Wolverine tests, let's say you have an identity type like this sample that uses StronglyTypedId:

```cs
[StronglyTypedId(Template.Guid)]
public readonly partial struct LetterId;
```
snippet source | anchor

You can use the `LetterId` type as a route argument as this example shows:

```cs
[WolverineGet("/sti/aggregate/longhand/{id}")]
public static ValueTask<StrongLetterAggregate> Handle2(LetterId id, IDocumentSession session)
    => session.Events.FetchLatest<StrongLetterAggregate>(id.Value);

// This is an equivalent to the endpoint above
[WolverineGet("/sti/aggregate/{id}")]
public static StrongLetterAggregate Handle(
    [ReadAggregate] StrongLetterAggregate aggregate) => aggregate;
```
snippet source | anchor

## Route Name

You can add a name to the ASP.Net route with this property that is on all of the route definition attributes:

```cs
[WolverinePost("/named/route", RouteName = "NamedRoute")]
public string Post()
{
    return "Hello";
}
```
snippet source | anchor

---

--- url: /guide/runtime.md ---

# Runtime Architecture

::: info
Wolverine makes absolutely no differentiation between logical [events and commands](https://codeopinion.com/commands-events-whats-the-difference) within your system. To Wolverine, everything is just a message.
::: The two key parts of a Wolverine application are messages: ```cs // A "command" message public record DebitAccount(long AccountId, decimal Amount); // An "event" message public record AccountOverdrawn(long AccountId); ``` snippet source | anchor And the message handling code for the messages, which in Wolverine's case just means a function or method that accepts the message type as its first argument like so: ```cs public static class DebitAccountHandler { public static void Handle(DebitAccount account) { Console.WriteLine($"I'm supposed to debit {account.Amount} from account {account.AccountId}"); } } ``` snippet source | anchor ## Invoking a Message Inline At runtime, you can use Wolverine to invoke the message handling for a message *inline* in the current executing thread with Wolverine effectively acting as a mediator: ![Invoke Wolverine Handler](/invoke-handler.png) It's a bit more complicated than that though, as the inline invocation looks like this simplified sequence diagram: ![Invoke a Message Inline](/invoke-message-sequence-diagram.png) As you can hopefully see, even the inline invocation is adding some value beyond merely "mediating" between the caller and the actual message handler by: 1. Wrapping Open Telemetry tracing and execution metrics around the execution 2. Correlating the execution in logs to the original calling activity 3. Providing some inline retry [error handling policies](/guide/handlers/error-handling) for transient errors 4. Publishing [cascading messages](/guide/handlers/cascading) from the message execution only *after* the execution succeeds as an in memory outbox ## Asynchronous Messaging ::: info You can, of course, happily publish messages to an external queue and consume those very same messages later in the same process. 
::: Wolverine supports asynchronous messaging through both its [local, in-process queueing](/guide/messaging/transports/local) mechanism and through external messaging brokers like Kafka, Rabbit MQ, Azure Service Bus, or Amazon SQS. The local queueing is a valuable way to add background processing to a system, and can even be durably backed by a database with full-blown transactional inbox/outbox support to retain in process work across unexpected system shutdowns or restarts. What the local queue cannot do is share work across a cluster of running nodes. In other words, you will have to use external messaging brokers to achieve any kind of [competing consumer](https://www.enterpriseintegrationpatterns.com/patterns/messaging/CompetingConsumers.html) work sharing for better scalability. ::: info Wolverine listening agents all support competing consumers out of the box for work distribution across a node cluster -- unless you are purposely opting into [strictly ordered listeners](/guide/messaging/listeners.html#strictly-ordered-listeners) where only one node is allowed to handle messages from a given queue or subscription. ::: The other main usage of Wolverine is to send messages from your current process to another process through some kind of external transport like a Rabbit MQ/Azure Service Bus/Amazon SQS queue and have Wolverine execute that message in another process (or back to the original process): ![Send a Message](/sending-message.png) The internals of publishing a message are shown in this simplified sequence diagram: ![Publish a Message](/publish-message-sequence-diagram.png) Along the way, Wolverine has to: 1. Serialize the message body 2. Route the outgoing message to the proper subscriber(s) 3. Utilize any publishing rules like "this message should be discarded after 10 seconds" 4. Map the outgoing Wolverine `Envelope` representation of the message into whatever the underlying transport (Azure Service Bus et al.) uses 5. 
Invoke the actual messaging infrastructure to send out the message

On the flip side, listening for a message follows this sequence shown for the "happy path" of receiving a message through Rabbit MQ:

![Listen for a Message](/listen-for-message-sequence-diagram.png)

During the listening process, Wolverine has to:

1. Map the incoming Rabbit MQ message to Wolverine's own `Envelope` structure
2. Determine what the actual message type is based on the `Envelope` data
3. Find the correct executor strategy for the message type
4. Deserialize the raw message data to the actual message body
5. Call the inner message executor for that message type
6. Carry out quite a bit of Open Telemetry activity tracing, report metrics, and just plain logging
7. Evaluate any errors against the error handling policies of the application or the specific message type

## Endpoint Types

::: info
Not all transports support all three types of endpoint modes, and will helpfully assert when you try to choose an invalid option.
:::

### Inline Endpoints

Wolverine endpoints come in three basic flavors, with the first being **Inline** endpoints:

```cs
// Configuring a Wolverine application to listen to
// an Azure Service Bus queue with the "Inline" mode
opts.ListenToAzureServiceBusQueue(queueName,
    q => q.Options.AutoDeleteOnIdle = 5.Minutes()).ProcessInline();
```
snippet source | anchor

With inline endpoints, as the name implies, calling `IMessageBus.SendAsync()` immediately sends the message to the external message broker. Likewise, messages received from an external message queue are processed inline before Wolverine acknowledges to the message broker that the message is received.

![Inline Endpoints](/inline-endpoint.png)

In the absence of a durable inbox/outbox, using inline endpoints is "safer" in terms of guaranteed delivery. As you might think, using inline agents can bottleneck the message processing, but that can be alleviated by opting into parallel listeners.
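As a sketch of that mitigation, the listener configuration can opt into multiple parallel listening agents for the same queue. This assumes the Azure Service Bus transport from the sample above and Wolverine's `ListenerCount()` option:

```cs
opts.ListenToAzureServiceBusQueue("incoming")
    .ProcessInline()
    // Spin up five parallel listening agents on this queue to
    // relieve the inline processing bottleneck
    .ListenerCount(5);
```

Each listening agent processes its messages inline, so the delivery guarantees stay the same while the work is spread across parallel consumers.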
### Buffered Endpoints In the second **Buffered** option, messages are queued locally between the actual external broker and the Wolverine handlers or senders. To opt into buffering, you use this syntax: ```cs // I overrode the buffering limits just to show // that they exist for "back pressure" opts.ListenToAzureServiceBusQueue("incoming") .BufferedInMemory(new BufferingLimits(1000, 200)); ``` snippet source | anchor At runtime, you have a local [TPL Dataflow queue](https://learn.microsoft.com/en-us/dotnet/standard/parallel-programming/dataflow-task-parallel-library) between the Wolverine callers and the broker: ![Buffered Endpoints](/buffered-endpoint.png) On the listening side, buffered endpoints do support [back pressure](https://www.educative.io/answers/techniques-to-exert-back-pressure-in-distributed-systems) (of sorts) where Wolverine will stop the actual message listener if too many messages are queued in memory to avoid chewing up your application memory. In transports like Amazon SQS that only support batched message sending or receiving, `Buffered` is the default mode as that facilitates message batching. `Buffered` message sending and receiving can lead to higher throughput, and should be considered for cases where messages are ephemeral or expire and throughput is more important than delivery guarantees. The downside is that messages in the in memory queues can be lost in the case of the application shutting down unexpectedly -- but Wolverine tries to "drain" the in memory queues on normal application shutdown. ### Durable Endpoints **Durable** endpoints behave like **buffered** endpoints, but also use the [durable inbox/outbox message storage](/guide/durability/) to create much stronger guarantees about message delivery and processing. You will need to use `Durable` endpoints in order to truly take advantage of the persistent outbox mechanism in Wolverine. 
To opt into making an endpoint durable, use this syntax: ```cs // I overrode the buffering limits just to show // that they exist for "back pressure" opts.ListenToAzureServiceBusQueue("incoming") .UseDurableInbox(new BufferingLimits(1000, 200)); opts.PublishAllMessages().ToAzureServiceBusQueue("outgoing") .UseDurableOutbox(); ``` snippet source | anchor Or use policies to do this in one fell swoop (which may not be what you actually want, but you could do this!): ```cs opts.Policies.UseDurableOutboxOnAllSendingEndpoints(); ``` snippet source | anchor As shown below, the `Durable` endpoint option adds an extra step to the `Buffered` behavior to add database storage of the incoming and outgoing messages: ![Durable Endpoints](/durable-endpoints.png) Outgoing messages are deleted in the durable outbox upon successful sending acknowledgements from the external broker. Likewise, incoming messages are also deleted from the durable inbox upon successful message execution. The `Durable` endpoint option makes Wolverine's [local queueing](/guide/messaging/transports/local) robust enough to use for cases where you need guaranteed processing of messages, but don't want to use an external broker. ## How Wolverine Calls Your Message Handlers ![A real wolverine](/real_wolverine.jpeg) Wolverine is a little different animal from the tools with similar features in the .NET ecosystem (pun intended:). Instead of the typical strategy of requiring you to implement an adapter interface of some sort in *your* code, Wolverine uses [dynamically generated code](./codegen) to "weave" its internal adapter code and even middleware around your message handler code. In ideal circumstances, Wolverine is able to completely remove the runtime usage of an IoC container for even better performance. 
The end result is a runtime pipeline that is able to accomplish its tasks with potentially much less performance overhead than comparable .NET frameworks that depend on adapter interfaces and copious runtime usage of IoC containers. See [Code Generation in Wolverine](/guide/codegen) for much more information about this model and how it relates to the execution pipeline. ## Nodes and Agents ![Nodes and Agents](/nodes-and-agents.png) Wolverine has some ability to distribute "sticky" or stateful work across running nodes in your application. To do so, Wolverine tracks the running "nodes" (just means an executing instance of your Wolverine application) and elects a single leader to distribute and assign "agents" to the running "nodes". Wolverine has built in health monitoring that can detect when any node is offline to redistribute working agents to other nodes. Wolverine is also able to "fail over" the leader assignment to a different node if the original leader is determined to be down. Likewise, Wolverine will redistribute running agent assignments when new nodes are brought online. ::: info You will have to have some kind of durable message storage configured for your application for the leader election and agent assignments to function. 
:::

The stateful, running "agents" are exposed through an `IAgent` interface like so:

```cs
/// <summary>
/// Models a constantly running background process within a Wolverine
/// node cluster
/// </summary>
public interface IAgent : IHostedService // Standard .NET interface for background services
{
    /// <summary>
    /// Unique identification for this agent within the Wolverine system
    /// </summary>
    Uri Uri { get; }

    // Not really used for anything real *yet*, but
    // hopefully becomes something useful for CritterWatch
    // health monitoring
    AgentStatus Status { get; }
}
```
snippet source | anchor

```cs
/// <summary>
/// Models a constantly running background process within a Wolverine
/// node cluster
/// </summary>
public interface IAgent : IHostedService // Standard .NET interface for background services
{
    /// <summary>
    /// Unique identification for this agent within the Wolverine system
    /// </summary>
    Uri Uri { get; }

    // Not really used for anything real *yet*, but
    // hopefully becomes something useful for CritterWatch
    // health monitoring
    AgentStatus Status { get; }
}

public class CompositeAgent : IAgent
{
    private readonly List<IAgent> _agents;
    public Uri Uri { get; }

    public CompositeAgent(Uri uri, IEnumerable<IAgent> agents)
    {
        Uri = uri;
        _agents = agents.ToList();
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        foreach (var agent in _agents)
        {
            await agent.StartAsync(cancellationToken);
        }

        Status = AgentStatus.Running;
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        foreach (var agent in _agents)
        {
            await agent.StopAsync(cancellationToken);
        }

        Status = AgentStatus.Stopped;
    }

    public AgentStatus Status { get; private set; } = AgentStatus.Stopped;
}
```
snippet source | anchor

With related groups of agents built and assigned by IoC-registered implementations of this interface:

```cs
/// <summary>
/// Pluggable model for managing the assignment and execution of stateful, "sticky"
/// background agents on the various nodes of a running Wolverine cluster
/// </summary>
public interface IAgentFamily
{
    /// <summary>
    /// Uri scheme for this family of agents
    /// </summary>
    string Scheme { get; }

    /// <summary>
    /// List of all the possible agents by their identity for this family of agents
    /// </summary>
    ValueTask<IReadOnlyList<Uri>> AllKnownAgentsAsync();

    /// <summary>
    /// Create or resolve the agent for this family
    /// </summary>
    ValueTask<IAgent> BuildAgentAsync(Uri uri, IWolverineRuntime wolverineRuntime);

    /// <summary>
    /// All supported agent uris by this node instance
    /// </summary>
    ValueTask<IReadOnlyList<Uri>> SupportedAgentsAsync();

    /// <summary>
    /// Assign agents to the currently running nodes when new nodes are detected or existing
    /// nodes are deactivated
    /// </summary>
    ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments);
}
```
snippet source | anchor

Built-in examples of the agent and agent family are:

* Wolverine's built-in durability agent to recover orphaned messages from nodes that are detected to be offline, with one agent per tenant database
* Wolverine uses the agent assignment for "exclusive" message listeners like the strictly ordered listener option
* The integrated Marten projection and subscription load distribution

## IoC Container Integration

::: info
Wolverine has been tested with both the built in `ServiceProvider` and [Lamar](https://jasperfx.github.io/lamar), which was originally built specifically to support what ended up becoming Wolverine. The previous limitation to only supporting Lamar was lifted in Wolverine 3.0.
:::

Wolverine is a significantly different animal than other .NET frameworks, and uses the IoC container quite differently than most .NET application frameworks. For the most part, Wolverine looks at the IoC container registrations and tries to generate code to mimic the IoC behavior in the message handler and HTTP endpoint adapters that Wolverine generates internally. The benefits of this model are:

* The pre-generated code can tell you a lot about how Wolverine is handling your code, including any registered middleware
* The fastest IoC container is no IoC container
* Less conditional logic at runtime
* Much slimmer exception stack traces when things inevitably go wrong.
Wolverine's predecessor tool ([FubuMVC](https://fubumvc.github.io)) used nested objects created on every request or message for its middleware strategy, and the exception messages coming out of handler code could be *epic* with a lot of middleware active.

The downside is that Wolverine does not play well with the kind of runtime IoC tricks other frameworks rely on for passing state. For example, because Wolverine.HTTP does not use the ASP.Net Core request services to build endpoint types and their dependencies at runtime, it's a little clumsier to pass state written into scoped IoC services by ASP.Net Core middleware, with custom multi-tenancy approaches being the usual cause of this. Wolverine certainly has its own multi-tenancy support, and we don't think this is really a serious problem for most usages, but it has caused friction for some Wolverine users converting from other frameworks.

---

--- url: /guide/durability/efcore/sagas.md ---

# Saga Storage

Wolverine can use registered EF Core `DbContext` types for [saga persistence](/guide/durability) as long as the EF Core transactional support is added to the application. There's absolutely nothing you need to do to enable this except for having a mapping for whatever `Saga` type you need to persist in a registered `DbContext` type.

As long as your `DbContext` with a mapping for a particular `Saga` type is registered in the IoC container for your application and Wolverine's EF Core transactional support is active, Wolverine will be able to find and use the correct `DbContext` type for your `Saga` at runtime. You do *not* need to use the `WolverineOptions.AddSagaType()` option with EF Core sagas; that option is strictly for the [lightweight saga storage](/guide/durability/sagas.html#lightweight-saga-storage) with SQL Server or PostgreSQL.
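For orientation, the wiring amounts to registering your `DbContext` and turning on Wolverine's EF Core transactional support. The following is a minimal sketch only, assuming SQL Server, a hard-coded connection string, and a hypothetical `OrdersDbContext` standing in for whatever `DbContext` maps your `Saga` type:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Hypothetical connection string just for this sketch
        var connectionString = "Server=localhost;Database=orders;Trusted_Connection=True";

        // Durable envelope storage in the same database
        opts.PersistMessagesWithSqlServer(connectionString);

        // Register the DbContext that maps your Saga type, with the
        // Wolverine integration applied to that DbContext
        opts.Services.AddDbContextWithWolverineIntegration<OrdersDbContext>(
            x => x.UseSqlServer(connectionString));

        // Enable Wolverine's EF Core transactional middleware
        opts.UseEntityFrameworkCoreTransactions();
        opts.Policies.AutoApplyTransactions();
    }).StartAsync();
```

With that in place, any message handled by a `Saga` type mapped in `OrdersDbContext` is loaded and persisted through that `DbContext`, with the saga changes and any outgoing messages committed in one transaction.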
To make that concrete, let's say you've got a simplistic `Order` saga type like this:

```cs
public enum OrderStatus
{
    Pending = 0,
    CreditReserved = 1,
    CreditLimitExceeded = 2,
    Approved = 3,
    Rejected = 4
}

public class Order : Saga
{
    public string? Id { get; set; }
    public OrderStatus OrderStatus { get; set; } = OrderStatus.Pending;

    public object[] Start(
        OrderPlaced orderPlaced,
        ILogger logger
    )
    {
        Id = orderPlaced.OrderId;
        logger.LogInformation("Order {OrderId} placed", Id);
        OrderStatus = OrderStatus.Pending;
        return
        [
            new ReserveCredit(
                orderPlaced.OrderId,
                orderPlaced.CustomerId,
                orderPlaced.Amount
            )
        ];
    }

    public object[] Handle(
        CreditReserved creditReserved,
        ILogger logger
    )
    {
        OrderStatus = OrderStatus.CreditReserved;
        logger.LogInformation("Credit reserved for Order {OrderId}", Id);
        return [new ApproveOrder(creditReserved.OrderId, creditReserved.CustomerId)];
    }

    public void Handle(
        OrderApproved orderApproved,
        ILogger logger
    )
    {
        OrderStatus = OrderStatus.Approved;
        logger.LogInformation("Order {OrderId} approved", Id);
    }

    public object[] Handle(
        CreditLimitExceeded creditLimitExceeded,
        ILogger logger
    )
    {
        OrderStatus = OrderStatus.CreditLimitExceeded;
        return [new RejectOrder(creditLimitExceeded.OrderId)];
    }

    public void Handle(
        OrderRejected orderRejected,
        ILogger logger
    )
    {
        OrderStatus = OrderStatus.Rejected;
        logger.LogInformation("Order {OrderId} rejected", Id);
        MarkCompleted();
    }
}
```

snippet source | anchor

And a matching `OrdersDbContext` that can persist that type like so:

```cs
public class OrdersDbContext : DbContext
{
    protected OrdersDbContext()
    {
    }

    public OrdersDbContext(DbContextOptions<OrdersDbContext> options) : base(options)
    {
    }

    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Your normal EF Core mapping
        modelBuilder.Entity<Order>(map =>
        {
            map.ToTable("orders", "sample");
            map.HasKey(x => x.Id);
            map.Property(x => x.OrderStatus)
                .HasConversion(v => v.ToString(), v => Enum.Parse<OrderStatus>(v));
        });
    }
}
```

snippet source | anchor
There's no registration needed beyond adding the `OrdersDbContext` to your IoC container and enabling the Wolverine EF Core middleware as shown in the [getting started with EF Core](/guide/durability/efcore/#getting-started) section.

## When to Use EF Core vs Lightweight Storage?

So when should you opt for the lightweight storage, where Wolverine just sticks serialized JSON into a single field for a saga, versus using full-blown EF Core mapping?

If you have any need to *also* persist other data with a `DbContext` service while executing any of the `Saga` steps, use EF Core mapping with that same `DbContext` type so that Wolverine can easily manage the changes in one single transaction. If you prefer having a flat table, maybe just because it'll be easier to monitor through normal database tooling, use EF Core. If you just want to go fast and don't want to mess with ORM mapping, then use the lightweight storage with Wolverine.

Do note that using `AddSagaType()` for a `Saga` type will win out over any EF Core mappings, and Wolverine will try to use the lightweight storage in that case.

---

--- url: /guide/durability/sagas.md ---

# Sagas

::: tip
To be honest, we're just not going to get hung up on "process manager" vs. "saga" here. The key point is that what Wolverine is calling a "saga" really just means a long running, multi-step process where you need to track some state between the steps. If that annoys Greg Young, then ¯\_(ツ)\_/¯.
:::

As is so common in these docs, I would direct you to this from the old "EIP" book: [Process Manager](http://www.enterpriseintegrationpatterns.com/patterns/messaging/ProcessManager.html). A stateful saga in Wolverine is used to coordinate long running workflows or to break large, logical transactions into a series of smaller steps.

A stateful saga in Wolverine consists of several parts:

1. A saga state document type that is persisted between saga messages and must inherit from the `Wolverine.Saga` type.
This will also be your handler type for all messages that directly impact the saga 2. Messages that would update the saga state when handled 3. A saga persistence strategy registered in Wolverine that knows how to load and persist the saga state documents 4. An identity for the saga state in order to save, load, or delete the current saga state ## Your First Saga *See the [OrderSagaSample](https://github.com/JasperFx/wolverine/tree/main/src/Samples/OrderSagaSample) project in GitHub for all the sample code in this section.* Jumping right into an example, consider a very simple order management service that will have steps to: * Create a new order * Complete the order * Or alternatively, delete new orders if they have not been completed within 1 minute For the moment, I’m going to ignore the underlying persistence and just focus on the Wolverine message handlers to implement the order saga workflow with this simplistic saga code: ```cs public record StartOrder(string OrderId); public record CompleteOrder(string Id); // This message will always be scheduled to be delivered after // a one minute delay public record OrderTimeout(string Id) : TimeoutMessage(1.Minutes()); public class Order : Saga { public string? Id { get; set; } // This method would be called when a StartOrder message arrives // to start a new Order public static (Order, OrderTimeout) Start(StartOrder order, ILogger logger) { logger.LogInformation("Got a new order with id {Id}", order.OrderId); // creating a timeout message for the saga return (new Order{Id = order.OrderId}, new OrderTimeout(order.OrderId)); } // Apply the CompleteOrder to the saga public void Handle(CompleteOrder complete, ILogger logger) { logger.LogInformation("Completing order {Id}", complete.Id); // That's it, we're done. Delete the saga state after the message is done. 
MarkCompleted();
    }

    // Delete this order if it has not already been deleted to enforce a "timeout"
    // condition
    public void Handle(OrderTimeout timeout, ILogger logger)
    {
        logger.LogInformation("Applying timeout to order {Id}", timeout.Id);

        // That's it, we're done. Delete the saga state after the message is done.
        MarkCompleted();
    }

    public static void NotFound(CompleteOrder complete, ILogger logger)
    {
        logger.LogInformation("Tried to complete order {Id}, but it cannot be found", complete.Id);
    }
}
```

snippet source | anchor

A few explanatory notes on this code before we move on to detailed documentation:

* Wolverine leans a bit on type and naming conventions to discover message handlers and to "know" how to call these message handlers. Some folks will definitely not like the magic, but this approach leads to substantially less code and arguably less complexity than existing .NET tools
* Wolverine supports the idea of [scheduled messages](/guide/messaging/message-bus.html#scheduling-message-delivery-or-execution), and the new `TimeoutMessage` base class we used up there is just a shorthand way to utilize that support for "saga timeout" conditions
* Wolverine generally tries to adapt to your application code rather than using mandatory adapter interfaces
* Subclassing `Saga` is meaningful first as this tells Wolverine "hey, this stateful type should be treated as a saga" for [handler discovery](/guide/handlers/discovery), but also for communicating to Wolverine that a logical saga is complete and should be deleted

Now, to add saga persistence, I'm going to lean on the [Marten integration](/guide/durability/marten) with Wolverine and use this bootstrapping for our little order web service:

```cs
using Marten;
using JasperFx;
using JasperFx.Resources;
using OrderSagaSample;
using Wolverine;
using Wolverine.Marten;

var builder = WebApplication.CreateBuilder(args);

// Not 100% necessary, but enables some extra command line diagnostics
builder.Host.ApplyJasperFxExtensions();

// Adding Marten
builder.Services.AddMarten(opts =>
    {
        var connectionString = builder.Configuration.GetConnectionString("Marten");
        opts.Connection(connectionString);
        opts.DatabaseSchemaName = "orders";
    })

    // Adding the Wolverine integration for Marten.
    .IntegrateWithWolverine();

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

// Do all necessary database setup on startup
builder.Services.AddResourceSetupOnStartup();

// The defaults are good enough here
builder.Host.UseWolverine();

var app = builder.Build();

// Just delegating to Wolverine's local command bus for all
app.MapPost("/start", (StartOrder start, IMessageBus bus) => bus.InvokeAsync(start));
app.MapPost("/complete", (CompleteOrder complete, IMessageBus bus) => bus.InvokeAsync(complete));
app.MapGet("/all", (IQuerySession session) => session.Query<Order>().ToListAsync());
app.MapGet("/", (HttpResponse response) =>
{
    response.Headers.Add("Location", "/swagger");
    response.StatusCode = 301;
}).ExcludeFromDescription();

app.UseSwagger();
app.UseSwaggerUI();

return await app.RunJasperFxCommands(args);
```

snippet source | anchor

The call to `IServiceCollection.AddMarten().IntegrateWithWolverine()` adds the Marten backed saga persistence to your application. No other configuration is necessary. See the [Marten integration](/guide/durability/marten.html#saga-storage) for a little more information about using Marten backed sagas.

## How it works

::: warning
Do not call `IMessageBus.InvokeAsync()` within a `Saga` related handler to execute a command on that same `Saga`. You will be acting on old or missing data. Utilize cascading messages for subsequent work.
:::

Wolverine is wrapping some generated code around your `Saga.Start()` and `Saga.Handle()` methods for loading and persisting the state.
Here's a (mildly cleaned up) version of the generated code for starting the `Order` saga shown above: ```cs public class StartOrderHandler133227374 : MessageHandler { private readonly OutboxedSessionFactory _outboxedSessionFactory; private readonly ILogger _logger; public StartOrderHandler133227374(OutboxedSessionFactory outboxedSessionFactory, ILogger logger) { _outboxedSessionFactory = outboxedSessionFactory; _logger = logger; } public override async Task HandleAsync(MessageContext context, CancellationToken cancellation) { var startOrder = (StartOrder)context.Envelope.Message; await using var documentSession = _outboxedSessionFactory.OpenSession(context); (var outgoing1, var outgoing2) = Order.Start(startOrder, _logger); // Register the document operation with the current session documentSession.Insert(outgoing1); // Outgoing, cascaded message await context.EnqueueCascadingAsync(outgoing2).ConfigureAwait(false); // Commit the unit of work await documentSession.SaveChangesAsync(cancellation).ConfigureAwait(false); } } ``` And here's the code that's generated for the `CompleteOrder` command from the sample above: ```cs public class CompleteOrderHandler1228388417 : MessageHandler { private readonly OutboxedSessionFactory _outboxedSessionFactory; private readonly ILogger _logger; public CompleteOrderHandler1228388417(OutboxedSessionFactory outboxedSessionFactory, ILogger logger) { _outboxedSessionFactory = outboxedSessionFactory; _logger = logger; } public override async Task HandleAsync(MessageContext context, CancellationToken cancellation) { await using var documentSession = _outboxedSessionFactory.OpenSession(context); var completeOrder = (CompleteOrder)context.Envelope.Message; string sagaId = context.Envelope.SagaId ?? 
completeOrder.Id;
        if (string.IsNullOrEmpty(sagaId)) throw new IndeterminateSagaStateIdException(context.Envelope);

        // Try to load the existing saga document
        var order = await documentSession.LoadAsync<Order>(sagaId, cancellation).ConfigureAwait(false);
        if (order == null)
        {
            throw new UnknownSagaException(typeof(Order), sagaId);
        }
        else
        {
            order.Handle(completeOrder, _logger);
            if (order.IsCompleted())
            {
                // Register the document operation with the current session
                documentSession.Delete(order);
            }
            else
            {
                // Register the document operation with the current session
                documentSession.Update(order);
            }

            // Commit all pending changes
            await documentSession.SaveChangesAsync(cancellation).ConfigureAwait(false);
        }
    }
}
```

## Saga Message Identity

::: warning
The automatic saga id tracking on messaging **only** works when the saga already exists and you are handling a message to an existing saga. In the case of creating a new `Saga` and needing to publish outgoing messages related to that `Saga` in the same logical transaction, you will have to embed the new `Saga` identity into the outgoing message bodies.
:::

In the case of two Wolverine applications sending messages between themselves, or a single Wolverine application messaging itself in regards to an existing ongoing saga, Wolverine will quietly track the saga id through headers. In most other cases, you will need to expose the saga identity directly on the incoming messages. To do that, Wolverine determines which public member of the saga message refers to the saga identity.
In order of precedence, Wolverine first looks for a member decorated with the `[SagaIdentity]` attribute like this:

```cs
public class ToyOnTray
{
    // There's always *some* reason to deviate,
    // so you can use this attribute to tell Wolverine
    // that this property refers to the Id of the
    // Saga state document
    [SagaIdentity] public int OrderId { get; set; }
}
```

snippet source | anchor

After that, you can also use the newer `[SagaIdentityFrom]` attribute (as of 5.9) on a handler parameter:

```cs
public class SomeSaga
{
    public Guid Id { get; set; }

    public void Handle([SagaIdentityFrom(nameof(SomeSagaMessage5.Hello))] SomeSagaMessage5 message)
    {
    }
}
```

snippet source | anchor

Next, Wolverine looks for a member named "{saga type name}Id." In the case of our `Order` saga type, that would be a public member named `OrderId` as shown in this code:

```csharp
public record StartOrder(string OrderId);
```

And lastly, Wolverine looks for a public member named `Id` like this one:

```csharp
public record CompleteOrder(string Id);
```

## Starting a Saga

::: tip
In all the cases where you return a `Saga` object from a handler method to denote the start of a new `Saga`, your code should set the identity for the new `Saga`.
:::

To start a new saga, you have a couple options. You can use a static `Start()` or `StartAsync()` handler method on the `Saga` type itself like this one on an `OrderSaga`:

```cs
// This method would be called when a StartOrder message arrives
// to start a new Order
public static (Order, OrderTimeout) Start(StartOrder order, ILogger logger)
{
    logger.LogInformation("Got a new order with id {Id}", order.OrderId);

    // creating a timeout message for the saga
    return (new Order{Id = order.OrderId}, new OrderTimeout(order.OrderId));
}
```

snippet source | anchor

::: warning
The automatic saga id tracking on messaging **only** works when the saga already exists and you are handling a message to an existing saga.
In the case of creating a new `Saga` and needing to publish outgoing messages related to that `Saga` in the same logical transaction, you will have to embed the new `Saga` identity into the outgoing message bodies. ::: You can also simply return one or more `Saga` type objects from a handler method as shown below where `Reservation` is a Wolverine saga: ```cs public class Reservation : Saga { public string? Id { get; set; } // Apply the CompleteReservation to the saga public void Handle(BookReservation book, ILogger logger) { logger.LogInformation("Completing Reservation {Id}", book.Id); // That's it, we're done. Delete the saga state after the message is done. MarkCompleted(); } // Delete this Reservation if it has not already been deleted to enforce a "timeout" // condition public void Handle(ReservationTimeout timeout, ILogger logger) { logger.LogInformation("Applying timeout to Reservation {Id}", timeout.Id); // That's it, we're done. Delete the saga state after the message is done. MarkCompleted(); } } ``` snippet source | anchor and the handler that would start the new saga: ```cs public class StartReservationHandler { public static ( // Outgoing message ReservationBooked, // Starts a new Saga Reservation, // Additional message cascading for the new saga ReservationTimeout) Handle(StartReservation start) { return ( new ReservationBooked(start.ReservationId, DateTimeOffset.UtcNow), new Reservation { Id = start.ReservationId }, new ReservationTimeout(start.ReservationId) ); } } ``` snippet source | anchor ## Method Conventions ::: tip Note that there are several different legal synonyms for "Handle" or "Consume." This is due to early attempts to make Wolverine backward compatible with its ancestor tooling. Just pick one name or style in your application and use that consistently throughout. 
:::

The following method names are meaningful in `Saga` types:

| Name | Description |
|------|-------------|
| `Start`, `Starts` | Only called if the identified saga does not already exist *and* the incoming message contains the new saga identity |
| `StartOrHandle`, `StartsOrHandles` | Called regardless of whether the identified saga already exists or is new |
| `Handle`, `Handles` | Called only when the identified saga already exists |
| `Consume`, `Consumes` | Called only when the identified saga already exists |
| `Orchestrate`, `Orchestrates` | Called only when the identified saga already exists |
| `NotFound` | Only called if the identified saga does not already exist, and there is no matching `Start` handler for the incoming message |

Note that only `Start`, `Starts`, or `NotFound` methods can be static methods because these methods logically assume that the identified `Saga` does not yet exist. As of 4.6, Wolverine will assert that the other named `Saga` methods are instance methods to try to head off confusion.

## When Sagas are Not Found

::: warning
You need to explicitly use the `NotFound()` convention for Wolverine to quietly ignore messages related to a `Saga` that cannot be found. As an example, if you receive a "timeout" message for an active `Saga` that has been completed and deleted, you will need to implement `NotFound(message)` **even if it is an empty, do nothing method** just so Wolverine will not blow up with an exception (not) helpfully telling you the requested `Saga` cannot be found.
:::

If you receive a command message against a `Saga` that no longer exists, Wolverine will throw an `Exception` unless you explicitly handle the "not found" case.
To do so for a particular command type -- and note that Wolverine does not do any magic handling today based on abstractions -- you can implement a public static method called `NotFound` on your `Saga` class for a particular message type that will take action against that incoming message as shown below: ```cs public static void NotFound(CompleteOrder complete, ILogger logger) { logger.LogInformation("Tried to complete order {Id}, but it cannot be found", complete.Id); } ``` snippet source | anchor Note that you will have to explicitly use `IMessageBus` as an argument to a `NotFound` method to send out any messages to potentially take action on a missing saga if you so wish. ## Marking a Saga as Complete When a `Saga` workflow is complete, call the `MarkCompleted()` method as shown in the following method to let Wolverine know that the `Saga` can be safely deleted: ```cs // Apply the CompleteOrder to the saga public void Handle(CompleteOrder complete, ILogger logger) { logger.LogInformation("Completing order {Id}", complete.Id); // That's it, we're done. Delete the saga state after the message is done. MarkCompleted(); } ``` snippet source | anchor ## Timeout Messages You may frequently want to create "timeout" messages as part of a `Saga` to enforce time limitations. 
This can be done with scheduled messages in Wolverine, but because this usage is so common with `Saga` implementations, and because Wolverine really wants you to be able to use pure functions as much as possible, you can subclass the Wolverine `TimeoutMessage` for any logical message that should be scheduled in the future like so:

```cs
// This message will always be scheduled to be delivered after
// a one minute delay
public record OrderTimeout(string Id) : TimeoutMessage(1.Minutes());
```

snippet source | anchor

That `OrderTimeout` message can be published with normal cascaded messages (or by calling `IMessageBus.PublishAsync()` if you prefer) like so:

```cs
// This method would be called when a StartOrder message arrives
// to start a new Order
public static (Order, OrderTimeout) Start(StartOrder order, ILogger logger)
{
    logger.LogInformation("Got a new order with id {Id}", order.OrderId);

    // creating a timeout message for the saga
    return (new Order{Id = order.OrderId}, new OrderTimeout(order.OrderId));
}
```

snippet source | anchor

And the handler for the message type is just a normal handler signature:

```cs
// Delete this order if it has not already been deleted to enforce a "timeout"
// condition
public void Handle(OrderTimeout timeout, ILogger logger)
{
    logger.LogInformation("Applying timeout to order {Id}", timeout.Id);

    // That's it, we're done. Delete the saga state after the message is done.
    MarkCompleted();
}
```

snippet source | anchor

## Saga Concurrency

Both the Marten and EF Core backed saga persistence have built-in support for optimistic concurrency checks when persisting a saga after handling a command. See [Dealing with Concurrency](/tutorials/concurrency), and especially the [partitioned sequential messaging](/tutorials/concurrency) option for "inferred" message grouping, which may let you side step concurrency issues with saga message handling entirely.
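When concurrency collisions are expected but rare, one option (a sketch of one approach, not the only one) is to retry the colliding saga command after a short cooldown using Wolverine's normal error handling policies. `ConcurrencyException` here is Marten's optimistic concurrency exception; substitute the equivalent exception type for your persistence:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // If a saga command loses an optimistic concurrency race,
        // retry it with short cooldowns before falling through to
        // the normal error handling
        opts.Policies.OnException<ConcurrencyException>()
            .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
    }).StartAsync();
```

The retries reload the saga state, so a second attempt sees the competing command's changes rather than the stale document.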
## Lightweight Saga Storage

The Wolverine integration with either Sql Server or PostgreSQL comes with a lightweight saga storage mechanism where Wolverine will happily stand up a database table per `Saga` type in your configured envelope storage database and merely store the saga state as serialized JSON (System.Text.Json is used for serialization in all cases). There's a handful of things to know about this:

* The automatic migration of lightweight saga tables can be disabled by the [AutoBuildMessageStorageOnStartup](/guide/durability/managing.html#disable-automatic-storage-migration) flag
* The lightweight saga storage supports optimistic concurrency by default and will throw a `SagaConcurrencyException` in the case of a `Saga` being modified by another `Saga` command while the current command is being processed
* The lightweight saga storage is supported by both the [PostgreSQL](/guide/durability/postgresql.html) and [Sql Server](/guide/durability/sqlserver.html) integrations
* If the Marten integration is active, Marten will take precedence for the `Saga` storage for each type
* If the EF Core integration is active, the EF Core `DbContext` backed `Saga` persistence will take precedence *if* Wolverine can find a `DbContext` that has a mapping for that `Saga` type
* Wolverine's default table naming convention is just "{Saga class name}\_saga"

To either control the saga table names or to ensure that the lightweight tables are part of Wolverine's offline database migration capabilities, you can manually register saga types at configuration time:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.AddSagaType<RedSaga>("red");
        opts.AddSagaType(typeof(BlueSaga), "blue");

        opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "color_sagas");

        opts.Services.AddResourceSetupOnStartup();
    }).StartAsync();
```

snippet source | anchor

Note that this manual registration is not necessary at development time or if you're content to just let
Wolverine handle database migrations at runtime.

## Overriding Logging

We recently had a question about how to turn down the logging levels for `Saga` message processing when the log output was getting too verbose. `Saga` types are officially message handlers to the Wolverine internals, so you can still use the `public static void Configure(HandlerChain)` mechanism for one-off configuration of every message handler method on the `Saga` like this:

```cs
public class RevisionedSaga : Wolverine.Saga
{
    // This works just the same as on any other message handler
    // type
    public static void Configure(HandlerChain chain)
    {
        chain.ProcessingLogLevel = LogLevel.None;
        chain.SuccessLogLevel = LogLevel.None;
    }
}
```

snippet source | anchor

Or if you wanted to just do it globally, something like this approach:

```cs
public class TurnDownLoggingOnSagas : IChainPolicy
{
    public void Apply(IReadOnlyList<IChain> chains, GenerationRules rules, IServiceContainer container)
    {
        foreach (var sagaChain in chains.OfType<SagaChain>())
        {
            sagaChain.ProcessingLogLevel = LogLevel.None;
            sagaChain.SuccessLogLevel = LogLevel.None;
        }
    }
}
```

snippet source | anchor

and register that policy something like this:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.Add<TurnDownLoggingOnSagas>();
    }).StartAsync();
```

snippet source | anchor

## Multiple Sagas Handling the Same Message Type

By default, Wolverine does not allow multiple saga types to handle the same message type and will throw an `InvalidSagaException` at startup if this is detected. However, there are valid architectural reasons to have multiple, independent saga workflows react to the same event. For example, an `OrderPlaced` event might start both a `ShippingSaga` and a `BillingSaga`.
To enable this, set `MultipleHandlerBehavior` to `Separated`: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.MultipleHandlerBehavior = MultipleHandlerBehavior.Separated; // Your persistence configuration here (Marten, EF Core, etc.) }).StartAsync(); ``` When `Separated` mode is active, Wolverine creates an independent handler chain for each saga type, routed to its own local queue. Each saga independently manages its own lifecycle — loading, creating, updating, and deleting state — without interfering with the other. Here is an example with two sagas that both start from an `OrderPlaced` message but complete independently: ```cs // Shared message that both sagas react to public record OrderPlaced(Guid OrderPlacedId, string ProductName); // Messages specific to each saga public record OrderShipped(Guid ShippingSagaId); public record PaymentReceived(Guid BillingSagaId); public class ShippingSaga : Saga { public Guid Id { get; set; } public string ProductName { get; set; } = string.Empty; public static ShippingSaga Start(OrderPlaced message) { return new ShippingSaga { Id = message.OrderPlacedId, ProductName = message.ProductName }; } public void Handle(OrderShipped message) { MarkCompleted(); } } public class BillingSaga : Saga { public Guid Id { get; set; } public string ProductName { get; set; } = string.Empty; public static BillingSaga Start(OrderPlaced message) { return new BillingSaga { Id = message.OrderPlacedId, ProductName = message.ProductName }; } public void Handle(PaymentReceived message) { MarkCompleted(); } } ``` When an `OrderPlaced` message is published, both sagas will be started independently. Completing one saga (e.g., by sending `OrderShipped`) does not affect the other. ::: warning In `Separated` mode, messages routed to multiple sagas must be **published** (via `SendAsync` or `PublishAsync`), not **invoked** (via `InvokeAsync`). 
`InvokeAsync` bypasses message routing and will not reach the separated saga endpoints.
:::

---

--- url: /guide/samples.md ---

# Sample Projects

There are several sample projects in the Wolverine codebase showing off bits and pieces of Wolverine functionality:

| Project | Description |
|---------|-------------|
| [Quickstart](https://github.com/JasperFx/wolverine/tree/main/src/Samples/Quickstart) | The sample application in the quick start tutorial |
| [CQRSWithMarten](https://github.com/JasperFx/wolverine/tree/main/src/Samples/CQRSWithMarten) | Shows off the event sourcing integration between [Marten](https://martendb.io) and Wolverine |
| [CommandBus](https://github.com/JasperFx/wolverine/tree/main/src/Samples/CommandBus) | Wolverine as an in memory "command bus" for asynchronous processing |
| [InMemoryMediator](https://github.com/JasperFx/wolverine/tree/main/src/Samples/InMemoryMediator) | Wolverine with EF Core and Sql Server as a mediator inside an ASP.Net Core service |
| [OptimizedArtifactWorkflowSample](https://github.com/JasperFx/wolverine/tree/main/src/Samples/OptimizedArtifactWorkflowSample) | Using Wolverine's optimized workflow for pre-generating handler types |
| [OrderSagaSample](https://github.com/JasperFx/wolverine/tree/main/src/Samples/OrderSagaSample) | Stateful sagas with Marten |
| [WebApiWithMarten](https://github.com/JasperFx/wolverine/tree/main/src/Samples/WebApiWithMarten) | Using Marten with Wolverine for ASP.Net Core web services |
| [ItemService](https://github.com/JasperFx/wolverine/tree/main/src/Samples/EFCoreSample/ItemService) | EF Core, Sql Server, and Wolverine.Http to integrate the Wolverine inbox/outbox |
| [AppWithMiddleware](https://github.com/JasperFx/wolverine/tree/main/src/Samples/Middleware/AppWithMiddleware) | Building middleware for Wolverine handlers |
| [PingPong](https://github.com/JasperFx/wolverine/tree/main/src/Samples/PingPong) | A classic "ping/pong" sample of sending messages between two Wolverine processes using the TCP transport |
| [PingPongWithRabbitMq](https://github.com/JasperFx/wolverine/tree/main/src/Samples/PingPongWithRabbitMq) | Another "ping/pong" sample, but this time using Rabbit MQ |
| [TodoWebService](https://github.com/JasperFx/wolverine/tree/main/src/Samples/TodoWebService/TodoWebService) | Using Marten, Wolverine, and Wolverine.Http to build a simple ASP.Net Core service |
| [MultiTenantedTodoWebService](https://github.com/JasperFx/wolverine/tree/main/src/Samples/MultiTenantedTodoService/MultiTenantedTodoService) | Same as above, but this time with separate databases for each tenant |
| [IncidentService](https://github.com/jasperfx/wolverine/tree/main/src/Samples/IncidentService) | Use the full "Critter Stack" to build a CQRS architecture with event sourcing |

---

--- url: /guide/messaging/transports/azureservicebus/scheduled.md ---

# Scheduled Delivery

::: info
This functionality was introduced in Wolverine 1.6.0
:::

WolverineFx.AzureServiceBus now supports [native Azure Service Bus scheduled delivery](https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-sequencing). There's absolutely nothing you need to do explicitly to enable this functionality.
So for message types that are routed to Azure Service Bus queues or topics, you can use this functionality: ```cs public async Task SendScheduledMessage(IMessageContext bus, Guid invoiceId) { var message = new ValidateInvoiceIsNotLate { InvoiceId = invoiceId }; // Schedule the message to be processed in a certain amount // of time await bus.ScheduleAsync(message, 30.Days()); // Schedule the message to be processed at a certain time await bus.ScheduleAsync(message, DateTimeOffset.Now.AddDays(30)); } ``` snippet source | anchor And also use Azure Service Bus scheduled delivery for scheduled retries (assuming that the listening endpoint was an **inline** Azure Service Bus listener): ```cs using var host = Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.Policies.OnException() // Just retry the message again on the // first failure .RetryOnce() // On the 2nd failure, put the message back into the // incoming queue to be retried later .Then.Requeue() // On the 3rd failure, retry the message again after a configurable // cool-off period. This schedules the message .Then.ScheduleRetry(15.Seconds()) // On the next failure, move the message to the dead letter queue .Then.MoveToErrorQueue(); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/sending-error-handling.md --- # Sending Error Handling Wolverine's existing [error handling](/guide/handlers/error-handling) policies apply to failures that happen while *processing* incoming messages in handlers. But what about failures that happen while *sending* outgoing messages to external transports? When an outgoing message fails to send — maybe the message broker is temporarily unavailable, a message is too large for the transport, or a network error occurs — Wolverine's default behavior is to retry and eventually trip a circuit breaker on the sending endpoint. 
Starting in Wolverine 5.x, you can now configure **sending failure policies** to take fine-grained action on these outgoing send failures, using the same fluent API you already know from handler error handling. ## Why Use Sending Failure Policies? Without sending failure policies, all send failures follow the same path: retry a few times, then trip the circuit breaker, which pauses all sending on that endpoint. This is often fine, but sometimes you need more control: * **Oversized messages**: If a message is too large for the transport's batch size, retrying will never succeed. You want to discard or dead-letter it immediately. * **Permanent failures**: Some exceptions indicate the message can never be delivered (e.g., invalid routing, serialization issues). Retrying wastes resources. * **Custom notification**: You may want to publish a compensating event when a send fails. * **Selective pausing**: You may want to pause sending only for certain exception types, then automatically resume after a cooldown period. ## Configuring Global Sending Failure Policies Use `WolverineOptions.SendingFailure` to configure policies that apply to all outgoing endpoints: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Discard messages that are too large for any transport batch opts.SendingFailure .OnException<MessageTooLargeException>() .Discard(); // Retry sending up to 3 times, then move to dead letter storage opts.SendingFailure .OnException() .RetryTimes(3).Then.MoveToErrorQueue(); // Schedule retries with exponential backoff opts.SendingFailure .OnException() .ScheduleRetry(1.Seconds(), 5.Seconds(), 30.Seconds()); }).StartAsync(); ``` ::: tip If no sending failure policy matches the exception, Wolverine falls through to the existing retry and circuit breaker behavior. Your existing applications are completely unaffected unless you explicitly configure sending failure policies.
::: ## Per-Endpoint Sending Failure Policies You can also configure sending failure policies on a per-endpoint basis using the `ConfigureSending()` method on any subscriber configuration. Per-endpoint rules take priority over global rules: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Global default: retry 3 times then dead letter opts.SendingFailure .OnException() .RetryTimes(3).Then.MoveToErrorQueue(); // Override for a specific endpoint: just discard on any failure opts.PublishAllMessages().ToRabbitQueue("low-priority") .ConfigureSending(sending => { sending.OnException().Discard(); }); }).StartAsync(); ``` ## Available Actions Sending failure policies support the same actions as handler error handling: | Action | Description | |----------------------|-------------------------------------------------------------------------------------------------| | Retry | Immediately retry the send inline | | Retry with Cooldown | Wait a short time, then retry inline | | Schedule Retry | Schedule the message to be retried at a certain time | | Discard | Log and discard the message without further send attempts | | Move to Error Queue | Move the message to dead letter storage | | Pause Sending | Pause the sending agent for a duration, then automatically resume | | Custom Action | Execute arbitrary logic, including publishing compensating messages | ## Oversized Message Detection Wolverine can detect messages that are too large to ever fit in a transport batch. When a message fails to be added to an *empty* batch (meaning even a single message exceeds the maximum batch size), Wolverine throws a `MessageTooLargeException`. 
You can handle this with a sending failure policy: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Immediately discard messages that are too large for the broker opts.SendingFailure .OnException<MessageTooLargeException>() .Discard(); }).StartAsync(); ``` This is currently supported for the Azure Service Bus transport, and will be extended to other transports over time. ## Pausing the Sender Similar to pausing a listener, you can pause the sending agent when a certain failure condition is detected. Unlike a permanent latch, pausing automatically resumes sending after the specified duration: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // On a catastrophic broker failure, pause sending for 30 seconds opts.SendingFailure .OnException() .PauseSending(30.Seconds()); // Or combine with another action: dead letter the message, then pause opts.SendingFailure .OnException() .MoveToErrorQueue().AndPauseSending(1.Minutes()); }).StartAsync(); ``` When paused, the sending agent drains any in-flight messages and stops accepting new ones. After the specified duration elapses, Wolverine automatically attempts to resume the sender. If the resume attempt fails (e.g., the broker is still unreachable), Wolverine falls back to its built-in circuit watcher which will keep retrying periodically. ## Custom Actions You can define custom logic to execute when a send failure occurs.
This is useful for publishing compensating events or logging to external systems: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.SendingFailure .OnException() .CustomAction(async (runtime, lifecycle, ex) => { // Publish a notification about the send failure await lifecycle.PublishAsync(new SendingFailed( lifecycle.Envelope!.Id, ex.Message )); }, "Notify on send failure"); }).StartAsync(); ``` ## Send Attempts Tracking Wolverine tracks sending attempts separately from handler processing attempts through the `Envelope.SendAttempts` property. This counter is incremented each time a sending failure policy is evaluated, and is used internally by the failure rule infrastructure to determine which action slot to execute (e.g., retry twice, then move to error queue on the third failure). ## How It Works Sending failure policies are evaluated *before* the existing circuit breaker logic in the sending agent. The evaluation flow is: 1. An outgoing message fails to send, producing an exception 2. `Envelope.SendAttempts` is incremented 3. Sending failure policies are evaluated against the exception and envelope 4. If a matching policy is found, its continuation is executed (discard, dead letter, retry, custom action, etc.) 5. If no policy matches, the existing retry/circuit breaker behavior proceeds as before This means sending failure policies are purely additive — they only change behavior when explicitly configured and when a rule matches. --- --- url: /guide/http/messaging.md --- # Sending Messages from HTTP Endpoints ::: tip You can also use `IMessageBus` directly from an MVC Controller or a Minimal API method, but you'll be responsible for the outbox mechanics that Wolverine takes care of for you in Wolverine message handlers or Wolverine http endpoints. 
::: So there's absolutely nothing stopping you from just using `IMessageBus` as an injected dependency to a Wolverine HTTP endpoint method to publish messages like this sample: ```cs // This would have an empty response and a 204 status code [WolverinePost("/spawn3")] public static async ValueTask SendViaMessageBus(IMessageBus bus) { await bus.PublishAsync(new HttpMessage1("foo")); await bus.PublishAsync(new HttpMessage2("bar")); } ``` snippet source | anchor But of course there's some other alternatives to directly using `IMessageBus` by utilizing Wolverine's [cascading messages](/guide/handlers/cascading) capability and the ability to customize how Wolverine handles return values. ## Sending or publishing directly from URL ::: tip It's an imperfect world, and the following code sample has to deserialize the incoming HTTP request to the message body, then publishes that directly to Wolverine which might turn around and serialize it back to a binary. ::: The following syntax shows a shorthand mechanism to map an incoming HTTP request message type to be immediately published to Wolverine without any need for additional Wolverine endpoints or MVC controllers. Note that this mechanism will return an empty body with a status code of 202 to denote future processing. 
```cs var builder = WebApplication.CreateBuilder(); builder.Host.UseWolverine(); var app = builder.Build(); app.MapWolverineEndpoints(opts => { opts.SendMessage("/orders/create", chain => { // You can make any necessary metadata configurations exactly // as you would for Minimal API endpoints with this syntax // to fine tune OpenAPI generation or security chain.Metadata.RequireAuthorization(); }); opts.SendMessage(HttpMethod.Put, "/orders/ship"); }); // and the rest of your application configuration and bootstrapping ``` snippet source | anchor On the other hand, the `PublishAsync()` method will send a message if there is a known subscriber and ignore the message if there is no subscriber (as explained in [sending or publishing Messages](/guide/messaging/message-bus#sending-or-publishing-messages)): ```cs var builder = WebApplication.CreateBuilder(); builder.Host.UseWolverine(); var app = builder.Build(); app.MapWolverineEndpoints(opts => { opts.PublishMessage("/orders/create", chain => { // You can make any necessary metadata configurations exactly // as you would for Minimal API endpoints with this syntax // to fine tune OpenAPI generation or security chain.Metadata.RequireAuthorization(); }); opts.PublishMessage(HttpMethod.Put, "/orders/ship"); }); // and the rest of your application configuration and bootstrapping ``` snippet source | anchor Middleware policies from Wolverine.Http are applicable to these endpoints, so for example, it's feasible to use the FluentValidation middleware for HTTP with these forwarding endpoints. ## Cascading Messages To utilize *cascaded messages* from HTTP endpoints (messages that are returned from the HTTP handler method), you have two main options. First, you can use Wolverine's `OutgoingMessages` collection as a tuple return value that makes it clear to Wolverine that this collection of objects is meant to be cascaded messages that are published upon the success of this HTTP endpoint.
Here's an example: ```cs // This would have a string response and a 200 status code [WolverinePost("/spawn")] public static (string, OutgoingMessages) Post(SpawnInput input) { var messages = new OutgoingMessages { new HttpMessage1(input.Name), new HttpMessage2(input.Name), new HttpMessage3(input.Name), new HttpMessage4(input.Name) }; return ("got it", messages); } ``` snippet source | anchor Otherwise, if you want to make it clearer from the signature of your HTTP handler method what messages are cascaded and there's no variance in the type of messages published, you can use additional tuple return values like this: ```cs // This would have an empty response and a 204 status code [EmptyResponse] [WolverinePost("/spawn2")] public static (HttpMessage1, HttpMessage2) Post() { return new(new HttpMessage1("foo"), new HttpMessage2("bar")); } ``` snippet source | anchor --- --- url: /guide/messaging/message-bus.md --- # Sending Messages with IMessageBus The main entry point into Wolverine to initiate any message handling or publishing is the `IMessageBus` service that is registered by Wolverine into your application's IoC container as a scoped service. Here's a brief sample of the most common operations you'll use with `IMessageBus` and Wolverine itself: There's also a second abstraction called `IMessageContext` that can be optionally consumed within message handlers to add some extra operations and metadata for the current message being processed in a handler: ```mermaid classDiagram class IMessageBus class IMessageContext IMessageContext ..> IMessageBus IMessageContext --> Envelope : Current Message ``` Here's a quick sample usage of the most common operations you'll use with Wolverine: ```cs public static async Task use_message_bus(IMessageBus bus) { // Execute this command message right now! 
And wait until // it's completed or acknowledged await bus.InvokeAsync(new DebitAccount(1111, 100)); // Execute this message right now, but wait for the declared response var status = await bus.InvokeAsync(new DebitAccount(1111, 250)); // Send the message expecting there to be at least one subscriber to be executed later, but // don't wait around await bus.SendAsync(new DebitAccount(1111, 250)); // Or instead, publish it to any interested subscribers, // but don't worry about it if there are actually any subscribers // This is probably best for raising event messages await bus.PublishAsync(new DebitAccount(1111, 300)); // Send a message to be sent or executed at a specific time await bus.ScheduleAsync(new DebitAccount(1111, 100), DateTimeOffset.UtcNow.AddDays(1)); // Or do the same, but this time express the time as a delay await bus.ScheduleAsync(new DebitAccount(1111, 225), 1.Days()); } ``` snippet source | anchor ::: tip The only practical difference between `SendAsync()` and `PublishAsync()` is that `SendAsync()` will assert that there is at least one subscriber for the message and throw an exception if there is not. ::: ## Accessing IMessageBus from IoC ::: tip This applies to `IHostedService` registrations that use Wolverine. ::: `IMessageBus` is registered as `Scoped` in your IoC container. In most common application scenarios you can just let your IoC container inject an instance into your services, but if you ever need to use `IMessageBus` outside of a scoped container (AspNetCore requests or Wolverine message handlers), you might run into trouble with the built in `ServiceProvider` not letting you resolve a `Scoped` service from the root container or injected into a `Singleton` scoped service. Not to worry! You have a couple options: 1. 
Switch to using [Lamar](https://jasperfx.github.io/lamar) as your IoC container that doesn't have the fussy, whiney limitations about scoping that `ServiceProvider` does and generally works a little better with Wolverine anyway 2. Follow the admittedly annoying steps in [this article about using `Scoped` services from `Singleton` services](https://learn.microsoft.com/en-us/dotnet/core/extensions/scoped-service) 3. Inject `IWolverineRuntime`, and build `new MessageBus(runtime)` instances at will. ## Invoking Message Execution To execute the message processing immediately and wait until it's finished, use this syntax: ```cs public static async Task invoke_locally(IMessageBus bus) { // Execute the message inline await bus.InvokeAsync(new Message1()); // Execute the message inline, but this time pass in // messaging metadata for Wolverine await bus.InvokeAsync(new Message1(), new DeliveryOptions { TenantId = "one", SagaId = "two" }.WithHeader("user.id", "admin")); } ``` snippet source | anchor If the `Message1` message has a local subscription, the message handler will be invoked in the calling thread. In this usage, the `InvokeAsync()` feature will utilize any registered [retry or retry with cooldown error handling rules](/guide/handlers/error-handling) for potentially transient errors. ::: tip While the syntax for a remote invocation of a message is identical to a local invocation, it's obviously much more expensive and slower to do so remotely. The Wolverine team recommends using remote invocations cautiously. ::: If the `Message1` message has a remote subscription (to a Rabbit MQ queue for example), Wolverine will send the message through its normal transport, but the thread will wait until Wolverine receives an acknowledgement message back from the remote service. In this case, Wolverine does enforce timeout conditions with a default of 5 seconds which can be overridden by the caller. 
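That per-invocation override might look like the following sketch. To be clear, this assumes the optional `timeout` parameter overload of `InvokeAsync()`, and reuses the `DebitAccount` message type from the samples above:

```cs
public static async Task invoke_with_longer_timeout(IMessageBus bus)
{
    // Assumption: InvokeAsync() exposes an optional timeout parameter.
    // Wait up to 10 seconds for the remote acknowledgement instead
    // of the default 5 seconds
    await bus.InvokeAsync(
        new DebitAccount(1111, 100),
        timeout: 10.Seconds());
}
```

If the acknowledgement does not arrive within the given window, the invocation fails with a timeout rather than waiting indefinitely.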
## Request/Reply ::: warning There was a breaking change in behavior for this functionality in Wolverine 3.0. The response type, the `T` in `InvokeAsync<T>()`, is **not** sent as a cascaded message if that type is the requested response type. You will have to explicitly send the response out through `IMessageBus.PublishAsync()` to force that to be sent out instead of just being the response. ::: Wolverine also has direct support for the [request/reply](https://www.enterpriseintegrationpatterns.com/RequestReply.html) pattern or really just mediating between your code and complex query handlers through the `IMessageBus.InvokeAsync<T>()` API. To make that concrete, let's assume you want to request the results of a mathematical operation as shown below in these message types and a corresponding message handler: ```cs public record Numbers(int X, int Y); public record Results(int Sum, int Product); public static class NumbersHandler { public static Results Handle(Numbers numbers) { return new Results(numbers.X + numbers.Y, numbers.X * numbers.Y); } } ``` snippet source | anchor Note in the sample above that the message handler that accepts `Numbers` returns a `Results` object. That return value is necessary for Wolverine to be able to use that handler in a request/reply operation.
Finally, to actually invoke the handler and retrieve a `Results` object, we can use the `IMessageBus.InvokeAsync<Results>(message)` API as shown below: ```cs public async Task invoke_math_operations(IMessageBus bus) { var results = await bus.InvokeAsync<Results>(new Numbers(3, 4)); // Same functionality, but this time we'll configure the active // tenant id and add a message header var results2 = await bus.InvokeAsync<Results>(new Numbers(5, 6), new DeliveryOptions { TenantId = "north.america" }.WithHeader("user.id", "professor")); } ``` snippet source | anchor Note that this API hides whether or not this operation is a local operation running on the same thread and invoking a local message handler or sending a message through to a remote endpoint and waiting for the response. The same timeout mechanics and performance concerns apply to this operation as the `InvokeAsync()` method described in the previous section. Note that if you execute the `Numbers` message from above with `InvokeAsync()`, the `Results` response will only be returned as the response and will not be published as a message. This was a breaking change in Wolverine 3.0. We think (hope) that this will be less confusing. You can explicitly override this behavior on a handler by handler basis with the `[AlwaysPublishResponse]` attribute as shown below: ```cs public class CreateItemCommandHandler { // Using this attribute will force Wolverine to also publish the ItemCreated event even if // this is called by IMessageBus.InvokeAsync() [AlwaysPublishResponse] public async Task<(ItemCreated, SecondItemCreated)> Handle(CreateItemCommand command, IDocumentSession session) { var item = new Item { Id = Guid.NewGuid(), Name = command.Name }; session.Store(item); return (new ItemCreated(item.Id, item.Name), new SecondItemCreated(item.Id, item.Name)); } } ``` snippet source | anchor ## Global Timeout Default for Request/Reply The default timeout for all remote invocations, request/reply, or send and wait messaging is 5 seconds.
You can override that on a case by case basis, or you can set a default global timeout value: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Set a global default timeout for remote // invocation or request/reply operations opts.DefaultRemoteInvocationTimeout = 10.Seconds(); }).StartAsync(); ``` snippet source | anchor ## Disabling Remote Request/Reply When you call `IMessageBus.InvokeAsync()` or `IMessageBus.InvokeAsync<T>()`, depending on whether Wolverine has a local message handler for the message type or has a configured subscription rule for the message type, Wolverine *might* be making a remote call through external messaging transports to execute that message. It's a perfectly valid use case to do the remote invocation, but if you don't want this to ever happen or catch a team by surprise when an operation fails, you can completely disable all remote request/reply usage through `InvokeAsync()` by changing this setting: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // This will disallow Wolverine from making remote calls // through IMessageBus.InvokeAsync() or InvokeAsync<T>() // Instead, Wolverine will throw an InvalidOperationException opts.EnableRemoteInvocation = false; }).StartAsync(); ``` snippet source | anchor ## Sending or Publishing Messages [Publish/Subscribe](https://docs.microsoft.com/en-us/azure/architecture/patterns/publisher-subscriber) is a messaging pattern where the senders of messages do not need to specifically know what the specific subscribers are for a given message. In this case, some kind of middleware or infrastructure is responsible for either allowing subscribers to express interest in what messages they need to receive or apply routing rules to send the published messages to the right places. Wolverine's messaging support was largely built to support the publish/subscribe messaging pattern.
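In Wolverine, those routing rules are declared during bootstrapping rather than at each call site. Here's a minimal sketch, assuming the Rabbit MQ transport and reusing the `InvoiceCreated` message type from the samples in this section (the connection string and queue name are placeholders):

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Placeholder connection string for a local broker
        opts.UseRabbitMq("amqp://localhost").AutoProvision();

        // Routing rule: every InvoiceCreated message published
        // in this process goes to the "invoices" queue
        opts.PublishMessage<InvoiceCreated>().ToRabbitQueue("invoices");
    }).StartAsync();
```

With a rule like that in place, calling code simply sends or publishes `InvoiceCreated` without ever naming the destination.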
To send a message with Wolverine, use the `IMessageBus` interface or the bigger `IMessageContext` interface that are registered in your application's IoC container. The sample below shows the most common usage: ```cs public ValueTask SendMessage(IMessageContext bus) { // In this case, we're sending an "InvoiceCreated" // message var @event = new InvoiceCreated { Time = DateTimeOffset.Now, Purchaser = "Guy Fieri", Amount = 112.34, Item = "Cookbook" }; return bus.SendAsync(@event); } ``` snippet source | anchor That by itself will send the `InvoiceCreated` message to whatever subscribers are interested in that message. The `SendAsync()` method will throw an exception if Wolverine doesn't know where to send the message. In other words, there has to be a subscriber of some sort for that message. On the other hand, the `PublishAsync()` method will send a message if there is a known subscriber and ignore the message if there is no subscriber: ```cs public ValueTask PublishMessage(IMessageContext bus) { // In this case, we're sending an "InvoiceCreated" // message var @event = new InvoiceCreated { Time = DateTimeOffset.Now, Purchaser = "Guy Fieri", Amount = 112.34, Item = "Cookbook" }; return bus.PublishAsync(@event); } ``` snippet source | anchor ## Scheduling Message Delivery or Execution ::: tip While Wolverine has an in memory scheduled delivery and execution model by default, that was only intended for delayed message execution retries. You will most likely want to either use a transport type that supports native scheduled delivery like the [Azure Service Bus transport](/guide/messaging/transports/azureservicebus/scheduled), or utilize the [database backed message persistence](/guide/durability/) to enable durable message scheduling. ::: Wolverine supports the concept of scheduled message delivery. Likewise, Wolverine also supports scheduled message execution if you're publishing to a [local queue](/guide/messaging/transports/local) within your current application. 
The actual mechanics for message scheduling will vary according to the endpoint destination that a message is being published to, including whether or not the scheduled message is durable and will outlive any unexpected or planned process terminations. ::: tip The built in outbox message scheduling was meant for relatively low numbers of messages, and was primarily meant for scheduled message retries. If you have an excessive number of scheduled messages, you may want to utilize the database backed queues in Wolverine which are optimized for much higher number of scheduled messages. ::: First off, your guide for understanding the scheduled message delivery mechanics in effective order: * If the destination endpoint has native message delivery capabilities, Wolverine uses that capability. Outbox mechanics still apply to when the outgoing message is released to the external endpoint's sender * If the destination endpoint is durable, meaning that it's enrolled in Wolverine's [transactional outbox](/guide/durability/), then Wolverine will store the scheduled messages in the outgoing envelope storage for later execution. In this case, Wolverine is polling for the ready to execute or deliver messages across all running Wolverine nodes. This option is durable in case of process exits. * In lieu of any other support, Wolverine has an in memory option that can do scheduled delivery or execution To schedule message delivery (scheduled execution really just means scheduling message publishing to a local queue), you actually have a couple different syntactical options. 
First, if you're directly using the `IMessageBus` interface, you can schedule a message with a delay using this extension method: ```cs public async Task schedule_send(IMessageContext context, Guid issueId) { var timeout = new WarnIfIssueIsStale { IssueId = issueId }; // Process the issue timeout logic 3 days from now await context.ScheduleAsync(timeout, 3.Days()); // The code above is short hand for this: await context.PublishAsync(timeout, new DeliveryOptions { ScheduleDelay = 3.Days() }); } ``` snippet source | anchor Or using an absolute time, with this overload of the extension method: ```cs public async Task schedule_send_at_5_tomorrow_afternoon(IMessageContext context, Guid issueId) { var timeout = new WarnIfIssueIsStale { IssueId = issueId }; var time = DateTime.Today.AddDays(1).AddHours(17); // Process the issue timeout at 5PM tomorrow // Do note that Wolverine quietly converts this // to universal time in storage await context.ScheduleAsync(timeout, time); } ``` snippet source | anchor Now, Wolverine tries really hard to enable you to use [pure functions](https://en.wikipedia.org/wiki/Pure_function) for as many message handlers as possible, so there's of course an option to schedule message delivery while still using [cascading messages](/guide/handlers/cascading) with the `DelayedFor()` and `ScheduledAt()` extension methods shown below: ```cs public static IEnumerable<object> Consume(Incoming incoming) { // Delay the message delivery by 10 minutes yield return new Message1().DelayedFor(10.Minutes()); // Schedule the message delivery for a certain time yield return new Message2().ScheduledAt(new DateTimeOffset(DateTime.Today.AddDays(2))); // Customize the message delivery however you please...
yield return new Message3() .WithDeliveryOptions(new DeliveryOptions().WithHeader("foo", "bar")); // Send back to the original sender yield return Respond.ToSender(new Message4()); } ``` snippet source | anchor Lastly, there's a special base class called `TimeoutMessage` that your message types can extend to add scheduling logic directly to the message itself for easy usage as a cascaded message. Here's an example message type: ```cs // This message will always be scheduled to be delivered after // a one minute delay public record OrderTimeout(string Id) : TimeoutMessage(1.Minutes()); ``` snippet source | anchor Which is used within this sample saga implementation: ```cs // This method would be called when a StartOrder message arrives // to start a new Order public static (Order, OrderTimeout) Start(StartOrder order, ILogger logger) { logger.LogInformation("Got a new order with id {Id}", order.OrderId); // creating a timeout message for the saga return (new Order{Id = order.OrderId}, new OrderTimeout(order.OrderId)); } ``` snippet source | anchor ## Customizing Message Delivery TODO -- more text here. NEW PAGE??? ```cs public static async Task SendMessagesWithDeliveryOptions(IMessageBus bus) { await bus.PublishAsync(new Message1(), new DeliveryOptions { AckRequested = true, ContentType = "text/xml", // you can do this, but I'm not sure why you'd want to override this DeliverBy = DateTimeOffset.Now.AddHours(1), // set a message expiration date DeliverWithin = 1.Hours(), // convenience method to set the deliver-by expiration date ScheduleDelay = 1.Hours(), // Send this in one hour, or... 
ScheduledTime = DateTimeOffset.Now.AddHours(1), ResponseType = typeof(Message2) // ask the receiver to send this message back to you if it can } // There's a chained fluent interface for adding header values too .WithHeader("tenant", "one")); } ``` snippet source | anchor ## Sending Raw Message Data In some particular cases, you may want to use Wolverine to send a message to another system (or the same system) when you already have the raw binary message data but not an actual .NET message object. An example use case is integrating scheduling libraries like Quartz.NET or Hangfire where you might be persisting a `byte[]` for a message to be sent via Wolverine at a certain time. Regardless of why you need to do this, Wolverine has a capability to do exactly this, but with the proviso that you will have to select the messaging endpoint first. To make this concrete, let's say that you've got this application set up: ```cs var builder = Host.CreateApplicationBuilder(); var connectionString = builder.Configuration.GetConnectionString("rabbit"); builder.UseWolverine(opts => { opts.UseRabbitMq(connectionString).AutoProvision(); opts.ListenToRabbitQueue("batches") // Pay attention to this. 
This helps Wolverine // "know" that if the message type isn't specified // on the incoming Rabbit MQ message to assume that // the .NET message type is RunBatch .DefaultIncomingMessage<RunBatch>() // The default endpoint name would be "batches" anyway, but still // good to show this if you want to use more readable names: .Named("batches"); opts.ListenToRabbitQueue("control"); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor And some more context for the subsequent sample usages: ```cs // Helper method for testing in Wolverine that // gives you a new IMessageBus instance without having to // muck around with scoped service providers IMessageBus bus = host.MessageBus(); // The raw message data, but pretend this was sourced from a database // table or some other non-Wolverine storage in your system byte[] messageData = Encoding.Default.GetBytes("{\"Name\": \"George Karlaftis\"}"); ``` snippet source | anchor The simplest possible usage is when you can assume that the receiving Wolverine endpoint or downstream system will "know" what the message type is without you having to tell it: ```cs // Simplest possible usage. This can work because the // listening endpoint has a configured default message // type await bus // choosing the destination endpoint by its name // Rabbit MQ queues use the queue name by default .EndpointFor("batches") .SendRawMessageAsync(messageData); // Same usage, but locate by the Wolverine Uri await bus // choosing the destination endpoint by its name // Rabbit MQ queues use the queue name by default .EndpointFor(new Uri("rabbitmq://queue/batches")) .SendRawMessageAsync(messageData); ``` snippet source | anchor Note that in this case, you'll have to help Wolverine out by explicitly choosing the destination for the raw message data by either using a `Uri` or the endpoint name.
You can also specify the .NET message type to help Wolverine create the necessary metadata for the outgoing message like so: ```cs await bus .EndpointFor(new Uri("rabbitmq://queue/control")) // In this case I helped Wolverine out by telling it // what the .NET message type is for this message .SendRawMessageAsync(messageData, typeof(RunBatch)); await bus .EndpointFor(new Uri("rabbitmq://queue/control")) // In this case I helped Wolverine out by telling it // what the .NET message type is for this message .SendRawMessageAsync(messageData, configure: env => { // Alternative usage to just work directly // with Wolverine's Envelope wrapper env.SetMessageType(typeof(RunBatch)); // And you can do *anything* with message metadata // using the Envelope wrapper // Use a little bit of caution with this though env.Headers["user"] = "jack"; }); ``` snippet source | anchor --- --- url: /guide/messaging/transports/azureservicebus/session-identifiers.md --- # Session Identifiers and FIFO Queues ::: info This functionality was introduced in Wolverine 1.6.0. ::: ::: warning Even if Wolverine isn't controlling the creation of the queues or subscriptions, you still need to tell Wolverine when sessions are required on any listening endpoint so that it can opt into session-compliant listeners. ::: You can now take advantage of [sessions and first-in, first-out queues in Azure Service Bus](https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-sessions) with Wolverine.
To tell Wolverine that an Azure Service Bus queue or subscription should require sessions, you have this syntax shown in an internal test: ```cs _host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAzureServiceBusTesting() .AutoProvision().AutoPurgeOnStartup(); opts.ListenToAzureServiceBusQueue("send_and_receive"); opts.PublishMessage().ToAzureServiceBusQueue("send_and_receive"); opts.ListenToAzureServiceBusQueue("fifo1") // Require session identifiers with this queue .RequireSessions() // This controls the Wolverine handling to force it to process // messages sequentially .Sequential(); opts.PublishMessage() .ToAzureServiceBusQueue("fifo1"); opts.PublishMessage().ToAzureServiceBusTopic("asb3").SendInline(); opts.ListenToAzureServiceBusSubscription("asb3") .FromTopic("asb3") // Require sessions on this subscription .RequireSessions(1) .ProcessInline(); opts.PublishMessage().ToAzureServiceBusTopic("asb4").BufferedInMemory(); opts.ListenToAzureServiceBusSubscription("asb4") .FromTopic("asb4") // Require sessions on this subscription .RequireSessions(1) .ProcessInline(); }).StartAsync(); ``` snippet source | anchor To publish messages to Azure Service Bus with a session id, you will, of course, need to supply the session id: ```cs // bus is an IMessageBus await bus.SendAsync(new AsbMessage3("Red"), new DeliveryOptions { GroupId = "2" }); await bus.SendAsync(new AsbMessage3("Green"), new DeliveryOptions { GroupId = "2" }); await bus.SendAsync(new AsbMessage3("Refactor"), new DeliveryOptions { GroupId = "2" }); ``` snippet source | anchor ::: info Wolverine is using the "group-id" nomenclature from the AMQP standard, but for Azure Service Bus, this is directly mapped to the `SessionId` property on the Azure Service Bus client internally.
::: You can also send messages with session identifiers through cascading messages as shown in a fake message handler below: ```cs public static IEnumerable<object> Handle(IncomingMessage message) { yield return new Message1().WithGroupId("one"); yield return new Message2().WithGroupId("one"); yield return new Message3().ScheduleToGroup("one", 5.Minutes()); // Long hand yield return new Message4().WithDeliveryOptions(new() { GroupId = "one" }); } ``` snippet source | anchor --- --- url: /guide/durability/sqlserver.md --- # Sql Server Integration Wolverine supports a Sql Server backed message persistence strategy and even a Sql Server backed messaging transport option. To get started, add the `WolverineFx.SqlServer` dependency to your application: ```bash dotnet add package WolverineFx.SqlServer ``` ## Message Persistence To enable Sql Server to serve as Wolverine's [transactional inbox and outbox](./), you just need to use the `WolverineOptions.PersistMessagesWithSqlServer()` extension method as shown below in a sample (that also uses Entity Framework Core): ```cs builder.Host.UseWolverine(opts => { // Setting up Sql Server-backed message storage // This requires a reference to Wolverine.SqlServer opts.PersistMessagesWithSqlServer(connectionString, "wolverine"); // Set up Entity Framework Core as the support // for Wolverine's transactional middleware opts.UseEntityFrameworkCoreTransactions(); // Enrolling all local queues into the // durable inbox/outbox processing opts.Policies.UseDurableLocalQueues(); }); ``` snippet source | anchor ## Sql Server Messaging Transport ::: info The Sql Server transport was originally conceived as a way to handle much higher volume through Wolverine's scheduled message functionality than is possible with local queues backed by the transactional inbox.
::: The `WolverineFx.SqlServer` NuGet package also contains a simple messaging transport that was mostly meant to be usable for teams who want asynchronous queueing without introducing more specialized infrastructure. To enable this transport in your code, use this option which *also* activates Sql Server backed message persistence: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var connectionString = builder.Configuration.GetConnectionString("sqlserver"); opts.UseSqlServerPersistenceAndTransport(connectionString, "myapp") // Tell Wolverine to build out all necessary queue or scheduled message // tables on demand as needed .AutoProvision() // Optional, and may be helpful in testing, but probably bad // in production! .AutoPurgeOnStartup(); // Use this extension method to create subscriber rules opts.PublishAllMessages().ToSqlServerQueue("outbound"); // Use this to set up queue listeners opts.ListenToSqlServerQueue("inbound") .CircuitBreaker(cb => { // fine tune the circuit breaker // policies here }) // Optionally specify how many messages to // fetch into the listener at any one time .MaximumMessagesToReceive(50); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor The Sql Server transport is strictly queue-based at this point. The queues are configured as durable by default, meaning that they are utilizing the transactional inbox and outbox. The Sql Server queues can also be buffered: ```cs opts.ListenToSqlServerQueue("sender").BufferedInMemory(); ``` snippet source | anchor Using this option just means that the Sql Server queues can be used for both sending and receiving with no integration with the transactional inbox or outbox. This is a little more performant, but less safe as messages could be lost if held in memory when the application shuts down unexpectedly. If you want to use Sql Server as a queueing mechanism between multiple applications, you'll need: 1.
To target the same Sql Server database, even if the two applications target different database schemas 2. To configure the `transportSchema` of the Sql Server transport to be the same between the two applications Here's an example from the Wolverine tests. Note the `transportSchema` configuration: ```cs _sender = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseSqlServerPersistenceAndTransport( Servers.SqlServerConnectionString, "sender", // If using Sql Server as a queue between multiple applications, // be sure to use the same transportSchema setting transportSchema:"transport") .AutoProvision() .AutoPurgeOnStartup(); opts.PublishMessage().ToSqlServerQueue("foobar"); opts.PublishMessage().ToSqlServerQueue("foobar"); opts.Policies.DisableConventionalLocalRouting(); opts.Discovery.DisableConventionalDiscovery(); }).StartAsync(); _listener = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseSqlServerPersistenceAndTransport(Servers.SqlServerConnectionString, "listener", transportSchema:"transport") .AutoProvision() .AutoPurgeOnStartup(); opts.PublishMessage().ToSqlServerQueue("foobar"); opts.ListenToSqlServerQueue("foobar"); opts.Discovery.DisableConventionalDiscovery() .IncludeType(); }).StartAsync(); ``` snippet source | anchor ## Lightweight Saga Usage See the details on [Lightweight Saga Storage](/guide/durability/sagas.html#lightweight-saga-storage) for more information. ## Multi-Tenancy You can utilize multi-tenancy through separate databases for each tenant with SQL Server and Wolverine. If utilizing the SQL Server transport with multi-tenancy through separate databases per tenant, the SQL Server queues will be built and monitored for each tenant database as well as any main, non-tenanted database. Also, Wolverine is able to utilize completely different message storage for its transactional inbox and outbox for each unique database including any main database.
Wolverine is able to activate additional durability agents for itself for any tenant databases added at runtime for tenancy modes that support dynamic discovery. To utilize Wolverine managed multi-tenancy, you have a couple of main options. The simplest is just using a statically configured set of tenant id to database connection mappings like so: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // First, you do have to have a "main" Sql Server database for messaging persistence // that will store information about running nodes, agents, and non-tenanted operations opts.PersistMessagesWithSqlServer(configuration.GetConnectionString("main")) // Add known tenants at bootstrapping time .RegisterStaticTenants(tenants => { // Add connection strings for the expected tenant ids tenants.Register("tenant1", configuration.GetConnectionString("tenant1")); tenants.Register("tenant2", configuration.GetConnectionString("tenant2")); tenants.Register("tenant3", configuration.GetConnectionString("tenant3")); }); // Just to show that you *can* use more than one DbContext opts.Services.AddDbContextWithWolverineManagedMultiTenancy((builder, connectionString, _) => { // You might have to set the migration assembly builder.UseSqlServer(connectionString.Value, b => b.MigrationsAssembly("MultiTenantedEfCoreWithSqlServer")); }, AutoCreate.CreateOrUpdate); opts.Services.AddDbContextWithWolverineManagedMultiTenancy((builder, connectionString, _) => { builder.UseSqlServer(connectionString.Value, b => b.MigrationsAssembly("MultiTenantedEfCoreWithSqlServer")); }, AutoCreate.CreateOrUpdate); }); ``` snippet source | anchor ::: warning Wolverine is not yet able to dynamically tear down tenants. That's long planned, and honestly probably only happens when an outside company sponsors that work.
::: If you need to add new tenants at runtime, have more tenants than can comfortably live in static configuration, or for plenty of other reasons, you can also use Wolverine's "master table tenancy" approach where tenant id to database connection string information is kept in a separate database table. Here's a possible usage of that model: ```cs var builder = Host.CreateApplicationBuilder(); var configuration = builder.Configuration; builder.UseWolverine(opts => { // You need a main database no matter what that will hold information about the Wolverine system itself // and... opts.PersistMessagesWithSqlServer(configuration.GetConnectionString("wolverine")) // ...also a table holding the tenant id to connection string information .UseMasterTableTenancy(seed => { // These registrations are 100% just to seed data for local development // Maybe you want to omit this during production? // Or do something programmatic by looping through data in the IConfiguration? seed.Register("tenant1", configuration.GetConnectionString("tenant1")); seed.Register("tenant2", configuration.GetConnectionString("tenant2")); seed.Register("tenant3", configuration.GetConnectionString("tenant3")); }); }); ``` snippet source | anchor ::: info Wolverine's "master table tenancy" model was unsurprisingly based on Marten's [Master Table Tenancy](https://martendb.io/configuration/multitenancy.html#master-table-tenancy-model) feature and even shares a little bit of supporting code now.
::: Here's some more important background on the multi-tenancy support: * Wolverine is spinning up a completely separate "durability agent" across the application to recover stranded messages in the transactional inbox and outbox, and that's done automatically for you * The lightweight saga support for Sql Server absolutely works with this model of multi-tenancy * Wolverine is able to manage all of its database tables including the tenant table itself (`wolverine_tenants`) across both the main database and all the tenant databases including schema migrations * Wolverine's transactional middleware is aware of the multi-tenancy and can connect to the correct database based on the `IMessageContext.TenantId` or utilize the tenant id detection in Wolverine.HTTP as well * You can "plug in" a custom implementation of `ITenantSource` to manage tenant id to connection string assignments in whatever way works for your deployed system --- --- url: /guide/messaging/transports/sqlserver.md --- # Sql Server Transport See the [Sql Server Transport](/guide/durability/sqlserver.html#sql-server-messaging-transport) section in the documentation on `WolverineFx.SqlServer`. --- --- url: /guide/durability/sqlite.md --- # SQLite Integration ::: info Wolverine can use the SQLite durability options with Entity Framework Core as a higher-level persistence framework. SQLite is a great choice for smaller applications, development/testing scenarios, or single-node deployments where you want durable messaging without the overhead of a separate database server. ::: Wolverine supports a SQLite backed message persistence strategy and even a SQLite backed messaging transport option.
To get started, add the `WolverineFx.Sqlite` dependency to your application: ```bash dotnet add package WolverineFx.Sqlite ``` ## Message Persistence To enable SQLite to serve as Wolverine's [transactional inbox and outbox](./), you just need to use the `WolverineOptions.PersistMessagesWithSqlite()` extension method as shown below in a sample: ```cs var builder = WebApplication.CreateBuilder(args); var connectionString = builder.Configuration.GetConnectionString("sqlite"); builder.Host.UseWolverine(opts => { // Setting up SQLite-backed message storage // This requires a reference to Wolverine.Sqlite opts.PersistMessagesWithSqlite(connectionString); // Other Wolverine configuration }); // This is rebuilding the persistent storage database schema on startup // and also clearing any persisted envelope state builder.Host.UseResourceSetupOnStartup(); var app = builder.Build(); // Other ASP.Net Core configuration... // Using JasperFx opens up command line utilities for managing // the message storage return await app.RunJasperFxCommands(args); ``` snippet source | anchor ### Connection String Examples Use file-based SQLite databases for Wolverine durability: ```cs // File-based database (recommended) opts.PersistMessagesWithSqlite("Data Source=wolverine.db"); // File-based database in an application data folder opts.PersistMessagesWithSqlite("Data Source=./data/wolverine.db"); ``` snippet source | anchor ::: warning In-memory SQLite connection strings are intentionally not supported for Wolverine durability. Use file-backed SQLite databases instead. ::: ## SQLite Messaging Transport The `WolverineFx.Sqlite` NuGet package also contains a simple messaging transport that was mostly meant to be usable for teams who want asynchronous queueing without introducing more specialized infrastructure.
To enable this transport in your code, use this option which *also* activates SQLite backed message persistence: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var connectionString = builder.Configuration.GetConnectionString("sqlite"); opts.UseSqlitePersistenceAndTransport(connectionString) // Tell Wolverine to build out all necessary queue or scheduled message // tables on demand as needed .AutoProvision() // Optional, and may be helpful in testing, but probably bad // in production! .AutoPurgeOnStartup(); // Use this extension method to create subscriber rules opts.PublishAllMessages().ToSqliteQueue("outbound"); // Use this to set up queue listeners opts.ListenToSqliteQueue("inbound") // Optionally specify how many messages to // fetch into the listener at any one time .MaximumMessagesToReceive(50); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor The SQLite transport is strictly queue-based at this point. The queues are configured as durable by default, meaning that they are utilizing the transactional inbox and outbox. The SQLite queues can also be buffered: ```cs opts.ListenToSqliteQueue("sender").BufferedInMemory(); ``` snippet source | anchor Using this option just means that the SQLite queues can be used for both sending and receiving with no integration with the transactional inbox or outbox. This is a little more performant, but less safe as messages could be lost if held in memory when the application shuts down unexpectedly. ### Polling Wolverine has a number of internal polling operations, and any SQLite queues will be polled on a configured interval.
The default polling interval is set in the `DurabilitySettings` class and can be configured at runtime as below: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Health check message queue/dequeue opts.Durability.HealthCheckPollingTime = TimeSpan.FromSeconds(10); // Node reassignment checks opts.Durability.NodeReassignmentPollingTime = TimeSpan.FromSeconds(5); // User queue poll frequency opts.Durability.ScheduledJobPollingTime = TimeSpan.FromSeconds(5); }); ``` snippet source | anchor ::: info Control queue Wolverine has an internal control queue (`dbcontrol`) used for internal operations. This queue is hardcoded to poll every second, and that interval cannot be changed, in order to ensure the stability of the application. ::: ## Lightweight Saga Usage See the details on [Lightweight Saga Storage](/guide/durability/sagas.html#lightweight-saga-storage) for more information. SQLite saga storage uses a `TEXT` column (JSON serialized) for saga state and supports optimistic concurrency with version tracking. ## SQLite-Specific Considerations ### Advisory Locks SQLite does not have native advisory locks like PostgreSQL. Wolverine uses a table-based locking mechanism (`wolverine_locks` table) to emulate advisory locks for distributed locking. Locks are acquired by inserting rows and released by deleting them. ### Data Types The SQLite persistence uses the following data type mappings: | Purpose | SQLite Type | |---------|-------------| | Message body | `BLOB` | | Saga state | `TEXT` (JSON) | | Timestamps | `TEXT` (stored as `datetime('now')` UTC format) | | GUIDs | `TEXT` | | IDs (auto-increment) | `INTEGER` | ### Schema Names SQLite only supports the `main` schema name at this time. Unlike PostgreSQL or SQL Server, SQLite does not have a traditional schema system for Wolverine queue and envelope tables.
Accordingly, `UseSqlitePersistenceAndTransport()` intentionally takes only a connection string, with no schema argument: ```cs opts.UseSqlitePersistenceAndTransport("Data Source=wolverine.db"); ``` snippet source | anchor ### Multi-Tenancy SQLite multi-tenancy is supported by mapping tenant ids to separate SQLite files (connection strings). You can do this with static configuration: ```cs opts.PersistMessagesWithSqlite("Data Source=main.db") .RegisterStaticTenants(tenants => { tenants.Register("red", "Data Source=red.db"); tenants.Register("blue", "Data Source=blue.db"); }) .EnableMessageTransport(x => x.AutoProvision()); opts.ListenToSqliteQueue("incoming").UseDurableInbox(); ``` snippet source | anchor Or with Wolverine-managed master-table tenancy for dynamic tenant onboarding: ```cs opts.PersistMessagesWithSqlite("Data Source=main.db") .UseMasterTableTenancy(seed => { seed.Register("red", "Data Source=red.db"); seed.Register("blue", "Data Source=blue.db"); }) .EnableMessageTransport(x => x.AutoProvision()); ``` snippet source | anchor For tenant-specific sends, set `DeliveryOptions.TenantId`: ```cs await host.SendAsync(new SampleTenantMessage("hello"), new DeliveryOptions { TenantId = "red" }); ``` snippet source | anchor When the transport is enabled, each tenant database gets its own durable queue tables and scheduled polling. ### Concurrency SQLite uses file-level locking, which means only one writer can access the database at a time. For applications with high write throughput, consider using PostgreSQL or SQL Server instead. However, for moderate workloads and single-node deployments, SQLite performs well and eliminates the need for external database infrastructure. ### Compatibility The SQLite persistence is compatible with any platform supported by [Microsoft.Data.Sqlite](https://learn.microsoft.com/en-us/dotnet/standard/data/sqlite/). The implementation uses the Weasel.Sqlite library for schema management.
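The table-based advisory lock emulation described above can be sketched in a few lines: acquire a lock by inserting a uniquely-keyed row, release it by deleting the row, and let the unique constraint arbitrate contention. This is a conceptual illustration only (the table and function names here are made up, not Wolverine's actual schema, which also has to handle lock expiry and node ownership):

```python
import sqlite3

# In-memory database purely for illustration; Wolverine's real storage is file-backed
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_locks (lock_id INTEGER PRIMARY KEY)")

def try_acquire(lock_id):
    """Acquire by inserting a row; the PRIMARY KEY constraint rejects a second holder."""
    try:
        conn.execute("INSERT INTO app_locks (lock_id) VALUES (?)", (lock_id,))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False  # another caller already holds this lock

def release(lock_id):
    """Release by deleting the row, making the lock available again."""
    conn.execute("DELETE FROM app_locks WHERE lock_id = ?", (lock_id,))
    conn.commit()

assert try_acquire(42)      # first caller wins
assert not try_acquire(42)  # second caller is refused
release(42)
assert try_acquire(42)      # lock is available again after release
```

The same insert/delete pattern works against any database that enforces unique constraints, which is what makes it a portable substitute for PostgreSQL's native advisory locks.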
--- --- url: /guide/messaging/transports/sqlite.md --- # SQLite Transport See the [SQLite Transport](/guide/durability/sqlite#sqlite-messaging-transport) documentation in the [SQLite Integration](/guide/durability/sqlite) topic. --- --- url: /guide/handlers/sticky.md --- # Sticky Handler to Endpoint Assignments ::: info The original behavior of Wolverine and the way it combines all handlers for a given message type into one logical transaction was an explicit design choice in a predecessor tool named *FubuTransportation* and was carried through into *Jasper* and finally into today's Wolverine. That decision absolutely made sense in the context of the original system that *FubuTransportation* was designed for, but maybe not so much today. Such is software development. ::: By default, Wolverine will combine all the discovered handlers for a certain message type in one logical transaction. Another Wolverine default behavior is that there is no explicit mapping of handler types to listening endpoints or local endpoints, which was a very conscious decision to simplify Wolverine usage compared to older .NET messaging frameworks. At other times though, you may want the same message (usually a logical "event" message) to be handled separately by two or more distinct message handlers and even be routed to separate local queues. In another instance, you may want to have separate message handlers apply based on where the message is received from. In all cases, this is what the "sticky handler" functionality is meant to accomplish. Let's start with a simple example and say that you have a message type called `StickyMessage` that when published should be handled completely separately by two different handlers performing two different logical operations using the same message as an input. 
```cs public class StickyMessage; ``` snippet source | anchor And we're going to handle that `StickyMessage` message separately with two different handler types: ```cs [StickyHandler("blue")] public static class BlueStickyHandler { public static StickyMessageResponse Handle(StickyMessage message, Envelope envelope) { return new StickyMessageResponse("blue", message, envelope.Destination); } } [StickyHandler("green")] public static class GreenStickyHandler { public static StickyMessageResponse Handle(StickyMessage message, Envelope envelope) { return new StickyMessageResponse("green", message, envelope.Destination); } } ``` snippet source | anchor ::: tip `[StickyHandler]` can be used on either the handler class or the handler method ::: I'd ask you to notice the usage of the `[StickyHandler]` attribute on the two message handlers. In this case, Wolverine sees these attributes on the handler types and "knows" to only execute that message handler on the endpoint named in the attribute. The endpoint resolution rules are: 1. Try to find an existing endpoint with the same name and "stick" the handler type to that endpoint 2. If no endpoint with that name exists, create a new local queue endpoint *and* create a routing rule for that message type to that local queue As an example of an explicitly named endpoint, see this sample: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // I'm explicitly configuring an incoming TCP // endpoint named "blue" opts.ListenAtPort(4000).Named("blue"); }).StartAsync(); ``` snippet source | anchor With all of that being said, the end result of the two `StickyMessage` handlers that are marked with `[StickyHandler]` is that when a `StickyMessage` message is published in the system, it will be: 1. Published to a local queue named "green" where it will be handled by the `GreenStickyHandler` handler 2.
Published to a local queue named "blue" where it will be handled by the `BlueStickyHandler` handler In both cases, the message is tracked separately in terms of queueing, failures, and retries. ::: tip If there are multiple handlers for the same message and only some of the handlers have explicit "sticky" rules, the handlers with no configured "sticky" rules will be executed if that message is published to any other endpoint. Call these the "leftovers" ::: It's also possible -- and maybe advantageous -- to define the stickiness with the fluent interface directly against the listening endpoints. In the case of wanting to handle external messages separately depending on where they come from, you can tag the handler stickiness to an endpoint like so: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.ListenAtPort(4000) // This handler type should be executed at this listening // endpoint, but other handlers for the same message type // should not .AddStickyHandler(typeof(GreenStickyHandler)); opts.ListenAtPort(5000) // Likewise, the same StickyMessage received at this // endpoint should be handled by BlueStickyHandler .AddStickyHandler(typeof(BlueStickyHandler)); }).StartAsync(); ``` snippet source | anchor ## Configuring Local Queues There is a world of reasons why you might want to fine tune the behavior of local queues (sequential ordering? parallelism? circuit breakers?), but the "sticky" handler usage did make it a little harder to configure the exact right local queue for a sticky handler. To alleviate that, see the [IConfigureLocalQueue](/guide/messaging/transports/local.html#using-iconfigurelocalqueue-to-configure-local-queues) usage. --- --- url: /guide/durability/efcore/operations.md --- # Storage Operations with EF Core Just know that Wolverine completely supports the concept of [Storage Operations](/guide/handlers/side-effects.html#storage-side-effects) for EF Core.
Assuming you have an EF Core `DbContext` type like this registered in your system: ```cs public class TodoDbContext : DbContext { public TodoDbContext(DbContextOptions<TodoDbContext> options) : base(options) { } public DbSet<Todo> Todos { get; set; } protected override void OnModelCreating(ModelBuilder modelBuilder) { modelBuilder.Entity<Todo>(map => { map.ToTable("todos", "todo_app"); map.HasKey(x => x.Id); map.Property(x => x.Name); map.Property(x => x.IsComplete).HasColumnName("is_complete"); }); } } ``` snippet source | anchor You can use storage operations in Wolverine message handlers or HTTP endpoints like these samples from the Wolverine test suite: ```cs public static class TodoHandler { public static Insert<Todo> Handle(CreateTodo command) => Storage.Insert(new Todo { Id = command.Id, Name = command.Name }); public static Store<Todo> Handle(CreateTodo2 command) => Storage.Store(new Todo { Id = command.Id, Name = command.Name }); // Use "Id" as the default member public static Update<Todo> Handle( // The first argument is always the incoming message RenameTodo command, // By using this attribute, we're telling Wolverine // to load the Todo entity from the configured // persistence of the app using a member on the // incoming message type [Entity] Todo todo) { // Do your actual business logic todo.Name = command.Name; // Tell Wolverine that you want this entity // updated in persistence return Storage.Update(todo); } // Use "TodoId" as the default member public static Update<Todo> Handle(RenameTodo2 command, [Entity] Todo todo) { todo.Name = command.Name; return Storage.Update(todo); } // Use the explicit member public static Update<Todo> Handle(RenameTodo3 command, [Entity("Identity")] Todo todo) { todo.Name = command.Name; return Storage.Update(todo); } public static Delete<Todo> Handle(DeleteTodo command, [Entity("Identity")] Todo todo) { return Storage.Delete(todo); } public static IStorageAction<Todo> Handle(AlterTodo command, [Entity("Identity")] Todo todo) { switch (command.Action) { case StorageAction.Delete: return
Storage.Delete(todo); case StorageAction.Update: todo.Name = command.Name; return Storage.Update(todo); case StorageAction.Store: todo.Name = command.Name; return Storage.Store(todo); default: return Storage.Nothing<Todo>(); } } public static IStorageAction<Todo> Handle(MaybeInsertTodo command) { if (command.ShouldInsert) { return Storage.Insert(new Todo { Id = command.Id, Name = command.Name }); } return Storage.Nothing<Todo>(); } public static Insert<Todo>? Handle(ReturnNullInsert command) => null; public static IStorageAction<Todo>? Handle(ReturnNullStorageAction command) => null; public static IStorageAction<Todo> Handle(CompleteTodo command, [Entity] Todo todo) { if (todo == null) throw new ArgumentNullException(nameof(todo)); todo.IsComplete = true; return Storage.Update(todo); } public static IStorageAction<Todo> Handle(MaybeCompleteTodo command, [Entity(Required = false)] Todo? todo) { if (todo == null) return Storage.Nothing<Todo>(); todo.IsComplete = true; return Storage.Update(todo); } } ``` snippet source | anchor ::: warning When a handler returns an `IStorageAction<T>`, Wolverine automatically applies [transactional middleware](/guide/durability/marten/transactional-middleware) for that handler -- even if the handler is not explicitly decorated with `[Transactional]` and `AutoApplyTransactions()` is not configured. This behavior is required because Wolverine needs to automatically call `SaveChangesAsync()` on the EF Core `DbContext` to persist the storage operation, which should be done within a single transaction together with publication of messages to the outbox/inbox. ::: ## \[Entity] Wolverine also supports the usage of the `[Entity]` attribute to load entity data by its identity with EF Core. As you'd expect, Wolverine can "find" the right EF Core `DbContext` type for the entity type through IoC service registrations. The loaded EF Core entity does not include related entities.
For more information on the usage of this attribute see [Automatically loading entities to method parameters](/guide/handlers/persistence#automatically-loading-entities-to-method-parameters). --- --- url: /introduction/support-policy.md --- # Support Policy Community support (via our GitHub & Discord) is offered for the current major version of Wolverine. The previous major version is supported for high-priority bug and security patches for up to 6 months after a new major version is released. Customers with a [JasperFx Support Plan](https://jasperfx.net/support-plans/) are granted priority for questions, feature requests and bug fixes. Support for previous versions of Wolverine is also available under these plans. | Wolverine Version | End-of-Life | Status | Support Options | | ----------------- | :---------: | :-----------: | :----------------: | | 4 | Current | Current | Community/JasperFx | | 3 | Dec 2025 | P1 Fixes Only | Community/JasperFx | | 2 | May 2025 | EoL | JasperFx | | 1 | Sep 2024 | EoL | JasperFx | --- --- url: /guide/testing.md --- # Test Automation Support The Wolverine team absolutely believes in Test Driven Development and the importance of strong test automation strategies as a key part of sustainable development. To that end, Wolverine's conceptual design from the very beginning (Wolverine started as "Jasper" in 2015!) has been to maximize testability by trying to decouple application code from framework or other infrastructure concerns. See Jeremy's blog post [How Wolverine allows for easier testing](https://jeremydmiller.com/2022/12/13/how-wolverine-allows-for-easier-testing/) for an introduction to unit testing Wolverine message handlers. Also see [Wolverine Best Practices](/introduction/best-practices) for other helpful tips.
And this video:

@[youtube](ODSAGAllsxw)

## Integration Testing with Tracked Sessions

::: tip
This is the recommended approach for integration testing against Wolverine message handlers whenever there are any outgoing messages or asynchronous behavior as a result of the messages being handled in your test scenario.
:::

::: info
As of Wolverine 3.13, the same extension methods shown here are available off of `IServiceProvider` in addition to the original support off of `IHost`, in case you write integration tests by spinning up just an IoC container rather than the full `IHost` in your test harnesses.
:::

So far we've mostly focused on unit testing Wolverine handler methods individually, without any direct coupling to infrastructure. That's a great start, but eventually you're also going to need some integration tests, and invoking or publishing messages is a very logical entry point for integration testing.

First, why integration testing with Wolverine?

1. Wolverine is probably most effective when you're heavily leveraging middleware or Wolverine conventions, and only an integration test is really going to exercise the entire "stack"
2. You may frequently want to test the interaction between your application code and infrastructure concerns like databases
3. Handling messages will frequently spawn other messages that will be executed on other threads or in other processes, and you'll frequently want to write bigger tests that span across messages

::: tip
I'm not getting into it here, but remember that `IHost` is relatively expensive to build, so you'll probably want it cached between tests. Or at least be aware that it's expensive.
:::

This sample was taken from [an introductory blog post](https://jeremydmiller.com/2022/12/12/introducing-wolverine-for-effective-server-side-net-development/) that may give you some additional context for what's happening here.
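On the tip above about caching the `IHost` between tests: with xUnit, one common approach is a shared collection fixture. Here's a minimal sketch, where the fixture and collection names are arbitrary placeholders and the bare `UseWolverine()` call stands in for your real application bootstrapping:

```cs
public class AppFixture : IAsyncLifetime
{
    public IHost AppHost { get; private set; }

    public async Task InitializeAsync()
    {
        // Build the (relatively expensive) IHost once per test collection
        AppHost = await Host.CreateDefaultBuilder()
            .UseWolverine() // your real application setup goes here
            .StartAsync();
    }

    public async Task DisposeAsync() => await AppHost.StopAsync();
}

[CollectionDefinition("app")]
public class AppCollection : ICollectionFixture<AppFixture> { }

[Collection("app")]
public class DebitAccountTests
{
    private readonly AppFixture _fixture;

    public DebitAccountTests(AppFixture fixture) => _fixture = fixture;

    // Tests here use _fixture.AppHost with the tracked session
    // extension methods instead of building a new host each time
}
```

Every test class in the "app" collection shares the one `IHost`, which keeps the cold start cost to a single payment per test run.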
Going back to our sample message handler for `DebitAccount` in the previous sections, let's say that we want an integration test that spans the middleware that looks up the `Account` data, the Fluent Validation middleware, [Marten](https://martendb.io) usage, and even any cascading messages that are also handled in process as a result of the original message.

One of the big challenges with automated testing against asynchronous processing is *knowing* when the "act" part of the "arrange/act/assert" cycle is complete and it's safe to start making assertions. Anyone who has had the misfortune to work with complicated Selenium test suites is very aware of this challenge. Not to fear though, Wolverine comes out of the box with the concept of "tracked sessions" that you can use to write predictable and reliable integration tests.

::: warning
I'm omitting the code necessary to set up system state first just to concentrate on the Wolverine mechanics here.
:::

To start with tracked sessions, let's assume that you have an `IHost` for your Wolverine application in your testing harness. Assuming you do, you can start a tracked session using the `IHost.InvokeMessageAndWaitAsync()` extension method in Wolverine like this:

```cs
public async Task using_tracked_sessions()
{
    // The point here is just that you somehow have
    // an IHost for your application
    using var host = await Host.CreateDefaultBuilder()
        .UseWolverine().StartAsync();

    var debitAccount = new DebitAccount(111, 300);
    var session = await host.InvokeMessageAndWaitAsync(debitAccount);

    var overdrawn = session.Sent.SingleMessage<AccountOverdrawn>();
    overdrawn.AccountId.ShouldBe(debitAccount.AccountId);
}
```

snippet source | anchor

The tracked session mechanism utilizes Wolverine's internal instrumentation to "know" when all the outstanding work in the system is complete.
In this case, if the `AccountOverdrawn` message spawned from `DebitAccount` is handled locally, the `InvokeMessageAndWaitAsync()` call will not return until the other messages that are routed locally are finished processing or the test times out. The tracked session will also throw an `AggregateException` with any exceptions encountered by any message handled within the tracked activity.

Note that you'll probably *mostly* *invoke* messages in these tests, but there are additional extension methods on `IHost` for the other `IMessageBus` operations.

:::info
A tracked session includes only the messages sent, published, or scheduled during the session itself. Messages sent before the session started are not tracked.
:::

Finally, there are some more advanced options in tracked sessions you may find useful as shown below:

```cs
public async Task using_tracked_sessions_advanced(IHost otherWolverineSystem)
{
    // The point here is just that you somehow have
    // an IHost for your application
    using var host = await Host.CreateDefaultBuilder()
        .UseWolverine().StartAsync();

    var debitAccount = new DebitAccount(111, 300);
    var session = await host

        // Start defining a tracked session
        .TrackActivity()

        // Override the timeout period for longer tests
        .Timeout(1.Minutes())

        // Be careful with this one!
        // This makes Wolverine wait on some indication
        // that messages sent externally are completed
        .IncludeExternalTransports()

        // Make the tracked session span across an IHost for another process.
        // May not be super useful to the average user, but it's been crucial
        // to test Wolverine itself
        .AlsoTrack(otherWolverineSystem)

        // This is actually helpful if you are testing for error handling
        // functionality in your system
        .DoNotAssertOnExceptionsDetected()

        // Hey, just in case failure acks are getting into your testing session
        // and you do not care about them for the tests, tell Wolverine to ignore them
        .IgnoreFailureAcks()

        // Again, this is testing against processes, with another IHost
        .WaitForMessageToBeReceivedAt(otherWolverineSystem)

        // Wolverine does this automatically, but it's sometimes
        // helpful to tell Wolverine to not track certain message
        // types during testing, especially messages originating from
        // some kind of polling operation
        .IgnoreMessageType()

        // Another option
        .IgnoreMessagesMatchingType(type => type.CanBeCastTo())

        // There are many other options as well
        .InvokeMessageAndWaitAsync(debitAccount);

    var overdrawn = session.Sent.SingleMessage<AccountOverdrawn>();
    overdrawn.AccountId.ShouldBe(debitAccount.AccountId);
}
```

snippet source | anchor

The samples shown above include `Sent` message records, but the `TrackedSession` object exposes more properties. Matching the `MessageEventType` enum, you can access a corresponding record collection for each event type on the `TrackedSession` object:

```cs
public enum MessageEventType
{
    Received,
    Sent,
    ExecutionStarted,
    ExecutionFinished,
    MessageSucceeded,
    MessageFailed,
    NoHandlers,
    NoRoutes,
    MovedToErrorQueue,
    Requeued,
    Scheduled,
    Discarded,
    Status
}
```

snippet source | anchor

Let's consider testing a Wolverine application that publishes a message when a change to a watched folder is detected. The part we want to test is that a message is actually published when a file is added to the watched folder.
We can use the `TrackActivity` method to start a tracked session and then use the `ExecuteAndWaitAsync` method to wait for the message to be published once the file change has happened.

```cs
public record FileAdded(string FileName);

public class FileAddedHandler
{
    public Task Handle(FileAdded message) => Task.CompletedTask;
}

public class RandomFileChange
{
    private readonly IMessageBus _messageBus;

    public RandomFileChange(IMessageBus messageBus) => _messageBus = messageBus;

    public async Task SimulateRandomFileChange()
    {
        // Delay the task by a random number of milliseconds.
        // Here would be your FileSystemWatcher / IFileProvider
        await Task.Delay(TimeSpan.FromMilliseconds(new Random().Next(100, 1000)));
        var randomFileName = Path.GetRandomFileName();
        await _messageBus.SendAsync(new FileAdded(randomFileName));
    }
}

public class When_message_is_sent : IAsyncLifetime
{
    private IHost _host;

    public async Task InitializeAsync()
    {
        var hostBuilder = Host.CreateDefaultBuilder();
        hostBuilder.ConfigureServices(services => { services.AddSingleton<RandomFileChange>(); });
        hostBuilder.UseWolverine();
        _host = await hostBuilder.StartAsync();
    }

    [Fact]
    public async Task should_be_in_session_using_service_provider()
    {
        var randomFileChange = _host.Services.GetRequiredService<RandomFileChange>();

        var session = await _host.Services
            .TrackActivity()
            .Timeout(2.Seconds())
            .ExecuteAndWaitAsync(async _ => await randomFileChange.SimulateRandomFileChange());

        session.Sent.AllMessages().Count().ShouldBe(1);
        session.Sent.AllMessages().First().ShouldBeOfType<FileAdded>();
    }

    [Fact]
    public async Task should_be_in_session()
    {
        var randomFileChange = _host.Services.GetRequiredService<RandomFileChange>();

        var session = await _host
            .TrackActivity()
            .Timeout(2.Seconds())
            .ExecuteAndWaitAsync(async _ => await randomFileChange.SimulateRandomFileChange());

        session.Sent.AllMessages().Count().ShouldBe(1);
        session.Sent.AllMessages().First().ShouldBeOfType<FileAdded>();
    }

    public async Task DisposeAsync() => await _host.StopAsync();
}
```

snippet source | anchor

As you can see, we just have to start our application, attach a tracked session to it, and then wait for the message to be published. This way we can test the whole flow, from the file change to the message publication, in a single test.

## Dealing with Scheduled Messages

As I'm sure you can imagine, [scheduled local execution](/guide/messaging/transports/local.html#scheduling-local-execution) and [scheduled message delivery](/guide/messaging/message-bus.html#scheduling-message-delivery-or-execution) can easily be confusing in tests and have occasionally caused trouble for Wolverine users using the tracked session functionality. Wolverine now tracks scheduled messages separately under an `ITrackedSession.Scheduled` collection, and any message that is scheduled for later execution or delivery is automatically interpreted as "complete" by the tracked session.

You can also force the tracked session to immediately "replay" any scheduled messages tracked in the original session by:

1. Invoking any messages that were scheduled for local execution
2. Sending any messages that were scheduled for delivery to the original destination

and returning a brand new tracked session for the "replay." Here's an example from our test suite.
First though, here are the message handlers in question (remember, this is rigged up for testing):

```cs
public static DeliveryMessage Handle(TriggerScheduledMessage message)
{
    // This causes a message to be scheduled for delivery 5 minutes from now
    return new ScheduledMessage(message.Text).DelayedFor(5.Minutes());
}

public static void Handle(ScheduledMessage message) => Debug.WriteLine("Got scheduled message");
```

snippet source | anchor

And the test that exercises this functionality:

```cs
// In this case we're just executing everything in memory
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PersistMessagesWithPostgresql(Servers.PostgresConnectionString, "wolverine");
        opts.Policies.UseDurableInboxOnAllListeners();
    }).StartAsync();

// Should finish cleanly
var tracked = await host.SendMessageAndWaitAsync(new TriggerScheduledMessage("Chiefs"));

// Here's how you can query against the messages that were detected to be scheduled
tracked.Scheduled.SingleMessage<ScheduledMessage>()
    .Text.ShouldBe("Chiefs");

// This API will try to play any scheduled messages immediately
var replayed = await tracked.PlayScheduledMessagesAsync(10.Seconds());
replayed.Executed.SingleMessage<ScheduledMessage>().Text.ShouldBe("Chiefs");
```

snippet source | anchor

And now, a slightly more complicated test that exercises the replay of a message scheduled to go to a completely separate application:

```cs
var port1 = PortFinder.GetAvailablePort();
var port2 = PortFinder.GetAvailablePort();

using var sender = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PublishMessage<ScheduledMessage>().ToPort(port2);
        opts.ListenAtPort(port1);
    }).StartAsync();

using var receiver = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.ListenAtPort(port2);
    }).StartAsync();

// Should finish cleanly
var tracked = await sender
    .TrackActivity()
    .IncludeExternalTransports()
    .AlsoTrack(receiver)
    .InvokeMessageAndWaitAsync(new TriggerScheduledMessage("Broncos"));

tracked.Scheduled.SingleMessage<ScheduledMessage>()
    .Text.ShouldBe("Broncos");

var replayed = await tracked.PlayScheduledMessagesAsync(10.Seconds());
replayed.Executed.SingleMessage<ScheduledMessage>().Text.ShouldBe("Broncos");
```

snippet source | anchor

## Extension Methods for Outgoing Messages

Your Wolverine message handlers will often need to publish, send, or schedule other messages as part of their work. At the unit test level you'll frequently want to validate the *decision* about whether or not to send a message. To aid in those assertions, Wolverine includes out of the box some testing helper extension methods on `IEnumerable<object>` inspired by the [Shouldly](https://github.com/shouldly/shouldly) project.

For an example, let's look at this message handler for applying a debit to a bank account that will use [cascading messages](/guide/handlers/cascading) to raise a variable number of additional messages:

```cs
[Transactional]
public static IEnumerable<object> Handle(
    DebitAccount command,
    Account account,
    IDocumentSession session)
{
    account.Balance -= command.Amount;

    // This just marks the account as changed, but
    // doesn't actually commit changes to the database
    // yet.
    // That actually matters as I hopefully explain
    session.Store(account);

    // Conditionally trigger other, cascading messages
    if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
    {
        yield return new LowBalanceDetected(account.Id)
            .WithDeliveryOptions(new DeliveryOptions { ScheduleDelay = 1.Hours() });
    }
    else if (account.Balance < 0)
    {
        yield return new AccountOverdrawn(account.Id);

        // Give the customer 10 days to deal with the overdrawn account
        yield return new EnforceAccountOverdrawnDeadline(account.Id);
    }

    yield return new AccountUpdated(account.Id, account.Balance);
}
```

snippet source | anchor

The testing extensions can be seen in action in the following test:

```cs
[Fact]
public void handle_a_debit_that_makes_the_account_have_a_low_balance()
{
    var account = new Account
    {
        Balance = 1000,
        MinimumThreshold = 200,
        Id = 1111
    };

    // Let's otherwise ignore this for now, but this is using NSubstitute
    var session = Substitute.For<IDocumentSession>();

    var message = new DebitAccount(account.Id, 801);
    var messages = AccountHandler.Handle(message, account, session).ToList();

    // Now, verify that only the expected messages are published:

    // One message of type AccountUpdated
    messages
        .ShouldHaveMessageOfType<AccountUpdated>()
        .AccountId.ShouldBe(account.Id);

    // You can optionally assert against DeliveryOptions
    messages
        .ShouldHaveMessageOfType<LowBalanceDetected>(delivery =>
        {
            delivery.ScheduleDelay.Value.ShouldNotBe(TimeSpan.Zero);
        })
        .AccountId.ShouldBe(account.Id);

    // Assert that there are no messages of type AccountOverdrawn
    messages.ShouldHaveNoMessageOfType<AccountOverdrawn>();
}
```

snippet source | anchor

The supported extension methods so far are in the [TestingExtensions](https://github.com/JasperFx/wolverine/blob/main/src/Wolverine/TestingExtensions.cs) class. As we'll see in the next section, you can also find a matching `Envelope` for a message type.

::: tip
I'd personally organize the testing against that handler with a context/specification pattern, but I just wanted to show the extension methods here.
:::

## TestMessageContext

::: tip
This testing mechanism is admittedly just a copy of the test double support in older .NET messaging frameworks, and it's only useful as an argument passed directly into a handler method. We recommend using the "tracked session" approach instead.
:::

In the section above we used cascading messages, but there are some use cases -- or maybe even just user preference -- that would lead you to use `IMessageContext` directly to send additional messages from a message handler. For those cases, Wolverine comes with the `TestMessageContext` class that can be used as a [test double spy](https://martinfowler.com/bliki/TestDouble.html) within unit tests.

Here's a different version of the message handler from the previous section, but this time using `IMessageContext` directly:

```cs
[Transactional]
public static async Task Handle(
    DebitAccount command,
    Account account,
    IDocumentSession session,
    IMessageContext messaging)
{
    account.Balance -= command.Amount;

    // This just marks the account as changed, but
    // doesn't actually commit changes to the database
    // yet.
    // That actually matters as I hopefully explain
    session.Store(account);

    // Conditionally trigger other, cascading messages
    if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
    {
        await messaging.SendAsync(new LowBalanceDetected(account.Id));
    }
    else if (account.Balance < 0)
    {
        await messaging.SendAsync(new AccountOverdrawn(account.Id),
            new DeliveryOptions { DeliverWithin = 1.Hours() });

        // Give the customer 10 days to deal with the overdrawn account
        await messaging.ScheduleAsync(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
    }

    // "messaging" is a Wolverine IMessageContext or IMessageBus service.
    // Apply the "deliver within" rule on individual messages
    await messaging.SendAsync(new AccountUpdated(account.Id, account.Balance),
        new DeliveryOptions { DeliverWithin = 5.Seconds() });
}
```

snippet source | anchor

To test this handler, we can use `TestMessageContext` as a stand-in that just records the outgoing messages and even lets us make assertions about exactly *how* the messages were published.
I'm using [xUnit.Net](https://xunit.net/) here, but this is certainly usable from other test harness tools:

```cs
public class when_the_account_is_overdrawn : IAsyncLifetime
{
    private readonly Account theAccount = new Account
    {
        Balance = 1000,
        MinimumThreshold = 100,
        Id = Guid.NewGuid()
    };

    private readonly TestMessageContext theContext = new TestMessageContext();

    // I happen to like NSubstitute for mocking or dynamic stubs
    private readonly IDocumentSession theDocumentSession = Substitute.For<IDocumentSession>();

    public async Task InitializeAsync()
    {
        var command = new DebitAccount(theAccount.Id, 1200);
        await DebitAccountHandler.Handle(command, theAccount, theDocumentSession, theContext);
    }

    [Fact]
    public void the_account_balance_should_be_negative()
    {
        theAccount.Balance.ShouldBe(-200);
    }

    [Fact]
    public void raises_an_account_overdrawn_message()
    {
        // ShouldHaveMessageOfType() is an extension method in
        // Wolverine itself to facilitate unit testing assertions like this
        theContext.Sent.ShouldHaveMessageOfType<AccountOverdrawn>()
            .AccountId.ShouldBe(theAccount.Id);
    }

    [Fact]
    public void raises_an_overdrawn_deadline_message_in_10_days()
    {
        theContext.ScheduledMessages()
            // Find the wrapping envelope for this message type,
            // then we can chain assertions against the wrapping Envelope
            .ShouldHaveEnvelopeForMessageType<EnforceAccountOverdrawnDeadline>()
            .ScheduleDelay.ShouldBe(10.Days());
    }

    public Task DisposeAsync()
    {
        return Task.CompletedTask;
    }
}
```

snippet source | anchor

The `TestMessageContext` mostly just collects the objects that are sent, published, or scheduled. The same extension methods explained in the previous section can be used to verify both the outgoing messages and *how* they were published.
As of Wolverine 1.8, `TestMessageContext` also supports limited expectations for request and reply using `IMessageBus.InvokeAsync()` as shown below:

```cs
var spy = new TestMessageContext();
var context = (IMessageContext)spy;

// Set up an expected response for a message
spy.WhenInvokedMessageOf<NumberRequest>()
    .RespondWith(new NumberResponse(12));

// Used for:
var response1 = await context.InvokeAsync<NumberResponse>(new NumberRequest(4, 5));

// Set up an expected response with a matching filter
spy.WhenInvokedMessageOf<NumberRequest>(x => x.X == 4)
    .RespondWith(new NumberResponse(12));

// Set up an expected response for a message to an explicit destination Uri
spy.WhenInvokedMessageOf<NumberRequest>(destination: new Uri("rabbitmq://queue/incoming"))
    .RespondWith(new NumberResponse(12));

// Used to set up:
var response2 = await context.EndpointFor(new Uri("rabbitmq://queue/incoming"))
    .InvokeAsync<NumberResponse>(new NumberRequest(5, 6));

// Set up an expected response for a message to a named endpoint
spy.WhenInvokedMessageOf<NumberRequest>(endpointName: "incoming")
    .RespondWith(new NumberResponse(12));

// Used to set up:
var response3 = await context.EndpointFor("incoming")
    .InvokeAsync<NumberResponse>(new NumberRequest(5, 6));
```

snippet source | anchor

## Stubbing All External Transports

::: tip
In all cases here, Wolverine is disabling all external listeners, stubbing all outgoing subscriber endpoints, and **not** making any connection to external brokers.
:::

Unlike some older .NET messaging tools, Wolverine comes out of the box with in-memory "mediator" functionality that allows you to directly invoke any possible message handler in the system on demand without any explicit configuration. That means there's value in just spinning up the application as is and executing locally -- but what about any external transport dependencies that may be very inconvenient to use in automated tests?

To that end, Wolverine allows you to completely disable all external transports, including the built-in TCP transport.
There are a couple of different ways to go about it. The simplest conceptual approach is to leverage the .NET environment name like this:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // Other configuration...

    // If the environment is "Development", turn off all external transports
    if (builder.Environment.IsDevelopment())
    {
        opts.StubAllExternalTransports();
    }
});

using var host = builder.Build();
await host.StartAsync();
```

snippet source | anchor

I'm not necessarily comfortable with a lot of conditional hosting setup, so there's another option in the `IServiceCollection.DisableAllExternalWolverineTransports()` extension method as shown below:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // do whatever you need to configure Wolverine
    })

    // Override the Wolverine configuration to disable all
    // external transports, broker connectivity, and incoming/outgoing
    // messages to run completely locally
    .ConfigureServices(services => services.DisableAllExternalWolverineTransports())
    .StartAsync();
```

snippet source | anchor

Finally, to put that in a little more context about how you might use it in real life, let's say that we have our main application with a relatively clean bootstrapping setup and a separate integration testing project. In this case we'd like to bootstrap the application from the integration testing project **as it is, except for having all the external transports disabled**.
In the code below, I'm using [Alba](https://jasperfx.github.io/alba) and [WebApplicationFactory](https://learn.microsoft.com/en-us/aspnet/core/test/integration-tests):

```cs
// This is using Alba to bootstrap a Wolverine application
// for integration tests, but it's using WebApplicationFactory
// to do the actual bootstrapping
await using var host = await AlbaHost.For(x =>
{
    // I'm overriding
    x.ConfigureServices(services => services.DisableAllExternalWolverineTransports());
});
```

snippet source | anchor

In the sample above, I'm bootstrapping the `IHost` for my production application with all the external transports turned off, in a way that's appropriate for integration testing message handlers within the main application.

## Running Wolverine in "Solo" Mode

Wolverine's [leadership election](/guide/durability/leadership-and-troubleshooting.html#troubleshooting-and-leadership-election) process is necessary for distributing several background tasks in real life production, but that subsystem can lead to some inconvenient sluggishness in [cold start times](https://dontpaniclabs.com/blog/post/2022/09/20/net-cold-starts/#:~:text=In%20software%20development%2C%20cold%20starts,have%20an%20increased%20start%20time.) during automated testing. To sidestep that problem, you can direct Wolverine to run in "Solo" mode, where the current process assumes that it's the only running node and immediately starts up all known background tasks. To do so, you could do something like this in your main `Program` file:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    opts.Services.AddMarten("some connection string")

        // This adds quite a bit of middleware for
        // Marten
        .IntegrateWithWolverine();

    // You want this maybe!
    opts.Policies.AutoApplyTransactions();

    if (builder.Environment.IsDevelopment())
    {
        // But wait!
        // Optimize Wolverine for usage as
        // if there would never be more than one node running
        opts.Durability.Mode = DurabilityMode.Solo;
    }
});

using var host = builder.Build();
await host.StartAsync();
```

snippet source | anchor

Or if you're using something like [WebApplicationFactory](https://learn.microsoft.com/en-us/aspnet/core/test/integration-tests?view=aspnetcore-8.0) to bootstrap your Wolverine application in an integration testing harness, you can use this helper to override Wolverine into being "Solo":

```cs
// This is bootstrapping the actual application using
// its implied Program.Main() set up.
// For non-Alba users, this is using IWebHostBuilder
Host = await AlbaHost.For(x =>
{
    x.ConfigureServices(services =>
    {
        // Override the Wolverine configuration in the application
        // to run the application in "solo" mode for faster
        // testing cold starts
        services.RunWolverineInSoloMode();

        // And just for completion, disable all Wolverine external
        // messaging transports
        services.DisableAllExternalWolverineTransports();
    });
});
```

snippet source | anchor

## Stubbing Message Handlers

To extend the test automation support even further, Wolverine now has the capability to "stub" out message handlers in testing scenarios with pre-canned behavior for more reliable testing in some situations. This feature was mostly conceived for stubbing out calls to external systems through `IMessageBus.InvokeAsync()` where the request would normally be sent to an external system through a subscriber.

Jumping into an example, let's say that your system interacts with another service that estimates delivery costs for ordering items.
At some point in the system you might reach out through a request/reply call in Wolverine to estimate an item delivery before making a purchase, like this code:

```cs
// This query message is normally sent to an external system through Wolverine
// messaging
public record EstimateDelivery(int ItemId, DateOnly Date, string PostalCode);

// This message type is a response from an external system
public record DeliveryInformation(TimeOnly DeliveryTime, decimal Cost);

public record MaybePurchaseItem(int ItemId, Guid LocationId, DateOnly Date, string PostalCode, decimal BudgetedCost);

public record MakePurchase(int ItemId, Guid LocationId, DateOnly Date);

public record PurchaseRejected(int ItemId, Guid LocationId, DateOnly Date);

public static class MaybePurchaseHandler
{
    public static Task<DeliveryInformation> LoadAsync(
        MaybePurchaseItem command,
        IMessageBus bus,
        CancellationToken cancellation)
    {
        var (itemId, _, date, postalCode, budget) = command;
        var estimateDelivery = new EstimateDelivery(itemId, date, postalCode);

        // Let's say this is doing a remote request and reply to another system
        // through Wolverine messaging
        return bus.InvokeAsync<DeliveryInformation>(estimateDelivery, cancellation);
    }

    public static object Handle(
        MaybePurchaseItem command,
        DeliveryInformation estimate)
    {
        if (estimate.Cost <= command.BudgetedCost)
        {
            return new MakePurchase(command.ItemId, command.LocationId, command.Date);
        }

        return new PurchaseRejected(command.ItemId, command.LocationId, command.Date);
    }
}
```

snippet source | anchor

And for a little more context, the `EstimateDelivery` message will always be sent to an external system in this configuration:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    opts
        .UseRabbitMq(builder.Configuration.GetConnectionString("rabbit"))
        .AutoProvision();

    // Just showing that EstimateDelivery is handled by
    // whatever system is on the other end of the "estimates" queue
    opts.PublishMessage<EstimateDelivery>()
        .ToRabbitQueue("estimates");
});
```

snippet source | anchor
In testing scenarios, maybe the external system isn't available at all, or it's just much more challenging to run tests that also include the external system, or maybe you'd just like to write more isolated tests against your service's behavior before even trying to integrate with the other system (my personal preference anyway). To that end, we can now stub the remote handling like this:

```cs
public static async Task try_application(IHost host)
{
    host.StubWolverineMessageHandling(
        query => new DeliveryInformation(new TimeOnly(17, 0), 1000));

    var locationId = Guid.NewGuid();
    var itemId = 111;
    var expectedDate = new DateOnly(2025, 12, 1);
    var postalCode = "78750";

    var maybePurchaseItem = new MaybePurchaseItem(itemId, locationId, expectedDate, postalCode, 500);
    var tracked = await host.InvokeMessageAndWaitAsync(maybePurchaseItem);

    // The estimated cost from the stub was more than we budgeted,
    // so this message should have been published.
    // This line is also an assertion that there was a single message
    // of this type published as part of the message handling above
    var rejected = tracked.Sent.SingleMessage<PurchaseRejected>();
    rejected.ItemId.ShouldBe(itemId);
    rejected.LocationId.ShouldBe(locationId);
}
```

snippet source | anchor

After making this call:

```csharp
host.StubWolverineMessageHandling(
    query => new DeliveryInformation(new TimeOnly(17, 0), 1000));
```

this call from our Wolverine application:

```csharp
// Let's say this is doing a remote request and reply to another system
// through Wolverine messaging
return bus.InvokeAsync<DeliveryInformation>(estimateDelivery, cancellation);
```

will use the stubbed logic we registered. This lets you substitute fake behavior for external services that are difficult to use in tests.
For the next test, we can remove the stub behavior and revert to the original configuration like this:

```cs
public static void revert_stub(IHost host)
{
    // Selectively clear out the stub behavior for only one message
    // type
    host.WolverineStubs(stubs => { stubs.Clear(); });

    // Or just clear out all active Wolverine message handler
    // stubs
    host.ClearAllWolverineStubs();
}
```

snippet source | anchor

Or instead, we can simply replace the previously registered stub behavior with new logic:

```cs
public static void override_stub(IHost host)
{
    host.StubWolverineMessageHandling(
        query => new DeliveryInformation(new TimeOnly(17, 0), 250));
}
```

snippet source | anchor

So far we've only looked at simple request/reply behavior, but what if a remote system receiving our message potentially makes multiple calls back to our system? Or really just any kind of interaction more complicated than a single response to a request message?
We're still in business, we just have to use a little uglier signature for our stub:

```cs
public static void more_complex_stub(IHost host)
{
    host.WolverineStubs(stubs =>
    {
        stubs.Stub(async (
            EstimateDelivery message,
            IMessageContext context,
            IServiceProvider services,
            CancellationToken cancellation) =>
        {
            // Do whatever you want, including publishing any number of messages
            // back through IMessageContext

            // And grab any other services you might need from the application
            // through the IServiceProvider -- but note that you will have
            // to deal with scopes yourself here

            // This is an equivalent way to get the response back to the
            // original caller
            await context.PublishAsync(new DeliveryInformation(new TimeOnly(17, 0), 250));
        });
    });
}
```

snippet source | anchor

A few notes about this capability:

* You can use any number of stubs for different message types at the same time
* Most of the testing samples use extension methods on `IHost`, but we know there are some users who bootstrap only an IoC container for integration tests, so all of the extension methods shown in this section are also available off of `IServiceProvider`
* The "stub" functions are effectively singletons.
There's nothing fancier about argument matching or anything you might expect from a full-fledged mock library like NSubstitute or FakeItEasy * You can actually fake out the routing to message types that are normally handled by handlers within the application * We don't believe this feature will be helpful for "sticky" message handlers where you may have multiple handlers for the same message type internally --- --- url: /guide/messaging/transports/rabbitmq/topics.md --- # Topics Wolverine supports publishing to [Rabbit MQ topic exchanges](https://www.rabbitmq.com/tutorials/tutorial-one-dotnet.html) with this usage: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRabbitMq(); opts.Publish(x => { x.MessagesFromNamespace("SomeNamespace"); x.ToRabbitTopics("topics-exchange", ex => { // optionally configure the exchange }); }); opts.ListenToRabbitQueue(""); }).StartAsync(); ``` snippet source | anchor While we're specifying the exchange name ("topics-exchange"), we did nothing to specify the topic name. With this setup, when you publish a message in this application like so: ```cs var publisher = host.MessageBus(); await publisher.SendAsync(new Message1()); ``` snippet source | anchor You will be sending that message to the "topics-exchange" with a topic name derived from the message type. By default that topic name will be Wolverine's [message type alias](/guide/messages.html#message-type-name-or-alias). Unless explicitly overridden, that alias is the full type name of the message type.
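Since the default topic name is just the message type alias, changing the alias also changes the topic the message is published to. A minimal sketch using Wolverine's `[MessageIdentity]` attribute (the alias value here is purely illustrative):

```cs
using Wolverine.Attributes;

// Overriding the message type alias. With the topic exchange
// publishing rule above, this message would now be published
// to a "message1" topic instead of one named after the full type name
[MessageIdentity("message1")]
public class Message1;
```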
That topic name derivation can be overridden explicitly by placing the `[Topic]` attribute on a message type like so: ```cs [Topic("color.blue")] public class FirstMessage { public Guid Id { get; set; } = Guid.NewGuid(); } ``` snippet source | anchor Of course, you can always explicitly send a message to a specific topic with this syntax: ```cs await publisher.BroadcastToTopicAsync("color.*", new Message1()); ``` snippet source | anchor Note two things about the code above: 1. The `IMessageBus.BroadcastToTopicAsync()` method will fail if the topic exchange endpoint that we configured above has not been declared 2. You can use Rabbit MQ topic matching patterns in addition to using the exact topic Lastly, to set up listening to specific topic names or topic patterns, you just need to declare bindings between a topic name or pattern, the topics exchange, and the queues you're listening to in your application. That's a lot of words, so here's some code from the Wolverine test suite: ```cs theSender = Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRabbitMq("host=localhost;port=5672").AutoProvision(); opts.PublishAllMessages().ToRabbitTopics("wolverine.topics", exchange => { exchange.BindTopic("color.green").ToQueue("green"); exchange.BindTopic("color.blue").ToQueue("blue"); exchange.BindTopic("color.*").ToQueue("all"); // Need this to be able to go to ONLY the green receiver for a test exchange.BindTopic("special").ToQueue("green"); }); opts.Discovery.DisableConventionalDiscovery() .IncludeType(); opts.ServiceName = "TheSender"; opts.PublishMessagesToRabbitMqExchange("wolverine.topics", m => m.TopicName); }).Start(); ``` snippet source | anchor ## Publishing by Topic Rule As of Wolverine 1.16, you can specify publishing rules for messages by supplying the logic to determine the topic name from the message itself.
Let's say that we have an interface that several of our message types implement like so: ```cs public interface ITenantMessage { string TenantId { get; } } ``` snippet source | anchor Let's say that we want any message implementing that interface to be published to the topic for that message's `TenantId`. We can implement that rule like so: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.UseRabbitMq(); // Publish any message that implements ITenantMessage to // a Rabbit MQ "Topic" exchange named "tenant.messages" opts.PublishMessagesToRabbitMqExchange<ITenantMessage>("tenant.messages", m => $"{m.GetType().Name.ToLower()}/{m.TenantId}") // Specify or configure sending through Wolverine for all // messages through this Exchange .BufferedInMemory(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/azureservicebus/topics.md --- # Topics and Subscriptions Wolverine.AzureServiceBus supports [Azure Service Bus topics and subscriptions](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-queues-topics-subscriptions).
To register endpoints to send messages to topics or to receive messages from subscriptions, use this syntax: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAzureServiceBus("some connection string") // If this is part of your configuration, Wolverine will try to create // any missing topics or subscriptions in the configuration at application // start up time .AutoProvision(); // Publish to a topic opts.PublishMessage().ToAzureServiceBusTopic("topic1") // Option to configure how the topic would be configured if // built by Wolverine .ConfigureTopic(topic => { topic.MaxSizeInMegabytes = 100; }); opts.ListenToAzureServiceBusSubscription("subscription1", subscription => { // Optionally alter how the subscription is created or configured in Azure Service Bus subscription.DefaultMessageTimeToLive = 5.Minutes(); }) .FromTopic("topic1", topic => { // Optionally alter how the topic is created in Azure Service Bus topic.DefaultMessageTimeToLive = 5.Minutes(); }); }).StartAsync(); ``` snippet source | anchor To fully utilize subscription listening, be careful with using [Requeue error handling](/guide/handlers/error-handling) actions. In order to truly make that work, Wolverine tries to build out a queue called `wolverine.response.[Your Wolverine service name]` specifically for requeues from subscription listening. If your Wolverine application doesn't have permissions to create queues at runtime, you may want to build that queue manually or forgo using "Requeue" as an error handling technique. ## Topic Filters If Wolverine is provisioning the subscriptions for you, you can customize the subscription filter being created. 
```cs opts.ListenToAzureServiceBusSubscription( "subscription1", configureSubscriptionRule: rule => { rule.Filter = new SqlRuleFilter("NOT EXISTS(user.ignore) OR user.ignore NOT LIKE 'true'"); }) .FromTopic("topic1"); ``` snippet source | anchor If not customized, the default filter is a simple `1=1` (always true) filter. For more information regarding subscription filters, see the [Azure Service Bus documentation](https://learn.microsoft.com/en-us/azure/service-bus-messaging/topic-filters). --- --- url: /guide/durability/efcore/outbox-and-inbox.md --- # Transactional Inbox and Outbox with EF Core Wolverine is able to integrate with EF Core inside of its transactional middleware in either message handlers or HTTP endpoints to apply the [transactional inbox and outbox mechanics](/guide/durability/) for outgoing messages (local messages actually go straight to the inbox). ::: tip Database round trips, or really any network round trips, are a frequent cause of poor system performance. Wolverine and other Critter Stack tools try to take this into account in their internals. With the EF Core integration, you might need to do just a little bit to help Wolverine out with mapping envelope types to take advantage of database query batching. ::: You can optimize this by adding mappings for Wolverine's envelope storage to your `DbContext` types such that Wolverine can just use EF Core to persist new messages and depend on EF Core database command batching. Otherwise Wolverine has to use the exposed database `DbConnection` off of the active `DbContext` and make completely separate calls to the database (but at least in the same transaction!) to persist new messages at the same time it's calling `DbContext.SaveChangesAsync()` with any pending entity changes.
You can help Wolverine out by either using the manual envelope mapping explained next, or registering your `DbContext` with the `AddDbContextWithWolverineIntegration()` option that quietly adds the Wolverine envelope storage mapping to that `DbContext` for you. ## Manually adding Envelope Mapping If not using the `AddDbContextWithWolverineIntegration()` extension method to register a `DbContext` in your system, you can still explicitly add the Wolverine persistent message mapping into your `DbContext` with this call: ```cs public class SampleMappedDbContext : DbContext { public SampleMappedDbContext(DbContextOptions<SampleMappedDbContext> options) : base(options) { } public DbSet<Item> Items { get; set; } protected override void OnModelCreating(ModelBuilder modelBuilder) { // This enables your DbContext to map the incoming and // outgoing messages as part of the outbox modelBuilder.MapWolverineEnvelopeStorage(); // Your normal EF Core mapping modelBuilder.Entity<Item>(map => { map.ToTable("items", "mt_items"); map.HasKey(x => x.Id); map.Property(x => x.Name); }); } } ``` snippet source | anchor ## Outbox Outside of Wolverine Handlers ::: warning Honestly, we had to do this feature, but it's just always going to be easiest to use Wolverine HTTP handlers or message handlers for the EF Core + transactional outbox support. ::: ::: tip In all cases, the `IDbContextOutbox` services expose all the normal `IMessageBus` API. ::: To use EF Core with the Wolverine outbox outside of a Wolverine message handler (maybe inside an ASP.Net Core MVC `Controller`, or within a Minimal API endpoint?), you have a couple options.
First, you can use the `IDbContextOutbox<T>` service where `T` is your `DbContext` type as shown below: ```cs [HttpPost("/items/create2")] public async Task Post( [FromBody] CreateItemCommand command, [FromServices] IDbContextOutbox<ItemsDbContext> outbox) { // Create a new Item entity var item = new Item { Name = command.Name }; // Add the item to the current // DbContext unit of work outbox.DbContext.Items.Add(item); // Publish a message to take action on the new item // in a background thread await outbox.PublishAsync(new ItemCreated { Id = item.Id }); // Commit all changes and flush persisted messages // to the persistent outbox // in the correct order await outbox.SaveChangesAndFlushMessagesAsync(); } ``` snippet source | anchor Or use the non-generic `IDbContextOutbox` as shown below, but in this case you will need to explicitly call `Enroll()` on the `IDbContextOutbox` to connect the outbox sending to the `DbContext`: ```cs [HttpPost("/items/create3")] public async Task Post3( [FromBody] CreateItemCommand command, [FromServices] ItemsDbContext dbContext, [FromServices] IDbContextOutbox outbox) { // Create a new Item entity var item = new Item { Name = command.Name }; // Add the item to the current // DbContext unit of work dbContext.Items.Add(item); // Gotta attach the DbContext to the outbox // BEFORE sending any messages outbox.Enroll(dbContext); // Publish a message to take action on the new item // in a background thread await outbox.PublishAsync(new ItemCreated { Id = item.Id }); // Commit all changes and flush persisted messages // to the persistent outbox // in the correct order await outbox.SaveChangesAndFlushMessagesAsync(); } ``` snippet source | anchor --- --- url: /guide/durability/efcore/transactional-middleware.md --- # Transactional Middleware Support for using Wolverine transactional middleware requires an explicit registration on `WolverineOptions` shown below (it's an extension method): ```cs builder.Host.UseWolverine(opts => { // Setting up Sql Server-backed message
storage // This requires a reference to Wolverine.SqlServer opts.PersistMessagesWithSqlServer(connectionString, "wolverine"); // Set up Entity Framework Core as the support // for Wolverine's transactional middleware opts.UseEntityFrameworkCoreTransactions(); // Enrolling all local queues into the // durable inbox/outbox processing opts.Policies.UseDurableLocalQueues(); }); ``` snippet source | anchor ::: tip When using the opt-in `Handlers.AutoApplyTransactions()` option, Wolverine (really Lamar) can detect that your handler method uses a `DbContext` if it's a method argument, a dependency of any service injected as a method argument, or a dependency of any service injected as a constructor argument of the handler class. ::: That will enroll EF Core as both a strategy for stateful saga support and for transactional middleware. With this option added, Wolverine will wrap transactional middleware around any message handler that has a dependency on any type of `DbContext` like this one: ```cs [Transactional] public static ItemCreated Handle( // This would be the message CreateItemCommand command, // Any other arguments are assumed // to be service dependencies ItemsDbContext db) { // Create a new Item entity var item = new Item { Name = command.Name }; // Add the item to the current // DbContext unit of work db.Items.Add(item); // This event being returned // by the handler will be automatically sent // out as a "cascading" message return new ItemCreated { Id = item.Id }; } ``` snippet source | anchor When using the transactional middleware around a message handler, the `DbContext` is used to persist the outgoing messages as part of Wolverine's outbox support.
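With the middleware wrapped around a handler like the one above, invoking it is just an ordinary message bus call; the entity changes, the cascading `ItemCreated` message, and the outbox records are all committed together. A minimal sketch (the `ItemService` class and its injection site are assumptions for illustration):

```cs
public class ItemService
{
    private readonly IMessageBus _bus;

    public ItemService(IMessageBus bus) => _bus = bus;

    public Task CreateItem(string name)
        // The transactional middleware opens the transaction, runs the
        // handler, persists the cascading ItemCreated message to the
        // outbox through the DbContext, and commits everything at once
        => _bus.InvokeAsync(new CreateItemCommand { Name = name });
}
```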
### Opting Out with \[NonTransactional] When using `AutoApplyTransactions()`, you can opt specific handlers or HTTP endpoints out of transactional middleware by decorating them with the `[NonTransactional]` attribute: ```cs using Wolverine.Attributes; public static class MyHandler { // This handler will NOT have transactional middleware applied // even when AutoApplyTransactions() is enabled [NonTransactional] public static void Handle(MyCommand command, MyDbContext db) { // You're managing the DbContext yourself here } } ``` The `[NonTransactional]` attribute can be placed on individual handler methods or on the handler class to opt out all methods in that class. ## Eager vs Lightweight Transactions By default, the EF Core middleware will run in `Eager` mode meaning that Wolverine will call `DbContext.Database.BeginTransactionAsync()` before your message handler or HTTP endpoint handler. We do this so that bulk operations can succeed. If all you need to do is persist entities such that `DbContext.SaveChangesAsync()` gives you all the transactional integrity you need, you can opt into lightweight transaction code generation instead: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.Durability.Mode = DurabilityMode.Solo; opts.Services.AddDbContextWithWolverineIntegration(x => x.UseSqlServer(Servers.SqlServerConnectionString)); opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "txmode"); // ONLY use SaveChangesAsync() for transaction boundaries // Treat the DbContext as a unit of work, assume there are no // bulk operations opts.UseEntityFrameworkCoreTransactions(TransactionMiddlewareMode.Lightweight); opts.Policies.AutoApplyTransactions(); opts.Discovery.DisableConventionalDiscovery() .IncludeType(); }).StartAsync(); ``` snippet source | anchor You can also selectively configure the transaction middleware mode on singular message handlers or HTTP endpoints with the `[Transactional]` attribute like this: ```cs 
public class LightweightAttributeHandler { [Transactional(Mode = TransactionMiddlewareMode.Lightweight)] public static void Handle(LightweightAttributeMessage message, CleanDbContext db) { } } ``` snippet source | anchor ## Auto Apply Transactional Middleware You can opt into automatically applying the transactional middleware to any handler that depends on a `DbContext` type with the `AutoApplyTransactions()` option as shown below: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var connectionString = builder.Configuration.GetConnectionString("database"); opts.Services.AddDbContextWithWolverineIntegration(x => { x.UseSqlServer(connectionString); }); // Add the auto transaction middleware attachment policy opts.Policies.AutoApplyTransactions(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor With this option, you will no longer need to decorate handler methods with the `[Transactional]` attribute. ## Transaction Middleware Mode By default, the EF Core transactional middleware uses `TransactionMiddlewareMode.Eager`, which eagerly opens an explicit database transaction via `Database.BeginTransactionAsync()` before the handler executes. This is appropriate when you need explicit transaction control, such as when using EF Core bulk operations. 
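The value of `Eager` mode shows up when a handler does work outside the `DbContext` change tracker. Because the middleware has already called `Database.BeginTransactionAsync()`, raw commands participate in the same transaction as any tracked changes and the outbox writes. A sketch under stated assumptions (the `ArchiveItemsCommand` type, the `archived` column, and the SQL are illustrative, not part of the samples above):

```cs
public static class ArchiveItemsHandler
{
    // Runs under the default Eager mode: the middleware has already
    // opened an explicit transaction, so this bulk update commits or
    // rolls back together with the outbox and any tracked entity changes
    public static Task Handle(ArchiveItemsCommand command, ItemsDbContext db)
    {
        return db.Database.ExecuteSqlRawAsync(
            "update mt_items.items set archived = 1 where name = {0}",
            command.Name);
    }
}
```

Under `Lightweight` mode, the same raw command would run outside any explicit transaction, separate from `SaveChangesAsync()`, which is exactly why bulk operations call for `Eager`.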
If you prefer to rely solely on `DbContext.SaveChangesAsync()` as your transactional boundary without opening an explicit database transaction, you can use `TransactionMiddlewareMode.Lightweight`: ```cs builder.Host.UseWolverine(opts => { opts.PersistMessagesWithSqlServer(connectionString, "wolverine"); // Use Lightweight mode — no explicit transaction, relies on SaveChangesAsync() opts.UseEntityFrameworkCoreTransactions(TransactionMiddlewareMode.Lightweight); opts.Policies.UseDurableLocalQueues(); }); ``` ::: tip `TransactionMiddlewareMode.Lightweight` is **not** supported or necessary for Marten or RavenDb, which have their own unit of work implementations. ::: ### Per-Handler Override You can override the global `TransactionMiddlewareMode` for individual handlers using the `[Transactional]` attribute's `Mode` property: ```cs // This handler will use an explicit transaction even if the global mode is Lightweight [Transactional(Mode = TransactionMiddlewareMode.Eager)] public static ItemCreated Handle(CreateItemCommand command, ItemsDbContext db) { var item = new Item { Name = command.Name }; db.Items.Add(item); return new ItemCreated { Id = item.Id }; } // This handler skips the explicit transaction even if the global mode is Eager [Transactional(Mode = TransactionMiddlewareMode.Lightweight)] public static void Handle(UpdateItemCommand command, ItemsDbContext db) { // Just uses SaveChangesAsync() without an explicit transaction } ``` --- --- url: /guide/durability/marten/transactional-middleware.md --- # Transactional Middleware ::: warning When using the transactional middleware with Marten, Wolverine is assuming that there will be a single, atomic transaction for the entire message handler. Because of the integration with Wolverine's outbox and the Marten `IDocumentSession`, it is **very strongly** recommended that you do not call `IDocumentSession.SaveChangesAsync()` yourself as that may result in unexpected behavior in terms of outgoing messages. 
::: ::: tip You will need to make the `IServiceCollection.AddMarten(...).IntegrateWithWolverine()` call to add this middleware to a Wolverine application. ::: It is no longer necessary to mark a handler method with `[Transactional]` if you choose to use the `AutoApplyTransactions()` option as shown below: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.Services.AddMarten("some connection string") .IntegrateWithWolverine(); // Opt into using "auto" transaction middleware opts.Policies.AutoApplyTransactions(); }).StartAsync(); ``` snippet source | anchor With this enabled, Wolverine will automatically use the Marten transactional middleware for handlers that have a dependency on `IDocumentSession` (meaning the method takes in `IDocumentSession` or has some dependency that itself depends on `IDocumentSession`) as long as the `IntegrateWithWolverine()` call was used in application bootstrapping. ### Opting Out with \[NonTransactional] When using `AutoApplyTransactions()`, there may be specific handlers or HTTP endpoints where you want to explicitly opt out of transactional middleware even though they use `IDocumentSession`. You can do this with the `[NonTransactional]` attribute: ```cs using Wolverine.Attributes; public static class MySpecialHandler { // This handler will NOT have transactional middleware applied // even when AutoApplyTransactions() is enabled [NonTransactional] public static void Handle(MyCommand command, IDocumentSession session) { // You're managing the session yourself here } } ``` The `[NonTransactional]` attribute can be placed on individual handler methods or on the handler class itself to opt out all methods: ```cs using Wolverine.Attributes; // No methods in this handler class will have // transactional middleware applied [NonTransactional] public static class NonTransactionalHandlers { public static void Handle(CommandA command, IDocumentSession session) { // ... 
} public static void Handle(CommandB command, IDocumentSession session) { // ... } } ``` This also works for Wolverine HTTP endpoints: ```cs using Wolverine.Attributes; using Wolverine.Http; public static class MyEndpoints { // This endpoint will NOT use transactional middleware [NonTransactional] [WolverinePost("/my-non-transactional-endpoint")] public static string Post(IDocumentSession session) { return "not transactional"; } } ``` In the previous section we saw an example of incorporating Wolverine's outbox with Marten transactions. We also wrote a fair amount of code to do so that could easily feel repetitive over time. Using Wolverine's transactional middleware support for Marten, the long hand handler above can become this equivalent: ```cs // Note that we're able to avoid doing any kind of asynchronous // code in this handler [Transactional] public static OrderCreated Handle(CreateOrder command, IDocumentSession session) { var order = new Order { Description = command.Description }; // Register the new document with Marten session.Store(order); // Utilizing Wolverine's "cascading messages" functionality // to have this message sent through Wolverine return new OrderCreated(order.Id); } ``` snippet source | anchor Or if you need to take more control over how the outgoing `OrderCreated` message is sent, you can use this slightly different alternative: ```cs [Transactional] public static ValueTask Handle( CreateOrder command, IDocumentSession session, IMessageBus bus) { var order = new Order { Description = command.Description }; // Register the new document with Marten session.Store(order); // Utilizing Wolverine's "cascading messages" functionality // to have this message sent through Wolverine return bus.SendAsync( new OrderCreated(order.Id), new DeliveryOptions { DeliverWithin = 5.Minutes() }); } ``` snippet source | anchor In both cases Wolverine's transactional middleware for Marten is taking care of registering the Marten session with Wolverine's outbox 
before you call into the message handler, and also calling Marten's `IDocumentSession.SaveChangesAsync()` afterward. Used judiciously, this might allow you to avoid more messy or noisy asynchronous code in your application handler code. ::: tip This \[Transactional] attribute can appear either on the handler class, where it will apply to all the actions on that class, or on a specific action method. ::: If so desired, you *can* also apply the Marten transaction semantics with a policy. As an example, let's say that you want every message handler where the message type name ends with "Command" to use the Marten transaction middleware. You could accomplish that with a handler policy like this: ```cs public class CommandsAreTransactional : IHandlerPolicy { public void Apply(IReadOnlyList<HandlerChain> chains, GenerationRules rules, IServiceContainer container) { // Important! Create a brand new TransactionalFrame // for each chain chains .Where(chain => chain.MessageType.Name.EndsWith("Command")) .Each(chain => chain.Middleware.Add(new CreateDocumentSessionFrame(chain))); } } ``` snippet source | anchor Then add the policy to your application like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // And actually use the policy opts.Policies.Add<CommandsAreTransactional>(); }).StartAsync(); ``` snippet source | anchor ## Using IDocumentOperations When using the transactional middleware with Marten, it's best to **not** directly call `IDocumentSession.SaveChangesAsync()` yourself because that negates the transactional middleware's ability to mark the transaction boundary and can cause unexpected problems with the outbox. As a way of preventing this problem, you can choose to directly use Marten's `IDocumentOperations` as an argument to your handler or endpoint methods, which is effectively `IDocumentSession` minus the ability to commit the ongoing unit of work with a `SaveChangesAsync` API.
Here's an example: ```cs public class CreateDocCommand2Handler { [Transactional] public void Handle( CreateDocCommand2 message, // This is the IDocumentSession for the handler & // transactional middleware, it's just that you're // going to use the slimmer interface that won't let // you accidentally call SaveChangesAsync IDocumentOperations operations) { operations.Store(new FakeDoc { Id = message.Id }); } } ``` snippet source | anchor --- --- url: /guide/durability/leadership-and-troubleshooting.md --- # Troubleshooting and Leadership Election ::: info The main reason to care about this topic is to be able to troubleshoot why messages left stranded by a failed node are not being recovered in a timely manner ::: For some technical background, the Wolverine transactional inbox today works through a process of [leadership election](https://en.wikipedia.org/wiki/Leader_election), where only one node at any one time is the leader. The recovery of messages from dormant nodes that shut down somehow before they could finish sending their outgoing messages or processing all their incoming messages is done through a persistent background agent assigned to one node by the leader node. Long story short, if the message recovery isn't happening very quickly, it's likely some kind of issue with the leadership election failing to start or to fail over from the previous leader dropping off. ::: tip There is no harm in deleting rows from this table. It is strictly a log ::: As of Wolverine 1.10, there is a table in the PostgreSQL or Sql Server-backed message storage called `wolverine_node_records` that just has a record of detected events relevant to the leader election. All of this information is also logged through the standard .Net `ILogger`, but it might be easier to understand the data in this table. Next, check the `wolverine_nodes` and `wolverine_node_assignments` tables to see where Wolverine thinks all of the running agents are across the active nodes.
The actual leadership agent is `wolverine://leader`, and you can spot the current leader by the matching row in the `wolverine_node_assignments` table that refers to the "leader" agent. If you are frequently stopping and starting a local process -- especially if you are doing that through a debugger -- you may want to utilize the `Solo` durability mode explained below: ## Solo Mode Let's say that you're working on an individual development machine and frequently stopping and starting the application. You'd ideally like the transactional inbox and outbox processing to kick in fast, but that subsystem has some known hiccups recovering from exactly the kind of ungraceful process shutdown that happens when developers suddenly kill off the application running in a debugger. To alleviate the issues that developers have had in the past with this scenario, Wolverine 1.10 introduced the "Solo" mode where the system can be optimized to run as if there's never more than one running node: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.Services.AddMarten("some connection string") // This adds quite a bit of middleware for // Marten .IntegrateWithWolverine(); // You want this maybe! opts.Policies.AutoApplyTransactions(); if (builder.Environment.IsDevelopment()) { // But wait! Optimize Wolverine for usage as // if there would never be more than one node running opts.Durability.Mode = DurabilityMode.Solo; } }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor Running your Wolverine application like this means that Wolverine is able to more quickly start the transactional inbox and outbox at startup time, and also to immediately recover any persisted incoming or outgoing messages from the previous execution of the service on your local development box.
## Metrics ::: tip These metrics can be used to spot when a Wolverine system is distressed, as indicated by these numbers growing larger ::: Wolverine emits observable gauge metrics for the size of the persisted inbox, outbox, and scheduled message counts: 1. `wolverine-inbox-count` - number of persisted, `Incoming` envelopes in the durable inbox 2. `wolverine-outbox-count` - number of persisted, `Outgoing` envelopes in the durable outbox 3. `wolverine-scheduled-count` - number of persisted, `Scheduled` envelopes in the durable inbox In all cases, if you are using some sort of multi-tenancy where envelopes are stored in separate databases per tenant, the metric names above will be suffixed with ".\[database name]". You can disable or modify the polling of these metrics by these settings: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // This does assume that you have *some* kind of message // persistence set up // This is enabled by default, but just showing that // you *could* disable it opts.Durability.DurabilityMetricsEnabled = true; // The default is 5 seconds, but maybe you want it slower // because this does have to do a non-trivial query opts.Durability.UpdateMetricsPeriod = 10.Seconds(); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/unknown.md --- # Unknown Messages When Wolverine receives a message from the outside world, it's keying off the [message type name](/guide/messages.html#message-type-name-or-alias) from the `Envelope` to "know" what message type it's receiving and therefore which handler(s) to execute. It's an imperfect world of course, so it's perfectly possible that your system will receive a message from the outside world with a message type name that your system does not recognize.
Out of the box Wolverine will simply log that it received an unknown message type and discard the message, but there are means to take additional actions on "missing handler" messages where Wolverine does not recognize the message type. ## Move to the Dead Letter Queue You can declaratively tell Wolverine to persist every message received with an unknown message type name to the dead letter queue with this flag: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var connectionString = builder.Configuration.GetConnectionString("rabbit"); opts.UseRabbitMq(connectionString).UseConventionalRouting(); // All unknown message types received should be placed into // the proper dead letter queue mechanism opts.UnknownMessageBehavior = UnknownMessageBehavior.DeadLetterQueue; }); ``` snippet source | anchor The message will be moved to the dead letter queue mechanism for the listening endpoint where the message was received. ## Custom Actions ::: note The missing handlers are additive, meaning that you can provide more than one and Wolverine will try to execute each one that is registered for the missing handler behavior. 
::: You can direct Wolverine to take custom actions on messages received with unknown message type names by providing a custom implementation of this interface: ```cs namespace Wolverine; /// <summary> /// Hook interface to receive notifications of envelopes received /// that do not match any known handlers within the system /// </summary> public interface IMissingHandler { /// <summary> /// Executes for unhandled envelopes /// </summary> ValueTask HandleAsync(IEnvelopeLifecycle context, IWolverineRuntime root); } ``` snippet source | anchor Here's a made-up sample that theoretically posts a message to a Slack room by sending a Wolverine message in response: ```cs public class MyCustomActionForMissingHandlers : IMissingHandler { public ValueTask HandleAsync(IEnvelopeLifecycle context, IWolverineRuntime root) { var bus = new MessageBus(root); return bus.PublishAsync(new PostInSlack("Incidents", $"Got an unknown message with type '{context.Envelope.MessageType}' and id {context.Envelope.Id}")); } } ``` snippet source | anchor Then simply register that with your application's IoC container against the `IMissingHandler` interface like this: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // configuration opts.UnknownMessageBehavior = UnknownMessageBehavior.DeadLetterQueue; }); builder.Services.AddSingleton<IMissingHandler, MyCustomActionForMissingHandlers>(); ``` snippet source | anchor ## Tracked Session Testing Just know that the [Tracked Session](/guide/testing.html#integration-testing-with-tracked-sessions) subsystem for integration testing exposes a separate record collection for `NoHandlers` and reports when that happens through its output for hopefully easy troubleshooting on test failures.
---

--- url: /guide/http/policies.md ---

# HTTP Policies

Custom policies can be created for HTTP endpoints either by creating your own implementation of `IHttpPolicy`, shown below:

```cs
/// <summary>
/// Use to apply your own conventions or policies to HTTP endpoint handlers
/// </summary>
public interface IHttpPolicy
{
    /// <summary>
    /// Called during bootstrapping to alter how the message handlers are configured
    /// </summary>
    /// <param name="chains"></param>
    /// <param name="rules"></param>
    /// <param name="container">The application's underlying IoC Container</param>
    void Apply(IReadOnlyList<HttpChain> chains, GenerationRules rules, IServiceContainer container);
}
```

snippet source | anchor

And then adding the policy to `WolverineHttpOptions` like this code from the Fluent Validation extension for HTTP:

```cs
/// <summary>
/// Apply Fluent Validation middleware to all Wolverine HTTP endpoints with a known Fluent Validation
/// validator for the request type
/// </summary>
/// <param name="httpOptions"></param>
public static void UseFluentValidationProblemDetailMiddleware(this WolverineHttpOptions httpOptions)
{
    httpOptions.AddPolicy<HttpChainFluentValidationPolicy>();
}
```

snippet source | anchor

Or lastly through lambdas (which create an `IHttpPolicy` object behind the scenes):

```cs
app.MapWolverineEndpoints(opts =>
{
    // This is strictly to test the endpoint policy

    opts.ConfigureEndpoints(httpChain =>
    {
        // The HttpChain model is a configuration time
        // model of how the HTTP endpoint handles requests

        // This adds metadata for OpenAPI
        httpChain.WithMetadata(new CustomMetadata());
    });

    // more configuration for HTTP...

    // Opting into the Fluent Validation middleware from
    // Wolverine.Http.FluentValidation
    opts.UseFluentValidationProblemDetailMiddleware();

    // Or instead, you could use Data Annotations that are built
    // into the Wolverine.HTTP library
    opts.UseDataAnnotationsValidationProblemDetailMiddleware();
});
```

snippet source | anchor

The `HttpChain` model is a configuration time structure that Wolverine.Http will use at runtime to create the full HTTP handler (RequestDelegate and RoutePattern for ASP.Net Core).
But at bootstrapping / configuration time, we have the option to add -- or remove -- any number of middleware, post processors, and custom metadata (OpenAPI or otherwise) for the endpoint. Here's an example from the Wolverine.Http tests of using a policy to add custom metadata:

```cs
app.MapWolverineEndpoints(opts =>
{
    // This is strictly to test the endpoint policy

    opts.ConfigureEndpoints(httpChain =>
    {
        // The HttpChain model is a configuration time
        // model of how the HTTP endpoint handles requests

        // This adds metadata for OpenAPI
        httpChain.WithMetadata(new CustomMetadata());
    });

    // more configuration for HTTP...

    // Opting into the Fluent Validation middleware from
    // Wolverine.Http.FluentValidation
    opts.UseFluentValidationProblemDetailMiddleware();

    // Or instead, you could use Data Annotations that are built
    // into the Wolverine.HTTP library
    opts.UseDataAnnotationsValidationProblemDetailMiddleware();
});
```

snippet source | anchor

## Resource Writer Policies

Wolverine has an additional type of policy that deals with how an endpoint's primary result is handled.

```cs
/// <summary>
/// Use to apply custom handling to the primary result of an HTTP endpoint handler
/// </summary>
public interface IResourceWriterPolicy
{
    /// <summary>
    /// Called during bootstrapping to see whether this policy can handle the chain. If yes, no further policies are tried.
    /// </summary>
    /// <param name="chain">The chain to test against</param>
    /// <returns>True if it applies to the chain, false otherwise</returns>
    bool TryApply(HttpChain chain);
}
```

snippet source | anchor

Only one of these so-called resource writer policies can apply to each endpoint, and there are a couple of built-in policies already. If you need special handling of a primary return type, you can implement `IResourceWriterPolicy` and register it in `WolverineHttpOptions`:

```cs
opts.AddResourceWriterPolicy<MyResourceWriterPolicy>();
```

snippet source | anchor

Resource writer policies registered this way will be applied in order, before all built-in policies.
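To make the registration above concrete, here is a minimal sketch of a custom resource writer policy. The `MyResourceWriterPolicy` name and the condition it tests are purely illustrative assumptions, as is the use of `ResourceType` for the endpoint's primary return type:

```csharp
// Illustrative sketch only: the class name and the condition
// it tests are hypothetical, not anything shipped with Wolverine
public class MyResourceWriterPolicy : IResourceWriterPolicy
{
    public bool TryApply(HttpChain chain)
    {
        // Claim only chains whose primary resource type is string.
        // Returning true stops any later policies from being tried
        // for this chain
        if (chain.ResourceType == typeof(string))
        {
            // Alter the chain here to customize how the string
            // result gets written to the HTTP response
            return true;
        }

        // Returning false lets the next registered policy try the chain
        return false;
    }
}
```

Because only the first policy whose `TryApply()` returns true wins, register more specific policies before more general ones.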
---

--- url: /guide/http/problemdetails.md ---

# Using ProblemDetails

Wolverine has some first-class support for the [ProblemDetails](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.problemdetails?view=aspnetcore-7.0) specification in its [HTTP middleware model](./middleware). Wolverine also has a [Fluent Validation middleware package](./fluentvalidation) for HTTP endpoints, but it's frequently valuable to write one-off, explicit validation for certain endpoints. Consider this contrived sample endpoint with explicit validation being done in a "Before" middleware method:

```cs
public class ProblemDetailsUsageEndpoint
{
    public ProblemDetails Before(NumberMessage message)
    {
        // If the number is greater than 5, fail with a
        // validation message
        if (message.Number > 5)
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };

        // All good, keep on going!
        return WolverineContinue.NoProblems;
    }

    [WolverinePost("/problems")]
    public static string Post(NumberMessage message)
    {
        return "Ok";
    }
}

public record NumberMessage(int Number);
```

snippet source | anchor

Wolverine.Http now (as of 1.2.0) has a convention that sees a return value of `ProblemDetails` and treats it as a "continuation" that tells the HTTP handler code what to do next. One of two things will happen: 1. If the `ProblemDetails` return value is the same instance as `WolverineContinue.NoProblems`, just keep going 2.
Otherwise, write the `ProblemDetails` out to the HTTP response and exit the HTTP request handling.

To make that clearer, here's the generated code:

```csharp
public class POST_problems : Wolverine.Http.HttpHandler
{
    private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;

    public POST_problems(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions) : base(wolverineHttpOptions)
    {
        _wolverineHttpOptions = wolverineHttpOptions;
    }

    public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
    {
        var problemDetailsUsageEndpoint = new WolverineWebApi.ProblemDetailsUsageEndpoint();
        var (message, jsonContinue) = await ReadJsonAsync<WolverineWebApi.NumberMessage>(httpContext);
        if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
        var problemDetails = problemDetailsUsageEndpoint.Before(message);
        if (!(ReferenceEquals(problemDetails, Wolverine.Http.WolverineContinue.NoProblems)))
        {
            await Microsoft.AspNetCore.Http.Results.Problem(problemDetails).ExecuteAsync(httpContext).ConfigureAwait(false);
            return;
        }

        var result_of_Post = WolverineWebApi.ProblemDetailsUsageEndpoint.Post(message);
        await WriteString(httpContext, result_of_Post);
    }
}
```

And for more context, here are the matching "happy path" and "sad path" tests for the endpoint above:

```cs
[Fact]
public async Task continue_happy_path()
{
    // Should be good
    await Scenario(x =>
    {
        x.Post.Json(new NumberMessage(3)).ToUrl("/problems");
    });
}

[Fact]
public async Task stop_with_problems_if_middleware_trips_off()
{
    // This is the "sad path" that should spawn a ProblemDetails
    // object
    var result = await Scenario(x =>
    {
        x.Post.Json(new NumberMessage(10)).ToUrl("/problems");
        x.StatusCodeShouldBe(400);
        x.ContentTypeShouldBe("application/problem+json");
    });
}
```

snippet source | anchor

Lastly, if Wolverine sees the existence of a `ProblemDetails` return value in any middleware, Wolverine will fill in OpenAPI metadata for the "application/problem+json" content type and a status code of 400.
This behavior can be easily overridden with your own metadata if you need to use a different status code, like this:

```csharp
// Use 418 as the status code instead
[ProducesResponseType(typeof(ProblemDetails), 418)]
```

### Using ProblemDetails with Marten aggregates

Of course, if you are using [Marten's aggregates within your Wolverine http handlers](./marten), you may also want to do validation against the aggregate's details in your middleware, and that is perfectly possible like this:

```cs
[AggregateHandler]
public static ProblemDetails Before(IShipOrder command, Order order)
{
    if (order.IsShipped())
    {
        return new ProblemDetails
        {
            Detail = "Order already shipped",
            Status = 428
        };
    }

    return WolverineContinue.NoProblems;
}
```

snippet source | anchor

## Within Message Handlers

`ProblemDetails` can be used within message handlers as well with similar rules. See this example from the tests:

```cs public static class NumberMessageHandler { public static ProblemDetails Validate(NumberMessage message) { if (message.Number > 5) { return new ProblemDetails { Detail = "Number is bigger than 5", Status = 400 }; } // All good, keep on going! return WolverineContinue.NoProblems; } // This "Before" method would only be utilized as // an HTTP endpoint [WolverineBefore(MiddlewareScoping.HttpEndpoints)] public static void BeforeButOnlyOnHttp(HttpContext context) { Debug.WriteLine("Got an HTTP request for " + context.TraceIdentifier); CalledBeforeOnlyOnHttpEndpoints = true; } // This "Before" method would only be utilized as // a message handler [WolverineBefore(MiddlewareScoping.MessageHandlers)] public static void BeforeButOnlyOnMessageHandlers() { CalledBeforeOnlyOnMessageHandlers = true; } // Look at this! You can use this as an HTTP endpoint too!
[WolverinePost("/problems2")] public static void Handle(NumberMessage message) { Debug.WriteLine("Handled " + message); Handled = true; } // These properties are just a cheap trick in Wolverine internal tests public static bool Handled { get; set; } public static bool CalledBeforeOnlyOnMessageHandlers { get; set; } public static bool CalledBeforeOnlyOnHttpEndpoints { get; set; } } ``` snippet source | anchor

This functionality was added so that some handlers could be both an endpoint and a message handler without having to duplicate code or delegate to the handler through an endpoint.

---

--- url: /guide/http/files.md ---

# Uploading Files

As of 1.11.0, Wolverine supports file uploads through the standard ASP.Net Core `IFormFile` or `IFormFileCollection` types. All you need to do is have an input parameter of one of these types on your Wolverine.HTTP endpoint, like so:

```cs
public class FileUploadEndpoint
{
    // If you have exactly one file upload, take
    // in IFormFile
    [WolverinePost("/upload/file")]
    public static Task Upload(IFormFile file)
    {
        // access the file data
        return Task.CompletedTask;
    }

    // If you have multiple files at one time,
    // use IFormFileCollection
    [WolverinePost("/upload/files")]
    public static Task Upload(IFormFileCollection files)
    {
        // access files
        return Task.CompletedTask;
    }
}
```

snippet source | anchor

See [Upload files in ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-7.0) for more information about these types.

## Multipart Uploads

Wolverine also supports multipart uploads where you need to combine file uploads with form metadata.
You can:

* Use multiple named `IFormFile` parameters, each bound by form field name
* Combine a `[FromForm]` complex type with a separate `IFormFile` parameter
* Use `IFormCollection` for raw access to all form fields and files

```cs
public static class MultipartUploadEndpoints
{
    // Multiple named file parameters are bound by form field name
    [WolverinePost("/upload/named-files")]
    public static string UploadNamedFiles(IFormFile document, IFormFile thumbnail)
    {
        return $"{document?.FileName}|{document?.Length}|{thumbnail?.FileName}|{thumbnail?.Length}";
    }

    // Combine [FromForm] metadata with a file upload in a single endpoint
    [WolverinePost("/upload/mixed")]
    public static string UploadMixed([FromForm] UploadMetadata metadata, IFormFile file)
    {
        return $"{metadata.Title}|{metadata.Description}|{file?.FileName}|{file?.Length}";
    }

    // Use IFormCollection for raw access to all form data and files
    [WolverinePost("/upload/form-collection")]
    public static string UploadFormCollection(IFormCollection form)
    {
        var keys = string.Join(",", form.Keys.OrderBy(k => k));
        var fileCount = form.Files.Count;
        return $"keys:{keys}|files:{fileCount}";
    }
}
```

snippet source | anchor

Each `IFormFile` parameter is matched to the uploaded file by its parameter name. When sending a multipart request, make sure the form field names match the parameter names in your endpoint method.

---

--- url: /guide/messaging/transports/sns.md ---

# Using Amazon SNS

::: warning
At this moment, Wolverine cannot support request/reply mechanics (`IMessageBus.InvokeAsync()`) with SNS.
:::

::: tip
Due to the nature of SNS, Wolverine doesn't include any listening functionality for this transport. You may forward messages to Amazon SQS and use it in conjunction with the SQS transport to listen for incoming messages.
:::

Wolverine supports [Amazon SNS](https://aws.amazon.com/sns/) as a messaging transport through the WolverineFx.AmazonSns package.
## Connecting to the Broker First, if you are using the [shared AWS config and credentials files](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html), the SNS connection is just this: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // This does depend on the server having an AWS credentials file // See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html for more information opts.UseAmazonSnsTransport() // Let Wolverine create missing topics and subscriptions as necessary .AutoProvision(); }).StartAsync(); ``` snippet source | anchor ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var config = builder.Configuration; opts.UseAmazonSnsTransport(snsConfig => { snsConfig.ServiceURL = config["AwsUrl"]; // And any other elements of the SNS AmazonSimpleNotificationServiceConfig // that you may need to configure }) // Let Wolverine create missing topics and subscriptions as necessary .AutoProvision(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor If you'd just like to connect to Amazon SNS running from within [LocalStack](https://localstack.cloud/) on your development box, there's this helper: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Connect to an SNS broker running locally // through LocalStack opts.UseAmazonSnsTransportLocally(); }).StartAsync(); ``` snippet source | anchor And lastly, if you want to explicitly supply an access and secret key for your credentials to SNS, you can use this syntax: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var config = builder.Configuration; opts.UseAmazonSnsTransport(snsConfig => { snsConfig.ServiceURL = config["AwsUrl"]; // And any other elements of the SNS AmazonSimpleNotificationServiceConfig // that you may need to configure }) // And you can also add explicit AWS credentials .Credentials(new 
BasicAWSCredentials(config["AwsAccessKey"], config["AwsSecretKey"])) // Let Wolverine create missing topics and subscriptions as necessary .AutoProvision(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor

## Publishing

Configuring subscriptions through Amazon SNS topics is done with the `ToSnsTopic()` extension method shown in the example below:

```cs
var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSnsTransport();

        opts.PublishMessage()
            .ToSnsTopic("outbound1")

            // Increase the outgoing message throughput, but at the cost
            // of strict ordering
            .MessageBatchMaxDegreeOfParallelism(Environment.ProcessorCount)
            .ConfigureTopicCreation(conf =>
            {
                // Configure topic creation request...
            });
    }).StartAsync();
```

snippet source | anchor

## Topic Subscriptions

Wolverine gives you the ability to automatically subscribe SQS queues to SNS topics with its auto-provision feature through the `SubscribeSqsQueue()` extension method:

```cs
var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSnsTransport()
            // Without this, the SubscribeSqsQueue() call does nothing
            // This tells Wolverine to try to ensure all topics, subscriptions,
            // and SQS queues exist at runtime
            .AutoProvision()

            // *IF* you need to use some kind of custom queue policy in your
            // SQS queues *and* want to use AutoProvision() as well, this is
            // the hook to customize that policy. What's shown here is the
            // default policy, included just as an example
            .QueuePolicyForSqsSubscriptions(description =>
            {
                return $$"""
                {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {
                            "Service": "sns.amazonaws.com"
                        },
                        "Action": "sqs:SendMessage",
                        "Resource": "{{description.QueueArn}}",
                        "Condition": {
                            "ArnEquals": {
                                "aws:SourceArn": "{{description.TopicArn}}"
                            }
                        }
                    }]
                }
                """;
            });

        opts.PublishMessage()
            .ToSnsTopic("outbound1")
            // Subscribe an SQS queue to this topic
            .SubscribeSqsQueue("queueName", config =>
            {
                // Configure subscription attributes
                config.RawMessageDelivery = true;
            });
    }).StartAsync();
```

snippet source | anchor

## Interoperability

::: tip
Also see the more generic [Wolverine Guide on Interoperability](/tutorials/interop)
:::

SNS interoperability is done through the `ISnsEnvelopeMapper`. At this point, SNS supports interoperability through MassTransit, NServiceBus, CloudEvents, or user-defined mapping strategies.

---

--- url: /guide/messaging/transports/sqs.md ---

# Using Amazon SQS

Wolverine supports [Amazon SQS](https://aws.amazon.com/sqs/) as a messaging transport through the WolverineFx.AmazonSqs package.

## Connecting to the Broker

First, if you are using the [shared AWS config and credentials files](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html), the SQS connection is just this:

```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // This does depend on the server having an AWS credentials file // See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html for more information opts.UseAmazonSqsTransport() // Let Wolverine create missing queues as necessary .AutoProvision() // Optionally purge all queues on application startup.
// Warning though, this is potentially slow .AutoPurgeOnStartup(); }).StartAsync(); ``` snippet source | anchor ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var config = builder.Configuration; opts.UseAmazonSqsTransport(sqsConfig => { sqsConfig.ServiceURL = config["AwsUrl"]; // And any other elements of the SQS AmazonSQSConfig // that you may need to configure }) // Let Wolverine create missing queues as necessary .AutoProvision() // Optionally purge all queues on application startup. // Warning though, this is potentially slow .AutoPurgeOnStartup(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor If you'd just like to connect to Amazon SQS running from within [LocalStack](https://localstack.cloud/) on your development box, there's this helper: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Connect to an SQS broker running locally // through LocalStack opts.UseAmazonSqsTransportLocally(); }).StartAsync(); ``` snippet source | anchor And lastly, if you want to explicitly supply an access and secret key for your credentials to SQS, you can use this syntax: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var config = builder.Configuration; opts.UseAmazonSqsTransport(sqsConfig => { sqsConfig.ServiceURL = config["AwsUrl"]; // And any other elements of the SQS AmazonSQSConfig // that you may need to configure }) // And you can also add explicit AWS credentials .Credentials(new BasicAWSCredentials(config["AwsAccessKey"], config["AwsSecretKey"])) // Let Wolverine create missing queues as necessary .AutoProvision() // Optionally purge all queues on application startup. 
// Warning though, this is potentially slow .AutoPurgeOnStartup(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor ## Connecting to Multiple Brokers Wolverine supports interacting with multiple Amazon SQS brokers within one application like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransport(config => { // Add configuration for connectivity }); opts.AddNamedAmazonSqsBroker(new BrokerName("americas"), config => { // Add configuration for connectivity }); opts.AddNamedAmazonSqsBroker(new BrokerName("emea"), config => { // Add configuration for connectivity }); // Or explicitly make subscription rules opts.PublishMessage() .ToSqsQueueOnNamedBroker(new BrokerName("emea"), "colors"); // Listen to topics opts.ListenToSqsQueueOnNamedBroker(new BrokerName("americas"), "red"); // Other configuration }).StartAsync(); ``` snippet source | anchor Note that the `Uri` scheme within Wolverine for any endpoints from a "named" Amazon SQS broker is the name that you supply for the broker. So in the example above, you might see `Uri` values for `emea://colors` or `americas://red`. ## Identifier Prefixing for Shared Brokers When sharing a single AWS account or SQS namespace between multiple developers or development environments, you can use `PrefixIdentifiers()` to automatically prepend a prefix to every queue name created by Wolverine. 
This helps isolate cloud resources for each developer or environment: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransport() .AutoProvision() // Prefix all queue names with "dev-john-" .PrefixIdentifiers("dev-john"); // A queue named "orders" becomes "dev-john-orders" opts.ListenToSqsQueue("orders"); }).StartAsync(); ``` You can also use `PrefixIdentifiersWithMachineName()` as a convenience to use the current machine name as the prefix: ```csharp opts.UseAmazonSqsTransport() .AutoProvision() .PrefixIdentifiersWithMachineName(); ``` The default delimiter between the prefix and the original name is `-` for Amazon SQS (e.g., `dev-john-orders`). ## Request/Reply [Request/reply](https://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReply.html) mechanics (`IMessageBus.InvokeAsync()`) are supported with the Amazon SQS transport when system queues are enabled. Wolverine creates a dedicated per-node response queue named like `wolverine-response-[service name]-[node id]` that is used to receive replies. To enable request/reply support, call `EnableSystemQueues()` on the SQS transport configuration: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransport() .AutoProvision() // Enable system queues for request/reply support .EnableSystemQueues(); }).StartAsync(); ``` ::: tip Unlike Azure Service Bus and RabbitMQ where system queues are enabled by default, SQS system queues require explicit opt-in via `EnableSystemQueues()`. This is because creating SQS queues requires IAM permissions that your application may not have. ::: System queues are automatically cleaned up when your application shuts down. Wolverine also tags each system queue with a `wolverine:last-active` timestamp and runs a background keep-alive timer. 
On startup, Wolverine scans for orphaned system queues (from crashed nodes) with the `wolverine-response-` or `wolverine-control-` prefix and deletes any that have been inactive for more than 5 minutes. ## Wolverine Control Queues You can opt into using SQS queues for intra-node communication that Wolverine needs for leader election and background worker distribution. Using SQS for this feature is more efficient than the built-in database control queues that Wolverine uses otherwise, and is necessary for message storage options like RavenDb that do not have a built-in control queue mechanism. ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransport() .AutoProvision() // This enables Wolverine to use SQS queues // created at runtime for communication between // Wolverine nodes .EnableWolverineControlQueues(); }).StartAsync(); ``` Calling `EnableWolverineControlQueues()` implicitly enables system queues and request/reply support as well. ## Disabling System Queues If your application does not have IAM permissions to create or delete queues, you can explicitly disable system queues: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAmazonSqsTransport() .AutoProvision() .SystemQueuesAreEnabled(false); opts.ListenToSqsQueue("send-and-receive"); opts.PublishAllMessages().ToSqsQueue("send-and-receive"); }).StartAsync(); ``` --- --- url: /guide/http/mediator.md --- # Using as Mediator ::: info This isn't what Wolverine was originally designed to do, but seems to be a popular use case for teams struggling with ASP.Net Core MVC Controller bloat. 
:::

For one reason or another, many teams will use Wolverine strictly as a "mediator" that simplifies MVC Controllers by offloading the actual request handling like so:

```cs
public class MediatorController : ControllerBase
{
    [HttpPost("/question")]
    public Task Get(Question question, [FromServices] IMessageBus bus)
    {
        // All the real processing happens in Wolverine
        return bus.InvokeAsync(question);
    }
}
```

snippet source | anchor

## Optimized Minimal API Integration

While that strategy works and doesn't require Wolverine.Http at all, there's an optimized Minimal API approach in Wolverine.HTTP to quickly build ASP.Net Core routes with Wolverine message handlers that bypasses some of the performance overhead of "classic mediator" usage. The functionality comes from extension methods on the ASP.Net Core [WebApplication](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.builder.webapplication?view=aspnetcore-7.0) class used in bootstrapping:

```cs
// Functional equivalent to MapPost(pattern, (command, IMessageBus) => bus.Invoke(command))
app.MapPostToWolverine("/wolverine");
app.MapPutToWolverine("/wolverine");
app.MapDeleteToWolverine("/wolverine");

// Functional equivalent to MapPost(pattern, (request, IMessageBus) => bus.InvokeAsync<Response>(request))
app.MapPostToWolverine("/wolverine/request");
app.MapDeleteToWolverine("/wolverine/request");
app.MapPutToWolverine("/wolverine/request");
```

snippet source | anchor

With this mechanism, Wolverine is able to optimize the runtime function for Minimal API by eliminating IoC service location and some internal dictionary lookups compared to the "classic mediator" approach at the top. This approach is potentially valuable for cases where you want to process a command or event message both through messaging or direct invocation, and also want to execute the same message through an HTTP endpoint.
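To make the mediator usage above concrete with request/reply, here is a sketch of a controller that expects a response back from the handler. The `Answer` type and the handler body are hypothetical stand-ins, not part of the Wolverine samples:

```csharp
// The Question type matches the controller sample above;
// Answer and the handler body are illustrative assumptions
public record Question(string Text);
public record Answer(string Text);

public class QuestionHandler
{
    // Wolverine discovers Handle() methods by convention
    public Answer Handle(Question question)
    {
        return new Answer($"You asked: {question.Text}");
    }
}

public class AskController : ControllerBase
{
    [HttpPost("/ask")]
    public Task<Answer> Ask(Question question, [FromServices] IMessageBus bus)
    {
        // InvokeAsync<T>() runs the matching handler inline and
        // relays its response back to the caller
        return bus.InvokeAsync<Answer>(question);
    }
}
```

The `MapPostToWolverine` extension methods shown above compile this same request/reply flow into a Minimal API route without the controller layer.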
---

--- url: /guide/http/as-parameters.md ---

# Using AsParameters

::: warning
When you use `[AsParameters]`, you can read HTTP form data or deserialize a request body as JSON, but **not both at the same time**, and Wolverine will happily throw an exception telling you so if you try to do this.
:::

::: tip
Use Wolverine's pre-generated code to understand exactly how Wolverine is processing any model object decorated with `[AsParameters]`
:::

Wolverine supports the ASP.Net Core `[AsParameters]` attribute as a marker for complex binding of a mixed bag of HTTP information (headers, form data elements, route arguments, the request body, and IoC services) to a single input model. See the [Microsoft documentation on AsParameters](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/minimal-apis/parameter-binding?view=aspnetcore-9.0) for more background. Below is a sample from our test suite showing what is possible for query string and header values:

```cs
public static class AsParametersEndpoints
{
    [WolverinePost("/api/asparameters1")]
    public static AsParametersQuery Post([AsParameters] AsParametersQuery query)
    {
        return query;
    }
}

public class AsParametersQuery
{
    [FromQuery]
    public Direction EnumFromQuery { get; set; }

    [FromForm]
    public Direction EnumFromForm { get; set; }

    public Direction EnumNotUsed { get; set; }

    [FromQuery]
    public string StringFromQuery { get; set; }

    [FromForm]
    public string StringFromForm { get; set; }

    public string StringNotUsed { get; set; }

    [FromQuery]
    public int IntegerFromQuery { get; set; }

    [FromForm]
    public int IntegerFromForm { get; set; }

    public int IntegerNotUsed { get; set; }

    [FromQuery]
    public float FloatFromQuery { get; set; }

    [FromForm]
    public float FloatFromForm { get; set; }

    public float FloatNotUsed { get; set; }

    [FromQuery]
    public bool BooleanFromQuery { get; set; }

    [FromForm]
    public bool BooleanFromForm { get; set; }

    public bool BooleanNotUsed { get; set; }

    [FromHeader(Name = "x-string")]
    public string StringHeader { get; set; }

    [FromHeader(Name = "x-number")]
    public int NumberHeader { get; set; } = 5;

    [FromHeader(Name = "x-nullable-number")]
    public int? NullableHeader { get; set; }
}
```

snippet source | anchor

And the corresponding test case for utilizing this:

```cs
var result = await Host.Scenario(x => x
    .Post
    .FormData(new Dictionary<string, string>
    {
        { "EnumFromForm", "east" },
        { "StringFromForm", "string2" },
        { "IntegerFromForm", "2" },
        { "FloatFromForm", "2.2" },
        { "BooleanFromForm", "true" },
        { "StringNotUsed", "string3" }
    }).QueryString("EnumFromQuery", "west")
    .QueryString("StringFromQuery", "string1")
    .QueryString("IntegerFromQuery", "1")
    .QueryString("FloatFromQuery", "1.1")
    .QueryString("BooleanFromQuery", "true")
    .QueryString("IntegerNotUsed", "3")
    .ToUrl("/api/asparameters1")
);

var response = result.ReadAsJson<AsParametersQuery>();

response.EnumFromForm.ShouldBe(Direction.East);
response.StringFromForm.ShouldBe("string2");
response.IntegerFromForm.ShouldBe(2);
response.FloatFromForm.ShouldBe(2.2f);
response.BooleanFromForm.ShouldBeTrue();
response.EnumFromQuery.ShouldBe(Direction.West);
response.StringFromQuery.ShouldBe("string1");
response.IntegerFromQuery.ShouldBe(1);
response.FloatFromQuery.ShouldBe(1.1f);
response.BooleanFromQuery.ShouldBeTrue();

response.EnumNotUsed.ShouldBe(default);
response.StringNotUsed.ShouldBe(default);
response.IntegerNotUsed.ShouldBe(default);
response.FloatNotUsed.ShouldBe(default);
response.BooleanNotUsed.ShouldBe(default);
```

snippet source | anchor

Wolverine.HTTP is also able to support `[FromServices]`, `[FromBody]`, and `[FromRoute]` bindings, as shown in this sample from the tests:

```cs
public class AsParameterBody
{
    public string Name { get; set; }
    public Direction Direction { get; set; }
    public int Distance { get; set; }
}

public class AsParametersQuery2
{
    // We do a check inside of an HTTP endpoint that this works correctly
    [FromServices, JsonIgnore]
    public IDocumentStore Store { get; set; }

    [FromBody]
    public AsParameterBody Body { get; set; }

    [FromRoute]
    public string Id { get; set; }

    [FromRoute]
    public int Number { get; set; }
}

public static class AsParametersEndpoints2
{
    [WolverinePost("/asp2/{id}/{number}")]
    public static AsParametersQuery2 Post([AsParameters] AsParametersQuery2 query)
    {
        // Just proving the service binding works
        query.Store.ShouldBeOfType<DocumentStore>();
        return query;
    }
}
```

snippet source | anchor

And lastly, you can use C# records or really just any constructor function as well, and decorate parameters like so:

```cs
public record AsParameterRecord(
    [FromRoute] string Id,
    [FromQuery] int Number,
    [FromHeader(Name = "x-direction")] Direction Direction,
    [FromForm(Name = "test")] bool IsTrue);

public static class AsParameterRecordEndpoint
{
    [WolverinePost("/asparameterrecord/{Id}")]
    public static AsParameterRecord Post([AsParameters] AsParameterRecord input) => input;
}
```

snippet source | anchor

The [Fluent Validation middleware](./fluentvalidation) for Wolverine.HTTP is able to validate against request types bound with `[AsParameters]`:

```cs
public static class ValidatedAsParametersEndpoint
{
    [WolverineGet("/asparameters/validated")]
    public static string Get([AsParameters] ValidatedQuery query)
    {
        return $"{query.Name} is {query.Age}";
    }
}

public class ValidatedQuery
{
    [FromQuery]
    public string? Name { get; set; }

    public int Age { get; set; }

    public class ValidatedQueryValidator : AbstractValidator<ValidatedQuery>
    {
        public ValidatedQueryValidator()
        {
            RuleFor(x => x.Name).NotNull();
        }
    }
}
```

snippet source | anchor

---

--- url: /guide/messaging/transports/azureservicebus.md ---

# Using Azure Service Bus

::: tip
Wolverine.AzureServiceBus is able to support inline, buffered, or durable endpoints.
:::

Wolverine supports [Azure Service Bus](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview) as a messaging transport through the WolverineFx.AzureServiceBus nuget.
## Connecting to the Broker After referencing the Nuget package, the next step to using Azure Service Bus within your Wolverine application is to connect to the service broker using the `UseAzureServiceBus()` extension method as shown below in this basic usage: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // One way or another, you're probably pulling the Azure Service Bus // connection string out of configuration var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus"); // Connect to the broker in the simplest possible way opts.UseAzureServiceBus(azureServiceBusConnectionString) // Let Wolverine try to initialize any missing queues // on the first usage at runtime .AutoProvision() // Direct Wolverine to purge all queues on application startup. // This is probably only helpful for testing .AutoPurgeOnStartup(); // Or if you need some further specification... opts.UseAzureServiceBus(azureServiceBusConnectionString, azure => { azure.RetryOptions.Mode = ServiceBusRetryMode.Exponential; }); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor The advanced configuration for the broker is the [ServiceBusClientOptions](https://learn.microsoft.com/en-us/dotnet/api/azure.messaging.servicebus.servicebusclientoptions?view=azure-dotnet) class from the Azure.Messaging.ServiceBus library. For security purposes, there are overloads of `UseAzureServiceBus()` that will also accept and opt into Azure Service Bus authentication with: 1. [TokenCredential](https://learn.microsoft.com/en-us/dotnet/api/azure.core.tokencredential?view=azure-dotnet) 2. [AzureNamedKeyCredential](https://learn.microsoft.com/en-us/dotnet/api/azure.azurenamedkeycredential?view=azure-dotnet) 3. 
[AzureSasCredential](https://learn.microsoft.com/en-us/dotnet/api/azure.azuresascredential?view=azure-dotnet) ## Request/Reply [Request/reply](https://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReply.html) mechanics (`IMessageBus.InvokeAsync()`) are possible with the Azure Service Bus transport *if* Wolverine has the ability to auto-provision a specific response queue for each node. That queue would be named like `wolverine.response.[application node id]` if you happen to notice that in the Azure Portal. And also see the next section. ## Wolverine Control Queues You can opt into using temporary Azure Service Bus queues for intra-node communication that Wolverine needs for leader election and background worker distribution. Using Azure Service Bus for this feature is more efficient than the built in database control queues that Wolverine uses otherwise, and is necessary for message storage options like RavenDb that do not have a built in control queue mechanism. ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // One way or another, you're probably pulling the Azure Service Bus // connection string out of configuration var azureServiceBusConnectionString = builder .Configuration .GetConnectionString("azure-service-bus")!; // Connect to the broker in the simplest possible way opts.UseAzureServiceBus(azureServiceBusConnectionString) .AutoProvision() // This enables Wolverine to use temporary Azure Service Bus // queues created at runtime for communication between // Wolverine nodes .EnableWolverineControlQueues(); }); ``` snippet source | anchor ## Disabling System Queues If your application will not have permissions to create temporary queues in Azure Service Bus, you will probably want to disable system queues to avoid having some annoying error messages popping up. 
That's easy enough though: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseAzureServiceBusTesting() .AutoProvision().AutoPurgeOnStartup() .SystemQueuesAreEnabled(false); opts.ListenToAzureServiceBusQueue("send_and_receive"); opts.PublishAllMessages().ToAzureServiceBusQueue("send_and_receive"); }).StartAsync(); ``` snippet source | anchor ## Connecting To Multiple Namespaces Wolverine supports the "named broker" feature to connect to multiple Azure Service Bus namespaces from one application: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { var connectionString1 = builder.Configuration.GetConnectionString("azureservicebus1"); opts.AddNamedAzureServiceBusBroker(new BrokerName("one"), connectionString1); var connectionString2 = builder.Configuration.GetConnectionString("azureservicebus2"); opts.AddNamedAzureServiceBusBroker(new BrokerName("two"), connectionString2); opts.PublishAllMessages().ToAzureServiceBusQueueOnNamedBroker(new BrokerName("one"), "queue1"); opts.ListenToAzureServiceBusQueueOnNamedBroker(new BrokerName("two"), "incoming"); opts.ListenToAzureServiceBusSubscriptionOnNamedBroker(new BrokerName("two"), "subscription1"); }); ``` snippet source | anchor --- --- url: /guide/messaging/transports/gcp-pubsub.md --- # Using Google Cloud Platform (GCP) Pub/Sub ::: tip Wolverine.Pubsub is able to support inline, buffered, or durable endpoints. ::: Wolverine supports [GCP Pub/Sub](https://cloud.google.com/pubsub) as a messaging transport through the WolverineFx.Pubsub package. ## Connecting to the Broker After referencing the Nuget package, the next step to using GCP Pub/Sub within your Wolverine application is to connect to the service broker using the `UsePubsub()` extension method. 
If you are running on Google Cloud or with Application Default Credentials (ADC), you just need to supply [your GCP project id](https://support.google.com/googleapi/answer/7014113): ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UsePubsub("your-project-id") // Let Wolverine create missing topics and subscriptions as necessary .AutoProvision() // Optionally purge all subscriptions on application startup. // Warning though, this is potentially slow .AutoPurgeOnStartup(); }).StartAsync(); ``` snippet source | anchor If you'd like to connect to a GCP Pub/Sub emulator running on your development box, you can set emulator detection through this helper: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UsePubsub("your-project-id") // Tries to use GCP Pub/Sub emulator, as it defaults // to EmulatorDetection.EmulatorOrProduction. But you can // supply your own, like EmulatorDetection.EmulatorOnly .UseEmulatorDetection(); }).StartAsync(); ``` snippet source | anchor ## Request/Reply [Request/reply](https://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReply.html) mechanics (`IMessageBus.InvokeAsync()`) are possible with the GCP Pub/Sub transport *if* Wolverine has the ability to auto-provision a specific response topic and subscription for each node. That topic and subscription would be named like `wlvrn.response.[application node id]` if you happen to notice that in your GCP Pub/Sub. ### Enable System Endpoints If your application has permissions to create topics and subscriptions in GCP Pub/Sub, you can enable system endpoints and opt in to the request/reply mechanics mentioned above.
```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UsePubsub("your-project-id") .EnableSystemEndpoints(); }).StartAsync(); ``` snippet source | anchor ## Identifier Prefixing for Shared Environments When sharing a single GCP project between multiple developers or development environments, you can use `PrefixIdentifiers()` to automatically prepend a prefix to every topic and subscription name created by Wolverine. This helps isolate the cloud resources for each developer or environment: ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UsePubsub("your-project-id") .AutoProvision() // Prefix all topic and subscription names with "dev-john." .PrefixIdentifiers("dev-john"); // A topic named "orders" becomes "dev-john.orders" opts.ListenToPubsubTopic("orders"); }).StartAsync(); ``` You can also use `PrefixIdentifiersWithMachineName()` as a convenience to use the current machine name as the prefix: ```csharp opts.UsePubsub("your-project-id") .AutoProvision() .PrefixIdentifiersWithMachineName(); ``` The default delimiter between the prefix and the original name is `.` for GCP Pub/Sub (e.g., `dev-john.orders`). --- --- url: /guide/http/headers.md --- # Using HTTP Headers While you can always just take in `HttpContext` as an argument to your HTTP endpoint method to read request headers, there's some value in having your endpoint methods be [pure functions](https://en.wikipedia.org/wiki/Pure_function) to maximize the testability of your application code. 
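For contrast, taking `HttpContext` directly is perfectly legal, just harder to unit test. A hypothetical endpoint (not from the Wolverine codebase) reading a header that way might look like:

```cs
public static class DirectHeaderEndpoint
{
    // Wolverine will happily pass HttpContext into an endpoint method,
    // but now any unit test has to construct a full HttpContext
    [WolverineGet("/headers/direct")]
    public static string Get(HttpContext context)
        => context.Request.Headers["x-wolverine"].ToString();
}
```

The middleware shown next lets you keep the endpoint signature free of `HttpContext` entirely.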
Since reading header values and parsing those values into specific .NET types is such a common use case, Wolverine has some middleware you can opt into to read the header values and pass them into your endpoint methods using the `[Wolverine.Http.FromHeader]` attribute, as shown in this sample from the Wolverine test code: ```cs // As of Wolverine 2.6, you can utilize header data in middleware public static void Before([FromHeader(Name = "x-day")] string? day) { Debug.WriteLine($"Day header is {day}"); Day = day; // This is for testing } [WolverineGet("/headers/simple")] public string Get( // Find the request header with the supplied name and pass // it as the "name" parameter to this method at runtime [FromHeader(Name = "x-wolverine")] string name) { return name; } [WolverineGet("/headers/int")] public string Get( // Find the request header with the supplied name and pass // it as the "name" parameter to this method at runtime // If the header does not exist, Wolverine will pass // in the default value for the parameter type, in this case // 0 [FromHeader(Name = "x-wolverine")] int number ) { return (number * 2).ToString(); } [WolverineGet("/headers/accepts")] // In this case, push the string value for the "accepts" header // right into the parameter based on the parameter name public string GetETag([FromHeader] string accepts) { return accepts; } ``` snippet source | anchor --- --- url: /guide/messaging/transports/kafka.md --- # Using Kafka ::: warning The Kafka transport does not really support the "Requeue" error handling policy in Wolverine. "Requeue" in this case becomes effectively an inline "Retry" ::: ## Installing To use [Kafka](https://www.confluent.io/what-is-apache-kafka/) as a messaging transport with Wolverine, first install the `WolverineFx.Kafka` library via nuget to your project.
Behind the scenes, this package uses the managed [Confluent.Kafka client library](https://github.com/confluentinc/confluent-kafka-dotnet) for accessing Kafka brokers. ```bash dotnet add package WolverineFx.Kafka ``` ::: warning The configuration in `ConfigureConsumer()` for each topic completely overwrites any previous configuration ::: To connect to Kafka, use this syntax: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseKafka("localhost:9092") // See https://github.com/confluentinc/confluent-kafka-dotnet for the exact options here .ConfigureClient(client => { // configure both producers and consumers }) .ConfigureConsumers(consumer => { // configure only consumers }) .ConfigureProducers(producer => { // configure only producers }) .ConfigureProducerBuilders(builder => { // there are some options that are only exposed // on the ProducerBuilder }) .ConfigureConsumerBuilders(builder => { // there are some Kafka client options that // are only exposed from the builder }) .ConfigureAdminClientBuilders(builder => { // configure admin client builders }); // Just publish all messages to Kafka topics // based on the message type (or message attributes) // This will get fancier in the near future opts.PublishAllMessages().ToKafkaTopics(); // Or explicitly make subscription rules opts.PublishMessage() .ToKafkaTopic("colors") // Fine tune how the Kafka Topic is declared by Wolverine .Specification(spec => { spec.NumPartitions = 6; spec.ReplicationFactor = 3; }) // OR, you can completely control topic creation through this: .TopicCreation(async (client, topic) => { topic.Specification.NumPartitions = 8; topic.Specification.ReplicationFactor = 2; // You do have full access to the IAdminClient to do // whatever you need to do await client.CreateTopicsAsync([topic.Specification]); }) // Override the producer configuration for just this topic .ConfigureProducer(config => { config.BatchSize = 100; config.EnableGaplessGuarantee = true;
config.EnableIdempotence = true; }); // Listen to topics opts.ListenToKafkaTopic("red") .ProcessInline() // Override the consumer configuration for only this // topic // This is NOT combinatorial with the ConfigureConsumers() call above // and completely replaces the parent configuration .ConfigureConsumer(config => { // This will also set the Envelope.GroupId for any // received messages at this topic config.GroupId = "foo"; config.BootstrapServers = "localhost:9092"; // Other configuration }) // Fine tune how the Kafka Topic is declared by Wolverine .Specification(spec => { spec.NumPartitions = 6; spec.ReplicationFactor = 3; }); opts.ListenToKafkaTopic("green") .BufferedInMemory(); // This will direct Wolverine to try to ensure that all // referenced Kafka topics exist at application start up // time opts.Services.AddResourceSetupOnStartup(); }).StartAsync(); ``` snippet source | anchor The various `Configure*****()` methods provide quick access to the full API of the Confluent Kafka library for security and fine tuning the Kafka topic behavior. ## Listener Consumer Settings When building a Kafka listener, Wolverine configures the underlying Confluent Kafka `ConsumerConfig` differently depending on whether the listener endpoint is **durable** (backed by the transactional inbox) and how the listener processes messages. Understanding these settings is important for getting the delivery guarantees you need. 
### How Endpoint Mode Affects Consumer Configuration When an endpoint uses `EndpointMode.Durable` (i.e., you've called `.UseDurableInbox()` or applied durable inbox globally), Wolverine overrides the following consumer setting before building the listener: | Consumer Setting | Durable (`UseDurableInbox`) | Non-Durable (`BufferedInMemory` / `Inline`) | |---|---|---| | `EnableAutoCommit` | `false` | `true` (Kafka default) | | `EnableAutoOffsetStore` | `true` (Kafka default) | `true` (Kafka default) | In **durable mode**, Wolverine disables Kafka's automatic offset *commit* so that offsets are only committed when Wolverine explicitly calls `Commit()` after a message has been successfully persisted to the transactional inbox. The Kafka client still auto-stores the offset on each `Consume()` call (the default behavior), which tracks the consumer's position. However, the stored offset is not pushed to the broker until `Commit()` is called. This gives correct at-least-once semantics -- if the application shuts down unexpectedly before committing, unprocessed messages will be re-delivered when the consumer rejoins the group. In **non-durable mode** (`BufferedInMemory` or `ProcessInline`), Kafka's default auto-commit behavior is left in place. The Kafka client library periodically commits offsets automatically, which provides higher throughput at the cost of potential message loss during an ungraceful shutdown. ### Offset Commit Behavior in the Listener Regardless of endpoint mode, the `KafkaListener` calls `_consumer.Commit()` in these situations: * **On successful processing** -- `CompleteAsync()` explicitly commits the consumer offset after a message finishes processing. In durable mode this is the *only* path that advances the offset. * **On poison pill messages** -- If an incoming Kafka message cannot be deserialized into a Wolverine envelope at all (a true poison pill), the listener commits the offset to skip past the bad message and avoid blocking the consumer. 
* **On dead letter queue routing** -- When a message exhausts all retries and is moved to the native dead letter queue topic, the offset is committed after the DLQ produce succeeds. ### Recommended Configuration by Use Case **At-least-once delivery** (recommended for most use cases): ```csharp opts.ListenToKafkaTopic("orders") .UseDurableInbox(); ``` This ensures messages are persisted to the inbox before the offset is committed. If your process crashes, the message will be re-delivered by Kafka and de-duplicated by Wolverine's inbox. **Higher throughput, at-most-once delivery**: ```csharp opts.ListenToKafkaTopic("telemetry") .BufferedInMemory(); ``` With auto-commit enabled, offsets may be committed before processing completes. This is suitable for high-volume, loss-tolerant workloads like telemetry or logging. **Inline processing with manual consumer tuning**: ```csharp opts.ListenToKafkaTopic("events") .ProcessInline() .ConfigureConsumer(config => { config.EnableAutoCommit = false; config.AutoOffsetReset = AutoOffsetReset.Earliest; }); ``` You can always override any consumer setting per-topic using `ConfigureConsumer()`. Note that this **completely replaces** the parent-level consumer configuration -- it is not combinatorial. ## Publishing by Partition Key To publish messages with Kafka using a designated [partition key](https://developer.confluent.io/courses/apache-kafka/partitions/), use the `DeliveryOptions` to designate a partition like so: ```cs public static ValueTask publish_by_partition_key(IMessageBus bus) { return bus.PublishAsync(new Message1(), new DeliveryOptions { PartitionKey = "one" }); } ``` snippet source | anchor ## Propagating GroupId to PartitionKey When consuming from a Kafka topic, the incoming envelope's `GroupId` is automatically set from the Kafka consumer's configured `GroupId`. 
If your handler produces cascaded messages that should land on the same partition, you can enable automatic propagation of the originating `GroupId` to the outgoing `PartitionKey`: ```csharp opts.Policies.PropagateGroupIdToPartitionKey(); ``` This eliminates the need to manually set `DeliveryOptions.PartitionKey` on every outgoing message from your handlers. The rule will never override an explicitly set `PartitionKey`. See the [Partitioned Sequential Messaging](/guide/messaging/partitioning#propagating-groupid-to-partitionkey) documentation for more details and a code sample. ## Interoperability ::: tip Also see the more generic [Wolverine Guide on Interoperability](/tutorials/interop) ::: It's a complex world out there, and it's more than likely you'll need your Wolverine application to interact with systems that aren't also Wolverine applications. At this time, it's possible to send or receive raw JSON through Kafka and Wolverine by using the options shown below in test harness code: ```cs _receiver = await Host.CreateDefaultBuilder() .UseWolverine(opts => { //opts.EnableAutomaticFailureAcks = false; opts.UseKafka("localhost:9092").AutoProvision(); opts.ListenToKafkaTopic("json") // You do have to tell Wolverine what the message type // is that you'll receive here so that it can deserialize the // incoming data .ReceiveRawJson(); // Include test assembly for handler discovery opts.Discovery.IncludeAssembly(GetType().Assembly); opts.Services.AddResourceSetupOnStartup(); opts.PersistMessagesWithPostgresql(Servers.PostgresConnectionString, "kafka"); opts.Policies.UseDurableInboxOnAllListeners(); }).StartAsync(); _sender = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseKafka("localhost:9092").AutoProvision(); opts.Policies.DisableConventionalLocalRouting(); opts.Services.AddResourceSetupOnStartup(); opts.PublishAllMessages().ToKafkaTopic("json") // Just publish the outgoing information as pure JSON // and
no other Wolverine metadata .PublishRawJson(new JsonSerializerOptions()); }).StartAsync(); ``` snippet source | anchor ## Instrumentation & Diagnostics When receiving messages through Kafka and Wolverine, there are some useful elements of Kafka metadata on the Wolverine `Envelope` you can use for instrumentation or diagnostics as shown in this sample middleware: ```cs public static class KafkaInstrumentation { // Just showing what data elements are available to use for // extra instrumentation when listening to Kafka topics public static void Before(Envelope envelope, ILogger logger) { logger.LogDebug("Received message from Kafka topic {TopicName} with Offset={Offset} and GroupId={GroupId}", envelope.TopicName, envelope.Offset, envelope.GroupId); } } ``` snippet source | anchor ## Connecting to Multiple Brokers Wolverine supports interacting with multiple Kafka brokers within one application like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseKafka("localhost:9092"); opts.AddNamedKafkaBroker(new BrokerName("americas"), "americas-kafka:9092"); opts.AddNamedKafkaBroker(new BrokerName("emea"), "emea-kafka:9092"); // Just publish all messages to Kafka topics // based on the message type (or message attributes) // This will get fancier in the near future opts.PublishAllMessages().ToKafkaTopicsOnNamedBroker(new BrokerName("americas")); // Or explicitly make subscription rules opts.PublishMessage() .ToKafkaTopicOnNamedBroker(new BrokerName("emea"), "colors"); // Listen to topics opts.ListenToKafkaTopicOnNamedBroker(new BrokerName("americas"), "red"); // Other configuration }).StartAsync(); ``` snippet source | anchor Note that the `Uri` scheme within Wolverine for any endpoints from a "named" Kafka broker is the name that you supply for the broker. So in the example above, you might see `Uri` values for `emea://colors` or `americas://red`. 
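These endpoint `Uri` values can also be used to address a named-broker endpoint directly at runtime. As an illustrative sketch reusing the broker names and the `Message1` type from the samples above:

```cs
public static ValueTask send_to_named_broker(IMessageBus bus)
{
    // "emea://colors" resolves to the "colors" topic on the Kafka
    // broker that was registered with the name "emea"
    return bus.EndpointFor(new Uri("emea://colors"))
        .SendAsync(new Message1());
}
```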
## Native Dead Letter Queue Wolverine supports routing failed Kafka messages to a designated dead letter queue (DLQ) Kafka topic instead of relying on database-backed dead letter storage. This is opt-in on a per-listener basis. ### Enabling the Dead Letter Queue To enable the native DLQ for a Kafka listener, use the `EnableNativeDeadLetterQueue()` method: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseKafka("localhost:9092").AutoProvision(); opts.ListenToKafkaTopic("incoming") .ProcessInline() .EnableNativeDeadLetterQueue(); }).StartAsync(); ``` When a message fails all retry attempts, it will be produced to the DLQ Kafka topic (default: `wolverine-dead-letter-queue`) with the original message body and Wolverine envelope headers intact. The following exception metadata headers are added: * `exception-type` - The full type name of the exception * `exception-message` - The exception message * `exception-stack` - The exception stack trace * `failed-at` - Unix timestamp in milliseconds when the failure occurred ### Configuring the DLQ Topic Name The default DLQ topic name is `wolverine-dead-letter-queue`. You can customize this at the transport level: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseKafka("localhost:9092") .AutoProvision() .DeadLetterQueueTopicName("my-app-dead-letters"); opts.ListenToKafkaTopic("incoming") .ProcessInline() .EnableNativeDeadLetterQueue(); }).StartAsync(); ``` The DLQ topic is shared across all listeners on the same Kafka transport that have native DLQ enabled. When `AutoProvision` is enabled, the DLQ topic will be automatically created. ## Disabling all Sending Hey, you might have an application that only consumes Kafka messages, but there are a *few* diagnostics in Wolverine that try to send messages. 
To completely eliminate that, you can disable all message sending in Wolverine like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts .UseKafka("localhost:9092") // Tell Wolverine that this application will never // produce messages to turn off any diagnostics that might // try to "ping" a topic and result in errors .ConsumeOnly(); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/local.md --- # Using Local Queueing Using the `Wolverine.IMessageBus` service that is automatically registered in your system through the `IHostBuilder.UseWolverine()` extensions, you can either invoke message handlers inline, enqueue messages to local, in-process queues, or schedule message execution within the system. All known message handlers within a Wolverine application can be used from `IMessageBus` without any additional explicit configuration. ## Publishing Messages Locally The queueing is based around [dataflow block](https://docs.microsoft.com/en-us/dotnet/standard/parallel-programming/how-to-perform-action-when-a-dataflow-block-receives-data) objects from the [TPL Dataflow library](https://docs.microsoft.com/en-us/dotnet/standard/parallel-programming/dataflow-task-parallel-library). As such, you have a fair amount of control over parallelization and even some back pressure. These local queues can be used directly, or as a transport to accept messages sent through `IMessageBus.SendAsync()` or `IMessageBus.PublishAsync()` using the application's [message routing rules](/guide/messaging/subscriptions.html#routing-rules). This feature is useful for asynchronous processing in web applications or really any kind of application where you need some parallelization or concurrency.
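As a quick sketch of that default behavior (the message type and handler here are hypothetical, not from the Wolverine codebase):

```cs
// Hypothetical message type purely for illustration
public record InvoiceApproved(Guid InvoiceId);

public static class InvoiceApprovedHandler
{
    // With Wolverine's conventional local routing, this handler
    // runs on a local queue named after the message type
    public static void Handle(InvoiceApproved message)
        => Console.WriteLine($"Approved invoice {message.InvoiceId}");
}

public static class Publisher
{
    // PublishAsync() routes the message to that local queue by convention
    public static ValueTask Publish(IMessageBus bus)
        => bus.PublishAsync(new InvoiceApproved(Guid.NewGuid()));
}
```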
Some things to know about the local queues: * Local worker queues can be durable, meaning that the enqueued messages are persisted first so that they aren't lost if the application is shut down before they're processed. More on that below. * You can use any number of named local queues, and they don't even have to be declared upfront (might want to be careful with that though) * Local worker queues utilize Wolverine's [error handling](/guide/handlers/error-handling) policies to selectively handle any detected exceptions from the [message handlers](/guide/handlers/). * You can control the priority and parallelization of each individual local queue * Message types can be routed to particular queues, **but by default Wolverine will route messages to an individual local queue for each message type that is named for the message type name** * [Cascading messages](/guide/handlers/cascading) can be used with the local queues * The local queues can be used like any other message transport and be the target of routing rules ## Explicitly Publish to a Specific Local Queue If you want to enqueue a message locally to a specific worker queue, you can use this syntax: ```cs public ValueTask EnqueueToQueue(IMessageContext bus) { var @event = new InvoiceCreated { Time = DateTimeOffset.Now, Purchaser = "Guy Fieri", Amount = 112.34, Item = "Cookbook" }; // Put this message in a local worker // queue named 'highpriority' return bus.EndpointFor("highpriority").SendAsync(@event); } ``` snippet source | anchor ## Scheduling Local Execution :::tip If you need the command scheduling to be persistent or be persisted across service restarts, you'll need to enable the [message persistence](/guide/durability/) within Wolverine. ::: The "scheduled execution" feature can be used with local execution within the same application. See [Scheduled Messages](/guide/messaging/message-bus.html#scheduling-message-delivery-or-execution) for more information. 
Use the `IMessageBus.ScheduleAsync()` extension methods like this: ```cs public async Task ScheduleLocally(IMessageContext bus, Guid invoiceId) { var message = new ValidateInvoiceIsNotLate { InvoiceId = invoiceId }; // Schedule the message to be processed in a certain amount // of time await bus.ScheduleAsync(message, 30.Days()); // Schedule the message to be processed at a certain time await bus.ScheduleAsync(message, DateTimeOffset.Now.AddDays(30)); } ``` snippet source | anchor ## Explicit Local Message Routing In the absence of any kind of routing rules, any message enqueued with `IMessageBus.PublishAsync()` will just be handled by a local queue with the message type name. To override that choice on a message type by message type basis, you can use the `[LocalQueue]` attribute on a message type: ```cs [LocalQueue("important")] public class ImportanceMessage; ``` snippet source | anchor Otherwise, you can take advantage of Wolverine's message routing rules like this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Publish Message2 messages to the "important" // local queue opts.PublishMessage<Message2>() .ToLocalQueue("important"); }).StartAsync(); ``` snippet source | anchor The routing rules and/or `[LocalQueue]` routing are also honored for cascading messages, meaning that any message that is handled inside a Wolverine system could publish cascading messages to the local worker queues. See [message routing rules](/guide/messaging/subscriptions.html#routing-rules) for more information. ## Conventional Local Messaging Conventional local message routing is applied to every message type handled by the system that does not have some kind of explicit message type routing rule.
You can override the mapping of message types to local queues with this syntax: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Out of the box, this uses a separate local queue // for each message based on the message type name opts.Policies.ConfigureConventionalLocalRouting() // Or you can customize the usage of queues // per message type .Named(type => type.Namespace) // Optionally configure the local queues .CustomizeQueues((type, listener) => { listener.Sequential(); }); }).StartAsync(); ``` snippet source | anchor ## Disable Conventional Local Routing Sometimes you'll want to disable the conventional routing to local queues, especially if you want to evenly distribute work across active nodes in an application. To do so, use this syntax: ```cs public static async Task disable_queue_routing() { using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // This will disable the conventional local queue // routing that would take precedence over other conventional // routing opts.Policies.DisableConventionalLocalRouting(); // Other routing conventions. Rabbit MQ? SQS? }).StartAsync(); } ``` snippet source | anchor ## Configuring Local Queues ::: warning The current default is for local queues to allow for parallel processing with the maximum number of parallel threads set at the number of processors for the current machine. Likewise, the queues are unordered by default.
::: You can configure durability or parallelization rules on individual queues, or apply conventional configuration across queues, with this usage: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Explicit configuration for the local queue // by the message type it handles: opts.LocalQueueFor<Message1>() .UseDurableInbox() .Sequential(); // Explicit configuration by queue name opts.LocalQueue("one") .Sequential(); opts.LocalQueue("two") .MaximumParallelMessages(10) .UseDurableInbox(); // Apply configuration options to all local queues, // but explicit changes to specific local queues take precedence opts.Policies.AllLocalQueues(x => x.UseDurableInbox()); }).StartAsync(); ``` snippet source | anchor ## Using IConfigureLocalQueue to Configure Local Queues ::: info This feature was added in reaction to the newer "sticky" handler-to-local-queue usage, but it's perfectly usable for message types that are happily handled without any "sticky" handler configuration. ::: The advent of ["sticky handlers"](/guide/handlers/sticky) or the [separated handler mode](/guide/handlers/#multiple-handlers-for-the-same-message-type) for better Wolverine usage in modular monoliths admittedly made it a little harder to fine tune the local queue behavior for different message types or message handlers without understanding the Wolverine naming conventions. To get back to leaning more on the type system, Wolverine introduced the static `IConfigureLocalQueue` interface that can be implemented on any handler type to configure the local queue where that handler would run: ```cs /// /// Helps mark a handler to configure the local queue that its messages /// would be routed to.
It's probably only useful to use this with "sticky" handlers /// that run on an isolated local queue /// public interface IConfigureLocalQueue { static abstract void Configure(LocalQueueConfiguration configuration); } ``` snippet source | anchor ::: tip Static interfaces can only be used on non-static types, so even if all your message handler *methods* are static, the handler type itself cannot be static. Just a .NET quirk. ::: To use this, just implement that interface on any message handler type: ```cs public class MultipleMessage1Handler : IConfigureLocalQueue { public static void Handle(MultipleMessage message) { } // This method is configuring the local queue that executes this // handler to be strictly ordered public static void Configure(LocalQueueConfiguration configuration) { configuration.Sequential(); } } ``` snippet source | anchor ## Durable Local Messages The local worker queues can optionally be designated as "durable," meaning that local messages would be persisted until they can be successfully processed to provide a guarantee that the message will be successfully processed in the case of the running application faulting or having been shut down prematurely (assuming that other nodes are running or it's restarted later of course). Here is an example of configuring a local queue to be durable: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Make the default local queue durable opts.DefaultLocalQueue.UseDurableInbox(); // Or do just this by name opts.LocalQueue("important") .UseDurableInbox(); }).StartAsync(); ``` snippet source | anchor See [Durable Inbox and Outbox Messaging](/guide/durability/) for more information. ## Configuring Parallelization and Execution Properties The queues are built on top of the TPL Dataflow library, so it's pretty easy to configure parallelization (how many concurrent messages could be handled by a queue). 
Here's an example of how to establish this: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Force a local queue to be // strictly first in, first out // with no more than a single // thread handling messages enqueued // here // Use this option if message ordering is // important opts.LocalQueue("one") .Sequential(); // Specify the maximum number of parallel threads opts.LocalQueue("two") .MaximumParallelMessages(5); // And finally, this enrolls a queue into the persistent inbox // so that messages can happily be retained and processed // after the service is restarted opts.LocalQueue("four").UseDurableInbox(); }).StartAsync(); ``` snippet source | anchor ## Local Queues as a Messaging Transport ::: tip The local transport is used underneath the covers by Wolverine for retrying locally enqueued messages or scheduled messages that may have initially failed. ::: In the sample Wolverine configuration shown below: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Publish Message2 messages to the "important" // local queue opts.PublishMessage<Message2>() .ToLocalQueue("important"); }).StartAsync(); ``` snippet source | anchor Calling `IMessageBus.SendAsync(new Message2())` would publish the message to the local "important" queue.
```bash dotnet add package WolverineFx.Mqtt ``` In its simplest usage, you enable the MQTT transport by calling the `WolverineOptions.UseMqtt()` extension method and defining which MQTT topics you want to publish or subscribe to with the normal [subscriber rules](/guide/messaging/subscriptions) as shown in this sample: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Connect to the MQTT broker opts.UseMqtt(mqtt => { var mqttServer = builder.Configuration["mqtt_server"]; mqtt .WithMaxPendingMessages(3) .WithClientOptions(client => { client.WithTcpServer(mqttServer); }); }); // Listen to an MQTT topic, and this could also be a wildcard // pattern opts.ListenToMqttTopic("app/incoming") // In the case of receiving JSON data, but // not identifying metadata, tell Wolverine // to assume the incoming message is this type .DefaultIncomingMessage() // The default is AtLeastOnce .QualityOfService(MqttQualityOfServiceLevel.AtMostOnce); // Publish messages to an outbound topic opts.PublishAllMessages() .ToMqttTopic("app/outgoing"); }); using var host = builder.Build(); await host.StartAsync(); ``` ::: info The MQTT transport *at this time* only supports endpoints that are either `Buffered` or `Durable`. ::: ::: warning The MQTT transport does not really support the "Requeue" error handling policy in Wolverine.
"Requeue" in this case becomes effectively an inline "Retry" ::: ## Broadcast to User-Defined Topics As long as the MQTT transport is enabled in your application, you can explicitly publish messages to any named topic through this usage: ```cs public static async Task broadcast(IMessageBus bus) { var paymentMade = new PaymentMade(200, "EUR"); await bus.BroadcastToTopicAsync("region/europe/incoming", paymentMade); } ``` snippet source | anchor ## Publishing to Derived Topic Names ::: info The Wolverine team is open to extending the options for determining the topic name from the message type, but is waiting for feedback from the community before trying to build anything else around this. ::: As a way of routing messages to MQTT topics, you also have this option: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Connect to the MQTT broker opts.UseMqtt(mqtt => { var mqttServer = builder.Configuration["mqtt_server"]; mqtt .WithMaxPendingMessages(3) .WithClientOptions(client => { client.WithTcpServer(mqttServer); }); }); // Publish messages to MQTT topics based on // the message type opts.PublishAllMessages() .ToMqttTopics() .QualityOfService(MqttQualityOfServiceLevel.AtMostOnce); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor In this approach, all messages will be routed to MQTT topics. The topic name for each message type is derived from either Wolverine's [message type name](/guide/messages.html#message-type-name-or-alias) rules or by using the `[Topic("topic name")]` attribute as shown below: ```cs [Topic("one")] public class TopicMessage1; ``` snippet source | anchor ## Publishing by Topic Rules You can publish messages to MQTT topics based on user-defined logic to determine the actual topic name.
As an example, say you have a marker interface for your messages like this: ```cs public interface ITenantMessage { string TenantId { get; } } ``` snippet source | anchor To publish any message implementing that interface to an MQTT topic, you could specify the topic name logic like this: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Connect to the MQTT broker opts.UseMqtt(mqtt => { var mqttServer = builder.Configuration["mqtt_server"]; mqtt .WithMaxPendingMessages(3) .WithClientOptions(client => { client.WithTcpServer(mqttServer); }); }); // Publish any message that implements ITenantMessage to // MQTT with a topic derived from the message opts.PublishMessagesToMqttTopic<ITenantMessage>(m => $"{m.GetType().Name.ToLower()}/{m.TenantId}") // Specify or configure sending through Wolverine for all // MQTT topic broadcasting .QualityOfService(MqttQualityOfServiceLevel.ExactlyOnce) .BufferedInMemory(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor ## Listening by Topic Filter Wolverine supports topic filters for listening. The syntax is still just the same `ListenToMqttTopic(filter)` as shown in this snippet from the Wolverine.MQTT test suite: ```cs _receiver = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseMqttWithLocalBroker(port); opts.ListenToMqttTopic("incoming/one", "group1").RetainMessages(); }).StartAsync(); ``` snippet source | anchor ```cs _receiver = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseMqttWithLocalBroker(port); opts.ListenToMqttTopic("incoming/#").RetainMessages(); }).StartAsync(); ``` snippet source | anchor In the case of receiving any message that matches the topic filter *according to the [MQTT topic filter rules](https://cedalo.com/blog/mqtt-topics-and-mqtt-wildcards-explained/)*, that message will be handled by the listening endpoint defined for that filter.
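To illustrate those filter rules, here is a minimal sketch of the MQTT wildcard matching semantics: `+` matches exactly one topic level, while `#` matches the current level and everything below it. This helper is purely illustrative and is not part of the Wolverine or MQTTnet API:

```csharp
// Purely illustrative helper (not Wolverine/MQTTnet API) showing how
// MQTT topic filters match concrete topic names
public static class TopicFilterSketch
{
    public static bool Matches(string filter, string topic)
    {
        var f = filter.Split('/');
        var t = topic.Split('/');

        for (var i = 0; i < f.Length; i++)
        {
            // "#" is only legal as the last segment and matches any remaining levels
            if (f[i] == "#") return true;

            // The topic ran out of levels before the filter did
            if (i >= t.Length) return false;

            // "+" matches exactly one level; anything else must match literally
            if (f[i] != "+" && f[i] != t[i]) return false;
        }

        // No wildcards left over: the topic must have the same number of levels
        return t.Length == f.Length;
    }
}

// TopicFilterSketch.Matches("incoming/#", "incoming/one/two") -> true
// TopicFilterSketch.Matches("incoming/+", "incoming/one/two") -> false
```

So the `"incoming/#"` listener from the test suite snippet above would receive messages published to `incoming/one`, `incoming/one/two`, and so on.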
## Integrating with Non-Wolverine It's quite likely that when using Wolverine with an MQTT broker you will be communicating with non-Wolverine systems or devices on the other end, so you can't depend on the Wolverine metadata being sent in MQTT `UserProperties` data. Not to worry, you've got options. In the case of the external system sending you JSON, but nothing else, if you can design the system such that there's only one type of message coming into a certain MQTT topic, you can just tell Wolverine to listen for that topic and what that message type would be, so that Wolverine is able to deserialize the message and relay it to the correct message handler like so: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Connect to the MQTT broker opts.UseMqtt(mqtt => { var mqttServer = builder.Configuration["mqtt_server"]; mqtt .WithMaxPendingMessages(3) .WithClientOptions(client => { client.WithTcpServer(mqttServer); }); }); // Listen to an MQTT topic, and this could also be a wildcard // pattern opts.ListenToMqttTopic("app/payments/made") // In the case of receiving JSON data, but // not identifying metadata, tell Wolverine // to assume the incoming message is this type .DefaultIncomingMessage<PaymentMade>(); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor For more complex interoperability, you can implement the `IMqttEnvelopeMapper` interface in Wolverine to map between incoming and outgoing MQTT messages and the Wolverine `Envelope` structure. Here's an example: ```cs public class MyMqttEnvelopeMapper : IMqttEnvelopeMapper { public void MapEnvelopeToOutgoing(Envelope envelope, MqttApplicationMessage outgoing) { // This is the only absolutely mandatory item outgoing.PayloadSegment = envelope.Data; // Maybe enrich this more?
outgoing.ContentType = envelope.ContentType; } public void MapIncomingToEnvelope(Envelope envelope, MqttApplicationMessage incoming) { // These are the absolute minimums necessary for Wolverine to function envelope.MessageType = typeof(PaymentMade).ToMessageTypeName(); envelope.Data = incoming.PayloadSegment.Array; // Optional items envelope.DeliverWithin = 5.Seconds(); // throw away the message if it // is not successfully processed // within 5 seconds } } ``` snippet source | anchor And apply that to an MQTT topic like so: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Connect to the MQTT broker opts.UseMqtt(mqtt => { var mqttServer = builder.Configuration["mqtt_server"]; mqtt .WithMaxPendingMessages(3) .WithClientOptions(client => { client.WithTcpServer(mqttServer); }); }); // Publish messages to MQTT topics based on // the message type opts.PublishAllMessages() .ToMqttTopics() // Tell Wolverine to map envelopes to MQTT messages // with our custom strategy .UseInterop(new MyMqttEnvelopeMapper()) .QualityOfService(MqttQualityOfServiceLevel.AtMostOnce); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor ## Clearing Out Retained Messages MQTT brokers allow you to publish retained messages to a topic, meaning that the last message will always be retained by the broker and sent to any new subscribers. That's a little bit problematic if your Wolverine application happens to be restarted, because that last retained message may easily be resent to your Wolverine application when you restart. 
Not to fear, the MQTT protocol allows you to "clear" out a topic by sending it a zero-byte message, and Wolverine has a couple of shortcuts for doing just that by returning a cascading message to "zero out" the topic a message was received on or a named topic like this: ```cs public static AckMqttTopic Handle(ZeroMessage message) { // "Zero out" the topic that the original message was received from return new AckMqttTopic(); } public static ClearMqttTopic Handle(TriggerZero message) { // "Zero out" the designated topic return new ClearMqttTopic("red"); } ``` snippet source | anchor ## Authentication via OAuth2 Wolverine supports MQTT v5 OAuth2/JWT authentication by supplying a token callback and refresh interval when you configure the transport. The callback returns raw token bytes (use UTF-8 encoding if your token is a string). When configured, Wolverine sets the MQTT authentication method to `OAUTH2-JWT`, sends the initial token with the connect packet, and re-authenticates on the configured refresh period while the client is connected. ::: info You don't need to configure `AuthenticationMethod` and `AuthenticationData` yourself. These are overridden when the `MqttJwtAuthenticationOptions` parameter is set. ::: Minimal configuration example: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.UseMqtt( mqtt => mqtt.WithClientOptions(client => client.WithTcpServer("broker")), new MqttJwtAuthenticationOptions( async () => Encoding.UTF8.GetBytes(await GetJwtTokenAsync()), 30.Minutes())); }); ``` ## Interoperability ::: tip Also see the more generic [Wolverine Guide on Interoperability](/tutorials/interop) ::: The Wolverine MQTT transport supports pluggable interoperability strategies through the `Wolverine.MQTT.IMqttEnvelopeMapper` interface to map between Wolverine's `Envelope` structure and MQTT's `MqttApplicationMessage` structure.
Here's a simple example: ```cs public class MyMqttEnvelopeMapper : IMqttEnvelopeMapper { public void MapEnvelopeToOutgoing(Envelope envelope, MqttApplicationMessage outgoing) { // This is the only absolutely mandatory item outgoing.PayloadSegment = envelope.Data; // Maybe enrich this more? outgoing.ContentType = envelope.ContentType; } public void MapIncomingToEnvelope(Envelope envelope, MqttApplicationMessage incoming) { // These are the absolute minimums necessary for Wolverine to function envelope.MessageType = typeof(PaymentMade).ToMessageTypeName(); envelope.Data = incoming.PayloadSegment.Array; // Optional items envelope.DeliverWithin = 5.Seconds(); // throw away the message if it // is not successfully processed // within 5 seconds } } ``` snippet source | anchor You will need to apply that mapper to each endpoint like so: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // Connect to the MQTT broker opts.UseMqtt(mqtt => { var mqttServer = builder.Configuration["mqtt_server"]; mqtt .WithMaxPendingMessages(3) .WithClientOptions(client => { client.WithTcpServer(mqttServer); }); }); // Publish messages to MQTT topics based on // the message type opts.PublishAllMessages() .ToMqttTopics() // Tell Wolverine to map envelopes to MQTT messages // with our custom strategy .UseInterop(new MyMqttEnvelopeMapper()) .QualityOfService(MqttQualityOfServiceLevel.AtMostOnce); }); using var host = builder.Build(); await host.StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/nats.md --- # Using NATS ::: tip Wolverine uses the official [NATS.Net client](https://github.com/nats-io/nats.net) to connect to NATS. 
::: ## Installing To use [NATS](https://nats.io/) as a messaging transport with Wolverine, first install the `WolverineFx.Nats` library via NuGet: ```bash dotnet add package WolverineFx.Nats ``` ## Core NATS vs JetStream NATS provides two distinct messaging models: | Feature | Core NATS | JetStream | |---------|-----------|-----------| | **Persistence** | None (memory only) | Configurable (memory/file) | | **Delivery Guarantee** | At-most-once | At-least-once | | **Acknowledgments** | None | Full support (ack/nak/term) | | **Requeue** | Via republish | Native via `NakAsync()` | | **Dead Letter** | Not available | Via `AckTerminateAsync()` | | **Scheduled Delivery** | Not available | Native (Server 2.12+) | Choose **Core NATS** for: * Real-time notifications where message loss is acceptable * Low-latency fire-and-forget messaging * Heartbeats and ephemeral events Choose **JetStream** for: * Commands and events requiring durability * Workflows where message delivery must be guaranteed * Scenarios requiring replay or scheduled delivery ## Basic Configuration ### Core NATS (Simple Pub/Sub) ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Connect to NATS opts.UseNats("nats://localhost:4222") .AutoProvision(); // Listen to a subject opts.ListenToNatsSubject("orders.received") .ProcessInline(); // Publish to a subject opts.PublishAllMessages() .ToNatsSubject("orders.received"); }).StartAsync(); ``` ### JetStream (Durable Messaging) ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseNats("nats://localhost:4222") .AutoProvision() .UseJetStream() .DefineWorkQueueStream("ORDERS", "orders.>"); // Listen with JetStream consumer opts.ListenToNatsSubject("orders.received") .UseJetStream("ORDERS", "orders-consumer"); // Publishing automatically uses JetStream when stream is defined opts.PublishAllMessages() .ToNatsSubject("orders.received"); }).StartAsync(); ``` ## Connection Configuration ### Basic 
Connection ```csharp opts.UseNats("nats://localhost:4222"); ``` ### Connection with Timeouts ```csharp opts.UseNats("nats://localhost:4222") .ConfigureTimeouts( connectTimeout: TimeSpan.FromSeconds(10), requestTimeout: TimeSpan.FromSeconds(30) ); ``` ## Authentication ### Username and Password ```csharp opts.UseNats("nats://localhost:4222") .WithCredentials("username", "password"); ``` ### Token Authentication ```csharp opts.UseNats("nats://localhost:4222") .WithToken("my-secret-token"); ``` ### NKey Authentication ```csharp opts.UseNats("nats://localhost:4222") .WithNKey("/path/to/nkey.file"); ``` ### TLS Configuration ```csharp opts.UseNats("nats://localhost:4222") .UseTls(insecureSkipVerify: false); ``` ## JetStream Configuration ### Configuring JetStream Defaults ```csharp opts.UseNats("nats://localhost:4222") .UseJetStream(js => { js.MaxDeliver = 5; // Max redelivery attempts js.AckWait = TimeSpan.FromSeconds(30); js.DuplicateWindow = TimeSpan.FromMinutes(2); }); ``` ### Defining Streams #### Work Queue Stream (Retention by Interest) ```csharp opts.UseNats("nats://localhost:4222") .DefineWorkQueueStream("ORDERS", "orders.>"); ``` #### Work Queue with Additional Configuration ```csharp opts.UseNats("nats://localhost:4222") .DefineWorkQueueStream("ORDERS", stream => stream.EnableScheduledDelivery(), "orders.>"); ``` #### Custom Stream Configuration ```csharp opts.UseNats("nats://localhost:4222") .DefineStream("EVENTS", stream => { stream.WithSubjects("events.>") .WithLimits(maxMessages: 1_000_000, maxAge: TimeSpan.FromDays(7)) .WithReplicas(3) .EnableScheduledDelivery(); }); ``` #### Log Stream (Time-Based Retention) ```csharp opts.UseNats("nats://localhost:4222") .DefineLogStream("LOGS", TimeSpan.FromDays(30), "logs.>"); ``` #### Replicated Stream (High Availability) ```csharp opts.UseNats("nats://localhost:4222") .DefineReplicatedStream("CRITICAL", replicas: 3, "critical.>"); ``` ### JetStream Domain For multi-tenant or leaf node configurations: ```csharp 
opts.UseNats("nats://localhost:4222") .UseJetStreamDomain("my-domain"); ``` ## Listening to Messages ### Inline Processing Messages are processed immediately on the NATS subscription thread: ```csharp opts.ListenToNatsSubject("orders.received") .ProcessInline(); ``` ### Buffered Processing Messages are queued in memory and processed by worker threads: ```csharp opts.ListenToNatsSubject("orders.received") .BufferedInMemory(); ``` ### JetStream Consumer ```csharp opts.ListenToNatsSubject("orders.received") .UseJetStream("ORDERS", "my-consumer"); ``` ### Named Endpoints ```csharp opts.ListenToNatsSubject("orders.received") .Named("orders-listener"); ``` ## Publishing Messages ### To a Specific Subject ```csharp opts.PublishMessage() .ToNatsSubject("orders.created"); ``` ### All Messages to a Subject ```csharp opts.PublishAllMessages() .ToNatsSubject("events"); ``` ### Inline Sending Send messages synchronously without buffering: ```csharp opts.PublishAllMessages() .ToNatsSubject("orders") .SendInline(); ``` ## Scheduled Message Delivery NATS Server 2.12+ supports native scheduled message delivery. When enabled, Wolverine uses NATS headers for scheduling instead of database persistence. ### Requirements 1. NATS Server version >= 2.12 2. Stream configured with `EnableScheduledDelivery()` ### Configuration ```csharp opts.UseNats("nats://localhost:4222") .UseJetStream() .DefineWorkQueueStream("ORDERS", s => s.EnableScheduledDelivery(), "orders.>"); ``` ### How It Works When conditions are met, scheduled messages use NATS headers: * `Nats-Schedule: @at ` * `Nats-Schedule-Target: ` The transport automatically detects server version at startup. ### Fallback Behavior When native scheduled send is not available (server < 2.12 or stream not configured), Wolverine falls back to its database-backed scheduled message persistence. ## Multi-Tenancy NATS transport supports subject-based tenant isolation. 
### Basic Multi-Tenancy ```csharp opts.UseNats("nats://localhost:4222") .ConfigureMultiTenancy(TenantedIdBehavior.RequireTenantId) .AddTenant("tenant-a") .AddTenant("tenant-b"); ``` ### Tenant Behavior Options * `RequireTenantId`: Throws if tenant ID is missing * `FallbackToDefault`: Uses base subject if tenant ID is missing ### Custom Subject Mapper ```csharp public class MyTenantMapper : ITenantSubjectMapper { public string MapSubjectForTenant(string baseSubject, string tenantId) => $"{tenantId}.{baseSubject}"; public string? ExtractTenantId(string subject) => subject.Split('.').FirstOrDefault(); public string GetSubscriptionPattern(string baseSubject) => $"*.{baseSubject}"; } opts.UseNats("nats://localhost:4222") .UseTenantSubjectMapper(new MyTenantMapper()); ``` ## Request-Reply Wolverine's request-reply pattern works with NATS: ```csharp // Send and wait for response var response = await bus.InvokeAsync(new CreateOrder(...)); ``` The response endpoint always uses Core NATS for low-latency replies, even when the main endpoints use JetStream. ## Error Handling ### JetStream * **Retry**: Message is requeued via `NakAsync()` with optional delay * **Dead Letter**: Message is terminated via `AckTerminateAsync()` ### Core NATS * **Retry**: Message is republished to the subject * **Dead Letter**: Handled by Wolverine's error handling policies ## Auto-Provisioning Enable automatic creation of streams and consumers: ```csharp opts.UseNats("nats://localhost:4222") .AutoProvision(); ``` Or use resource setup on startup: ```csharp opts.Services.AddResourceSetupOnStartup(); ``` ## Subject Prefix When sharing a NATS server between multiple developers or development environments, you can add a prefix to all NATS subjects to isolate each environment's messaging. 
Use `WithSubjectPrefix()` or the generic `PrefixIdentifiers()` method: ```csharp opts.UseNats("nats://localhost:4222") .WithSubjectPrefix("myapp"); // Subject "orders" becomes "myapp.orders" ``` You can also use `PrefixIdentifiersWithMachineName()` as a convenience to use the current machine name as the prefix: ```csharp opts.UseNats("nats://localhost:4222") .PrefixIdentifiersWithMachineName(); ``` ## Complete Example ```csharp using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseNats("nats://localhost:4222") .AutoProvision() .WithCredentials("user", "pass") .UseJetStream(js => { js.MaxDeliver = 5; js.AckWait = TimeSpan.FromSeconds(30); }) .DefineWorkQueueStream("ORDERS", s => s.EnableScheduledDelivery(), "orders.>"); // Listen to orders with JetStream durability opts.ListenToNatsSubject("orders.received") .UseJetStream("ORDERS", "order-processor") .Named("order-listener"); // Publish order events opts.PublishMessage() .ToNatsSubject("orders.created"); opts.PublishMessage() .ToNatsSubject("orders.shipped"); opts.Services.AddResourceSetupOnStartup(); }).StartAsync(); ``` ## Testing To run tests locally: ```bash # Start NATS with JetStream docker run -d --name nats -p 4222:4222 -p 8222:8222 nats:latest --jetstream -m 8222 # For scheduled delivery tests, use NATS 2.12+ docker run -d --name nats -p 4222:4222 -p 8222:8222 nats:2.12-alpine --jetstream -m 8222 ``` --- --- url: /guide/messaging/transports/pulsar.md --- # Using Pulsar ::: info Fun fact, the Pulsar transport was actually the very first messaging broker to be supported by Jasper/Wolverine, but for whatever reason, wasn't officially released until Wolverine 3.0. ::: ## Installing To use [Apache Pulsar](https://pulsar.apache.org/) as a messaging transport with Wolverine, first install the `WolverineFx.Pulsar` library via nuget to your project. 
Behind the scenes, this package uses the managed [DotPulsar client library](https://pulsar.apache.org/docs/next/client-libraries-dotnet/) for accessing Pulsar brokers. ```bash dotnet add package WolverineFx.Pulsar ``` To connect to Pulsar and configure senders and listeners, use this syntax: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.UsePulsar(c => { var pulsarUri = builder.Configuration.GetValue("pulsar"); c.ServiceUrl(pulsarUri); // Any other configuration you want to apply to your // Pulsar client }); // Publish messages to a particular Pulsar topic opts.PublishMessage() .ToPulsarTopic("persistent://public/default/one") // And all the normal Wolverine options... .SendInline(); // Listen for incoming messages from a Pulsar topic opts.ListenToPulsarTopic("persistent://public/default/two") .SubscriptionName("two") .SubscriptionType(SubscriptionType.Exclusive) // And all the normal Wolverine options... .Sequential(); // Listen for incoming messages from a Pulsar topic with a shared subscription and using RETRY and DLQ queues opts.ListenToPulsarTopic("persistent://public/default/three") .WithSharedSubscriptionType() .DeadLetterQueueing(new DeadLetterTopic(DeadLetterTopicMode.Native)) .RetryLetterQueueing(new RetryLetterTopic([TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(3), TimeSpan.FromSeconds(5)])) .Sequential(); }); ``` snippet source | anchor The topic name format is set by Pulsar itself, and you can learn more about its format in [Pulsar Topics](https://pulsar.apache.org/docs/next/concepts-messaging/#topics). ::: info Depending on demand, the Pulsar transport will be enhanced to support conventional routing topologies and more advanced topic routing later. ::: ## Read-Only Subscriptions As part of Wolverine's "Requeue" error handling action, the Pulsar transport tries to quietly create a matching sender for each Pulsar topic it's listening to.
Great, but that will blow up if your application only has receive-only permissions to Pulsar. In this case, you probably want to disable Pulsar requeue actions altogether with this setting: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.UsePulsar(c => { var pulsarUri = builder.Configuration.GetValue("pulsar"); c.ServiceUrl(pulsarUri); }); // Listen for incoming messages from a Pulsar topic opts.ListenToPulsarTopic("persistent://public/default/two") .SubscriptionName("two") .SubscriptionType(SubscriptionType.Exclusive) // Disable the requeue for this topic .DisableRequeue() // And all the normal Wolverine options... .Sequential(); // Disable requeue for all Pulsar endpoints opts.DisablePulsarRequeue(); }); ``` snippet source | anchor If you have an application that has receive-only access to a subscription but not permissions to publish to Pulsar, you cannot use the Wolverine "Requeue" error handling policy. ### Subscription behavior when closing connection By default, the Pulsar transport will automatically close the subscription when the endpoint is being stopped. If the subscription is created for you and should be kept after the application shuts down, you can change this behavior. ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts.UsePulsar(c => { var pulsarUri = builder.Configuration.GetValue("pulsar"); c.ServiceUrl(pulsarUri); }); // Disable unsubscribe on close for all Pulsar endpoints opts.UnsubscribePulsarOnClose(PulsarUnsubscribeOnClose.Disabled); }); ``` snippet source | anchor ## Interoperability ::: tip Also see the more generic [Wolverine Guide on Interoperability](/tutorials/interop) ::: Pulsar interoperability is done through the `IPulsarEnvelopeMapper` interface. --- --- url: /guide/messaging/transports/rabbitmq.md --- # Using Rabbit MQ ::: tip Wolverine uses the [Rabbit MQ .NET Client](https://www.rabbitmq.com/dotnet.html) to connect to Rabbit MQ.
::: ## Installing All the code samples in this section are from the [Ping/Pong with Rabbit MQ sample project](https://github.com/JasperFx/wolverine/tree/main/src/Samples/PingPongWithRabbitMq). To use [RabbitMQ](http://www.rabbitmq.com/) as a transport with Wolverine, first install the `WolverineFx.RabbitMQ` library via NuGet to your project. Behind the scenes, this package uses the [RabbitMQ C# Client](https://www.rabbitmq.com/dotnet.html) to both send and receive messages from RabbitMQ. ```cs return await Host.CreateDefaultBuilder(args) .UseWolverine(opts => { opts.ApplicationAssembly = typeof(Program).Assembly; // Listen for messages coming into the pongs queue opts.ListenToRabbitQueue("pongs"); // Publish messages to the pings queue opts.PublishMessage().ToRabbitExchange("pings"); // Configure Rabbit MQ connection to the connection string // named "rabbit" from IConfiguration. This is *a* way to use // Wolverine + Rabbit MQ using Aspire opts.UseRabbitMqUsingNamedConnection("rabbit") // Directs Wolverine to build any declared queues, exchanges, or // bindings with the Rabbit MQ broker as part of bootstrapping time .AutoProvision(); // Or you can use this functionality to set up *all* known // Wolverine (or Marten) related resources on application startup opts.Services.AddResourceSetupOnStartup(); // This will send ping messages on a continuous // loop opts.Services.AddHostedService(); }).RunJasperFxCommands(args); ``` snippet source | anchor See the [Rabbit MQ .NET Client documentation](https://www.rabbitmq.com/dotnet-api-guide.html#connecting) for more information about configuring the `ConnectionFactory` to connect to Rabbit MQ. ## Managing Rabbit MQ Connections In its default setup, the Rabbit MQ transport in Wolverine will open two connections, one for listening and another for sending messages. All Rabbit MQ endpoints will share these two connections.
If you need to conserve Rabbit MQ connections and have a process that is only sending or only receiving messages through Rabbit MQ, you can opt to turn off one or the other connections that might not be used at runtime. To only listen to Rabbit MQ messages, but never send them: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // *A* way to configure Rabbit MQ using their Uri schema // documented here: https://www.rabbitmq.com/uri-spec.html opts.UseRabbitMq(new Uri("amqp://localhost")) // Turn on the listener connection only if you only need to listen for messages // The sender connection won't be activated in this case .UseListenerConnectionOnly(); // Set up a listener for a queue, but also // fine-tune the queue characteristics if Wolverine // will be governing the queue setup opts.ListenToRabbitQueue("incoming2", q => { q.PurgeOnStartup = true; q.TimeToLive(5.Minutes()); }); }).StartAsync(); ``` snippet source | anchor To only send Rabbit MQ messages, but never receive them: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // *A* way to configure Rabbit MQ using their Uri schema // documented here: https://www.rabbitmq.com/uri-spec.html opts.UseRabbitMq(new Uri("amqp://localhost")) // Turn on the sender connection only if you only need to send messages // The listener connection won't be created in this case .UseSenderConnectionOnly(); // Set up a listener for a queue, but also // fine-tune the queue characteristics if Wolverine // will be governing the queue setup opts.ListenToRabbitQueue("incoming2", q => { q.PurgeOnStartup = true; q.TimeToLive(5.Minutes()); }); }).StartAsync(); ``` snippet source | anchor ## Aspire Integration Just note that when you use the existing Aspire integration for Rabbit MQ, Aspire "pokes" in an environment variable for a Rabbit MQ `Uri` and not a connection string -- even though the Aspire information is available through
`IConfiguration.GetConnectionString()`. Be aware of this when using Aspire so that you're passing that information as a `Uri` like this: ```csharp var rabbitmqEndpoint = builder.Configuration.GetConnectionString("rabbitmq"); if (rabbitmqEndpoint != null) { builder.Host.UseWolverine(opts => { // Important! Convert the "connection string" up above to a Uri opts.UseRabbitMq(new Uri(rabbitmqEndpoint)).AutoProvision(); }); } ``` Why does Aspire do this? We have no idea, but just don't be tripped up by this little quirk. ## Enable Rabbit MQ for Wolverine Control Queues If you are using Wolverine in a cluster of running nodes -- and it's more likely that you are than not if you have any kind of non-trivial load -- Wolverine needs to communicate between its running nodes for various reasons if you are using any kind of message persistence. Normally that communication is done through small, specialized database queues (crude polling), but there's an option to use more efficient, non-durable Rabbit MQ queues for that inter-node communication, one queue per node: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // *A* way to configure Rabbit MQ using their Uri schema // documented here: https://www.rabbitmq.com/uri-spec.html opts.UseRabbitMq(new Uri("amqp://localhost")) // Use Rabbit MQ for inter-node communication .EnableWolverineControlQueues(); }).StartAsync(); ``` snippet source | anchor ## Disable Rabbit MQ Reply Queues ::: info The response queues (and system queues) are now created as durable Rabbit MQ queues with a TTL expiration of 30 minutes after there is no connection for these queues. ::: By default, Wolverine creates an in-memory queue in the Rabbit MQ broker for each individual node that is used by Wolverine for request/reply invocations (`IMessageBus.InvokeAsync()` when used remotely).
Great, but if your process does not have permission from your Rabbit MQ broker to create queues, you may encounter errors. Not to worry, you can disable that Wolverine system queue creation with: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // *A* way to configure Rabbit MQ using their Uri schema // documented here: https://www.rabbitmq.com/uri-spec.html opts.UseRabbitMq(new Uri("amqp://localhost")) // Stop Wolverine from trying to create a reply queue // for this node if your process does not have permission to // do so against your Rabbit MQ broker .DisableSystemRequestReplyQueueDeclaration(); // Set up a listener for a queue, but also // fine-tune the queue characteristics if Wolverine // will be governing the queue setup opts.ListenToRabbitQueue("incoming2", q => { q.PurgeOnStartup = true; q.TimeToLive(5.Minutes()); }); }).StartAsync(); ``` snippet source | anchor Of course, doing so means that you will not be able to do request/reply through Rabbit MQ with your Wolverine application.
## Configuring Channel Creation You can fine-tune how the [Rabbit MQ channels](https://www.rabbitmq.com/docs/channels) are created by Wolverine through this syntax: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { opts .UseRabbitMq(builder.Configuration.GetConnectionString("rabbitmq")) // Fine tune how the underlying Rabbit MQ channels from // this application will behave .ConfigureChannelCreation(o => { o.PublisherConfirmationsEnabled = true; o.PublisherConfirmationTrackingEnabled = true; o.ConsumerDispatchConcurrency = 5; }); }); ``` snippet source | anchor ## Compatibility Note ::: info Wolverine with the `WolverineFX.RabbitMQ` transport has also been verified to work against [LavinMQ](https://lavinmq.com/), a modern RabbitMQ-protocol-compatible message broker, when configured through the standard RabbitMQ integration shown above. ::: --- --- url: /guide/messaging/transports/redis.md --- # Using Redis ## Installing To use [Redis Streams](https://redis.io/docs/latest/develop/data-types/streams/) as a messaging transport for Wolverine, first install the `WolverineFx.Redis` NuGet package to your application. Behind the scenes, the `Wolverine.Redis` library is using the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) library.
```bash dotnet add package WolverineFx.Redis ``` ## Using as Message Transport To connect to Redis and configure listeners and senders, use this syntax: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRedisTransport("localhost:6379") // Auto-create streams and consumer groups .AutoProvision() // Configure default consumer name selector for all Redis listeners .ConfigureDefaultConsumerName((runtime, endpoint) => $"{runtime.Options.ServiceName}-{runtime.DurabilitySettings.AssignedNodeNumber}") // Useful for testing - auto purge queues on startup .AutoPurgeOnStartup(); // Just publish all messages to Redis streams (uses database 0 by default) opts.PublishAllMessages().ToRedisStream("wolverine-messages"); // Or explicitly configure message routing with database ID opts.PublishMessage() .ToRedisStream("colors", databaseId: 1) // Configure specific settings for this stream .BatchSize(50) .SendInline(); // Listen to Redis streams with consumer groups (uses database 0 by default) opts.ListenToRedisStream("red", "color-processors") .ProcessInline() // Configure consumer settings .ConsumerName("red-consumer-1") .BatchSize(10) .BlockTimeout(TimeSpan.FromSeconds(5)) // Start from beginning to consume existing messages (like Kafka's AutoOffsetReset.Earliest) .StartFromBeginning(); // Listen to Redis streams with database ID specified opts.ListenToRedisStream("green", "color-processors", databaseId: 2) .BufferedInMemory() .BatchSize(25) .StartFromNewMessages(); // Default: only new messages (like Kafka's AutoOffsetReset.Latest) opts.ListenToRedisStream("blue", "color-processors", databaseId: 3) .UseDurableInbox() .ConsumerName("blue-consumer") .StartFromBeginning(); // Process existing messages too // Alternative: use StartFrom parameter directly opts.ListenToRedisStream("purple", "color-processors", StartFrom.Beginning) .BufferedInMemory(); // This will direct Wolverine to try to ensure that all // referenced Redis streams and consumer groups
exist at // application start up time opts.Services.AddResourceSetupOnStartup(); }).StartAsync(); ``` snippet source | anchor If you need to control the database id within Redis, you have these options: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRedisTransport("localhost:6379"); // Configure streams on different databases opts.PublishMessage() .ToRedisStream("orders", databaseId: 1); opts.PublishMessage() .ToRedisStream("payments", databaseId: 2); // Listen on different databases opts.ListenToRedisStream("orders", "order-processors", databaseId: 1); opts.ListenToRedisStream("payments", "payment-processors", databaseId: 2); // Advanced configuration with database ID opts.ListenToRedisStream("notifications", "notification-processors", databaseId: 3) .ConsumerName("notification-consumer-1") .BatchSize(100) .BlockTimeout(10.Seconds()) .UseDurableInbox(); }).StartAsync(); ``` snippet source | anchor To work with multiple databases in one application, see this sample: ```cs using var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.UseRedisTransport("localhost:6379").AutoProvision(); // Different message types on different databases for isolation // Database 0: Default messages opts.PublishMessage().ToRedisStream("system-events"); opts.ListenToRedisStream("system-events", "system-processors"); // Database 1: Order processing opts.PublishMessage().ToRedisStream("orders", 1); opts.ListenToRedisStream("orders", "order-processors", 1); // Database 2: Payment processing opts.PublishMessage().ToRedisStream("payments", 2); opts.ListenToRedisStream("payments", "payment-processors", 2); // Database 3: Analytics and reporting opts.PublishMessage().ToRedisStream("analytics", 3); opts.ListenToRedisStream("analytics", "analytics-processors", 3); }).StartAsync(); ``` snippet source | anchor ## Interoperability First, see the [tutorial on interoperability with Wolverine](/tutorials/interop) for general guidance. 
Next, the Redis transport supports interoperability through the `IRedisEnvelopeMapper` interface. If necessary, you can build your own implementation of this mapper interface like the following: ```cs // Simplistic envelope mapper that expects every message to be of // type "TMessage" and serialized as JSON that works perfectly well w/ our // application's default JSON serialization public class OurRedisJsonMapper<TMessage> : EnvelopeMapper<StreamEntry, List<NameValueEntry>>, IRedisEnvelopeMapper { // Wolverine needs to know the message type name private readonly string _messageTypeName = typeof(TMessage).ToMessageTypeName(); public OurRedisJsonMapper(Endpoint endpoint) : base(endpoint) { // Map the data property MapProperty(x => x.Data!, (e, m) => e.Data = m.Values.FirstOrDefault(x => x.Name == "data").Value, (e, m) => m.Add(new NameValueEntry("data", e.Data))); // Set up the message type MapProperty(x => x.MessageType!, (e, m) => e.MessageType = _messageTypeName, (e, m) => m.Add(new NameValueEntry("message-type", _messageTypeName))); // Set up content type MapProperty(x => x.ContentType!, (e, m) => e.ContentType = "application/json", (e, m) => m.Add(new NameValueEntry("content-type", "application/json"))); } protected override void writeOutgoingHeader(List<NameValueEntry> outgoing, string key, string value) { outgoing.Add(new NameValueEntry($"header-{key}", value)); } protected override bool tryReadIncomingHeader(StreamEntry incoming, string key, out string?
value) { var target = $"header-{key}"; foreach (var nv in incoming.Values) { if (nv.Name.Equals(target)) { value = nv.Value.ToString(); return true; } } value = null; return false; } protected override void writeIncomingHeaders(StreamEntry incoming, Envelope envelope) { var headers = incoming.Values.Where(k => k.Name.StartsWith("header-")); foreach (var nv in headers) { envelope.Headers[nv.Name.ToString()[7..]] = nv.Value.ToString(); // Remove "header-" prefix } // Capture the Redis stream message id envelope.Headers["redis-entry-id"] = incoming.Id.ToString(); } } ``` snippet source | anchor ## Scheduled Messaging The Redis transport supports native Redis message scheduling for delayed or scheduled delivery. There's no configuration necessary to utilize that. ## Dead Letter Queue Messages For `Buffered` or `Inline` endpoints, you can use native Redis streams for "dead letter queue" messages using the name "{StreamKey}:dead-letter": ```cs var builder = Host.CreateDefaultBuilder(); using var host = await builder.UseWolverine(opts => { opts.UseRedisTransport("localhost:6379").AutoProvision() .SystemQueuesEnabled(false) // Disable reply queues .DeleteStreamEntryOnAck(true); // Clean up stream entries on ack // Sending inline so the messages are added to the stream right away opts.PublishAllMessages().ToRedisStream("wolverine-messages") .SendInline(); opts.ListenToRedisStream("wolverine-messages", "default") .EnableNativeDeadLetterQueue() // Enable DLQ for failed messages .UseDurableInbox(); // Use durable inbox so retry messages are persisted // schedule retry delays // if durable, these will be scheduled natively in Redis opts.OnException() .ScheduleRetry( TimeSpan.FromSeconds(10), TimeSpan.FromSeconds(20), TimeSpan.FromSeconds(30)); opts.Services.AddResourceSetupOnStartup(); }).StartAsync(); ``` snippet source | anchor --- --- url: /guide/messaging/transports/signalr.md --- # Using SignalR ::: info The SignalR transport has been requested several times, but finally 
got built specifically for the forthcoming "CritterWatch" product that will be used to monitor and manage Wolverine applications. In other words, the Wolverine team has heavily dog-fooded this feature. ::: ::: tip Much of the sample code is taken from a runnable sample application in the Wolverine codebase called [WolverineChat](https://github.com/JasperFx/wolverine/tree/main/src/Samples/WolverineChat). ::: The [SignalR library](https://dotnet.microsoft.com/en-us/apps/aspnet/signalr) from Microsoft isn't hard to use from Wolverine for simplistic WebSockets or Server-Sent Events usage, but what if you want a server-side application to exchange any number of different messages between a browser (or other WebSocket client, because that's actually possible) and your server-side code in a systematic way? To that end, Wolverine now supports a first-class messaging transport for SignalR. To get started, just add a NuGet reference to the `WolverineFx.SignalR` library: ```bash dotnet add package WolverineFx.SignalR ``` ## Configuring the Server ::: tip Wolverine.SignalR does not require any usage of Wolverine.HTTP, but these two libraries can certainly be used in the same application as well. ::: The Wolverine.SignalR library sets up a single SignalR `Hub` type in your system (`WolverineHub`) that will be used to both send and receive messages from the browser. To set up both the SignalR transport and the necessary SignalR services in your DI container, use this syntax in the `Program` file of your web application: ```cs builder.UseWolverine(opts => { // This is the only line of code necessary // to wire SignalR services into Wolverine itself // This does also call IServiceCollection.AddSignalR() // to register DI services for SignalR as well opts.UseSignalR(o => { // Optionally configure the SignalR HubOptions // for the WolverineHub o.ClientTimeoutInterval = 10.Seconds(); }); // Instead of self-hosting, it's also possible to // use Azure SignalR.
Only one of the two SignalR // registrations is necessary. Both register the // required services in DI opts.UseAzureSignalR(hub => { // Optionally configure the SignalR HubOptions // for the WolverineHub hub.ClientTimeoutInterval = 10.Seconds(); }, service => { // And optionally configure the Azure SignalR // options for the connection. service.ApplicationName = "wolverine"; // You probably want one of these from your // configuration somehow service.ConnectionString = "Endpoint=https://myresource.service.signalr.net;AccessKey=...;Version=1.0;"; }); // Using explicit routing to send specific // messages to SignalR opts.Publish(x => { // WolverineChatWebSocketMessage is a marker interface // for messages within this sample application that // is simply a convenience for message routing x.MessagesImplementing<WolverineChatWebSocketMessage>(); x.ToSignalR(); }); }); ``` snippet source | anchor That handles the Wolverine configuration and the SignalR service registrations, but you will also need to map an HTTP route for the SignalR hub with this Wolverine.SignalR helper: ```cs var app = builder.Build(); app.UseRouting(); app.UseAuthorization(); #if NET9_0_OR_GREATER app.MapStaticAssets(); app.MapRazorPages() .WithStaticAssets(); #endif #if NET8_0 app.UseStaticFiles(); app.MapRazorPages(); #endif // This line puts the SignalR hub for Wolverine at the // designated route for your clients app.MapWolverineSignalRHub("/api/messages"); return await app.RunJasperFxCommands(args); ``` snippet source | anchor ## Custom hubs If the default `WolverineHub` isn't enough, you can provide a custom Hub that will be used for all received messages: ```cs builder.Services.AddSignalR(); builder.Host.UseWolverine(opts => { opts.ServiceName = "Server"; // Hooking up the SignalR messaging transport // in Wolverine using a custom hub opts.UseSignalR(); // A message for testing opts.PublishMessage().ToSignalR(); }); var app = builder.Build(); // Syntactic sugar, really just doing: // app.MapHub("/messages");
app.MapWolverineSignalRHub(); ``` snippet source | anchor Custom hubs must still inherit from `WolverineHub`. It's possible to override `ReceiveMessage`, but if you don't invoke the base functionality you're gonna have a bad time. ## Messages and Serialization For the message routing above, you'll notice that I used a marker interface just to facilitate message routing: ```cs // Marker interface for the sample application just to facilitate // message routing public interface WolverineChatWebSocketMessage : WebSocketMessage; ``` snippet source | anchor The Wolverine `WebSocketMessage` marker interface does have a little bit of impact in that: 1. It implements the `IMessage` interface that's just a helper for [Wolverine to discover message types](/guide/messages.html#message-discovery) in your application upfront for diagnostics or upfront resource creation 2. By marking your message types as `WebSocketMessage`, it changes [Wolverine's message type name](/guide/messages.html#message-type-name-or-alias) rules to use a snake\_cased version of the message type name (e.g. `ChatMessage` becomes "chat\_message") For example, these three message types: ```cs public record ChatMessage(string User, string Text) : WolverineChatWebSocketMessage; public record ResponseMessage(string User, string Text) : WolverineChatWebSocketMessage; public record Ping(int Number) : WolverineChatWebSocketMessage; ``` snippet source | anchor will result in these message type names according to Wolverine: | .NET Type | Wolverine Message Type Name | |-------------------|-----------------------------| | `ChatMessage` | "chat\_message" | | `ResponseMessage` | "response\_message" | | `Ping` | "ping" | That message type name is important because the Wolverine SignalR transport uses and expects a very light [CloudEvents](https://cloudevents.io/) wrapper around the raw message being sent to the client and received from the browser.
Here's an example of the JSON payload for the `ChatMessage` message: ```json { "type": "chat_message", "data": { "user": "Hank", "text": "Hey" } } ``` You can always preview the message type name by using the `dotnet run -- describe` command and finding the "Message Routing" table in that output, which should look like this from the sample application: ```text Message Routing ┌───────────────────────────────┬────────────────────┬──────────────────────┬──────────────────┐ │ .NET Type │ Message Type Alias │ Destination │ Content Type │ ├───────────────────────────────┼────────────────────┼──────────────────────┼──────────────────┤ │ WolverineChat.ChatMessage │ chat_message │ signalr://wolverine/ │ application/json │ │ WolverineChat.Ping │ ping │ signalr://wolverine/ │ application/json │ │ WolverineChat.ResponseMessage │ response_message │ signalr://wolverine/ │ application/json │ └───────────────────────────────┴────────────────────┴──────────────────────┴──────────────────┘ ``` The only elements that are mandatory are the `type` node that should be the Wolverine message type name and `data` that is the actual message serialized by JSON. Wolverine will send the full CloudEvents envelope structure because it's reusing the envelope mapping from [our CloudEvents interoperability](/tutorials/interop.html#interop-with-cloudevents), but the browser code **only** needs to send `type` and `data`. 
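Since the browser only has to produce and consume that slim `type`/`data` wrapper, it's easy to centralize the wrapping in a couple of tiny helpers. The sketch below is our own convenience code, not part of Wolverine or any official JavaScript package; `toMessageTypeName` simply mirrors the naming rules shown in the table above:

```javascript
// Hypothetical client-side helpers for the slim {type, data} wrapper.
// toMessageTypeName mirrors the table above: "ChatMessage" -> "chat_message"
function toMessageTypeName(dotnetTypeName) {
  return dotnetTypeName
    .replace(/([a-z0-9])([A-Z])/g, "$1_$2")
    .toLowerCase();
}

// Wrap a raw message object in the envelope Wolverine expects
function wrap(typeName, data) {
  return JSON.stringify({ type: typeName, data });
}

// Unwrap an incoming JSON payload back into its type name and message body
function unwrap(json) {
  const { type, data } = JSON.parse(json);
  return { type, data };
}

const payload = wrap(toMessageTypeName("ChatMessage"), { user: "Hank", text: "Hey" });
```

Anything beyond `type` and `data` in the envelope is optional from the browser's side, so helpers like these are all the client really needs.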
The actual JSON serialization in the SignalR transport is isolated from the rest of Wolverine and uses this default `System.Text.Json` configuration: ```cs JsonOptions = new(JsonSerializerOptions.Web) { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }; JsonOptions.Converters.Add(new JsonStringEnumConverter()); ``` snippet source | anchor But of course, if you needed to override the JSON serialization for whatever reason, you can just push in a different `JsonSerializerOptions` like this: ```cs var builder = WebApplication.CreateBuilder(); builder.UseWolverine(opts => { // Just showing you how to override the JSON serialization opts.UseSignalR().OverrideJson(new JsonSerializerOptions { IgnoreReadOnlyProperties = false }); }); ``` snippet source | anchor ## Interacting with the Server from the Browser It's not mandatory, but in developing and dogfooding the Wolverine.SignalR transport, we've found it helpful to use the actual [signalr Javascript library](https://learn.microsoft.com/en-us/aspnet/core/signalr/javascript-client) and our sample SignalR application uses that library for the browser to server communication. ```js "use strict"; // Connect to the server endpoint var connection = new signalR.HubConnectionBuilder().withUrl("/api/messages").build(); //Disable the send button until connection is established. document.getElementById("sendButton").disabled = true; // Receiving messages from the server connection.on("ReceiveMessage", function (json) { // Note that you will need to deserialize the raw JSON // string const message = JSON.parse(json); // The client code will need to effectively do a logical // switch on the message.type. 
The "real" message is // the data element if (message.type == 'ping'){ console.log("Got ping " + message.data.number); } else{ const li = document.createElement("li"); document.getElementById("messagesList").appendChild(li); li.textContent = `${message.data.user} says ${message.data.text}`; } }); connection.start().then(function () { document.getElementById("sendButton").disabled = false; }).catch(function (err) { return console.error(err.toString()); }); document.getElementById("sendButton").addEventListener("click", function (event) { const user = document.getElementById("userInput").value; const text = document.getElementById("messageInput").value; // Remember that we need to wrap the raw message in this slim // CloudEvents wrapper const message = {type: 'chat_message', data: {'text': text, 'user': user}}; // The WolverineHub method to call is ReceiveMessage with a single argument // for the raw JSON connection.invoke("ReceiveMessage", JSON.stringify(message)).catch(function (err) { return console.error(err.toString()); }); event.preventDefault(); }); ``` Note that the method `ReceiveMessage` is hard-coded into the `WolverineHub` service. Also note that messages are sent and received as raw JSON strings. You need to `JSON.parse` incoming messages and `JSON.stringify` outgoing messages yourself. Our vision for this usage is that you probably integrate directly with a client-side state tracking tool like [Pinia](https://pinia.vuejs.org/) (how we're using the SignalR transport to build "CritterWatch").
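As that handler grows past a couple of message types, the if/else chain on `message.type` gets unwieldy. One way to organize it, purely a client-side convention of our own and not a Wolverine API, is a small dispatch table keyed on the message type name:

```javascript
// A tiny registry mapping Wolverine message type names to client-side handlers
const handlers = {};

function on(typeName, handler) {
  handlers[typeName] = handler;
}

// Parse one raw JSON payload from the server and route it to the
// matching handler; returns false when no handler is registered
function dispatch(json) {
  const message = JSON.parse(json);
  const handler = handlers[message.type];
  if (!handler) {
    console.warn(`No handler registered for message type '${message.type}'`);
    return false;
  }
  handler(message.data);
  return true;
}

// Usage mirroring the sample above:
on("ping", data => console.log("Got ping " + data.number));
on("chat_message", data => console.log(`${data.user} says ${data.text}`));
```

With something like this in place, the SignalR wiring collapses to a single `connection.on("ReceiveMessage", dispatch)` call.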
## Sending Messages to SignalR For the most part, sending a message to SignalR is just like sending messages with any other transport like this sample: ```cs public class Pinging : BackgroundService { private readonly IWolverineRuntime _runtime; public Pinging(IWolverineRuntime runtime) { _runtime = runtime; } protected override async Task ExecuteAsync(CancellationToken stoppingToken) { var number = 0; while (!stoppingToken.IsCancellationRequested) { await Task.Delay(1.Seconds(), stoppingToken); // This is being published to all connected SignalR // applications await new MessageBus(_runtime).PublishAsync(new Ping(++number)); } } } ``` snippet source | anchor The call above will periodically send a `Ping` message to all connected clients. But of course, you'll frequently want to send messages more selectively, whether replying to the current connection or publishing to a specific group. If you are handling a message that originated from SignalR, you can send a response back to the originating connection like this: ```cs public record RequestSum(int X, int Y) : WebSocketMessage; public record SumAnswer(int Value) : WebSocketMessage; public static class RequestSumHandler { public static ResponseToCallingWebSocket Handle(RequestSum message) { return new SumAnswer(message.X + message.Y) // This extension method will wrap the raw message // with some helpers that will route the response // back to the calling connection .RespondToCallingWebSocket(); } } ``` snippet source | anchor In the next section we'll learn a bit more about working with SignalR groups. ## SignalR Groups One of the powerful features of SignalR is being able to work with [groups of connections](https://learn.microsoft.com/en-us/aspnet/core/signalr/groups). The SignalR transport currently has some simple support for managing and publishing to groups.
Let's say you have these web socket messages in your system: ```cs public record EnrollMe(string GroupName) : WebSocketMessage; public record KickMeOut(string GroupName) : WebSocketMessage; public record BroadCastToGroup(string GroupName, string Message) : WebSocketMessage; ``` snippet source | anchor The following code is a set of simplistic message handlers that handle these messages with some SignalR connection group mechanics: ```cs // Declaring that you need the connection that originated // this message to be added to the named SignalR client group public static AddConnectionToGroup Handle(EnrollMe msg) => new(msg.GroupName); // Declaring that you need the connection that originated this // message to be removed from the named SignalR client group public static RemoveConnectionToGroup Handle(KickMeOut msg) => new(msg.GroupName); // The message wrapper here sends the raw message to // the named SignalR client group public static SignalRMessage Handle(BroadCastToGroup msg) => new Information(msg.Message) // This extension method wraps the "real" message // with an envelope that routes this original message // to the named group .ToWebSocketGroup(msg.GroupName); ``` snippet source | anchor In the code above: * `AddConnectionToGroup` and `RemoveConnectionToGroup` are both examples of Wolverine ["side effects"](/guide/handlers/side-effects.html) that are specific to adding or removing the current SignalR connection (whichever connection originated the message and where the SignalR transport received the message) * `ToWebSocketGroup(group name)` is an extension method in Wolverine.SignalR that restricts the message being sent to SignalR to only being sent to connections in that named group ## SignalR Client Transport ::: tip If you want to use the .NET SignalR Client for test automation, just know that you will need to bootstrap the service that actually hosts SignalR with the full stack including Kestrel. 
`WebApplicationFactory` will not be suitable for this type of integration testing through SignalR. ::: Wolverine.SignalR is actually two transports in one library! There is also a full-fledged messaging transport built around the [.NET SignalR client](https://learn.microsoft.com/en-us/aspnet/core/signalr/dotnet-client) that we've used extensively for test automation, but could technically be used as a "real" messaging transport. The SignalR Client transport was built specifically to enable end-to-end testing against a Wolverine server that hosts SignalR itself. The SignalR Client transport will use the same CloudEvents mechanism to send and receive messages from the main Wolverine SignalR transport and is 100% compatible. If you wanted to use the SignalR client as a "real" messaging transport, you could do that like this sample: ```cs var builder = Host.CreateApplicationBuilder(); builder.UseWolverine(opts => { // this would need to be an absolute Url to where SignalR is // hosted on your application and include the exact route where // the WolverineHub is listening var url = builder.Configuration.GetValue<string>("signalr.url"); opts.UseClientToSignalR(url); // Setting this up to publish any messages implementing // the WebSocketMessage marker interface with the SignalR // client opts.Publish(x => { x.MessagesImplementing<WebSocketMessage>(); x.ToSignalRWithClient(url); }); }); ``` snippet source | anchor Or a little more simply, if you are just using this for test automation, you would need to give it the port number where your SignalR hosting service is running on the local computer: ```cs // Ostensibly, *something* in your test harness would // be telling you the port number of the real application int port = 5555; using var clientHost = await Host.CreateDefaultBuilder() .UseWolverine(opts => { // Just so you know it's possible, you can override // the relative url of the SignalR WolverineHub route // in the hosting application opts.UseClientToSignalR(port, "/api/messages"); // Setting
this up to publish any messages implementing // the WebSocketMessage marker interface with the SignalR // client opts.Publish(x => { x.MessagesImplementing<WebSocketMessage>(); x.ToSignalRWithClient(port); }); }).StartAsync(); ``` snippet source | anchor To make this a little more concrete, here's a little bit of the test harness setup we used to test the Wolverine.SignalR transport: ```cs public abstract class WebSocketTestContext : IAsyncLifetime { protected WebApplication theWebApp; protected readonly int Port = PortFinder.GetAvailablePort(); protected readonly Uri clientUri; private readonly List<IHost> _clientHosts = new(); public WebSocketTestContext() { clientUri = new Uri($"http://localhost:{Port}/messages"); } public async Task InitializeAsync() { var builder = WebApplication.CreateBuilder(); builder.WebHost.ConfigureKestrel(opts => { opts.ListenLocalhost(Port); }); ``` snippet source | anchor In the same test harness class, we bootstrap new `IHost` instances with the SignalR Client to mimic browser client communication like this: ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.ServiceName = serviceName; opts.UseClientToSignalR(Port); opts.PublishMessage().ToSignalRWithClient(Port); opts.PublishMessage().ToSignalRWithClient(Port); opts.Publish(x => { x.MessagesImplementing<WebSocketMessage>(); x.ToSignalRWithClient(Port); }); }).StartAsync(); ``` snippet source | anchor The key point here is that we stood up the service using a port number for Kestrel, then stood up `IHost` instances for a Wolverine application using the SignalR Client with the same port number for easy connectivity. And of course, after all of that we should probably talk about how to publish messages via the SignalR Client. Fortunately, there's really nothing to it. You merely need to invoke the normal `IMessageBus.PublishAsync()` APIs that you would use for any messaging.
In the sample test below, we're utilizing the [tracked session](https://wolverinefx.net/guide/testing.html#integration-testing-with-tracked-sessions) functionality as normal to send a message from the `IHost` hosting the SignalR Client transport and expect it to be successfully handled in the `IHost` for our actual SignalR server: ```cs [Fact] public async Task receive_message_from_a_client() { // This is an IHost that has the SignalR Client // transport configured to connect to a SignalR // server in the "theWebApp" IHost using var client = await StartClientHost(); var tracked = await client .TrackActivity() .IncludeExternalTransports() .AlsoTrack(theWebApp) .Timeout(10.Seconds()) .ExecuteAndWaitAsync(c => c.SendViaSignalRClient(clientUri, new ToSecond("Hollywood Brown"))); var record = tracked.Received.SingleRecord(); record.ServiceName.ShouldBe("Server"); record.Envelope.Destination.ShouldBe(new Uri("signalr://wolverine")); record.Message.ShouldBeOfType<ToSecond>() .Name.ShouldBe("Hollywood Brown"); } ``` snippet source | anchor *Conveniently enough as I write this documentation today using existing test code, Hollywood Brown had a huge game last night. Go Chiefs!* ### Authorization If you are connecting to a hub requiring authorization (for example using the `[Authorize]` attribute), you need to provide a token provider. ```cs var host = await Host.CreateDefaultBuilder() .UseWolverine(opts => { opts.ServiceName = serviceName; // Configure a client with an access token provider. You get an instance of `IServiceProvider` // if you need access to additional services, for example accessing `IConfiguration` opts.UseClientToSignalR(Port, accessTokenProvider: (sp) => () => Task.FromResult(accessToken)); opts.Publish(x => { x.MessagesImplementing<WebSocketMessage>(); x.ToSignalRWithClient(Port); }); opts.Publish(x => { x.MessagesImplementing<WebSocketMessage>(); // You can also configure the access token provider when configuring // the message publishing.
Last configuration wins and applies to the // client URL, *not* the message type x.ToSignalRWithClient(Port, accessTokenProvider: (sp) => () => { var configuration = sp.GetRequiredService<IConfiguration>(); var configuredToken = configuration.GetValue<string>("SignalR:AccessToken") // Fall back to the token passed in when testing ?? accessToken; return Task.FromResult(configuredToken); }); }); }).StartAsync(); ``` snippet source | anchor ## Web Socket "Sagas" ::: info The functionality described in this section was specifically built for "CritterWatch" where a browser request kicks off a "scatter/gather" series of messages from CritterWatch to other Wolverine services and finally back to the originating browser client. ::: Let's say that you have a workflow in your system something like: 1. The browser makes a web socket call to the server to request some information or take a long-running action 2. The server application needs to execute several messages or even call out to additional Wolverine services 3. Once the server application has finally completed the work that the client requested, the server needs to send a message to the originating SignalR connection with the status of the long-running activity or the data that the original client requested The SignalR transport can leverage some of Wolverine's built-in saga tracking to be able to route the eventual Web Socket response back to the originating caller even if the work required intermediate steps. The easiest way to enroll in this behavior today is to use the `[EnlistInCurrentConnectionSaga]` attribute, which should be on either --- --- url: /guide/messaging/transports/azureservicebus/emulator.md --- # Using the Azure Service Bus Emulator The [Azure Service Bus Emulator](https://learn.microsoft.com/en-us/azure/service-bus-messaging/overview-emulator) allows you to run integration tests against a local emulator instance instead of a real Azure Service Bus namespace. This is exactly what Wolverine uses internally for its own test suite.
## Docker Compose Setup The Azure Service Bus Emulator requires a SQL Server backend. Here is a minimal Docker Compose setup: ```yaml networks: sb-emulator: services: asb-sql: image: "mcr.microsoft.com/azure-sql-edge" environment: - "ACCEPT_EULA=Y" - "MSSQL_SA_PASSWORD=Strong_Passw0rd#2025" networks: sb-emulator: asb-emulator: image: "mcr.microsoft.com/azure-messaging/servicebus-emulator:latest" volumes: - ./docker/asb/Config.json:/ServiceBus_Emulator/ConfigFiles/Config.json ports: - "5673:5672" # AMQP messaging - "5300:5300" # HTTP management environment: SQL_SERVER: asb-sql MSSQL_SA_PASSWORD: "Strong_Passw0rd#2025" ACCEPT_EULA: "Y" EMULATOR_HTTP_PORT: 5300 depends_on: - asb-sql networks: sb-emulator: ``` ::: tip The emulator exposes two ports: the AMQP port (5672) for sending and receiving messages, and an HTTP management port (5300) for queue/topic administration. These must be mapped to different host ports. ::: ## Emulator Configuration File The emulator reads a `Config.json` file on startup. 
A minimal configuration that lets Wolverine auto-provision everything it needs: ```json { "UserConfig": { "Namespaces": [ { "Name": "sbemulatorns" } ], "Logging": { "Type": "File" } } } ``` You can also pre-configure queues and topics in this file if needed: ```json { "UserConfig": { "Namespaces": [ { "Name": "sbemulatorns", "Queues": [ { "Name": "my-queue", "Properties": { "MaxDeliveryCount": 3, "LockDuration": "PT1M", "RequiresSession": false } } ], "Topics": [ { "Name": "my-topic", "Subscriptions": [ { "Name": "my-subscription", "Properties": { "MaxDeliveryCount": 3, "LockDuration": "PT1M" } } ] } ] } ], "Logging": { "Type": "File" } } } ``` ## Connection Strings The emulator uses standard Azure Service Bus connection strings with `UseDevelopmentEmulator=true`: ```cs // AMQP connection for sending/receiving messages var messagingConnectionString = "Endpoint=sb://localhost:5673;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;"; // HTTP connection for management operations (creating queues, topics, etc.) var managementConnectionString = "Endpoint=sb://localhost:5300;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;"; ``` ::: warning The emulator uses separate ports for messaging (AMQP) and management (HTTP) operations. In production Azure Service Bus, a single connection string handles both, but the emulator requires you to configure these separately. 
:::

## Configuring Wolverine with the Emulator

The key to using the emulator with Wolverine is setting both the primary connection string (for AMQP messaging) and the `ManagementConnectionString` (for HTTP administration) on the transport:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    opts.UseAzureServiceBus(messagingConnectionString)
        .AutoProvision()
        .AutoPurgeOnStartup();

    // Required for the emulator: set the management connection string
    // to the HTTP port since it differs from the AMQP port
    var transport = opts.Transports.GetOrCreate<AzureServiceBusTransport>();
    transport.ManagementConnectionString = managementConnectionString;

    // Configure your queues, topics, etc. as normal
    opts.ListenToAzureServiceBusQueue("my-queue");
    opts.PublishAllMessages().ToAzureServiceBusQueue("my-queue");
});
```

## Creating a Test Helper

Wolverine's own test suite uses a static helper extension method to standardize emulator configuration across all tests. Here's the pattern:

```cs
public static class AzureServiceBusTesting
{
    // Connection strings pointing at the emulator
    public static readonly string MessagingConnectionString =
        "Endpoint=sb://localhost:5673;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;";

    public static readonly string ManagementConnectionString =
        "Endpoint=sb://localhost:5300;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;";

    private static bool _cleaned;

    public static AzureServiceBusConfiguration UseAzureServiceBusTesting(
        this WolverineOptions options)
    {
        // Delete all queues and topics on first usage to start clean
        if (!_cleaned)
        {
            _cleaned = true;
            DeleteAllEmulatorObjectsAsync().GetAwaiter().GetResult();
        }

        var config = options.UseAzureServiceBus(MessagingConnectionString);

        var transport = options.Transports.GetOrCreate<AzureServiceBusTransport>();
        transport.ManagementConnectionString = ManagementConnectionString;

        return config.AutoProvision();
    }

    public static
async Task DeleteAllEmulatorObjectsAsync()
    {
        var client = new ServiceBusAdministrationClient(ManagementConnectionString);

        await foreach (var topic in client.GetTopicsAsync())
        {
            await client.DeleteTopicAsync(topic.Name);
        }

        await foreach (var queue in client.GetQueuesAsync())
        {
            await client.DeleteQueueAsync(queue.Name);
        }
    }
}
```

## Writing Integration Tests

With the helper in place, integration tests become straightforward:

```cs
public class when_sending_messages : IAsyncLifetime
{
    private IHost _host;

    public async Task InitializeAsync()
    {
        _host = await Host.CreateDefaultBuilder()
            .UseWolverine(opts =>
            {
                opts.UseAzureServiceBusTesting()
                    .AutoPurgeOnStartup();

                opts.ListenToAzureServiceBusQueue("send_and_receive");

                opts.PublishMessage<MyMessage>()
                    .ToAzureServiceBusQueue("send_and_receive");
            }).StartAsync();
    }

    public async Task DisposeAsync()
    {
        await _host.StopAsync();
    }

    [Fact]
    public async Task send_and_receive_a_single_message()
    {
        var message = new MyMessage("Hello");

        var session = await _host.TrackActivity()
            .IncludeExternalTransports()
            .Timeout(30.Seconds())
            .SendMessageAndWaitAsync(message);

        session.Received.SingleMessage<MyMessage>()
            .Name.ShouldBe("Hello");
    }
}
```

::: tip
Use `.IncludeExternalTransports()` on the tracked session so Wolverine waits for messages that travel through Azure Service Bus rather than only tracking in-memory activity.
:::

## Disabling Parallel Test Execution

Because the emulator is a shared resource, tests that create and tear down queues or topics can interfere with each other when run in parallel. Wolverine's own test suite disables parallel execution for its Azure Service Bus tests:

```cs
// Add to a file like NoParallelization.cs in your test project
[assembly: CollectionBehavior(CollectionBehavior.CollectionPerAssembly)]
```

---

--- url: /guide/messaging/transports/tcp.md ---

# Using Wolverine's Lightweight TCP Transport

Wolverine has a lightweight transport option built in that relies on batching messages through raw socket communication.
At this point, this transport is absolutely robust enough for production usage (that's my story and I'm sticking to it), but does not yet have any facility for security. As such, it may be most useful for testing or development scenarios where the "real" message broker is not really usable in local environments. Either way, there's not much setup necessary to use the TCP transport:

::: tip
You can listen to messages from as many ports as you like, but be aware of port contention issues.
:::

To listen for messages with the TCP transport, use the `ListenAtPort()` extension method shown below:

```cs
public static IHost CreateHostBuilder()
{
    var builder = Host.CreateApplicationBuilder();

    // This adds Wolverine with inline configuration
    // of WolverineOptions
    builder.UseWolverine(opts =>
    {
        // This is an example usage of the application's
        // IConfiguration inside of Wolverine bootstrapping
        var port = builder.Configuration.GetValue<int>("ListenerPort");
        opts.ListenAtPort(port);

        // If we're running in development mode and you don't
        // want to worry about having all the external messaging
        // dependencies up and running, stub them out
        if (builder.Environment.IsDevelopment())
        {
            // This will "stub" out all configured external endpoints
            opts.StubAllExternalTransports();
        }
    });

    return builder.Build();
}
```
snippet source | anchor

Likewise, to publish via TCP, use the `ToPort()` extension method to publish to another port on the same machine:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PublishAllMessages().ToPort(5555)
            .Named("One");

        opts.PublishAllMessages().ToPort(5555)
            .Named("Two");
    }).StartAsync();

var bus = host.Services
    .GetRequiredService<IMessageBus>();

// Explicitly send a message to a named endpoint
await bus.EndpointFor("One").SendAsync(new SomeMessage());

// Or invoke remotely
await bus.EndpointFor("One").InvokeAsync(new SomeMessage());

// Or request/reply
var answer = await bus.EndpointFor("One")
    .InvokeAsync<Answer>(new Question());
```
snippet source | anchor
or use `ToServerAndPort()` to send messages to a port on another machine:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Route a single message type
        opts.PublishMessage<PingMessage>()
            .ToServerAndPort("server", 1111);

        // Send every possible message to a TCP listener
        // on this box at port 2222
        opts.PublishAllMessages().ToPort(2222);

        // Or use a more fluent interface style
        opts.Publish().MessagesFromAssembly(typeof(PingMessage).Assembly)
            .ToPort(3333);

        // Complicated rules, I don't think folks will use this much
        opts.Publish(rule =>
        {
            // Apply as many message matching
            // rules as you need

            // Specific message types
            rule.Message<PingMessage>();
            rule.Message<PongMessage>();

            // Implementing a specific marker interface or common base class
            rule.MessagesImplementing<IEventMarker>();

            // All types in a certain assembly
            rule.MessagesFromAssemblyContaining<PingMessage>();

            // or this
            rule.MessagesFromAssembly(typeof(PingMessage).Assembly);

            // or by namespace
            rule.MessagesFromNamespace("MyMessageLibrary");
            rule.MessagesFromNamespaceContaining<PingMessage>();

            // Express the subscribers
            rule.ToPort(1111);
            rule.ToPort(2222);
        });

        // Or you just send all messages to a certain endpoint
        opts.PublishAllMessages().ToPort(3333);
    }).StartAsync();
```
snippet source | anchor

---

--- url: /guide/http/validation.md ---

# Validation within Wolverine.HTTP

::: info
You can of course use completely custom Wolverine middleware for validation, and once again, returning the `ProblemDetails` object or `WolverineContinue.NoProblems` to communicate validation errors is our main recommendation in that case.
:::

Wolverine.HTTP has direct support for validation within HTTP endpoints that all revolves around the [ProblemDetails](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.problemdetails?view=aspnetcore-7.0) specification:

1. Using one off `Validate()` or `ValidateAsync()` methods embedded directly in your endpoint types that return `ProblemDetails`.
This is our recommendation for any validation logic like data lookups that would require you to utilize IoC services or database calls.
2. Fluent Validation middleware through the separate `WolverineFx.Http.FluentValidation` Nuget
3. [Data Annotations](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations?view=net-10.0) middleware that is an option you have to explicitly configure within your Wolverine.HTTP application

::: tip
We **very strongly** recommend using the one off `ValidateAsync()` method for any validation that requires you to use an IoC service rather than trying to use the Fluent Validation `IValidator` interface. Especially if that validation logic is specific to that HTTP endpoint.
:::

## Using ProblemDetails

Wolverine has some first class support for the [ProblemDetails](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.problemdetails?view=aspnetcore-7.0) specification in its [HTTP middleware model](./middleware). Wolverine also has a [Fluent Validation middleware package](./fluentvalidation) for HTTP endpoints, but it's frequently valuable to write one off, explicit validation for certain endpoints. Consider this contrived sample endpoint with explicit validation being done in a "Before" middleware method:

```cs
public class ProblemDetailsUsageEndpoint
{
    public ProblemDetails Before(NumberMessage message)
    {
        // If the number is greater than 5, fail with a
        // validation message
        if (message.Number > 5)
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };

        // All good, keep on going!
        return WolverineContinue.NoProblems;
    }

    [WolverinePost("/problems")]
    public static string Post(NumberMessage message)
    {
        return "Ok";
    }
}

public record NumberMessage(int Number);
```
snippet source | anchor

Wolverine.Http now (as of 1.2.0) has a convention that sees a return value of `ProblemDetails` and treats it as a "continuation" telling the HTTP handler code what to do next.
One of two things will happen:

1. If the `ProblemDetails` return value is the same instance as `WolverineContinue.NoProblems`, just keep going
2. Otherwise, write the `ProblemDetails` out to the HTTP response and exit the HTTP request handling

To make that clearer, here's the generated code:

```csharp
public class POST_problems : Wolverine.Http.HttpHandler
{
    private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;

    public POST_problems(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions) : base(wolverineHttpOptions)
    {
        _wolverineHttpOptions = wolverineHttpOptions;
    }

    public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
    {
        var problemDetailsUsageEndpoint = new WolverineWebApi.ProblemDetailsUsageEndpoint();
        var (message, jsonContinue) = await ReadJsonAsync<WolverineWebApi.NumberMessage>(httpContext);
        if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

        var problemDetails = problemDetailsUsageEndpoint.Before(message);
        if (!(ReferenceEquals(problemDetails, Wolverine.Http.WolverineContinue.NoProblems)))
        {
            await Microsoft.AspNetCore.Http.Results.Problem(problemDetails).ExecuteAsync(httpContext).ConfigureAwait(false);
            return;
        }

        var result_of_Post = WolverineWebApi.ProblemDetailsUsageEndpoint.Post(message);
        await WriteString(httpContext, result_of_Post);
    }
}
```

And for more context, here's the matching "happy path" and "sad path" tests for the endpoint above:

```cs
[Fact]
public async Task continue_happy_path()
{
    // Should be good
    await Scenario(x =>
    {
        x.Post.Json(new NumberMessage(3)).ToUrl("/problems");
    });
}

[Fact]
public async Task stop_with_problems_if_middleware_trips_off()
{
    // This is the "sad path" that should spawn a ProblemDetails
    // object
    var result = await Scenario(x =>
    {
        x.Post.Json(new NumberMessage(10)).ToUrl("/problems");
        x.StatusCodeShouldBe(400);
        x.ContentTypeShouldBe("application/problem+json");
    });
}
```
snippet source | anchor

Lastly, if Wolverine sees the existence of a `ProblemDetails` return
value in any middleware, Wolverine will fill in OpenAPI metadata for the "application/problem+json" content type and a status code of 400. This behavior can be easily overridden with your own metadata if you need to use a different status code like this:

```csharp
// Use 418 as the status code instead
[ProducesResponseType(typeof(ProblemDetails), 418)]
```

### Using ProblemDetails with Marten Aggregates

Of course, if you are using [Marten's aggregates within your Wolverine http handlers](./marten), you also want to be able to do validation using the aggregate's details in your middleware, and this is perfectly possible like this:

```cs
[AggregateHandler]
public static ProblemDetails Before(IShipOrder command, Order order)
{
    if (order.IsShipped())
    {
        return new ProblemDetails
        {
            Detail = "Order already shipped",
            Status = 428
        };
    }

    return WolverineContinue.NoProblems;
}
```
snippet source | anchor

## ProblemDetails Within Message Handlers

`ProblemDetails` can be used within message handlers as well with similar rules. See this example from the tests:

```cs
public static class NumberMessageHandler
{
    public static ProblemDetails Validate(NumberMessage message)
    {
        if (message.Number > 5)
        {
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };
        }

        // All good, keep on going!
        return WolverineContinue.NoProblems;
    }

    // This "Before" method would only be utilized as
    // an HTTP endpoint
    [WolverineBefore(MiddlewareScoping.HttpEndpoints)]
    public static void BeforeButOnlyOnHttp(HttpContext context)
    {
        Debug.WriteLine("Got an HTTP request for " + context.TraceIdentifier);
        CalledBeforeOnlyOnHttpEndpoints = true;
    }

    // This "Before" method would only be utilized as
    // a message handler
    [WolverineBefore(MiddlewareScoping.MessageHandlers)]
    public static void BeforeButOnlyOnMessageHandlers()
    {
        CalledBeforeOnlyOnMessageHandlers = true;
    }

    // Look at this! You can use this as an HTTP endpoint too!
[WolverinePost("/problems2")] public static void Handle(NumberMessage message) { Debug.WriteLine("Handled " + message); Handled = true; } // These properties are just a cheap trick in Wolverine internal tests public static bool Handled { get; set; } public static bool CalledBeforeOnlyOnMessageHandlers { get; set; } public static bool CalledBeforeOnlyOnHttpEndpoints { get; set; } } ``` snippet source | anchor This functionality was added so that some handlers could be both an endpoint and message handler without having to duplicate code or delegate to the handler through an endpoint. ## Data Annotations ::: warning While it is possible to access the IoC Services via `ValidationContext`, we recommend instead using a more explicit `Validate` or `ValidateAsync()` method directly in your message handler class for the data input. ::: Wolverine.Http has a separate package called `WolverineFx.Http.DataAnnotationsValidation` that provides a simple middleware to use [Data Annotation Attributes](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations?view=net-10.0) in your endpoints. To get started, add this one line of code to your Wolverine.HTTP configuration: ```csharp app.MapWolverineEndpoints(opts => { // Use Data Annotations that are built // into the Wolverine.HTTP library opts.UseDataAnnotationsValidationProblemDetailMiddleware(); }); ``` This middleware will kick in for any HTTP endpoint where the request type has any property decorated with a [`ValidationAttribute`](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations.validationattribute?view=net-10.0) or which implements the `IValidatableObject` interface. Any validation errors detected will cause the HTTP request to fail with a `ProblemDetails` response. 
For an example, consider this input model that will be a request type in your application:

```cs
public record CreateAccount(
    // don't forget the property prefix on records
    [property: Required] string AccountName,
    [property: Reference] string Reference
) : IValidatableObject
{
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (AccountName.Equals("invalid", StringComparison.InvariantCultureIgnoreCase))
        {
            yield return new("AccountName is invalid", [nameof(AccountName)]);
        }
    }
}
```
snippet source | anchor

As long as the Data Annotations middleware is active, the `CreateAccount` model would be validated if used as the request body like this:

```cs
[WolverinePost("/validate/account")]
public static string Post(
    // In this case CreateAccount is being posted
    // as JSON
    CreateAccount account)
{
    return "Got a new account";
}
```
snippet source | anchor

or even like this:

```cs
[WolverinePost("/validate/account2")]
public static string Post2([FromQuery] CreateAccount customer)
{
    return "Got a new account";
}
```
snippet source | anchor

## Fluent Validation Middleware

::: warning
If you need to use IoC services in a Fluent Validation `IValidator` that might force Wolverine to use a service locator pattern in the generated code (basically from `AddScoped(s => build it at runtime)`), we recommend instead using a more explicit `Validate` or `ValidateAsync()` method directly in your HTTP endpoint class for the data input.
:::

::: warning
If you are using `ExtensionDiscovery.ManualOnly`, you must explicitly call `opts.UseFluentValidationProblemDetail()` in your Wolverine configuration in addition to `opts.UseFluentValidation()`. Without this, the `IProblemDetailSource` service will not be registered and the middleware will fail at runtime. With the default `ExtensionDiscovery.Automatic` mode, these services are registered automatically by the `WolverineFx.Http.FluentValidation` extension.
```csharp
services.AddWolverine(ExtensionDiscovery.ManualOnly, opts =>
{
    opts.UseFluentValidation();
    opts.UseFluentValidationProblemDetail(); // Required in manual discovery mode!
});
```
:::

Wolverine.Http has a separate package called `WolverineFx.Http.FluentValidation` that provides a simple middleware for using [Fluent Validation](https://docs.fluentvalidation.net/en/latest/) in your HTTP endpoints. To get started, install that Nuget reference:

```bash
dotnet add package WolverineFx.Http.FluentValidation
```

Next, let's assume that you have some Fluent Validation validators registered in your application container for the request types of your HTTP endpoints -- and the [UseFluentValidation](/guide/handlers/fluent-validation) method from the `WolverineFx.FluentValidation` package will help find these validators and register them in a way that optimizes this middleware usage.

Next, add this single line of code to your Wolverine.Http bootstrapping:

```csharp
opts.UseFluentValidationProblemDetailMiddleware();
```

as shown in context below:

```cs
app.MapWolverineEndpoints(opts =>
{
    // This is strictly to test the endpoint policy
    opts.ConfigureEndpoints(httpChain =>
    {
        // The HttpChain model is a configuration time
        // model of how the HTTP endpoint handles requests

        // This adds metadata for OpenAPI
        httpChain.WithMetadata(new CustomMetadata());
    });

    // more configuration for HTTP...
    // Opting into the Fluent Validation middleware from
    // Wolverine.Http.FluentValidation
    opts.UseFluentValidationProblemDetailMiddleware();

    // Or instead, you could use Data Annotations that are built
    // into the Wolverine.HTTP library
    opts.UseDataAnnotationsValidationProblemDetailMiddleware();
```
snippet source | anchor

## AsParameters Binding

The Fluent Validation middleware can also be used against the `[AsParameters]` input of an HTTP endpoint:

```cs
public static class ValidatedAsParametersEndpoint
{
    [WolverineGet("/asparameters/validated")]
    public static string Get([AsParameters] ValidatedQuery query)
    {
        return $"{query.Name} is {query.Age}";
    }
}

public class ValidatedQuery
{
    [FromQuery] public string? Name { get; set; }

    public int Age { get; set; }

    public class ValidatedQueryValidator : AbstractValidator<ValidatedQuery>
    {
        public ValidatedQueryValidator()
        {
            RuleFor(x => x.Name).NotNull();
        }
    }
}
```
snippet source | anchor

## QueryString Binding

Wolverine.HTTP can apply the Fluent Validation middleware to complex types that are bound by the `[FromQuery]` behavior:

```cs
public record CreateCustomer(
    string FirstName,
    string LastName,
    string PostalCode
)
{
    public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
    {
        public CreateCustomerValidator()
        {
            RuleFor(x => x.FirstName).NotNull();
            RuleFor(x => x.LastName).NotNull();
            RuleFor(x => x.PostalCode).NotNull();
        }
    }
}

public static class CreateCustomerEndpoint
{
    [WolverinePost("/validate/customer")]
    public static string Post(CreateCustomer customer)
    {
        return "Got a new customer";
    }

    [WolverinePost("/validate/customer2")]
    public static string Post2([FromQuery] CreateCustomer customer)
    {
        return "Got a new customer";
    }
}
```
snippet source | anchor

---

--- url: /tutorials/vertical-slice-architecture.md ---

# Vertical Slice Architecture

::: info
This guide is written from the standpoint of a [CQRS Architecture](https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs).
While we think a vertical slice architecture (VSA) could be valuable otherwise, vertical slices and CQRS are a very natural pairing. And also, we think the full "Critter Stack" of Wolverine + [Marten](https://martendb.io) is a killer combination for a very robust and productive development experience using [CQRS with Event Sourcing](./cqrs-with-marten).
:::

Wolverine is well suited for a "Vertical Slice Architecture" approach where, to over simplify things a bit, you generally try to organize code by feature or use case rather than by horizontal technical layering. Most of the content about "Vertical Slice Architecture" practices in the .NET ecosystem involves the MediatR framework. It's important to note that while you can use [Wolverine as "just" a mediator tool](/tutorials/mediator) and a drop in replacement for MediatR, we feel that you'll achieve better results, more testable code, and far simpler code overall by instead leaning into Wolverine's capabilities.

::: tip
See [Wolverine for MediatR Users](/tutorials/from-mediatr) for more information about moving from MediatR to Wolverine.
:::

## Wolverine's Philosophy toward Vertical Slice Architecture

Alright, before we potentially make you angry by trashing the current Clean/Onion Architecture approach that's rampant in the .NET ecosystem, let's talk about what the Wolverine community thinks is important for achieving good results in a long lived, complex software system.

**Effective test coverage is paramount for sustainable development.** More than layering schemes or the right abstractions or code structure, we believe that effective automated test coverage does much more to enable sustainable development of a system over time. And by "effective" test coverage, we mean an automated test suite that's subjectively fast, reliable, and has enough coverage that you feel like it's not risky to change the system code.
Designing for testability is a huge topic in its own right, but let's just say for now that step one is having your business or workflow logic largely decoupled from infrastructure concerns. It's also very helpful to purposely choose technologies that are better behaved in integration testing and have a solid local Docker story.

**The code is easy to reason about.** It's relatively easy to identify the system inputs and follow the processing to understand the relationship between system inputs and the effects of those inputs including changes to the database, calls to other systems, or messages raised by those inputs. We've seen too many enterprise systems that suffer from bugs partially because it's just too hard to understand where to make logical changes or to understand what unintended consequences might pop up. We've also seen applications with very poor performance due to how the application interacted with its underlying database(s), and inevitably that problem is partially caused by excessive layering making it hard to understand how the system is really using the database.

**Ease of iteration.** Some technologies and development techniques allow for much easier iteration and adaptation than other tools that might be much more effective as a "write once" approach. For example, using a document database approach leads to easier evolutionary changes of persisted types than an ORM would. And an ORM would lead to easier evolution than writing SQL by hand.

**Modularity between features.** Technologies change over time, and there's always going to be a reason to want to upgrade your current dependencies or even replace dependencies. Our experience in large enterprise systems is that the only things that really make it easier to upgrade technologies are effective test coverage to reduce risk and the ability to upgrade part of the system at a time instead of having to upgrade an entire technical layer of an entire system.
This might very well push you toward a micro-service or [modular monolith approach](/tutorials/modular-monolith), but we think that the vertical slice architecture approach is helpful in all cases as well. So now let's talk about how the recommended Wolverine approach will very much differ from a layered Clean/Onion Architecture approach or really any modern [Ports and Adapters](https://8thlight.com/insights/a-color-coded-guide-to-ports-and-adapters) approach that emphasizes abstractions and layers for loose coupling. We're big, big fans of the [A Frame Architecture](https://www.jamesshore.com/v2/projects/nullables/testing-without-mocks#a-frame-arch) idea for code organization to promote testability without just throwing in oodles of abstractions and mock objects everywhere (like what happens in many Clean Architecture codebases). Wolverine's ["compound handler" feature](/guide/handlers/#compound-handlers), its [transactional middleware](/guide/durability/marten/transactional-middleware), and its [cascading message feature](/guide/handlers/cascading) are all examples of built in support for "A-Frame" structures. ![A Frame Architecture](/a-frame.png) With the "A-Frame Architecture" approach, you're trying to isolate behavioral logic from infrastructure by more or less dividing the world up into three kinds of responsibilities in code: 1. Actual business logic that makes decisions and decides how to change the state of the application or what next steps to take 2. Infrastructure services. For example, persistence tools like EF Core's `DbContext` or service gateways to outside web services 3. 
Coordination or controller logic sitting on top that's delegating to both the infrastructure and business logic code, but keeping those two areas of the code separate

For more background on the thinking behind the "A Frame Architecture" (which, like "Vertical Slice Architecture", is more about code organization than architecture), we'll recommend:

* Jeremy's post from 2023 about the [A-Frame Architecture with Wolverine](https://jeremydmiller.com/2023/07/19/a-frame-architecture-with-wolverine/)
* [Object Role Stereotypes](https://learn.microsoft.com/en-us/archive/msdn-magazine/2008/august/patterns-in-practice-object-role-stereotypes) by Jeremy from the old MSDN Magazine. That article focuses on Object Oriented Programming, but the basic concept applies equally to the functional decomposition that Wolverine + the "A-Frame" leans toward.
* [A Brief Tour of Responsibility-Driven Design](https://www.wirfs-brock.com/PDFs/A_Brief-Tour-of-RDD.pdf) by Rebecca Wirfs-Brock -- and again, it's focused on OOP, but we think the concepts apply equally to just using functions or methods too

For the most part, Wolverine should enable you to make most handler or HTTP endpoint methods pure functions. We're more or less going to recommend against wrapping your persistence tooling like Marten or EF Core with any kind of repository abstractions and mostly just utilize their APIs directly in your handlers or HTTP endpoint methods. We believe the "A-Frame Architecture" approach mitigates any *important* coupling between business or workflow logic and infrastructure. The ["specification" pattern](https://jeremydmiller.com/2024/12/03/specification-usage-with-marten-for-repository-free-development/) or really even just reusable helper methods from outside of a vertical slice can be used to avoid duplication of complex query logic, but for the most part, we find it helpful to see queries that are directly related to a vertical slice in the same code file.
If you're reading this guide, you can hopefully see how to do so without actually making business logic coupled to infrastructure even if data access and business logic appear in the same code file or even the same handler type. Do utilize Wolverine's [side effect](/guide/handlers/side-effects) model and cascading message support to be able to get to pure functions in your handlers.

## Enough navel gazing, show me code already!

Let's just jump into a couple simple examples. First, let's say you're building a message handler that processes a `PlaceOrder` command. With this example, I'm going to use [Marten](/guide/durability/marten) for object persistence, but it's just not that different with Wolverine's [EF Core](/guide/durability/efcore) or [RavenDb](/guide/durability/ravendb) integration. I'll do that in a single C# file named `PlaceOrder.cs`:

```csharp
public record PlaceOrder(string OrderId, string CustomerId, decimal Amount);

public class Order
{
    public string Id { get; set; }
    public string CustomerId { get; set; }
    public decimal Amount { get; set; }

    public class Validator : AbstractValidator<PlaceOrder>
    {
        public Validator()
        {
            RuleFor(x => x.OrderId).NotNull();
            RuleFor(x => x.CustomerId).NotNull();
            RuleFor(x => x.Amount).NotNull();
        }
    }
}

public static class PlaceOrderHandler
{
    // Transaction Script style
    // I'm assuming the usage of transactional middleware
    // to actually call IDocumentSession.SaveChangesAsync()
    public static void Handle(
        PlaceOrder command,
        IDocumentSession session)
    {
        var order = new Order
        {
            Id = command.OrderId,
            CustomerId = command.CustomerId,
            Amount = command.Amount
        };

        session.Store(order);
    }
}
```

For the first pass, I'm using a very simple [transaction script](https://martinfowler.com/eaaCatalog/transactionScript.html) approach that just mixes in the Marten `IDocumentSession` (basically the equivalent to an EF Core `DbContext`) right in the behavioral code.
For very simplistic cases, this is probably just fine, especially if the interfaces for the infrastructure are easily "mockable" to substitute out in isolated, solitary unit tests, or if you happen to be using infrastructure like [Marten](https://martendb.io) that is relatively friendly to "sociable" integration testing.

::: tip
See [Martin Fowler's Unit Test](https://martinfowler.com/bliki/UnitTest.html) write up for a discussion of "solitary vs sociable" tests.
:::

A couple of other things to note about the code sample above:

* You'll notice that the method is synchronous and doesn't call into `IDocumentSession.SaveChangesAsync()` to commit the implied unit of work. I'm assuming that's happening by utilizing Wolverine's [transactional middleware](/guide/durability/marten/transactional-middleware) approach that happily works for Marten, EF Core, and RavenDb at the time of this writing.
* There's a Fluent Validation validator up there, but I didn't directly use it, because I'm assuming the usage of the [Fluent Validation middleware package](/guide/handlers/fluent-validation) that comes in a Wolverine extension NuGet.
* I didn't utilize any kind of repository abstraction around the raw Marten `IDocumentSession`. Much more on this below, but my value judgement is that the simpler code is more important than worrying about swapping out the persistence tooling later.
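For reference, both pieces of middleware mentioned above are opt-in at bootstrapping time. This is a minimal sketch, assuming the WolverineFx.FluentValidation extension NuGet is referenced:

```csharp
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // From the WolverineFx.FluentValidation extension: discovers
        // Fluent Validation validators for handler message types and
        // runs them as middleware before the matching handler
        opts.UseFluentValidation();

        // Wraps handlers that take in IDocumentSession (or a DbContext)
        // in a transaction and calls SaveChangesAsync() for you
        opts.Policies.AutoApplyTransactions();
    }).StartAsync();
```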
A "transaction script" style isn't going to be applicable in every case, so let's look to decouple that handler completely from Marten and make it a ["pure function"](https://en.wikipedia.org/wiki/Pure_function) that's a little easier to get into a unit test by leveraging some of Wolverine's "special sauce":

```csharp
public static class PlaceOrderHandler
{
    public static Insert<Order> Handle(PlaceOrder command)
    {
        var order = new Order
        {
            Id = command.OrderId,
            CustomerId = command.CustomerId,
            Amount = command.Amount
        };

        return Storage.Insert(order);
    }
}
```

The `Insert<Order>` return value is one of Wolverine's [storage side effect types](/guide/handlers/side-effects.html#storage-side-effects) that can help you specify persistence actions as side effects from message or HTTP endpoint handlers without actually having to couple the handler or HTTP endpoint methods to persistence tooling or even their abstractions. With this being a "pure function", we can walk right up to it and test its functionality with a simple little unit test like so (using [xUnit.Net](https://xunit.net/)):

```csharp
[Fact]
public void handling_place_order_creates_new_order()
{
    // Look Ma, no mocks anywhere in sight!
    var command = new PlaceOrder("111", "222", 100.23M);
    var action = PlaceOrderHandler.Handle(command);

    action.Entity.Id.ShouldBe(command.OrderId);
    action.Entity.CustomerId.ShouldBe(command.CustomerId);
    action.Entity.Amount.ShouldBe(command.Amount);
}
```

If you'll notice, we didn't use any further database abstractions, we didn't create umpteen separate Clean/Onion Architecture projects for each and every technical layer, and we also didn't use any mock objects whatsoever to test the code. We just walked right up and called a method with its input and measured its expected outputs. Testability *and* simplicity FTW!

Now, let's try a little more complex sample to cancel an order, and get into HTTP endpoints while we're at it.
This time around, let's say that the `CancelOrder` command should do nothing if the order doesn't exist, or if it has already been shipped. Otherwise, we should delete the order and publish an `OrderCancelled` domain event to be handled by other modules in the system or to notify other, external systems. Again, starting with a transaction script approach *first*, we could have this code:

```csharp
public record CancelOrder(string OrderId);
public record OrderCancelled(string OrderId);

public static class CancelOrderHandler
{
    public static async Task Handle(
        CancelOrder command,
        IDocumentSession session,
        IMessageBus messageBus,
        CancellationToken token)
    {
        var order = await session.LoadAsync<Order>(command.OrderId, token);

        // You should probably log something at the least here
        if (order == null) return;
        if (order.HasShipped) return;

        // Maybe it's a soft delete here?
        session.Delete(order);

        // Publish a domain event to let other things in the system know to
        // take actions to stop shipping, inventory, who knows what
        await messageBus.PublishAsync(new OrderCancelled(command.OrderId));
    }
}
```

Now, to hook this up to HTTP, we *could* delegate to Wolverine as a mediator tool as is common in the .NET ecosystem today, either directly in the `Program` file:

```csharp
app.MapPost("/api/orders/cancel", (CancelOrder command, IMessageBus bus, CancellationToken token)
    => bus.InvokeAsync(command, token));
```

But the `Program` file would get absolutely overrun with a lot of unrelated forwarding calls to Wolverine's `IMessageBus` entry point, and the ugliness gets much worse when you remember how much extra code you would add for [OpenAPI metadata](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/openapi/overview?view=aspnetcore-9.0).
There's no kind of automatic discovery for Minimal API like there is for MVC Core (or MediatR or Wolverine itself of course), so you might have to resort to extra mechanisms in the same file just to register the Minimal API endpoints, or give up and toss in an MVC Core controller just to delegate to Wolverine as a "Mediator".

But wait, there's more! You probably want to give your HTTP API's clients some decent response to explain when and why a request to cancel an order was rejected. The Minimal API `IResult` gives you an easy way to do that, so we *could* have our Wolverine handler return an `IResult` something like this:

```csharp
public static class CancelOrderHandler
{
    public static async Task<IResult> Handle(
        CancelOrder command,
        IDocumentSession session,
        IMessageBus messageBus,
        CancellationToken token)
    {
        var order = await session.LoadAsync<Order>(command.OrderId, token);

        // return a 404 if the order doesn't exist
        if (order == null) return Results.NotFound();

        // return a 400 with a description of why the order could not be cancelled
        if (order.HasShipped) return Results.BadRequest("Order has already been shipped");

        // Maybe it's a soft delete here?
        session.Delete(order);

        // Publish a domain event to let other things in the system know to
        // take actions to stop shipping, inventory, who knows what
        await messageBus.PublishAsync(new OrderCancelled(command.OrderId));

        return Results.Ok();
    }
}
```

and change the Minimal API call to:

```csharp
app.MapPost("/api/orders/cancel", (CancelOrder command, IMessageBus bus, CancellationToken token)
    => bus.InvokeAsync<IResult>(command, token));
```

Now, the `IResult` return type by itself is a bit of a "mystery meat" response that could mean anything, so Minimal API can't glean any useful OpenAPI metadata from it. You'd have to chain some extra code behind the call to `MapPost()` just to add OpenAPI declarations. That's tedious noise code.
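To make "tedious noise code" concrete, here's roughly what that chaining looks like with the standard Minimal API metadata extensions (a sketch; the exact metadata your API needs will vary):

```csharp
app.MapPost("/api/orders/cancel", (CancelOrder command, IMessageBus bus, CancellationToken token)
        => bus.InvokeAsync<IResult>(command, token))
    // None of this is behavior -- it's all just OpenAPI bookkeeping
    .WithTags("Orders")
    .Produces(StatusCodes.Status200OK)
    .Produces(StatusCodes.Status404NotFound)
    .Produces<string>(StatusCodes.Status400BadRequest);
```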
Let's instead introduce [Wolverine.HTTP endpoints](/guide/http/) and rewrite the cancel order process -- this time with a route value instead of the request body -- to simplify the code:

```csharp
public static class CancelOrderEndpoint
{
    public static ProblemDetails Validate(Order order)
    {
        return order.HasShipped
            ? new ProblemDetails { Status = 400, Detail = "Order has already shipped" }

            // It's all good, just keep going!
            : WolverineContinue.NoProblems;
    }

    [WolverinePost("/api/orders/cancel/{id}"), EmptyResponse]
    public static (Delete<Order>, OrderCancelled) Post([Entity] Order order)
    {
        return (Storage.Delete(order), new OrderCancelled(order.Id));
    }
}
```

And there's admittedly a bit to unpack here:

* The `[EmptyResponse]` attribute is a Wolverine thing that tells Wolverine.HTTP that the endpoint produces no response body, so Wolverine emits a 204 status code for the empty response and "knows" that none of the return values should be used as the HTTP response body
* The `Validate()` method is an example of *[compound handlers](/guide/handlers/#compound-handlers)* in Wolverine (this applies equally to Wolverine HTTP endpoints), and will be called before the main method. Returning a [`ProblemDetails`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.problemdetails?view=aspnetcore-9.0) type from that method tells Wolverine that the method might stop all other processing by returning, well, problems. Learn more about how [Wolverine.HTTP uses the ProblemDetails](/guide/http/problemdetails) response type. Arguably, this is a built-in form of [Railway Programming](https://fsharpforfunandprofit.com/rop/) in Wolverine (or at least a similar concept), but without the ugly, high code ceremony that comes with Railway Programming. An idiom in Wolverine development is to largely utilize `Validate` methods so that the main handler or endpoint method covers only the ["happy path"](https://en.wikipedia.org/wiki/Happy_path).
* It's legal to return .NET tuple values from either message handler or HTTP endpoint methods, with Wolverine treating each "return value" independently
* The `Delete` return type is a known [persistence "side effect"](/guide/handlers/side-effects) to Wolverine, so it "knows" to delegate to the configured persistence tooling for the `Order` entity, which in this sample application is Marten. *For EF Core, Wolverine is smart enough to use the correct `DbContext` for the entity type if you are using multiple `DbContext` types.*
* In this case, because of the `[EmptyResponse]` declaration, any return value that doesn't have any other special handling is considered to be a [cascading message](/guide/handlers/cascading), and Wolverine pretty well treats it the same as if you'd called `IMessageBus.PublishAsync()`. We highly recommend using the cascading message signature instead of directly invoking `IMessageBus.PublishAsync()` as a way to simplify your code, keep your handler/endpoint methods "pure functions" whenever possible, and make the code more declarative about the side effects that happen as a result of system inputs
* The `[Entity]` attribute is a [persistence helper](/guide/handlers/persistence.html#automatically-loading-entities-to-method-parameters) in Wolverine. Wolverine is actually generating code using your persistence tooling (Marten in this case, but EF Core and RavenDb are also supported) to load the order using the "id" route argument from Marten's `IDocumentSession` service and passing it into both the main method and the `Validate()` method. By default, the `[Entity]` value is considered to be "Required", so if the entity is not found, Wolverine will stop all other processing and return a 404 status code. No other code is necessary.

Whew, but wait, there's more!
Let's say that you've opted to use Wolverine's transactional outbox integration, and for now, let's assume that you're just using local queues with this configuration in your `Program` file:

```csharp
builder.Host.UseWolverine(opts =>
{
    // Other Wolverine configuration...

    opts.Policies.AutoApplyTransactions();
    opts.Policies.UseDurableLocalQueues();
});
```

and for Marten:

```csharp
builder.Services.AddMarten(opts =>
{
    // Marten configuration...
})
// This adds Marten integration
// and PostgreSQL backed message persistence
// to Wolverine in this application
.IntegrateWithWolverine();
```

::: info
Wolverine makes no distinction between "events" and "commands". It's all a message to Wolverine. "Event vs command" is strictly a logical role in Wolverine usage.
:::

In this case, the outgoing `OrderCancelled` event message will be delivered **durably** through Wolverine's transactional inbox (to be technical, durable local queues go through the inbox storage). This is a really important detail, because it means that the event processing won't be lost if the process happens to crash between handling the initial HTTP POST and the event message being processed through the queue: Wolverine can recover that work on a process restart or "fail over" the message to be processed by another active application node. Moreover, Wolverine local queues can use [Wolverine's error handling policies](/guide/handlers/error-handling) for retry loops, scheduled retries, or even circuit breakers if there are too many failures. The point here is that Wolverine is very suitable for creating resilient systems *even* with that low code ceremony model.
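As one example of those error handling policies, a global retry rule can live right next to the durability configuration (a sketch; the exception type and cooldown times here are purely illustrative):

```csharp
builder.Host.UseWolverine(opts =>
{
    opts.Policies.AutoApplyTransactions();
    opts.Policies.UseDurableLocalQueues();

    // Retry transient failures a few times with a growing cooldown,
    // then let the message fall through to the dead letter queue
    opts.OnException<TimeoutException>()
        .RetryWithCooldown(
            TimeSpan.FromMilliseconds(50),
            TimeSpan.FromMilliseconds(250),
            TimeSpan.FromSeconds(1));
});
```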
One last point: not only is the Wolverine.HTTP approach simpler than the commonly used "Minimal API delegating to a Mediator" approach, there are a couple of other benefits worth calling out:

* Wolverine.HTTP has its own built-in discovery for endpoints and routes, so you don't need to rig up your own discovery mechanisms like folks do in common "Vertical Slice Architecture with Minimal API and MediatR" approaches
* Wolverine.HTTP tries really hard to glean OpenAPI metadata from the type signatures of endpoint methods and the applied middleware like the `Validate` method up above. This will lead to spending less time decorating your code with OpenAPI metadata attributes or Minimal API fluent interface calls

## Recommended Layout

::: tip
You might want to keep message contract types that are shared across modules or applications in separate libraries for sharing. In that case we've used the message handler or endpoint class name as the file name.
:::

You'll of course have your own preferences, but [JasperFx Software](https://jasperfx.net) clients have had success by generally naming a file after the command or query message, even for HTTP endpoints. So a `PlaceOrder.cs` file might contain:

* The `PlaceOrder` command or HTTP request body type itself
* If using one of the [Fluent Validation](/guide/handlers/fluent-validation) integrations, maybe a `Validator` class that's just an inner type of `PlaceOrder` -- the point is to just keep it in the same file
* The actual `PlaceOrderHandler` or `PlaceOrderEndpoint` for HTTP endpoints

And honestly, that's it for many cases. I would of course place closely related command/event/HTTP messages or handlers in the same namespace. That's the easy part, so let's move on to what might be controversial.
Let's step back into the quick, simplistic `PlaceOrder` example from earlier that's using [Marten](https://martendb.io) for persistence. For an HTTP endpoint, just swap out `PlaceOrderHandler` for this:

```csharp
public static class PlaceOrderEndpoint
{
    [WolverinePost("/api/orders/place")]
    public static void Post(
        PlaceOrder command,
        IDocumentSession session)
    {
        var order = new Order
        {
            Id = command.OrderId,
            CustomerId = command.CustomerId,
            Amount = command.Amount
        };

        session.Store(order);
    }
}
```

We feel like it's much more important and common to need to reason about a single system input at one time than it ever is to need to reason about the entire data access layer or even the entire domain logic layer at one time. To that end, the Wolverine team recommends putting any data access code that is **only germane to one vertical slice** directly into the vertical slice code as a default approach. To be blunt, we are recommending that you largely forgo wrapping any kind of repository abstractions around your persistence tooling, and instead purposely seek to shrink the call stack depth (how deep do you go in a handler calling service A that calls service B that might call repository C that uses persistence tool D to...).

## What about the query side?

We admittedly don't have nearly as much to say about using Wolverine on the query side, but here are our rough recommendations:

1. If you are able and willing to use Wolverine.HTTP, do not use Wolverine as a "mediator" underneath `GET` query handlers. We realize that is a very common approach for teams that use ASP.Net MVC Core or Minimal API with MediatR, but we believe that is just unnecessary complexity that will cause you to write more code to satisfy OpenAPI needs
2. We would probably just use the application's raw persistence tooling directly in `GET` endpoint methods and depend on integration testing for the query handlers -- maybe through [Alba specifications](https://jasperfx.github.io/alba)
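For example, recommendation #2 above can be as little code as using Marten's `IQuerySession` directly inside a Wolverine.HTTP `GET` endpoint (a minimal sketch reusing the `Order` document from earlier):

```csharp
public static class GetOrderEndpoint
{
    // Wolverine.HTTP treats a null response body from a GET
    // endpoint as a 404 by default, so no extra code is needed
    [WolverineGet("/api/orders/{id}")]
    public static Task<Order?> Get(string id, IQuerySession session, CancellationToken token)
        => session.LoadAsync<Order>(id, token);
}
```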
--- --- url: /introduction/what-is-wolverine.md --- # What is Wolverine? Wolverine is a toolset for command execution and message handling within .NET applications. The killer feature of Wolverine (we think) is its very efficient command execution pipeline that can be used as: 1. An [inline "mediator" pipeline](/tutorials/mediator) for executing commands 2. A [local message bus](/guide/messaging/transports/local) for in-application communication 3. A full-fledged [asynchronous messaging framework](/guide/messaging/introduction) for robust communication and interaction between services when used in conjunction with low level messaging infrastructure tools like RabbitMQ 4. With the [WolverineFx.Http](/guide/http/) library, Wolverine's execution pipeline can be used directly as an alternative ASP.Net Core Endpoint provider Wolverine tries very hard to be a good citizen within the .NET ecosystem. Even when used in "headless" services, it uses the idiomatic elements of .NET (logging, configuration, bootstrapping, hosted services) rather than try to reinvent something new. Wolverine utilizes the [.NET Generic Host](https://learn.microsoft.com/en-us/dotnet/core/extensions/generic-host) for bootstrapping and application teardown. This makes Wolverine relatively easy to use in combination with many of the most popular .NET tools. ## .NET Version Compatibility Wolverine aligns with the [.NET Core Support Lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) to determine platform support. New major releases will drop versions of .NET that have fallen out of support. --- --- url: /guide/serverless.md --- # Wolverine and Serverless ::: tip No telling when this would happen, but there is an "ultra efficient" serverless model planned for Wolverine that will lean even heavier into code generation as a way to optimize its usage within serverless functions. Track that [forthcoming work on GitHub](https://github.com/JasperFx/wolverine/issues/34). 
:::

Wolverine was very much originally envisioned for usage in long running processes, and as such, wasn't initially well suited to serverless technologies like [Azure Functions](https://azure.microsoft.com/en-us/products/functions) or [AWS Lambda functions](https://aws.amazon.com/pm/lambda). If you're choosing to use Wolverine HTTP endpoints or message handling as part of a serverless function, we have three main suggestions for making Wolverine more successful:

1. Make any outgoing [message endpoints](/guide/runtime.html#endpoint-types) be *Inline* so that messages are sent immediately
2. Utilize the new *Serverless* optimized mode
3. Absolutely take advantage of [pre-generated types](/guide/codegen.html#generating-code-ahead-of-time) to cut down on the all-important cold start problem with serverless functions

## Serverless Mode

::: tip
Wolverine's [Transactional Inbox/Outbox](/guide/durability/) is very unsuitable for usage within serverless functions, so you'll definitely want to disable it through the mode shown below
:::

First off, let's say that you want to use the transactional middleware for either Marten or EF Core within your serverless functions. That's all good, but you will want to turn off all of Wolverine's transactional inbox/outbox functionality with this setting that was added in 1.10.0:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Services.AddMarten("some connection string")

            // This adds quite a bit of middleware for
            // Marten
            .IntegrateWithWolverine();

        // You want this maybe!
        opts.Policies.AutoApplyTransactions();

        // But wait! Optimize Wolverine for usage within Serverless
        // and turn off the heavy duty, background processes
        // for the transactional inbox/outbox
        opts.Durability.Mode = DurabilityMode.Serverless;
    }).StartAsync();
```

snippet source | anchor

## Pre-Generate All Types

The runtime code generation that Wolverine does comes with a potentially non-trivial "cold start" problem on its first usage.
In serverless architectures, that's probably intolerable. With Wolverine, you can bypass that cold start problem by opting into [pre-generated types](/guide/codegen.html#generating-code-ahead-of-time).

## Use Inline Endpoints

If you are using Wolverine to send cascading messages from handlers in serverless functions, you will want to use *Inline* endpoints where the messages are sent immediately, without the background processing that would be normal with *Buffered* or *Durable* endpoints:

```cs
.UseWolverine(opts =>
{
    opts.UseRabbitMq().AutoProvision().AutoPurgeOnStartup();

    opts
        .PublishAllMessages()
        .ToRabbitQueue(queueName)

        // This option is important inside of Serverless functions
        .SendInline();
})
```

snippet source | anchor

--- --- url: /tutorials/mediator.md ---

# Wolverine as Mediator

::: tip
All of the code on this page is from [the InMemoryMediator sample project](https://github.com/JasperFx/wolverine/tree/main/src/Samples/InMemoryMediator).
:::

Recently there's been some renewed interest in the old [GoF Mediator pattern](https://en.wikipedia.org/wiki/Mediator_pattern) as a way to isolate the actual functionality of web services and applications from the mechanics of HTTP request handling. In more concrete terms for .NET developers, a mediator tool allows you to keep MVC Core code ceremony out of your application business logic and service layer. It wasn't the original motivation of the project, but Wolverine can be used as a full-featured mediator tool. Before you run off and use Wolverine this way (also see [Wolverine for MediatR users](/introduction/from-mediatr)), we think you can arrive at lower ceremony and simpler code in most cases by using [WolverineFx.Http](/guide/http/) for your web services.
If you really just like the approach of separating message handlers underneath ASP.Net Minimal API, there is also a set of helpers to more efficiently pipe Minimal API routes to Wolverine message handlers that are a bit more performance optimized than the typical usage of pulling `IMessageBus` out of the IoC container on every request. See [Optimized Minimal API Integration](/guide/http/mediator.html#optimized-minimal-api-integration) for more information.

## Mediator Only Wolverine

::: tip
To really get the most value out of Wolverine, you will probably want to completely embrace its integration with persistence tooling like EF Core or its "Critter Stack" sibling [Marten](https://martendb.io), and its middleware strategies.
:::

Wolverine was not originally conceived of as a "mediator" tool per se. Out of the box, Wolverine is optimized for asynchronous messaging that requires stateful background processing. If you are using Wolverine as "just" a mediator tool, all that background stuff for messaging is just unnecessary overhead, so let's tell Wolverine to turn all that stuff off so we can run more lightly:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Other configuration...

        // But wait! Optimize Wolverine for usage as *only*
        // a mediator
        opts.Durability.Mode = DurabilityMode.MediatorOnly;
    }).StartAsync();
```

snippet source | anchor

::: warning
Using the `MediatorOnly` mode completely disables all asynchronous messaging, including the local queueing as well
:::

The `MediatorOnly` mode sharply reduces the overhead of Wolverine features that you don't care about or need when Wolverine is only being used as a mediator tool.

## Starting with Wolverine as Mediator

Let's jump into a sample project. Let's say that your system creates and tracks *Items* of some sort.
One of the API requirements is to expose an HTTP endpoint that can accept an input that will create and persist a new `Item`, while also publishing an `ItemCreated` event message to any other system (or internal listener within the same system). For the technology stack, let's use:

* [MVC Core](https://docs.microsoft.com/en-us/aspnet/core/mvc/overview?view=aspnetcore-6.0) as the Web API framework, but I'm mostly using the newer Minimal API feature for this
* Wolverine as our mediator of course!
* Sql Server as the backing database store, using [Wolverine's Sql Server message persistence](/guide/durability/#using-sql-server-for-message-storage)
* [EF Core](https://docs.microsoft.com/en-us/ef/core/) as the persistence mechanism

First off, let's start a new project with the `dotnet new webapi` template. Next, we'll add some configuration to add in Wolverine, a small EF Core `ItemsDbContext` service, and wire up our new service for Wolverine's outbox and EF Core middleware. From there, we'll slightly modify the `Program` file generated by the `webapi` template to add Wolverine and opt into Wolverine's [extended command line support](/guide/command-line):

```cs
var builder = WebApplication.CreateBuilder(args);

// Using Weasel to make sure the items table exists
builder.Services.AddHostedService();

var connectionString = builder.Configuration.GetConnectionString("SqlServer");

builder.Host.UseWolverine(opts =>
{
    opts.PersistMessagesWithSqlServer(connectionString);

    // If you're also using EF Core, you may want this as well
    opts.UseEntityFrameworkCoreTransactions();

    opts.Policies.UseDurableLocalQueues();
    opts.Durability.KeepAfterMessageHandling = TimeSpan.FromHours(1);
    opts.LocalQueue("q1").UseDurableInbox();
});

// Register the EF Core DbContext
builder.Services.AddDbContext<ItemsDbContext>(
    x => x.UseSqlServer(connectionString),

    // This is weirdly important! Using Singleton scoping
    // of the options allows Wolverine to significantly
    // optimize the runtime pipeline of the handlers that
    // use this DbContext type
    optionsLifetime: ServiceLifetime.Singleton);
```

snippet source | anchor

Now, let's add a Wolverine message handler that will:

1. Handle a new `CreateItemCommand` message
2. Create a new `Item` entity and persist that with a new `ItemsDbContext` custom EF Core `DbContext`
3. Create and publish a new `ItemCreated` event message reflecting the new `Item`

Using idiomatic Wolverine, that handler looks like this:

```cs
public class ItemHandler
{
    // This attribute applies Wolverine's EF Core transactional
    // middleware
    [Transactional]
    public static ItemCreated Handle(
        // This would be the message
        CreateItemCommand command,

        // Any other arguments are assumed
        // to be service dependencies
        ItemsDbContext db)
    {
        // Create a new Item entity
        var item = new Item { Name = command.Name };

        // Add the item to the current
        // DbContext unit of work
        db.Items.Add(item);

        // This event being returned
        // by the handler will be automatically sent
        // out as a "cascading" message
        return new ItemCreated { Id = item.Id };
    }
}
```

snippet source | anchor

**Note**: as long as this handler class is public and in the main application assembly, Wolverine is going to find it and wire it up inside its execution pipeline. There's no explicit code or funky IoC registration necessary.

Now, moving up to the API layer, we can add a new HTTP endpoint to delegate to Wolverine as a mediator with:

```cs
app.MapPost("/items/create", (CreateItemCommand cmd, IMessageBus bus) => bus.InvokeAsync(cmd));
```

snippet source | anchor

There isn't much to this code -- and that's the entire point!
When Wolverine registers itself into a .NET Core application, it adds the `IMessageBus` service to the underlying system IoC container so it can be injected into controller classes or Minimal API endpoints as shown above. The `IMessageBus.InvokeAsync(message)` method takes the message passed in, finds the correct execution path for the message type, and executes the correct Wolverine handler(s) as well as any of the registered [Wolverine middleware](/guide/handlers/middleware).

::: tip
This execution happens inline, but will use the "Retry" or "Retry with Cooldown" error handling capabilities. See [Wolverine's error handling](/guide/handlers/error-handling) for more information.
:::

See also:

* [Cascading messages from actions](/guide/handlers/cascading) for a better explanation of how the `ItemCreated` event message is automatically published if the handler succeeds.
* [Messages](/guide/messages) for the details of messages themselves including versioning, serialization, and forwarding.
* [Message handlers](/guide/handlers/) for the details of how to write Wolverine message handlers and how they are discovered

As a contrast, here's what the same functionality looks like if you write all the functionality out explicitly in a controller action:

```cs
// This controller does all the transactional work and business
// logic all by itself
public class DoItAllMyselfItemController : ControllerBase
{
    [HttpPost("/items/create3")]
    public async Task Create(
        [FromBody] CreateItemCommand command,
        [FromServices] IDbContextOutbox<ItemsDbContext> outbox)
    {
        // Create a new Item entity
        var item = new Item { Name = command.Name };

        // Add the item to the current
        // DbContext unit of work
        outbox.DbContext.Items.Add(item);

        // Publish an event to anyone
        // who cares that a new Item has
        // been created
        var @event = new ItemCreated { Id = item.Id };

        // Because the message context is enlisted in an
        // "outbox" transaction, these outgoing messages are
        // held until the ongoing transaction completes
        await outbox.SendAsync(@event);

        // Commit the unit of work. This will persist
        // both the Item entity we created above, and
        // also a Wolverine Envelope for the outgoing
        // ItemCreated message
        await outbox.SaveChangesAndFlushMessagesAsync();
    }
}
```

snippet source | anchor

So one, there's just more going on in the controller action above, because you're needing to do a little bit of the work that Wolverine can do for you inside of its execution pipeline (the outbox mechanics, the cascading message publication, transaction management). Also, you're now mixing up MVC controller concerns like the `[HttpPost]` attribute that controls the URL for the endpoint with the service application code that exercises the data and domain model layers.

## Getting a Response

The controller methods above would both return an empty response body and the default `200 OK` status code. But what if you want to return some kind of response body that gives the client of the web service some contextual information about the newly created `Item`? To that end, let's write a different endpoint that will relay the body of the `ItemCreated` output of the message handler to the HTTP response body (and assume we'll use JSON because that makes the example code simpler):

```cs
app.MapPost("/items/create2", (CreateItemCommand cmd, IMessageBus bus)
    => bus.InvokeAsync<ItemCreated>(cmd));
```

snippet source | anchor

Using the `IMessageBus.InvokeAsync<T>(message)` overload, the returned `ItemCreated` response of the message handler is returned from the `InvokeAsync()` call. To be perfectly clear, this only works if the message handler method returns a cascading message of the exact same type as the designated `T` parameter.
--- --- url: /introduction/from-mediatr.md ---

# Wolverine for MediatR Users

::: tip
Also see the comprehensive [Migrating to Wolverine](/guide/migrating-to-wolverine) guide for side-by-side comparisons with MassTransit, NServiceBus, Rebus, and Brighter, including practical migration checklists and a discussion of how Wolverine's convention-based approach differs from "IHandler of T" frameworks.
:::

[MediatR](https://github.com/jbogard/MediatR) is an extraordinarily successful OSS project in the .NET ecosystem, but it's a very limited tool, and the Wolverine team frequently fields questions from folks converting to Wolverine from MediatR. Offhand, the common reasons to do so are:

1. Wolverine has built-in support for the [transactional outbox](/guide/durability), even for its [in memory, local queues](/guide/messaging/transports/local)
2. Many people are using MediatR *and* a separate asynchronous messaging framework like MassTransit or NServiceBus, while Wolverine handles the same use cases as MediatR *and* [asynchronous messaging](/guide/messaging/introduction) with one single set of rules for message handlers
3. Wolverine's programming model can easily result in significantly less application code than the same functionality would require with MediatR

It's important to note that Wolverine allows for a completely different coding model than MediatR or other "IHandler of T" application frameworks in .NET. While you can use Wolverine as a near exact drop-in replacement for MediatR, that's not taking advantage of Wolverine's capabilities.

::: info
The word "unambitious" is literally part of MediatR's tagline. For better or worse, Wolverine, on the other hand, is most definitely an ambitious project and covers some very important use cases that MediatR does not.
:::

## Handlers

MediatR is an example of what I call an "IHandler of T" framework, just meaning that the primary way to plug into the framework is by implementing an interface signature from the framework like this simple example in MediatR:

```csharp
public class Ping : IRequest<Pong>
{
    public string Message { get; set; }
}

public class Pong
{
    public string Message { get; set; }
}

public class PingHandler : IRequestHandler<Ping, Pong>
{
    private readonly TextWriter _writer;

    public PingHandler(TextWriter writer)
    {
        _writer = writer;
    }

    public async Task<Pong> Handle(Ping request, CancellationToken cancellationToken)
    {
        await _writer.WriteLineAsync($"--- Handled Ping: {request.Message}");
        return new Pong { Message = request.Message + " Pong" };
    }
}
```

::: info
No, Wolverine is not using reflection at runtime to call your methods because that would be slow. Instead, Wolverine is generating C# code (even if the handler is F#) to effectively create its own adapter type which is more or less the same thing as MediatR's `IRequestHandler` interface. Learn much more about that in the [Runtime Architecture](/guide/runtime) section.
:::

Now, if you assume that `TextWriter` is a registered service in your application's IoC container, Wolverine could easily run the exact class above as a Wolverine handler.
While most [Hollywood Principle](https://deviq.com/principles/hollywood-principle) application frameworks require you to implement some kind of adapter interface, Wolverine instead wraps around *your* code, with this being a perfectly acceptable handler implementation to Wolverine:

```csharp
// No marker interface necessary, and records work well for this kind of little data structure
public record Ping(string Message);
public record Pong(string Message);

// It is legal to implement more than one message handler in the same class
public static class PingHandler
{
    public static Pong Handle(Ping command, TextWriter writer)
    {
        writer.WriteLine($"--- Handled Ping: {command.Message}");
        return new Pong(command.Message);
    }
}
```

So you might notice a couple of things that are different right away:

* While Wolverine is perfectly capable of using constructor injection for your handlers and class instances, you can eschew all that ceremony and use static methods for just a wee bit fewer object allocations
* Like MVC Core and Minimal API, Wolverine supports "method injection" such that you can pass in IoC registered services directly as arguments to the handler methods for a wee bit less ceremony
* There are no required interfaces on either the message type or the handler type
* Wolverine [discovers message handlers](/guide/handlers/discovery) through naming conventions (or you can also use marker interfaces or attributes if you have to)
* You can use synchronous methods for your handlers when that's valuable so you don't have to scatter `return Task.CompletedTask;` all over your code
* Moreover, Wolverine's [best practice](/introduction/best-practices) as much as possible is to use pure functions for the message handlers for the absolute best testability

There are more differences though. At a minimum, you probably want to look at Wolverine's [compound handler](/guide/handlers/#compound-handlers) capability as a way to build more complex handlers.
::: tip
Wolverine was built with the express goal of allowing you to write very low ceremony code. To that end we try to minimize the usage of adapter interfaces, mandatory base classes, or attributes in your code.
:::

## Built in Error Handling

Wolverine's `IMessageBus.InvokeAsync()` is the direct equivalent to MediatR's `IMediator.Send()`, *but* the Wolverine usage also builds in support for *some* of Wolverine's [error handling policies](/guide/handlers/error-handling) such that you can build in selective retries.

## MediatR's INotificationHandler

::: warning
You should not be using MediatR's `INotificationHandler` for any kind of background work that needs a true delivery guarantee (i.e., the notification will get processed even if the process fails unexpectedly).
:::

MediatR's `INotificationHandler` concept is strictly [fire and forget](https://www.enterpriseintegrationpatterns.com/patterns/conversation/FireAndForget.html), which is just not suitable if you need delivery guarantees of that work. Wolverine, on the other hand, supports both a "fire and forget" (`Buffered` in Wolverine parlance) and a [durable, transactional inbox/outbox](/guide/durability) approach with its in memory, local queues such that work will *not* be lost in the case of errors. Moreover, using the Wolverine local queues allows you to take advantage of Wolverine's error handling capabilities for a much more resilient system than you'll achieve with MediatR. The equivalent of `INotificationHandler` in Wolverine is just a message handler. You can publish messages anytime through the `IMessageBus.PublishAsync()` API, but if you're just needing to publish additional messages (either commands or events, to Wolverine it's all just a message), you can utilize Wolverine's [cascading message](/guide/handlers/cascading) usage as a way of building more testable handler methods.
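As a quick sketch of that cascading approach (the message and handler names here are hypothetical, not from the Wolverine samples), returning extra messages from a handler lets Wolverine publish them only after the original message succeeds:

```csharp
public record OrderPlaced(Guid OrderId);
public record NotifyWarehouse(Guid OrderId);
public record SendReceipt(Guid OrderId);

public static class OrderPlacedHandler
{
    // Both returned messages are published by Wolverine as cascading
    // messages after this handler completes successfully -- no direct
    // IMessageBus.PublishAsync() calls to mock or stub in unit tests
    public static (NotifyWarehouse, SendReceipt) Handle(OrderPlaced @event)
        => (new NotifyWarehouse(@event.OrderId), new SendReceipt(@event.OrderId));
}
```

A unit test can just call `Handle()` and assert on the returned tuple, with no messaging infrastructure involved.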
## MediatR IPipelineBehavior to Wolverine Middleware

MediatR uses its `IPipelineBehavior` model as a "Russian Doll" model for handling cross cutting concerns across handlers. Wolverine has its own mechanism for cross cutting concerns with its [middleware](/guide/handlers/middleware) capabilities that are far more capable and potentially much more efficient at runtime than the nested doll approach that MediatR (and MassTransit for that matter) takes in its pipeline behavior model.

::: tip
The Fluent Validation example is just about the most complicated middleware solution in Wolverine, but you can expect that most custom middleware that you'd write in your own application would be much simpler.
:::

Let's just jump into an example. With MediatR, you might try to use a pipeline behavior to apply [Fluent Validation](https://docs.fluentvalidation.net/en/latest/) to any handlers where there are Fluent Validation validators for the message type like [this sample](https://garywoodfine.com/how-to-use-mediatr-pipeline-behaviours/):

```csharp
public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators)
    {
        _validators = validators;
    }

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
    {
        if (_validators.Any())
        {
            var context = new ValidationContext<TRequest>(request);
            var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken)));
            var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList();
            if (failures.Count != 0)
                throw new ValidationException(failures);
        }

        return await next();
    }
}
```

It's cheating a little bit, because Wolverine has both an add on for incorporating [Fluent Validation middleware for message handlers](/guide/handlers/fluent-validation) and a [separate one for HTTP usage](/guide/http/fluentvalidation) that relies on the
[ProblemDetails](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.problemdetails?view=aspnetcore-9.0) specification for relaying validation errors. Let's still dive into how that works just to see how Wolverine really differs -- and why we think those differences matter for performance and also to keep exception stack traces cleaner (don't laugh, we really did design Wolverine quite purposely to avoid the really nasty kind of `Exception` stack traces you get from many other middleware or "behavior" using frameworks). Let's say that you have a Wolverine.HTTP endpoint like so:

```cs
public record CreateCustomer
(
    string FirstName,
    string LastName,
    string PostalCode
)
{
    public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
    {
        public CreateCustomerValidator()
        {
            RuleFor(x => x.FirstName).NotNull();
            RuleFor(x => x.LastName).NotNull();
            RuleFor(x => x.PostalCode).NotNull();
        }
    }
}

public static class CreateCustomerEndpoint
{
    [WolverinePost("/validate/customer")]
    public static string Post(CreateCustomer customer)
    {
        return "Got a new customer";
    }

    [WolverinePost("/validate/customer2")]
    public static string Post2([FromQuery] CreateCustomer customer)
    {
        return "Got a new customer";
    }
}
```

In the application bootstrapping, I've added this option:

```csharp
app.MapWolverineEndpoints(opts =>
{
    // more configuration for HTTP...

    // Opting into the Fluent Validation middleware from
    // Wolverine.Http.FluentValidation
    opts.UseFluentValidationProblemDetailMiddleware();
});
```

Just like with MediatR, you would need to register the Fluent Validation validator types in your IoC container as part of application bootstrapping. Now, here's how Wolverine's model is very different from MediatR's pipeline behaviors.
While MediatR is applying that `ValidationBehaviour` to each and every message handler in your application whether or not that message type actually has any registered validators, Wolverine is able to peek into the IoC configuration and "know" whether there are registered validators for any given message type. If there are any registered validators, Wolverine will utilize them in the code it generates to execute the HTTP endpoint method shown above for creating a customer. If there is only one validator, and that validator is registered with a `Singleton` scope in the IoC container, Wolverine generates this code:

```csharp
public class POST_validate_customer : Wolverine.Http.HttpHandler
{
    private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
    private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<CreateCustomer> _problemDetailSource;
    private readonly FluentValidation.IValidator<CreateCustomer> _validator;

    public POST_validate_customer(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Http.FluentValidation.IProblemDetailSource<CreateCustomer> problemDetailSource, FluentValidation.IValidator<CreateCustomer> validator) : base(wolverineHttpOptions)
    {
        _wolverineHttpOptions = wolverineHttpOptions;
        _problemDetailSource = problemDetailSource;
        _validator = validator;
    }

    public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
    {
        // Reading the request body via JSON deserialization
        var (customer, jsonContinue) = await ReadJsonAsync<CreateCustomer>(httpContext);
        if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

        // Execute FluentValidation validators
        var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne(_validator, _problemDetailSource, customer).ConfigureAwait(false);

        // Evaluate whether or not the execution should be stopped based on the IResult value
        if (result1 != null && !(result1 is Wolverine.Http.WolverineContinue))
        {
            await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
            return;
        }

        // The actual HTTP request handler execution
        var result_of_Post = WolverineWebApi.Validation.ValidatedEndpoint.Post(customer);
        await WriteString(httpContext, result_of_Post);
    }
}
```

The point here is that Wolverine is trying to generate the most efficient code possible based on what it can glean from the IoC container registrations and the signature of the HTTP endpoint or message handler methods. The MediatR model has to effectively use runtime wrappers and conditional logic at runtime. Do note that Wolverine has built in middleware for logging, validation, and transactions out of the box. Most of the custom middleware that folks are building for Wolverine is much simpler than the validation middleware I talked about in this guide.

## Vertical Slice Architecture

MediatR is almost synonymous with the "Vertical Slice Architecture" (VSA) approach in .NET circles, but Wolverine arguably enables a much lower ceremony version of VSA. The typical approach you'll see is folks delegating to MediatR commands or queries from an MVC Core `Controller` like this ([stolen from this blog post](https://dev.to/ifleonardo_/agile-and-modular-development-with-vertical-slice-architecture-and-mediatr-in-c-projects-3p4o)):

```csharp
public class AddToCartRequest : IRequest<Result>
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class AddToCartHandler : IRequestHandler<AddToCartRequest, Result>
{
    private readonly ICartService _cartService;

    public AddToCartHandler(ICartService cartService)
    {
        _cartService = cartService;
    }

    public async Task<Result> Handle(AddToCartRequest request, CancellationToken cancellationToken)
    {
        // Logic to add the product to the cart using the cart service
        bool addToCartResult = await _cartService.AddToCart(request.ProductId, request.Quantity);
        bool isAddToCartSuccessful = addToCartResult; // Check if adding the product to the cart was successful.
        return Result.SuccessIf(isAddToCartSuccessful, "Failed to add the product to the cart."); // Return failure if adding to cart fails.
    }
}

public class CartController : ControllerBase
{
    private readonly IMediator _mediator;

    public CartController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost]
    public async Task<IActionResult> AddToCart([FromBody] AddToCartRequest request)
    {
        var result = await _mediator.Send(request);
        if (result.IsSuccess)
        {
            return Ok("Product added to the cart successfully.");
        }
        else
        {
            return BadRequest(result.ErrorMessage);
        }
    }
}
```

While the introduction of MediatR probably is a valid way to sidestep the common code bloat from MVC Core Controllers, with Wolverine we'd recommend just using the [Wolverine.HTTP](/guide/http) mechanism for writing HTTP endpoints in a much lower ceremony way and ditching the "mediator" step altogether. Moreover, we'd even go so far as to drop repository and domain service layers and just put the functionality right into an HTTP endpoint method if that code isn't going to be reused anywhere else in your application.

::: tip
See [Automatically Loading Entities to Method Parameters](https://wolverinefx.net/guide/handlers/persistence.html#automatically-loading-entities-to-method-parameters) for some context around that `[Entity]` attribute usage
:::

So something like this:

```csharp
public static class AddToCartRequestEndpoint
{
    // Remember, we can do validation in middleware, or
    // even do a custom Validate() : ProblemDetails method
    // to act as a filter so the main method is the happy path
    [WolverinePost("/api/cart/add")]
    public static IStorageAction<Cart> Post(
        AddToCartRequest request,
        [Entity] Cart cart)
    {
        return cart.TryAddRequest(request)
            ? Storage.Update(cart)
            : Storage.Nothing(cart);
    }
}
```

We of course believe that Wolverine is more optimized for Vertical Slice Architecture than MediatR or any other "mediator" tool by how Wolverine can reduce the number of moving parts, layers, and code ceremony.
## IoC Usage Just know that [Wolverine has a very different relationship with your application's IoC container](/guide/runtime.html#ioc-container-integration) than MediatR. Wolverine's philosophy all along has been to keep the usage of IoC service location at runtime to a bare minimum. Instead, Wolverine wants to mostly use the IoC tool as a service registration model at bootstrapping time. --- --- url: /guide.md --- # Wolverine Guides Welcome to the Wolverine documentation website! See the content in the left hand pane. --- --- url: /tutorials.md --- # Wolverine Tutorials | Tutorial | Description | |--------------------------------------------------------------|------------------------------------------------------------------------------------------------------------| | [Wolverine as Mediator](/tutorials/mediator) | Learn how to use Wolverine as a mediator tool within an ASP.Net Core or other application | | [Ping/Pong Messaging with Rabbit MQ](/tutorials/ping-pong) | Basic tutorial on asynchronous messaging with Rabbit MQ | | [Vertical Slice Architecture](./vertical-slice-architecture) | How Wolverine can be used for more effective vertical slice architecture style development | | [Modular Monolith Architecture](./modular-monolith) | Learn how best to use Wolverine inside of "Modular Monolith" architectures | | [CQRS and Event Sourcing with Marten](./cqrs-with-marten) | Utilize the full "Critter Stack" for a very productive development experience | | [Railway Programming](./railway-programming) | Wolverine builds in some very light weight Railway Programming inspired abilities | | [Interoperability with Non-Wolverine Systems](./interop) | Everything you need to know to make Wolverine play nicely and exchange messages with non-Wolverine systems | | [Leader Election and Agents](./leader-election) | Learn about Wolverine's internal leader election and how to write your own "sticky" agent family | | [Dealing with Concurrency](./concurrency) | Dealing with 
concurrency can be hard, but Wolverine has plenty of tools to help you manage it | | [Dead Letter Queues](./dead-letter-queues)| Understand how dead letter queueing works in Wolverine and how to manage message failures | | [Idempotency in Messaging](./idempotency) | Find out how best to use Wolverine's built in support for messaging idempotency |

--- --- url: /guide/codegen.md ---

# Working with Code Generation

::: warning
If you are experiencing noticeable startup lags or seeing spikes in memory utilization with an application using Wolverine, you will want to pursue using either the `Auto` or `Static` modes for code generation as explained in this guide.
:::

Wolverine uses runtime code generation to create the "adapter" code that Wolverine uses to call into your message handlers. Wolverine's [middleware strategy](/guide/handlers/middleware) also uses this strategy to "weave" calls to middleware directly into the runtime pipeline without requiring the copious usage of adapter interfaces that is prevalent in most other .NET frameworks. That's great when everything is working as it should, but there are a couple of issues:

1. The usage of the Roslyn compiler at runtime *can sometimes be slow* on its first usage. This can lead to sluggish *cold start* times in your application that might be problematic in serverless scenarios, for example.
2. There's a little bit of conventional magic in how Wolverine finds and applies middleware or passes arguments to your message handlers or HTTP endpoint handlers.

Not to worry though, Wolverine has several facilities to either preview the generated code for diagnostic purposes to really understand how Wolverine is interacting with your code, or to optimize the "cold start" by generating the dynamic code ahead of time so that it can be embedded directly into your application's main assembly and discovered from there.
By default, Wolverine runs with "dynamic" code generation where all the necessary generated types are built on demand the first time they are needed. This is perfect for a quick start to Wolverine, and might be fine in smaller projects even at production time.

::: warning
Note that you may need to delete the existing source code when you change handler signatures or add or remove middleware. Nothing in Wolverine is able to detect that the generated source code needs to be rewritten.
:::

Lastly, you have a couple of options for how Wolverine handles the dynamic code generation as shown below:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // The default behavior. Dynamically generate the
        // types on the first usage
        opts.CodeGeneration.TypeLoadMode = TypeLoadMode.Dynamic;

        // Never generate types at runtime, but instead try to locate
        // the generated types from the main application assembly
        opts.CodeGeneration.TypeLoadMode = TypeLoadMode.Static;

        // Hybrid approach that first tries to locate the types
        // from the application assembly, but falls back to
        // generating the code and dynamic type. Also writes the
        // generated source code file to disk
        opts.CodeGeneration.TypeLoadMode = TypeLoadMode.Auto;
    }).StartAsync();
```

At development time, use the `Dynamic` mode if you are actively changing handler signatures or the application of middleware that might be changing the generated code. Even at development time, if the handler signatures are relatively stable, you can use the `Auto` mode to use pre-generated types locally. This may help you have a quicker development cycle -- especially if you like to lean heavily on integration testing where you're quickly starting and stopping your application. The `Auto` mode will write the generated source code for missing types to the `Internal/Generated` folder under your main application project.
::: tip
If you're using the `Auto` mode in combination with `dotnet watch` you need to disable the watching of the `Internal/Generated` folder to avoid application restarts each time codegen writes a new file. You can do this by adding the following to the `.csproj` file of your app project:

```xml
<ItemGroup>
  <Watch Remove="Internal/Generated/**/*.cs" />
</ItemGroup>
```
:::

At production time, if there is any issue whatsoever with resource utilization, the Wolverine team recommends using the `Static` mode where all types are assumed to be pre-generated into what Wolverine thinks is the application assembly (more on this in the troubleshooting guide below).

::: tip
Most of the facilities shown here will require the [Oakton command line integration](./command-line).
:::

## Embedding Codegen in Docker

This blog post from Oskar Dudycz will apply to Wolverine as well: [How to create a Docker image for the Marten application](https://event-driven.io/en/marten_and_docker/)

At this point, the most successful mechanism and sweet spot is to run the codegen as `Dynamic` at development time, but to generate the code artifacts just in time for production deployments. From Wolverine's sibling project Marten, see this section on [Application project setup](https://martendb.io/devops/devops.html#application-project-set-up) for embedding the code generation directly into your Docker images for deployment.

## Troubleshooting Code Generation Issues

::: warning
There's nothing magic about the `Auto` mode, and Wolverine isn't (yet) doing any file comparisons against the generated code and the current version of the application. At this point, the Wolverine community recommends against using the `Auto` mode for code generation as it has not added much value and can cause some confusion.
:::

In all cases, don't hesitate to reach out to the Wolverine team in the Discord link at the top right of this page to ask for help with any codegen related issues.
If Wolverine is throwing exceptions in `Static` mode saying that it cannot find the expected pre-generated types, here's your checklist of things to check:

Are the expected generated types written to files in the main application project before that project is compiled? The pre-generation works by having the source code written into the assembly in the first place.

Is Wolverine really using the correct application assembly when it looks for pre-built handlers or HTTP endpoints? Wolverine will log what *it* thinks is the application assembly upfront, but it can be fooled in certain project structures. To override the application assembly choice to help Wolverine out, use this syntax:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Override the application assembly to help
        // Wolverine find its handlers
        // Should not be necessary in most cases
        opts.ApplicationAssembly = typeof(Program).Assembly;
    }).StartAsync();
```

If the assembly choice is correct, and the expected code files are really in `Internal/Generated` exactly as you'd expect, make sure there are no accidental `<Compile Remove>` nodes in your project file. *Don't laugh, that's actually happened to Wolverine users*

::: warning
Actually, while the Wolverine team mostly uses JetBrains Rider that doesn't exhibit this behavior, we found out the hard way interacting with other folks that Visual Studio.Net will sometimes add a `<Compile Remove>` node into your `csproj` file when you manually delete the generated code files.
:::

If you see issues with *Marten* document providers, make sure that you have registered that document with Marten itself. At this point, Wolverine does not automatically register `Saga` types with Marten. See [Marten's own documentation](https://martendb.io) about document type discovery.

## Wolverine Code Generation and IoC

::: info
Why, you ask, does Wolverine do any of this?
Wolverine was originally conceived of as the successor to the [FubuMVC & FubuTransportation](https://fubumvc.github.io) projects from the early 2010's. A major lesson learned from FubuMVC was that we needed to reduce object allocations, layering, runaway `Exception` stack traces, and allow for more flexible and streamlined handler or endpoint method signatures. To that end we fully embraced using runtime code generation -- and this was built well before source generators were available.

As for the IoC part of this strategy, we ask you, what's the very fastest IoC tool in .NET? The answer, of course, is "no IoC container."
:::

Wolverine's code generation uses the configuration of your IoC tool to create the generated code wrappers around your raw message handlers, HTTP endpoints, and middleware methods. Whenever possible, Wolverine is trying to completely eliminate your application's IoC tool from the runtime code by generating the necessary constructor function invocations to exactly mimic your application's IoC configuration.

::: info
Because you should care about this, Wolverine is absolutely generating `using` or `await using` for any objects it creates through constructor calls that implement `IDisposable` or `IAsyncDisposable`.
:::

When generating the adapter classes, Wolverine can infer which method arguments or type dependencies can be sourced from your application's IoC container configuration. If Wolverine can determine a way to generate all the necessary constructor calls to create any necessary services registered with a `Scoped` or `Transient` lifetime, Wolverine will generate code with the constructors. In this case, any IoC services that are registered with a `Singleton` lifetime will be "inlined" as constructor arguments into the generated adapter class itself for a little better efficiency.

::: warning
The usage of a service locator within the generated code will naturally be a little less efficient just because there is more runtime overhead.
More dangerously, the service locator usage can sometimes foul up the scoping of services like Wolverine's `IMessageBus` or Marten's `IDocumentSession` that are normally built outside of the IoC container.
:::

If Wolverine cannot determine a path to generate code for raw constructor construction of any registered services for a message handler or HTTP endpoint, Wolverine will fall back to generating code with the [service locator pattern](https://en.wikipedia.org/wiki/Service_locator_pattern) using a scoped container (think [IServiceScopeFactory](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.dependencyinjection.iservicescopefactory?view=net-9.0-pp)). Here are some facts you need to know about this whole process:

* The adapter classes generated by Wolverine for both message handlers and HTTP endpoints are effectively singleton scoped and only ever built once
* Wolverine will try to bring `Singleton` scoped services through the generated adapter type's constructor function *one time*
* Wolverine will have to fall back to the service locator usage if any service dependency that has a `Scoped` or `Transient` lifetime is either an `internal` type or uses an "opaque" Lambda registration (think `IServiceCollection.AddScoped(s => {})`)

::: tip
The code generation using IoC configuration is tested with both the built in .NET `ServiceProvider` and [Lamar](https://jasperfx.github.io/lamar). It is theoretically possible to use other IoC tools with Wolverine, but only if you are *only* using `IServiceCollection` for your IoC configuration.
:::

As of Wolverine 5.0, you now have the ability to better control the usage of the service locator in Wolverine's code generation to potentially avoid unwanted usage:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // This is the default behavior.
    // Wolverine will allow you to utilize
    // service location in the codegen, but will warn you through log messages
    // when this happens
    opts.ServiceLocationPolicy = ServiceLocationPolicy.AllowedButWarn;

    // Tell Wolverine to just be quiet about service location and let it
    // all go. For any of you with small children, I defy you to get the
    // Frozen song out of your head now...
    opts.ServiceLocationPolicy = ServiceLocationPolicy.AlwaysAllowed;

    // Wolverine will throw exceptions at runtime if it encounters
    // a message handler or HTTP endpoint that would require service
    // location in the code generation.
    // Use this option to disallow any undesirable service location
    opts.ServiceLocationPolicy = ServiceLocationPolicy.NotAllowed;
});
```

::: note
[Wolverine.HTTP has some additional control over the service locator](/guide/http/#using-the-httpcontext-requestservices) to utilize the shared scoped container with the rest of the AspNetCore pipeline.
:::

## Allow List for Service Location

Wolverine always reverts to using a service locator when it encounters an "opaque" Lambda registration that has either a `Scoped` or `Transient` service lifetime.
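For illustration only (the service and type names here are hypothetical), this contrasts the kind of "opaque" lambda registration that forces Wolverine into service location against a type-based registration it can inline into generated constructor calls:

```csharp
// Opaque: Wolverine cannot "see" how PriceCalculator is constructed here,
// so a handler depending on IPriceCalculator would need a scoped container
services.AddScoped<IPriceCalculator>(s =>
    new PriceCalculator(s.GetRequiredService<IConfiguration>()["Pricing:Mode"]));

// Transparent: Wolverine can mimic this registration by generating
// a direct `new PriceCalculator(...)` call in the adapter code
services.AddScoped<IPriceCalculator, PriceCalculator>();
```

When the lambda style is unavoidable, the allow list described next lets you isolate the service location to just that one service.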
You can explicitly create an "allow" list of service types that can use a service locator pattern while allowing the rest of the code generation for the message handler or HTTP endpoint to use the more predictable and efficient generated constructor functions with this syntax:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // other configuration

    // Use a service locator for this service w/o forcing the entire
    // message handler adapter to use a service locator for everything.
    // The generic argument here is a stand-in for your own service type
    opts.CodeGeneration.AlwaysUseServiceLocationFor<MyLambdaRegisteredService>();
});
```

For example, this functionality might be helpful for:

* [Refit proxies](https://github.com/reactiveui/refit) that are registered in IoC with a Lambda registration, but might not use any other services
* EF Core `DbContext` types that might require some runtime configuration to construct themselves, but don't use other services (a [JasperFx Software](https://jasperfx.net) client ran into this needing to conditionally opt into read replica usage, hence this feature made it into Wolverine 5.0)

## Environment Check for Expected Types

As a new option in Wolverine 1.7.0, you can also add an environment check for the existence of the expected pre-built types to [fail fast](https://en.wikipedia.org/wiki/Fail-fast) on application startup like this:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    if (builder.Environment.IsProduction())
    {
        opts.CodeGeneration.TypeLoadMode = TypeLoadMode.Static;
        opts.Services.CritterStackDefaults(cr =>
        {
            // I'm only going to care about this in production
            cr.Production.AssertAllPreGeneratedTypesExist = true;
        });
    }
});

using var host = builder.Build();
await host.StartAsync();
```

Do note that you would have to opt into using the environment checks on application startup, and maybe even force .NET to make hosted service failures stop the application.
See [Oakton's Environment Check functionality](https://jasperfx.github.io/oakton/guide/host/environment.html) for more information (the old Oakton documentation is still relevant for JasperFx). ## Previewing the Generated Code ::: tip All of these commands are from the JasperFx.CodeGeneration.Commands library that Wolverine adds as a dependency. This is shared with [Marten](https://martendb.io) as well. ::: To preview the generated source code, use this command line usage from the root directory of your .NET project: ```bash dotnet run -- codegen preview ``` ## Generating Code Ahead of Time To write the source code ahead of time into your project, use: ```bash dotnet run -- codegen write ``` This command **should** write all the source code files for each message handler and/or HTTP endpoint handler to `/Internal/Generated/WolverineHandlers` directly under the root of your project folder. ## Handling Code Generation with Wolverine when using Aspire or Microsoft.Extensions.ApiDescription.Server When integrating **Wolverine** with **Aspire**, or using `Microsoft.Extensions.ApiDescription.Server` to generate OpenAPI files at build time, you may encounter issues with code generation because connection strings are only provided by Aspire when the application is run. This limitation affects both Wolverine codegen and OpenAPI schema generation, because these processes require connection strings during their execution. To work around this, add a helper class that detects if we are just generating code (either by the Wolverine codegen command or during OpenAPI generation). You can then conditionally disable external Wolverine transports and message persistence to avoid configuration errors. 
```csharp
using System;
using System.Linq;
using System.Reflection;

public static class CodeGeneration
{
    public static bool IsRunningGeneration()
    {
        return Assembly.GetEntryAssembly()?.GetName().Name == "GetDocument.Insider"
               || Environment.GetCommandLineArgs().Contains("codegen");
    }
}
```

Example usage:

```csharp
if (CodeGeneration.IsRunningGeneration())
{
    builder.Services.DisableAllExternalWolverineTransports();
    builder.Services.DisableAllWolverineMessagePersistence();
}

builder.Services.AddWolverine(options =>
{
    var connectionString = builder.Configuration.GetConnectionString("postgres");

    if (CodeGeneration.IsRunningGeneration() == false)
    {
        var dataSource = new NpgsqlDataSourceBuilder(connectionString).Build();
        options.PersistMessagesWithPostgresql(dataSource, "wolverine");
    }
});
```

## Optimized Workflow

Wolverine and [Marten](https://martendb.io) both use the shared JasperFx library for their code generation, and you can configure different behavior for production versus development time for both tools (and any future "CritterStack" tools) with this usage:

```cs
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Use "Auto" type load mode at development time, but
        // "Static" any other time
        opts.Services.CritterStackDefaults(x =>
        {
            x.Production.GeneratedCodeMode = TypeLoadMode.Static;
            x.Production.ResourceAutoCreate = AutoCreate.None;

            // Little draconian, but this might be helpful
            x.Production.AssertAllPreGeneratedTypesExist = true;

            // These are defaults, but showing for completeness
            x.Development.GeneratedCodeMode = TypeLoadMode.Dynamic;
            x.Development.ResourceAutoCreate = AutoCreate.CreateOrUpdate;
        });
    }).StartAsync();
```
snippet source | anchor

Which will use:

1. `TypeLoadMode.Dynamic` when the .NET environment is "Development" and dynamically generate types on the first usage
2. `TypeLoadMode.Static` for other .NET environments for optimized cold start times

## Customizing the Generated Code Output Path

By default, Wolverine writes generated code to `Internal/Generated` under your project's content root. For Console applications or non-standard project structures, you may need to customize this path.

### Using CritterStackDefaults

You can configure the output path globally for all Critter Stack tools:

```cs
var builder = Host.CreateApplicationBuilder();
builder.Services.CritterStackDefaults(opts =>
{
    // Set a custom output path for generated code
    opts.GeneratedCodeOutputPath = "/path/to/your/project/Internal/Generated";
});
```
snippet source | anchor

### Auto-Resolving Project Root for Console Apps

Console applications often have `ContentRootPath` pointing to the `bin` folder, which causes generated code to be written to the wrong location. Enable automatic project root resolution:

```cs
var builder = Host.CreateApplicationBuilder();
builder.Services.CritterStackDefaults(opts =>
{
    // Automatically find the project root by looking for .csproj/.sln files
    // Useful for Console apps where ContentRootPath defaults to bin folder
    opts.AutoResolveProjectRoot = true;
});
```
snippet source | anchor

### Direct Wolverine Configuration

You can also configure the path directly on Wolverine:

```cs
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    opts.CodeGeneration.GeneratedCodeOutputPath = "/path/to/output";
});
```
snippet source | anchor

Note that explicit Wolverine configuration takes precedence over `CritterStackDefaults`.

---

---
url: /guide/http/forms.md
---

# Working with Form Data

Wolverine will allow you to bind HTTP form data to a model type that is decorated with the `[FromForm]` attribute from ASP.NET Core.
Similar to the above usage of `[FromQuery]`, Wolverine also supports form parameters as input, either directly as method parameters like shown here:

```cs
[WolverinePost("/form/string")]
public static string UsingForm([FromForm] string name) // name is from form data
{
    return name.IsEmpty() ? "Name is missing" : $"Name is {name}";
}
```
snippet source | anchor

And the corresponding test:

```cs
[Fact]
public async Task use_string_form_hit()
{
    var body = await Scenario(x =>
    {
        x.Post
            .FormData(new Dictionary<string, string> { ["name"] = "Magic" })
            .ToUrl("/form/string");

        x.Header("content-type").SingleValueShouldEqual("text/plain");
    });

    body.ReadAsText().ShouldBe("Name is Magic");
}

[Fact]
public async Task use_string_form_miss()
{
    var body = await Scenario(x =>
    {
        x.Post
            .FormData([])
            .ToUrl("/form/string");

        x.Header("content-type").SingleValueShouldEqual("text/plain");
    });

    body.ReadAsText().ShouldBe("Name is missing");
}

[Fact]
public async Task use_decimal_form_hit()
{
    var body = await Scenario(x =>
    {
        x.WithRequestHeader("Accept-Language", "fr-FR");
        x.Post
            .FormData(new Dictionary<string, string> { { "Amount", "42.1" } })
            .ToUrl("/form/decimal");

        x.Header("content-type").SingleValueShouldEqual("text/plain");
    });

    body.ReadAsText().ShouldBe("Amount is 42.1");
}
```
snippet source | anchor

You can also use the `[FromForm]` attribute on a complex type; Wolverine will then attempt to bind all public properties, or all parameters of the type's single constructor, with form values:

```cs
[WolverinePost("/api/fromformbigquery")]
public static BigQuery Post([FromForm] BigQuery query) => query;
```
snippet source | anchor

Individual properties on the class can be aliased using `[FromForm(Name = "aliased")]`.

---

---
url: /guide/http/querystring.md
---

# Working with QueryString

::: tip
Wolverine can handle both nullable types and the primitive values here. So `int` and `int?` are both valid.
In all cases, if the query string does not exist -- or cannot be parsed -- the value passed to your method will be the `default` for whatever that type is.
:::

Wolverine supports passing query string values to your HTTP method arguments for the exact same set of value types supported for route arguments. In this case, Wolverine treats any value type parameter where the parameter name does not match a route argument name as coming from the HTTP query string. When Wolverine does the runtime matching, it's using the exact parameter name as the query string key. Here's a quick sample:

```cs
[WolverineGet("/querystring/string")]
public static string UsingQueryString(string name) // name is from the query string
{
    return name.IsEmpty() ? "Name is missing" : $"Name is {name}";
}
```
snippet source | anchor

And the corresponding tests:

```cs
[Fact]
public async Task use_string_querystring_hit()
{
    var body = await Scenario(x =>
    {
        x.Get.Url("/querystring/string?name=Magic");
        x.Header("content-type").SingleValueShouldEqual("text/plain");
    });

    body.ReadAsText().ShouldBe("Name is Magic");
}

[Fact]
public async Task use_string_querystring_miss()
{
    var body = await Scenario(x =>
    {
        x.Get.Url("/querystring/string");
        x.Header("content-type").SingleValueShouldEqual("text/plain");
    });

    body.ReadAsText().ShouldBe("Name is missing");
}

[Fact]
public async Task use_decimal_querystring_hit()
{
    var body = await Scenario(x =>
    {
        x.WithRequestHeader("Accept-Language", "fr-FR");
        x.Get.Url("/querystring/decimal?amount=42.1");
        x.Header("content-type").SingleValueShouldEqual("text/plain");
    });

    body.ReadAsText().ShouldBe("Amount is 42.1");
}
```
snippet source | anchor

## \[FromQuery] Binding

Wolverine can support the [FromQueryAttribute](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.fromqueryattribute?view=aspnetcore-9.0) binding similar to MVC Core or Minimal API.
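As a minimal illustration first, the attribute can also be applied to a simple parameter. This endpoint is hypothetical (not from the Wolverine test suite), and it assumes Wolverine honors the attribute's `Name` property for aliasing the query string key the way ASP.NET Core itself does:

```csharp
// Hypothetical endpoint: [FromQuery(Name = ...)] is assumed to let the
// query string key ("q") differ from the parameter name ("search")
[WolverineGet("/querystring/aliased")]
public static string Aliased([FromQuery(Name = "q")] string search)
{
    // With no "q" value on the query string, search will be null
    return search ?? "no search";
}
```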
Let's say that you have a GET endpoint where you may want to use a series of non-mandatory query string values for a query. It would be convenient to have Wolverine let you declare a single .NET type for all the optional query string values, which will be filled with any of the matching query string parameters, like this sample:

```cs
// If you want every value to be optional, use public, settable
// properties and a no-arg public constructor
public class OrderQuery
{
    public int PageSize { get; set; } = 10;
    public int PageNumber { get; set; } = 1;
    public bool? HasShipped { get; set; }
}

// Or -- and I'm not sure how useful this really is, use a record:
public record OrderQueryAlternative(int PageSize, int PageNumber, bool HasShipped);

public static class QueryOrdersEndpoint
{
    [WolverineGet("/api/orders/query")]
    public static Task<IPagedList<Order>> Query(
        // This will be bound from query string values in the HTTP request
        [FromQuery] OrderQuery query,
        IQuerySession session,
        CancellationToken token)
    {
        IQueryable<Order> queryable = session.Query<Order>()
            // Just to make the paging deterministic
            .OrderBy(x => x.Id);

        if (query.HasShipped.HasValue)
        {
            queryable = query.HasShipped.Value
                ? queryable.Where(x => x.Shipped.HasValue)
                : queryable.Where(x => !x.Shipped.HasValue);
        }

        // Marten specific Linq helper
        return queryable.ToPagedListAsync(query.PageNumber, query.PageSize, token);
    }
}
```
snippet source | anchor

Because we've used the `[FromQuery]` attribute on a parameter argument that's not a simple type, Wolverine is trying to bind the query string values to each public property of the `OrderQuery` object being passed in as an argument to `QueryOrdersEndpoint.Query()`.
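For instance, a request such as the following (the host and port here are hypothetical) would bind onto the `PageSize`, `PageNumber`, and `HasShipped` properties of `OrderQuery`:

```shell
# Hypothetical local address; the query string keys match the public
# property names of the OrderQuery type, and any key that is missing
# or unparseable simply leaves the property at its default value
curl "http://localhost:5000/api/orders/query?PageSize=20&PageNumber=2&HasShipped=true"
```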
Here's the code that Wolverine generates around the method signature above (warning, it's ugly code):

```csharp
// <auto-generated/>
#pragma warning disable
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: GET_api_orders_query
    public class GET_api_orders_query : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;

        public GET_api_orders_query(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
        }

        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);

            // Building the Marten session
            await using var querySession = _outboxedSessionFactory.QuerySession(messageContext);

            // Binding QueryString values to the argument marked with [FromQuery]
            var orderQuery = new WolverineWebApi.Marten.OrderQuery();
            if (int.TryParse(httpContext.Request.Query["PageSize"], System.Globalization.CultureInfo.InvariantCulture, out var PageSize)) orderQuery.PageSize = PageSize;
            if (int.TryParse(httpContext.Request.Query["PageNumber"], System.Globalization.CultureInfo.InvariantCulture, out var PageNumber)) orderQuery.PageNumber = PageNumber;
            if (bool.TryParse(httpContext.Request.Query["HasShipped"], out var HasShipped)) orderQuery.HasShipped = HasShipped;

            // The actual HTTP request handler execution
            var pagedList_response = await WolverineWebApi.Marten.QueryOrdersEndpoint.Query(orderQuery, querySession, httpContext.RequestAborted).ConfigureAwait(false);

            // Writing the response body to JSON because this was the first 'return variable' in the method signature
            await WriteJsonAsync(httpContext, pagedList_response);
        }
    }

    // END: GET_api_orders_query
}
```

Note there are some limitations of this approach in Wolverine:

* Wolverine can use *either* a class that has a single constructor with arguments (like a `record` type) or a class with a public, default constructor and public settable properties, but not a class that has *both* a constructor with arguments and settable properties!
* The types marked as `[FromQuery]` must be public, as well as any properties you want to bind
* The binding supports array types, but know that you will always get an empty array as the value even with no matching query string values
* Likewise, `string` values will be null if there is no query string
* For any kind of parsed data (`Guid`, numbers, dates, boolean values, enums), Wolverine will not set any value on public setters if there is either no matching query string value or the query string value cannot be parsed

## \[AsParameters] Binding

Also see the [AsParameters](./as-parameters) binding.