Aecor — Purely functional event sourcing in Scala. Part 3

Hello! This is a series of posts about Aecor — a library for building eventsourced applications in Scala in a purely functional way. If this is your first time seeing this series, I highly recommend starting with the Introduction.

In previous posts we learned everything about eventsourced behaviors, constructed and refined one for the Booking entity. Now it’s finally time to see it in action. In this post we’ll:

  • see how you can launch your behavior with Aecor;
  • get a quick overview of how it works on the surface;
  • discuss some theoretical topics, related to running eventsourced entities in a distributed system.

Overview

Let’s start with a high-level picture.

Aecor has a notion of a Runtime, which is simply an infrastructure platform where behaviors can be deployed. For example, you need to store the events somewhere — this is one of the responsibilities of an Aecor runtime.

As you have seen in previous posts, Aecor behaviors are completely unaware of any infrastructure concerns. This is pretty much in line with the famous onion architecture: behaviors belong to the domain layer, and the runtime is out there in the infrastructure layer.

I think this is pretty valuable, because you don't always get such a clean separation of layers. If you have ever tried to do eventsourcing with naked akka-persistence, you know what I mean.

When you deploy an entity onto a runtime, it looks like this:

This diagram should help you visualize things as we go through the post. 

At the moment of writing, Aecor provides an akka-cluster based runtime, which comes in two flavors:

  • Akka Persistence Runtime, which is an older one and uses akka-persistence under the hood.
  • Generic Akka Runtime, which is more flexible and depends on akka-cluster only.

While it can be easier to start with the Akka Persistence runtime (especially if you have done akka-persistence before), I would recommend looking at the Generic runtime first. It has some nice advantages that we'll talk about later.

Now let's break the runtimes down into their essential components. There are three of them: the journal, consensus, and the wire protocol.

The Log

Eventsourcing can’t exist without a log, so the first thing you’ll need is the place to store events — the journal.

Event logs existed long before computers were invented

There are a lot of options for the underlying storage technology: it can be a specialized solution (e.g. EventStore), or you can pick a general-purpose database like PostgreSQL or Cassandra. The choice depends on the performance and operational requirements of your system. What's important is that, semantically, the journal has to behave as an append-only log.

The journal will need some kind of serialization protocol for your events. This is a very important topic in eventsourcing: since events are immutable, schema evolution has to be taken care of. And with akka, serialization is especially easy to get wrong.

Comprehensive guide to Akka serialization

So: storage and serialization. If you go with the Akka Persistence runtime, there's a variety of plugins for different storage backends. You won't need anything special to make them work with Aecor — just configure the plugin in line with its documentation. Aecor will pick it all up the same way a raw akka-persistence app would.

On the other hand, generic akka runtime doesn’t know anything about storage or eventsourcing. It’s actually orthogonal: you can partition any kind of tagless final behavior over an akka-cluster using this runtime. 

To deploy eventsourced behaviors on a generic runtime you’ll still need a journal, and Aecor has something to offer here. We’ll see later how you can wire it all up.

Single Writer

I go into some theory here, so feel free to skip this section if you’re familiar with this concept.

The next thing we'd like to have from our runtime is the single writer guarantee. In simple words, it means that at any point in time there is at most one instance of each specific entity in the whole cluster that can process commands and write events to the log. There should be no concurrent command processing and no concurrent writes to the log for any single entity instance.

For example, if we issue (possibly in parallel) some commands for the booking with id “42”, such a runtime guarantees that all of them will be handled by a single node, as long as that node is alive. Moreover, the “booking 42” command handler on that node has to process commands strictly one by one, with no concurrency involved.

It's a very strong guarantee, called serializability of changes (or serializable isolation in the database world). It's extremely hard to provide globally, but here it's very local and thus reasonable — we require serializability only within a single entity instance. Such a small area of consistency is often called a consistency boundary.

This is why it is so important to think about partitioning when you design your entities. If you want your system to scale, what should grow is the number of consistency boundaries, not their size.

To get a single writer in a distributed system you need consensus: machines in the cluster have to agree on who is going to be the writer for each entity instance. And this is where akka-cluster comes into the picture. It uses a gossip-based consensus protocol to track node membership, which in turn allows it to distribute (or shard) entity processing over the cluster.

A lot of new research is going on around consensus protocols these days, thanks to crypto-currency and blockchain frenzy.

Once a single node is selected to run the handler, the “no concurrency” property is further ensured by wrapping the handler into a single actor: the actor mailbox guarantees that messages are processed one by one.

Just to clarify, actors are not the only way to do this — you could use an in-memory message queue to do the same thing in a more functional and typesafe manner. Aecor doesn't use akka because of actors: it's the Scala-native consensus that makes akka-cluster invaluable here.


Let's wrap this section up. There's a single important reason for Aecor to use akka-cluster under the hood: it makes it possible to retain strong consistency for every entity instance, even in a distributed deployment.

Both production-ready runtimes of Aecor rely heavily on akka-cluster. But luckily, except for the cluster configuration, it's completely hidden: Aecor wraps it up nicely in a purely functional, typesafe API.

Wire protocol

The last requirement for a distributed eventsourcing runtime is a messaging protocol. The following scenario illustrates why it's needed.

Let's say node A receives a command for booking “42”. The latest cluster gossip says that the booking “42” handler should be executed on another node, B. So node A has to transmit the command to node B, where it can be handled. When B is done, it has to hand the result back to A, so that A can fulfill the original request.

So nodes will inevitably have to communicate commands and responses to each other. All of this messaging is nicely handled by akka-sharding, but you'll need to teach it how to properly encode and decode these calls over the wire.

Aecor doesn't enforce any specific protocol format, so you can use whatever you like as long as it's consistent. To do that, you just define an instance of WireProtocol for your algebra, where WireProtocol is defined by Aecor as:
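Roughly, the shape is the following (paraphrased from memory rather than copied from the Aecor sources, so names and details may differ slightly):

  import scodec.Decoder
  import scodec.bits.BitVector

  trait WireProtocol[M[_[_]]] {
    // for each algebra method: how to encode the call and decode its response
    def encoder: M[WireProtocol.Encoded]
    // how to decode an incoming call into something that can be run against an M[F]
    def decoder: Decoder[WireProtocol.Invocation[M, ?]]
  }

  object WireProtocol {
    type Encoded[A] = (BitVector, Decoder[A])
    trait Invocation[M[_[_]], A] {
      def run[F[_]](target: M[F]): F[A]
    }
  }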

Conceptually, it's an encoder/decoder pair. Encoder and Decoder come from the scodec library, which is a nice tool for working with binary data in Scala.

The other details are probably not obvious, but we'll leave them for a later post. We don't need them right now, because Aecor provides a really nice default wire protocol, which should be enough most of the time.

To get it for your algebra, just mark it with the @boopickleWireProtocol macro annotation:
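Applied to a trimmed-down version of our Booking algebra it looks something like this (a sketch: the method set comes from the earlier parts of the series, and the import path may differ between Aecor versions):

  import aecor.macros.boopickleWireProtocol
  import cats.data.NonEmptyList

  @boopickleWireProtocol
  trait Booking[F[_]] {
    def place(client: ClientId, concert: ConcertId, seats: NonEmptyList[Seat]): F[Unit]
    def deny(reason: String): F[Unit]
    def cancel(reason: String): F[Unit]
    def status: F[BookingStatus]
    // ...the rest of the actions from Parts 1 and 2
  }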

It will generate a WireProtocol instance using the boopickle serialization library. Since these messages are not persisted, serialization can focus on speed and message size, so boopickle is a sane choice here.

If your algebra accepts or returns something special that boopickle can't derive a serializer for, the macro will fail with an error and you'll have to provide a pickler for that type manually.

Also, since the target algebra that we're going to deploy is an EitherK, we'll need to define an scodec.Codec for our rejection type. Given a WireProtocol[M] for an algebra M and an scodec.Codec[R] for a rejection R, you get a WireProtocol[EitherK[M, R, ?[_]]] for free.

The following are all the manual definitions we need for the wire protocol to work in our example app:
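At minimum that means a Codec for the rejection type. One way to build it is to back an scodec Codec with a boopickle Pickler; the boopickleCodec helper below is defined here for illustration and is not necessarily how the repo organizes it:

  import boopickle.Default._
  import scodec.bits.BitVector
  import scodec.{Attempt, Codec, DecodeResult, Decoder, Encoder}

  // An scodec Codec built out of a boopickle Pickler (illustrative helper).
  def boopickleCodec[A: Pickler]: Codec[A] =
    Codec[A](
      Encoder((a: A) => Attempt.successful(BitVector(Pickle.intoBytes(a)))),
      Decoder((bits: BitVector) =>
        Attempt.successful(DecodeResult(Unpickle[A].fromBytes(bits.toByteBuffer), BitVector.empty))
      )
    )

  implicit val rejectionCodec: Codec[BookingCommandRejection] =
    boopickleCodec[BookingCommandRejection]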

To see a complete setup, please refer to the github repo.

Deploying to a generic akka runtime

So we've gone over the components of an Aecor runtime. Now let's finally deploy something.

We'll start with the recommended approach, which is to use the Generic Akka Runtime. We've already done some preparation: defined a WireProtocol and configured an akka-cluster. Since we're not using akka-persistence, we'll need to set up a journal ourselves.

Entity key

But first, it's finally time to define the booking identity. We discussed this a little in previous posts: the behavior itself doesn't usually need its own identifier to run domain logic. The runtime is a totally different story though, so let's define a trivial BookingKey:
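Something like this will do (the shape mirrors what the series repo uses):

  final case class BookingKey(value: String) extends AnyVal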

An entity key doesn't have to be a value class and can easily be composite, but for us a String wrapper will do.

Both the journal we're going to define and the Aecor runtime have to know how to encode and decode the key. That is required for both storage and sharding to work properly. You just need to define a pair of codecs:
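For our trivial key they look roughly like this (the exact constructor names may differ slightly between Aecor versions):

  import aecor.encoding.{KeyDecoder, KeyEncoder}

  implicit val keyEncoder: KeyEncoder[BookingKey] =
    KeyEncoder.instance(_.value)
  implicit val keyDecoder: KeyDecoder[BookingKey] =
    KeyDecoder.instance(s => Some(BookingKey(s)))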

It’s very similar to how you define codecs for Map keys in circe. In this case the codecs are trivial, but for composite keys Aecor provides a small Composer utility (usage example).

Aecor journal

Journals are a nice extension point for Aecor: there's a simple EventJournal interface that you can implement to define and plug in your own journal. We'll use aecor-postgres-journal, which works on top of doobie:
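The wiring looks roughly like the sketch below; parameter names, their order and the package path are approximated from the library readme, so treat it as an illustration and check the series repo for the exact code:

  import aecor.data.{EventTag, Tagging}
  import aecor.journal.postgres.PostgresEventJournal // package path approximated
  import doobie.util.transactor.Transactor

  // Rough sketch of the journal wiring:
  val journal = PostgresEventJournal(
    transactor,                                   // doobie Transactor[F] backed by PostgreSQL
    "booking_event",                              // table name
    Tagging.partitioned(10)(EventTag("Booking")), // tagging, used by the read side (Part 4)
    bookingEventSerializer                        // protobuf-based serializer from the repo
  )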

We pass in:

  • a doobie transactor, configured with a PostgreSQL backend;
  • a table name;
  • and a serializer, which in my example uses protobuf. You can check it out here, but it's just boilerplate that could be automated with a macro.

There is also a tagging parameter, which configures the read side of our entity, so we'll discuss it in Part 4, dedicated to CQRS.

There's one nice advantage of Aecor journals over akka-persistence. While most akka-persistence plugins store all events of all entities in a single table, Aecor entities have completely independent journals.

You can have several entities with journals storing data in different tables, database instances or even different database engines — all deployed to a single Aecor runtime.

Configuring such a setup with akka-persistence plugins would be quite a challenge.

Deployment

Now we have everything we need to deploy the entity, so let's see how it's done:
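A rough sketch of the deployment is below. The method names and exact signatures are approximated (value names like behavior, journal and settings refer to things we built earlier); see the series repo for the real wiring:

  import aecor.runtime.Eventsourced
  import aecor.runtime.akkageneric.GenericAkkaRuntime

  val runtime = GenericAkkaRuntime(system)

  // join the behavior with the journal (optionally plus a snapshotting policy)
  val createBehavior = Eventsourced(behavior, journal)

  // distribute it over the cluster
  val deployBookings: F[BookingKey => EitherK[Booking, BookingCommandRejection, F]] =
    runtime.runBehavior("Booking", createBehavior, settings)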

Ok, what is going on here?

First we define a handle for our runtime, which is fairly trivial. The next line is more interesting: using the Eventsourced helper we join the behavior and the journal together. What we get back is a createBehavior function.

It can be viewed as a constructor for an entity behavior: you give it a key, and in return it gives you a behavior handle for the corresponding entity instance. You can then use that handle to run commands against this particular instance.

You can also specify a snapshotting policy if you want the runtime to periodically save snapshots of the entity state to speed up recovery.

There's some machinery involved in the Eventsourced smart constructor, but I will save the details for a later post as well.

Now that we have a behavior constructor, what's left is to distribute it over the cluster. The last statement does exactly that. You can tune sharding-related parameters (e.g. the number of shards or the passivation timeout) with the settings argument.

In return we get a suspended behavior gateway. While its signature is similar to the behavior constructor we've seen before, the gateway is much smarter. With a gateway at hand, you can issue commands on any node of the cluster, and the runtime will route them to a different node if it has to. Of course, responses will be routed back to the original node you called the handler from.

The gateway function is suspended, so you just execute that F effect somewhere “at the end of the world”, where the application starts. And that's it — you have an entry point to a fully operational, distributed eventsourced behavior!

This is a good time to play around with it. The demo repository defines HTTP endpoints to do just that.

Deploying to an Akka persistence runtime

Deployment to akka persistence runtime is almost identical to the generic solution above, just a little simpler — it doesn’t use Eventsourced to run the actions, relying on akka-persistence machinery instead.

Since it's so similar and less relevant compared to the generic runtime, I won't spend too much time here. You can refer to an example wiring in the series repo if you want to. Before the generic runtime appeared, Aecor ran on the akka-persistence-cassandra plugin, version 0.59, and for simplicity the same version is used in the example. You can pick any other plugin at almost no additional cost.

Getting rid of EitherK

The last thing I'd like to show today is a small generalization you can apply to your deployment.

As you have already seen, the entity gateway we obtain from the Aecor runtime returns an EitherK. Let's quickly recall what that is — it's just an algebra transformer that wraps all the behavior actions into EitherT.

While perfectly usable, it feels too heavy. Since many parts of our system are going to access the booking entity, it doesn't feel right to make them all deal with so many transformers.

To make it nicer, we just need to unwrap the EitherK and then unwrap the EitherT in the algebra results. There's a small conversion function in Aecor that does exactly that; it's called Eventsourced.Entities.fromEitherK:
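Applied to the gateway from the previous section, the result looks roughly like this (type parameters and alias shape approximated from the repo; deployBookings is the suspended gateway from the deployment sketch above):

  import aecor.runtime.Eventsourced.Entities

  // Short alias for the whole gateway; the exact parameter list may differ.
  type Bookings[F[_]] = Entities.Rejectable[BookingKey, Booking, F, BookingCommandRejection]

  val bookings: F[Bookings[F]] =
    deployBookings.map(Eventsourced.Entities.fromEitherK(_))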

After this transformation the raw type requires a type lambda to be written out properly, so Aecor provides an Entities.Rejectable alias that makes it all look nicer.

Finally, it's reasonable to define another short alias for the whole entity gateway type. Since it grants access to all instances of this entity, it makes sense to name it in the plural (see the Bookings[F[_]] alias in the snippet above).

What’s next?

We successfully deployed an eventsourced Booking entity to our cluster. Now, in the spirit of CQRS, we need to project booking events into a view that is optimized for doing simple and efficient queries.

We'll do that in Part 4, which is coming next. See you there, and thanks for reading!

Aecor — Purely functional event sourcing in Scala. Part 2

Hello! This is a series of posts about Aecor — a library for building eventsourced applications in Scala in a purely functional way. If you haven't yet, you might want to go through the Introduction and Part 1 first.

In this post we’re going to:

  • peel off the layers of the EventsourcedBehavior class we saw at the end of Part 1;
  • fine tune the behavior of our Booking entity;
  • learn a new monad transformer along the way.

At some point while writing this post I thought of splitting it into two parts, but in the end decided not to — if it takes too long before we launch something, people might get bored. So this post is a very long read again. It may be a good idea to read it in chunks.

But at least the next part will be all about running Aecor behaviors, as planned.

Part 2.1. Understanding Eventsourced Behavior

Last time we defined the Booking entity behavior using plain Scala and a pinch of MTL-style type classes from Aecor.

It was looking good, so we began to wrap it into something Aecor can launch, which led us to this bit of code.

In short, it does two things:

  • composes pieces of logic we wrote into a single coherent behavior instance;
  • specializes our MonadAction-ish effect F.

It will take us several steps to completely understand what’s going on. Let’s start with the effect part.

Meet ActionT

 

 

As you can see, it’s no joke. So what is ActionT?

The name hints that it's a monad transformer. Monad transformers were invented more than 20 years ago, and these days they serve as building blocks for running programs written in MTL style (with type classes like MonadState, MonadError, etc.).

As you probably guessed already, ActionT is used to run programs defined in terms of MonadAction. Let's do a quick recap of what MonadAction should be able to do (from Part 1):

  • rely on some state of type S to make decisions (we can also read this state);
  • produce (or append) events of type E as a reaction to commands;
  • return some kind of result to the caller of the action.

And now, straight to the definition of ActionT:
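Its definition looks roughly like this (reconstructed from the description that follows, so minor details may differ from the Aecor sources):

  import aecor.data.Folded
  import cats.data.Chain

  final class ActionT[F[_], S, E, A] private (
    val unsafeRun: (S, (S, E) => Folded[S], Chain[E]) => F[Folded[(Chain[E], A)]]
  ) extends AnyVal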

So, the first thing I noticed when I initially saw it is the S, E, A in the type parameters, which is pretty cool, because “Aecor” translates as sea or ocean from Latin.

Now back to the features we want from ActionT. The unsafeRun signature is pretty cryptic so let’s break it up. It’s a function of three arguments:

  • S
  • (S, E) => Folded[S]
  • Chain[E]

You should remember the Folded type from the previous post. Given these three arguments, the unsafeRun function returns a value of type F[Folded[(Chain[E], A)]].
For those of you who haven’t heard of Chain yet, it’s a collection from cats that does both append and prepend in constant time. You can freely replace it with List for the purposes of this series.

Let’s draw what we’ve got and solve the puzzle:

Now, given what we know about MonadAction and by substituting type parameters with what they represent, it should be rather clear:

“Give me an initial state S, a way to fold events E into S, and a log of events that have already happened, and I will run some action that will return an A and maybe produce some more events. So in the end I'll return a new (possibly amended) event log and the A. All of that happens inside the effect F.”

So it’s actually the same command handler in disguise! There’s a twist though. It’s a composable command handler, by which I mean you can chain them one after another.

It's possible because of the third parameter — the log of already happened events. Of course, it's not the whole history of the entity: these are the events that have been produced so far, up to this point in the handler chain's execution.

A picture would be much more expressive in explaining this:

 

So we can compose actions (or command handlers) into chains, where each individual stage takes the event log from the previous stage, appends its own events and passes it down to the next stage.

This is done for two reasons:

  • each action can run this log through the folder to get the current state (which is usually needed to do anything meaningful);
  • we need the full log produced by all actions when the whole sequence is complete.

As you can see, there are strict rules for how the event log Chain[E] should be handled; that's why the ActionT constructor is private. Not just any unsafeRun function would work.

On the other hand, the initial state and the folder function don't change throughout the execution of the whole chain; they can be completely arbitrary. The public run method on ActionT confirms all of this: you can run your action with any initial state and folder, but you have to start with an empty log.

The M word

After long explanations like this people often tend to say the M word. I’m no exception today.

So, first of all, unsafeRun returns an F[_], and we somehow expect the next action to use the stuff inside that F. This requires F to be a Monad (which is actually its whole point).

But the composition of actions itself looks monadic as well, and it actually is! So each transition on the picture above is just a call to ActionT.flatMap!

It should all click now for those who are familiar with transformers. Indeed, MonadAction is a Monad, so if we want to run MonadAction programs with ActionT, it has to be a Monad as well. And, like other monad transformers, it's only a monad when the underlying effect F is itself a monad.

As you might remember from Part 1, we found that MonadAction is quite similar to a combination of Reader and Writer. ActionT confirms this similarity: it’s indeed a Reader of initial state S and a Writer of events E.

I hope this “let's connect the dots” digression was not too boring for those of you who got it all straight away. The main takeaway here: ActionT is the engine for MonadAction programs (effectively command handlers) that accumulates the produced events along the way.

As an Aecor user, you won't actually have to deal with ActionT directly, but it surely helps to know how it works.

EventsourcedBehavior

Now it's a good time to look at EventsourcedBehavior. As I said, it just gathers the pieces of behavior into one coherent thing:
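Its shape is roughly the following (field names approximated; the actual case class in Aecor may name them slightly differently):

  final case class EventsourcedBehavior[M[_[_]], F[_], S, E](
    actions: M[ActionT[F, S, E, ?]],
    initial: S,
    fold: (S, E) => Folded[S]
  )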

If it’s the first time you see an abstraction over a tagless final algebra, you might feel this way (I surely did):

But the confusion goes away very quickly. Such a type parameter expects something of the shape Algebra[F[_]]: for example, our Booking behavior algebra would fit nicely as M in EventsourcedBehavior.

So what's inside?

Two things we've just discussed in the ActionT section: the initial state S for the behavior and a way to fold events E into the state (the folder function). So let's focus on the actions part.

So we have some algebra M and some “raw” effect F. EventsourcedBehavior interprets algebra M in a more complex effect, which is ActionT[F, S, E, ?]. Let’s try these mechanics by hand with a subset of our EventsourcedBooking algebra from Part 1.

We substitute the effect I (the letter was changed intentionally) with ActionT[F, S, E, ?] for some lower-level effect F:
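For a couple of methods the result looks something like this (illustration only; the trait name is made up, and the domain types are the ones from Part 1):

  trait EventsourcedBookingActions[F[_]] {
    def place(client: ClientId, concert: ConcertId, seats: NonEmptyList[Seat]
      ): ActionT[F, Option[BookingState], BookingEvent, Unit]

    def status: ActionT[F, Option[BookingState], BookingEvent, BookingStatus]
  }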

This is not exactly the code you get: its purpose is just to make the point. I removed the implicit requirement from the result, because it's now satisfied automatically via the MonadAction instance for ActionT.

Well, not exactly — we haven't touched rejections yet (I simplified the requirement to MonadAction for now). But putting those aside, we're good — ActionT provides everything we need. And if we generalize back:

For any algebra M that requires MonadAction, putting it inside EventsourcedBehavior satisfies that requirement.

Cool, so actions is just a named set of behavior-specific command handlers. Now we need to tune something here so that it can deal with rejections.

EitherK

The default way to embed errors into an arbitrary effect F is well known: the EitherT monad transformer. It turns F[A] into F[Either[Error, A]], which lets you embed errors into the left channel of the Either.

Aecor faces a more challenging problem, though. For EventsourcedBehavior to support rejections, it has to be able to transform an arbitrary algebra M[_[_]] into some M' so that for every F, M'[F] can embed rejections.

Sounds nuts? It’s actually simpler than it sounds. Let’s look at an example.
Given a behavior like this:
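For illustration, a stripped-down Booking algebra:

  trait Booking[F[_]] {
    def place(client: ClientId, concert: ConcertId, seats: NonEmptyList[Seat]): F[Unit]
    def status: F[BookingStatus]
  }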

we have to wrap all the F‘s into EitherT, so that every method in the algebra can embed rejections, like this:
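A sketch of what that looks like (the trait name is just for illustration):

  import cats.data.EitherT

  trait RejectableBooking[F[_]] {
    def place(client: ClientId, concert: ConcertId, seats: NonEmptyList[Seat]
      ): EitherT[F, BookingCommandRejection, Unit]
    def status: EitherT[F, BookingCommandRejection, BookingStatus]
  }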

One more time, but just the diff:

 

If we try to generalize this to an arbitrary algebra M[_[_]], we need this kind of transformation (I hope you've gotten used to the M[_[_]] thingy a little by now):

And that’s the essence. This way you can take any tagless final algebra and “teach” it to work with errors.

To save some typing and get better type inference, Aecor has EitherK:
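Its definition, paraphrased, is a tiny newtype over the transformed algebra:

  import cats.data.EitherT

  final case class EitherK[M[_[_]], A, F[_]](value: M[EitherT[F, A, ?]]) extends AnyVal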

Please note that the cats library has its own EitherK class, which is a completely different thing. Whereas cats.data.EitherK is just a coproduct where each channel is under some effect, aecor.data.EitherK is a sort of higher-kinded monad transformer.

Naming the latter EitherTK or something similar would resolve the name clash, but EitherK is simpler to type and pronounce. I haven't yet seen a context where both would be used at the same time, so this ambiguity doesn't feel like a big deal so far.

EitherK really does what it says: writing EitherK[Algebra, Rejection, F] gives you an Algebra[EitherT[F, Rejection, ?]].

Let’s apply this to EventsourcedBooking and, out of curiosity, see what effect type we’ll end up with.
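Written out with kind-projector, the combined effect is roughly (the alias name is mine, for readability):

  type BookingEffect[F[_], A] =
    EitherT[ActionT[F, Option[BookingState], BookingEvent, ?], BookingCommandRejection, A]

  // i.e. the behavior ends up holding a Booking[BookingEffect[F, ?]]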

Please don't flip your table and leave: you won't have to type these signatures when using Aecor. We're just unwrapping it all to get a deeper understanding.

In essence, this is an algebra of command handlers (powered by ActionT with the respective state and event types) that can fail with errors of type BookingCommandRejection (a capability provided by the EitherT transformer). This combined effect gets a valid MonadActionReject instance and completely satisfies the initial requirements of EventsourcedBehavior.

Looking at it again

Remember where we started? We assembled an EventsourcedBehavior for the Booking algebra:
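It looked roughly like this (constructor arguments approximated from the repo; BookingState.init and handleEvent are the fold functions from Part 1):

  val bookingBehavior: EventsourcedBehavior[
    EitherK[Booking, BookingCommandRejection, ?[_]], F, Option[BookingState], BookingEvent
  ] =
    EventsourcedBehavior.optionalRejectable(
      new EventsourcedBooking(),
      BookingState.init,
      _.handleEvent(_)
    )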

Now it should make sense to you. We’re building an ActionT-based behavior with rejection support via EitherK.

The only missing bit is the optionalRejectable smart constructor, which is nothing more than sugar:

  • it allows us to use BookingState.init and BookingState#handleEvent without lifting the result into Option;
  • it requires an EitherK-based algebra, which helps drive type inference, so that we only need the outer-level type ascription.

Here it actually makes a lot of sense to download the ticket-booking project (or Aecor itself), open it in an IDE and click through the definitions to see how the types match up.

Phew… I bet this was intense and took a bit of energy. It’s a good time to take a break before we look at how we can tune an EventsourcedBehavior.

Part 2.2. Tuning EventsourcedBehavior

So now that we're somewhat comfortable with EventsourcedBehavior, let's see how we can adapt it to our needs.

Event metadata

As I mentioned in Part 1, it's beneficial to separate essential event data from generic metadata, especially if the latter doesn't participate in decision making within the behavior. Let's see how to do it in Aecor.

Given the behavior we just constructed, we can call behavior.enrich to get a new behavior that automatically enriches every event with the metadata we specify.

Let's say we want to store a timestamp along with each event. For the enrichment to work we need only one thing — a suspended getter in F:
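A sketch of such a getter (EventMetadata here is our own tiny wrapper; its exact shape in the repo may differ):

  import java.time.Instant
  import java.util.concurrent.TimeUnit
  import cats.Functor
  import cats.effect.Clock
  import cats.implicits._

  final case class EventMetadata(timestamp: Instant)

  def generateTimestamp[F[_]: Functor: Clock]: F[EventMetadata] =
    Clock[F].realTime(TimeUnit.MILLISECONDS).map(ms => EventMetadata(Instant.ofEpochMilli(ms)))

  val enrichedBehavior = behavior.enrich(generateTimestamp[F])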

What you get is the same behavior, but with a different event type: instead of a plain BookingEvent, you get events of type Enriched[EventMetadata, BookingEvent], which is just a simple product:
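Paraphrased from Aecor:

  final case class Enriched[M, E](metadata: M, event: E)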

For every produced event Aecor will execute generateTimestamp and put the result into the Enriched envelope.

One small thing is missing, though: enrich requires the algebra baked inside EventsourcedBehavior to have a FunctorK instance. In plain words, it means that for such an algebra M we can at any time go from M[F] to M[G], given a natural transformation F ~> G.

Providing a FunctorK instance is mostly a mechanical process, and thanks to the cats-tagless project we can get it for free. We just have to annotate our tagless final algebra with @autoFunctorK:
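Something like this:

  import cats.tagless.autoFunctorK

  @autoFunctorK
  trait Booking[F[_]] {
    // ...the same methods as before...
  }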

That's it. It was really simple, and we get a metadata supply without polluting our clean events with irrelevant things. The power of composition is unlimited 🙂

Do you even lift, bro?

Now a plot twist. The whole team realizes we forgot to add booking expiration functionality.

Well, it's not that bad: we're still working out the behavior, so on the entity level it should be rather simple (you might want to refresh your memory of the Booking algebra we defined in Part 1).

First, we need an expiration event. Also, when the booking is initially confirmed, we should store the exact moment in the future when it will expire (if not paid or canceled by then):

Second, we need to make the corresponding adjustments in the algebra: add an expire action and tweak the confirm action to receive the expiration deadline.

The expiration deadline will come from an external ticket management system, so for the Booking behavior it’s just an argument. We’ll also add an expiresAt: Option[Instant] to the BookingState — we’ll need it to validate any attempts to expire the booking too early.

The last thing to do is to implement the expire action in the EventsourcedBooking version of the Booking algebra. And this is where we get stuck.

To check that expiration doesn't happen too soon, we need to get the current system time and check that it's already past the expiresAt deadline. Getting the current time is a side effect, so we'll need some kind of effectful clock. We'll pick the one from cats-effect:

But if we then try to work it all up from here, we’ll have a hard time finding a Clock[I] instance. As we remember, I is a very special effect — it supports command handlers that produce events.

But we just need a simple clock. Is there a way to avoid all of this additional complexity?

For sure. If we need a simple effect, let’s just add it:

Good. But there's another problem: once you execute your clock, you get an F[Instant]. You can't just flatMap it into an I[Something]: F and I are completely unrelated effects. If there were a way to go from F[Instant] to I[Instant], everything else would work as it did before.

It turns out that there is such a way, and it seems logical. After all, F is a much simpler effect, and the more powerful I should be able to embed F values.

For such behaviors Aecor provides the MonadActionLift and MonadActionLiftReject type classes. These are just extensions of the corresponding type classes we already know:
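Paraphrased, they add a single capability on top of what we've already seen:

  trait MonadActionLift[I[_], F[_], S, E] extends MonadAction[I, S, E] {
    // lift a plain F value into the action effect I
    def liftF[A](fa: F[A]): I[A]
  }

  trait MonadActionLiftReject[I[_], F[_], S, E, R]
    extends MonadActionLift[I, F, S, E]
    with MonadActionReject[I, S, E, R]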

This is all we need. Now we can liftF the result of our clock into I, and everything will work again. Let’s see the final implementation:

The types will work their way up now. To see how it all looks as a whole, please refer to the repo.

One may ask here:

So we embedded a clock into our behavior. But the same way we could embed any kind of effectful service, right?

Seems like too much power, where’s the line?

A fair question. Although it's possible, I would keep it to simple local side effects like Clock or Logger. Embedding something more complicated doesn't make a lot of sense — if it's a database call or an external request, you don't get any additional atomicity guarantees anyway. And the downside is significant — you clutter your behavior with logic that could be executed elsewhere (with the results then passed to the behavior as simple arguments).

Still, the possibility is there, so if you make a really good case — go for it! I can imagine some external validation being implemented this way. In my opinion, as long as this external service doesn’t populate your events, you’re fine.

Time is out

Well, not really. But this is the last tweak I’ll cover today. It’s a really simple and neat example of how powerful these effect-polymorphic behaviors are.

It's not really relevant in this case, but let's say we have a behavior that can take a long time to handle commands. Maybe it does some really heavy calculations or goes to a database for validation purposes.

In this case we'd want to limit the maximum response time and get a timeout if command processing takes more than 2 seconds. With a less composable solution we'd have to bake this into the behavior or handle it at an outer level.

With Aecor you keep your original behavior intact and just run it through a natural transformation to get a modified version. This requires mapK, and EventsourcedBehavior is a FunctorK as long as the underlying algebra M is a FunctorK too.

Let’s define a natural transformation that implements a timeout cutoff:
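A sketch of such a transformation (CommandTimedOut is an assumed error value, defined elsewhere in the app):

  import cats.~>
  import cats.effect.{Concurrent, Timer}
  import scala.concurrent.duration._

  def timeoutCutoff[F[_]](implicit F: Concurrent[F], timer: Timer[F]): F ~> F =
    new (F ~> F) {
      def apply[A](fa: F[A]): F[A] =
        // fail any action that takes longer than 2 seconds
        Concurrent.timeoutTo(fa, 2.seconds, F.raiseError(CommandTimedOut))
    }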

We leverage some goodies from cats-effect here, namely Concurrent.timeoutTo. This natural transformation takes any Concurrent effect F and produces an effect of the same type F, where any action that takes more than 2 seconds will raise an error. We can do that since Concurrent extends MonadError.

The only thing left to get a behavior with timeouts is to run the original behavior through this natural transformation:
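Something along these lines (using the FunctorK-provided mapK):

  val timedOutBehavior = bookingBehavior.mapK(timeoutCutoff[F])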

And that’s it. Pretty concise, isn’t it?

Conclusion

This was a really dense post with a lot of code to crunch. I really appreciate the time you spent reading and hope you extracted some value out of it.

Next time we’ll finally launch our behavior on a real cluster with a real event journal. Stay tuned.

Aecor — Purely functional event sourcing in Scala. Part 1

Hello! This is a series of posts about Aecor — a library for building eventsourced applications in Scala in a purely functional way. If you haven't yet, you might want to go through the Introduction first.

In this post we'll explore entity behaviors in general, and how to create eventsourced behaviors with Aecor. I'll also delve into design practices and try to answer questions as they arise, so grab a coffee — it's not a quick read 🙂

Part 1. Defining entity behavior

As we agreed in the introduction, we’re building a ticket booking system.

Booking tickets is the core domain of our imaginary business, and with all its complexity and inherent temporality, the “Booking” entity is a good candidate to be eventsourced.

But why exactly Booking? How does one arrive at this decision? Let's dwell on this topic for a bit.

Picking entities

Event sourcing works really well with Domain Driven Design, and the term Entity comes from DDD. In the famous “Blue Book”, an entity is defined as something that:

  • has a distinguished identity, which allows differentiating two instances of an entity even if all their attributes are the same;
  • usually obeys some form of lifecycle, according to business rules.

One can easily define several entities in a ticket booking context, and Booking is what first comes to mind:

  • it must have some unique identifier, so that a client can refer to it at any time (identity)
  • as the booking process goes forward, it goes through several distinct states (lifecycle)

So we selected Booking as one of our entities. This alone doesn't imply event sourcing — in classic DDD an entity is backed by a regular CRUD-ish repository.
But if we see value in eventsourcing some part of the system, entities are usually a natural fit for a consistency boundary to wrap around.

Eventsourcing an entity (or several entities wrapped into an Aggregate) usually gives the best trade-off between granularity (which gives scalability) and consistency, meaning that most of our business invariants can be checked within a single consistency boundary.

Behavior interface

So we decided to eventsource our booking entity. Time to define some behavior for it!

The first step, which actually doesn't require Aecor or any other library, is a Tagless Final algebra for your entity. Let's put it like this:
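A sketch of the algebra, reconstructed from the walkthrough below (the wrapper types ClientId, ConcertId, Seat, Ticket, PaymentId and BookingStatus are simple domain types defined in the repo):

  import cats.data.NonEmptyList

  trait Booking[F[_]] {
    def place(client: ClientId, concert: ConcertId, seats: NonEmptyList[Seat]): F[Unit]
    def confirm(tickets: NonEmptyList[Ticket]): F[Unit]
    def deny(reason: String): F[Unit]
    def cancel(reason: String): F[Unit]
    def receivePayment(paymentId: PaymentId): F[Unit]
    def status: F[BookingStatus]
    def tickets: F[Option[NonEmptyList[Ticket]]]
  }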

If you’re not familiar enough with Tagless Final, there’re lots of good posts on the web. I can recommend this one by @LukaJacobowitz, or my own Writing a simple Telegram bot with tagless final, http4s and fs2.

So what do we see here that our booking can do?

  • We can place a booking for 1 or more specific seats in a concert on behalf of a client.
  • Booking can be confirmed, which means that seats are reserved and prices are determined. We define an explicit confirmation step here, because the actual concert data management and seat reservation are done in another system. Confirmation is going to happen through asynchronous collaboration with that system.
    That system also manages pricing, so when a booking is confirmed, seats become tickets — in our case a ticket is just a seat with a price attached.
  • By the same token, if something goes wrong (e.g. the seats are already reserved), the booking is denied with a reason.
  • The client can cancel the booking at any time.
  • Receiving payment is an obviously important lifecycle action for a booking.
  • And our entity will expose some parts of its internal state, namely status and tickets (the latter optional, because there are no prices until the booking is confirmed).

Just a few lines of code, but quite a bit of thought and effort. And also questions! I'll try to answer the ones that most probably arise at this point.

This algebra definitely looks like something with internal state. Why so?

It’s true, and for reasons:

  • We focus on behavior. The internal state that fuels it is secondary, and we don't want to tie the behavior algebra to it.
  • When some other component calls an action of this behavior, it shouldn't be bothered with the booking's internal state either.

I usually think of it like this: an instance of this Booking algebra represents a specific booking entity instance in its current state, and the methods are actions that you can perform on that instance.

Why F[Unit] all over the place? And where are the errors? You can’t pay for a denied booking, for example.

Fair questions. Unit here represents some kind of “Ack” response, meaning that the action succeeded. The booking will probably change inside, but we don't care. Returning Unit in such cases is very common in Tagless Final.
As for errors — in the good tradition of MTL, we delegate error handling to our effect F.

By the way, at the moment it looks like most TF algebras out there, where F ends up being something like IO or Task. Spoiler: that won't be the case when we get to the eventsourced behavior.

If these actions are for a particular booking, what is place doing here? Don’t we create a new booking by placing it?

This is an interesting one. When you do traditional CRUD, the creation of an entity instance is separate from any kind of logic it might (or might not) have.
But if we move completely into behavior land, then there's definitely some kind of business action that brings the entity into existence. In our case this is the place action. It's an integral verb of our domain and a part of the entity lifecycle, so we treat it accordingly — it belongs to the entity algebra.

Behavior actions, MTL-style

I hope we’re ready to move forward and finally unpack some Aecor typeclasses. Let’s take a look at them.

The core one is MonadAction:
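Paraphrased, it looks like this:

  import cats.Monad

  trait MonadAction[F[_], S, E] extends Monad[F] {
    def read: F[S]                        // read the current state
    def append(es: E, other: E*): F[Unit] // produce one or more events
    def reset: F[Unit]                    // drop everything accumulated so far
  }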

It provides the basic building blocks for actions. An Aecor action describes how an entity reacts to incoming commands, which makes it very similar to the command handler concept. As the signatures might have suggested, actions:

  • rely on some state of type S to make decisions (we can also read this state);
  • produce (or append) events of type E as a reaction to commands;
  • return some kind of result to the caller of the action.

So any effect F that can do these things can be used to describe actions, and thus can have an instance of MonadAction.

We will also need errors. In the context of command handling, an error means that a command can't be executed given the current state of the entity. For example, one should not be able to pay for a denied booking. In this case we say that the receivePayment command is rejected, and the action results in a rejection.

Aecor provides a more powerful version of MonadAction that can work with errors. It's called MonadActionReject:
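Paraphrased:

  trait MonadActionReject[F[_], S, E, R] extends MonadAction[F, S, E] {
    def reject[A](r: R): F[A]
  }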

It relates to MonadAction the same way MonadError relates to Monad. Usually your entities will need rejections, but sometimes there's no such need — that's where you can get away with the simpler MonadAction.

Before we implement our actions, we'll have to agree on the S, E and R types for our eventsourced booking.

Events

Implementing event sourcing is inherently harder than more traditional state-based approaches. One of the reasons is that in addition to state you will need events (and in our case also rejections).

Mining proper events from the domain is a big topic in itself. Let’s say we already had an eventstorming session with our domain experts and came up with the following events:
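A sketch of the event ADT, with names reconstructed from the text (the repo may differ in details):

  import cats.data.NonEmptyList

  sealed trait BookingEvent extends Product with Serializable
  final case class BookingPlaced(client: ClientId, concert: ConcertId, seats: NonEmptyList[Seat])
    extends BookingEvent
  final case class BookingConfirmed(tickets: NonEmptyList[Ticket]) extends BookingEvent
  final case class BookingDenied(reason: String) extends BookingEvent
  final case class BookingCancelled(reason: String) extends BookingEvent
  final case class BookingPaid(paymentId: PaymentId) extends BookingEvent
  case object BookingSettled extends BookingEvent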

BookingPaid and BookingSettled are distinct events, because some bookings are free and can be settled without payment.

Notice that we're back to no-dependency mode: these events are completely arbitrary and library-agnostic — no marker traits or similar hacks. Maximum composability.

Also, we don't put any identity information or metadata (e.g. timestamps) here. Aecor provides a way to decouple business-related data from metadata to make events cleaner. We'll see later how you can enrich your events with metadata, and we'll discuss identity soon as well.

State

Next, we'll need our entity to keep some state inside. We should not fall into the trap of thinking in database schemas here. The purpose of this state is not to map onto tables or provide efficient queries — it's part of your domain model, so it should:

  • be readable and use ubiquitous language;
  • be rich enough for expressive command and event handling;
  • support the whole entity lifecycle.

We’ll use the following state for our entity:
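A sketch of the state, with the field set reconstructed from the text:

  final case class BookingState(
    clientId: ClientId,
    concertId: ConcertId,
    seats: NonEmptyList[Seat],
    tickets: Option[NonEmptyList[Ticket]],
    status: BookingStatus
  )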

tickets is optional, because we don't have seat prices for the whole life of the booking — we only get them at confirmation. A more typesafe way to encode this would be to put a non-optional tickets field into all the statuses where tickets have to be present. Here, for simplicity, we just put an Option at the state root.

And again — our state is totally library agnostic.

Identity in state and events

A fair question here would be:

You say a lot about identity, but where the hell is the bookingId?

This is a neat idea I first heard from Denis Mikhaylov: in general, an entity should not need identity information to handle commands. You definitely need some kind of identifier to route a command to the correct entity instance, but after that the business logic doesn't usually care.

Moreover, when it appears that the chosen identifier is still required by the business logic, you can most probably decompose it into two parts: pure identity, and something that the command handlers need to do their job. Then you move the former out of your events and state, keeping only the latter.

I’ve implemented and seen this idea in action, and I find it awesome. Separation of concerns all the way down. Answering the question — we’ll definitely see bookingId later, but it’s not relevant for our behavior.

Rejections

I won't spend too much time on rejections. A simple enum is usually enough, but nobody stops you from enriching your rejections with some data. Here's what we have for booking command rejections:
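A sketch with illustrative names only (the repo's exact rejection set may differ):

  sealed trait BookingCommandRejection
  case object BookingAlreadyExists extends BookingCommandRejection
  case object BookingNotFound extends BookingCommandRejection
  case object TooManySeats extends BookingCommandRejection
  case object BookingIsDenied extends BookingCommandRejection
  case object BookingIsAlreadyCanceled extends BookingCommandRejection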

Implementing Actions

We’re ready to implement actions for our eventsourced behavior. We’ll start by requiring our effect to be a MonadActionReject:
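The skeleton looks something like this:

  import cats.implicits._

  class EventsourcedBooking[F[_]](
    implicit F: MonadActionReject[F, Option[BookingState], BookingEvent, BookingCommandRejection]
  ) extends Booking[F] {
    import F._ // brings read, append, reject into scope
    // command handlers go here
  }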

Our ADTs from the previous sections took their respective places, with one quirk: the state is wrapped into Option. This is where we come back to the trade-off of having the place verb in our behavior algebra: until the booking is placed, there's no trace of it in the system, and hence no state.

It’s a common thing in event sourcing: very often there’s some kind of initial event that moves the state from None to Some(...). At this level we have to accept this and express it in our types.

Let's implement the place action:
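A sketch of the handler (the validation rule and rejection names are illustrative; the DSL comes from the import F._ in the class body above):

  def place(client: ClientId, concert: ConcertId, seats: NonEmptyList[Seat]): F[Unit] =
    read.flatMap {
      case Some(_) =>
        // this particular booking was already placed
        reject(BookingAlreadyExists)
      case None =>
        if (seats.size > 10) reject(TooManySeats) // made-up validation, for illustration
        else append(BookingPlaced(client, concert, seats))
    }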

Let’s walk through this code:

  1. Import the MonadActionReject DSL into scope.
  2. Use read to get the current state of this booking entity.
  3. If something is already there, it means that this particular booking was already placed and we can't re-place it: reject the command.
  4. If it was not placed, we perform some validation and either reject the command or append a BookingPlaced event.

Congratulations, this is our first command handler!

Aside on MTL.

We can flatMap in F because MonadAction extends Monad. This gives us a lot of power in defining our effectful actions, especially when other effects come into play (we'll see an example later).

MTL fans may have noticed that MonadAction[F, S, E] is very similar to a combination of MonadReader[F, S] and MonadWriter[F, E]. Rejections add MonadError[F, R] on top. A notable exception is the reset combinator, which adds a remote flavor of MonadChronicle: it allows dropping all the accumulated events and starting over from a clean slate.

All of this is not accidental — it's just the nature of command handlers: they have to read state, write events and raise rejections. So MonadAction could probably “extend” these MTL typeclasses… but so far no practical benefit has been found, and Monad alone is enough.

More handlers

Let’s complete the actions for eventsourced booking.

Let's walk through the confirm action; the others are pretty similar.

  1. Unlike place, confirm runs on an existing booking and should be rejected for a booking that has not yet been placed. This is handled in the status method, which the confirmation action calls into.
  2. After the booking is confirmed, if the tickets are free we can settle the booking immediately. Notice how regular monadic combinators are used to do that.
  3. Sometimes the handler doesn't have to do anything and just acks (e.g. on a double confirmation). The ignore alias is defined for better readability in these cases.

Experienced eventsourcing practitioners would say that this is only half of the story. Our behavior produces events, but we haven’t specified how the state would change in reaction to these events.

Folding events

It's no secret that eventsourcing is conceptually just an asynchronous foldLeft over an infinite stream of events. Obviously, we still lack a folding function for this to work.

Actually, given the optionality of our entity state, it makes sense to define two functions:

  • one for initialization, where we go from nothing to something;
  • a second one for the more regular lifecycle transformations, from one existing state to another.

We'll define both on our BookingState, since folding events is one of its direct responsibilities:
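A sketch of the initialization half (status names are illustrative; the handleEvent method on BookingState itself handles the remaining lifecycle transitions in the same manner):

  import aecor.data.Folded
  import aecor.data.Folded.{Impossible, Next}

  object BookingState {
    // from nothing to something: only BookingPlaced can initialize a booking
    def init(e: BookingEvent): Folded[BookingState] = e match {
      case BookingPlaced(client, concert, seats) =>
        Next(BookingState(client, concert, seats, None, BookingStatus.AwaitingConfirmation))
      case _ => Impossible
    }
  }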

Here we face an eternal problem of eventsourcing: handling illegal folds. Usually the lifecycle implies that some events can only happen in particular states. For example, we should never receive a BookingDenied event for a booking in the Settled status.

Command handlers must uphold such invariants, so seeing an illegal fold at runtime is a programmer's error. It's very hard to carry this knowledge into the fold function, especially in a way where the compiler would allow us to write only the folds that make sense and will actually happen.

Pulling that trick off would probably require much more complex type signatures and a totally different structure. The payoff is nice, but not worth the effort: for a properly designed aggregate of normal size, code review is enough.

Aecor provides a specialized Option-like type called Folded[A] to account for illegal folds:
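Paraphrased from aecor.data:

  sealed abstract class Folded[+A] extends Product with Serializable
  object Folded {
    final case class Next[+A](a: A) extends Folded[A]
    case object Impossible extends Folded[Nothing]
  }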

You can see it wrapping the fold result in the functions we defined earlier.

⚠️ A timeless warning! 

Never side-effect in your event handlers!

Always worth mentioning. Aecor is as explicit about it as it can be — everywhere it needs a fold function, that function has to be effect-free.

In Haskell that would be enough, but not in Scala. Just keep doing pure FP and you’ll be fine 🙂

Bringing it all together

Now we're finally ready to wire it all up into something Aecor can launch, which is (you don't say) EventsourcedBehavior.

Oh well… I guess this is enough for now. It was a long read, and the signature above screams for a fresh head. So let's call it a day and dive into EventsourcedBehavior next time.

Please post your feedback in the comments, and thank you all for reading!

Aecor — Purely functional event sourcing in Scala. Introduction

Hello! I'm starting a series of posts about Aecor — a library for building eventsourced applications in Scala in a purely functional way.
My ambitious plan includes not only providing a comprehensive walkthrough of this great tool, but also to:

  • discuss common event sourcing topics and how Aecor approaches them;
  • explain how Aecor works under the hood;
  • and, of course, build a working app! 🙂

Introduction

Aecor is more than 2 years old and is written entirely by Denis Mikhaylov (@notxcain). I've been following the project since its early days, and recently got a chance to work at Evotor, where Denis's team runs a couple dozen Aecor-based services in production.

It's super exciting to see advanced FP projects like this one being deployed in real production for a real business. What might look like a playground with a pile of fancy FP constructs is actually a battle-tested solution with a well-thought-out, clear and composable interface.

Unsurprisingly, Aecor has always been an early adopter of cutting-edge FP tech in Scala. While reading its code, you can find a lot of idiomatic and powerful applications of cats, cats-effect, fs2 and other Typelevel libraries. I should also mention the Tagless Final pattern, which Aecor leverages in truly interesting ways.

All that power is used to give you another thing that never ceases to fascinate me, which is Event Sourcing. A lot has been written about this technique, and it can give you unmatched powers in many contexts. You definitely shouldn't eventsource everything, but when you have an entity that is a good fit, Aecor will do most of the heavy lifting for you.

I've been into event sourcing in Scala for several years now, doing it both as a hobby and professionally. Although I can't call myself an expert, I can still fully appreciate the amount of knowledge and effort that Denis has put into Aecor.

Now that I've had some exposure to Aecor in production, I'm even more excited about it. Working in a team that is extremely experienced in running eventsourced apps, I'm also learning a lot every day. And this is probably a good time for a post series 🙂

What Aecor gives you

The series is structured around the capabilities that Aecor gives the developer, so let's briefly go over them.

One of the most exciting parts of event sourcing is defining behavior. I believe that when designing software, behavior is what you should start with. Focusing on behavior instead of the database schema is also at the roots of Domain Driven Design, and Aecor follows that principle.

Specifically, Aecor provides a set of MTL-style typeclasses that can be composed to define different flavors of eventsourced behaviors. We'll see how this works in detail in Part 1 and Part 2 of the series.

Next, you would probably want to run your behavior somehow. The whole scalability story of event sourcing is based on the ability to have small, isolated islands of strong consistency. In simple words, you need a guarantee that for any single entity there's no concurrent command processing. This is known as the Single Writer Principle, and in a distributed system it requires consensus.

When you need consensus, the only Scala-native answer is akka-cluster. Its sharding module is a perfect fit for a scalable eventsourced system. Aecor allows you to launch your behaviors on top of akka-cluster, and in Part 3 of the series we'll find out how to do that, as well as:

  • how Aecor isolates your purely functional and typed code from not so functional and typed Akka actors;
  • what advantages the Aecor runtime has compared to akka-persistence (Akka's own event sourcing solution);
  • alternative ways of implementing the single writer, specifically the ongoing R&D around a Kafka-based runtime, where consensus is delegated to Kafka partitions.

Part 4 of the series is about the building blocks for CQRS that you get with Aecor. It's well known that CQRS is a natural fit for event sourcing, so it would be strange for an event sourcing toolbox not to have a couple of CQRS screwdrivers.

In this part we'll build a streamed view (also called a projection) of our entity.

Part 5 is not directly related to Aecor. We’ll discuss the Process Manager pattern, which is a very powerful tool to orchestrate eventsourced entities and other parts of the system. It fits naturally into Aecor-based apps, so I decided to dedicate a separate chapter to it.

By then you'll know everything you need to build solutions with Aecor, so it will be a good time to take a look under the hood: in Part 6 we'll take Aecor apart gear by gear to see how it works and discuss the design choices made.

What we’re going to build

Usually event sourcing examples are about transferring money or doing e-commerce. Instead, we'll build a simple concert ticket booking system. Although real systems are many times more complex, we'll try to implement some interesting, non-trivial business rules. This is by no means a guide to building booking systems — the requirements are artificially crafted and may look awkward to real domain experts. But they suit the purpose of the series well, which is to demonstrate Aecor on a not-too-trivial app.

You can check out the finished solution for this series in the github repo. Follow the readme instructions if you want to launch it and play around (or try to break it).

Installing Aecor

Just to get you started, here's how to wire Aecor into your build (we'll discuss specific modules later in the series):
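A rough sketch of the sbt wiring; the module names and the current version are best taken from the Aecor readme, so treat the artifact names and version below as assumptions:

  val aecorVersion = "0.18.0" // assumed; check for the latest release

  libraryDependencies ++= Seq(
    "io.aecor" %% "core"                     % aecorVersion,
    "io.aecor" %% "akka-cluster-runtime"     % aecorVersion,
    "io.aecor" %% "boopickle-wire-protocol"  % aecorVersion
  )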

Also make sure that the partial-unification flag is turned on.

And that's it for the introduction. See you in Part 1, where we're going to define the behavior for our booking entity.

Reverse State Monad in Scala. Is it possible?

Hello all!

In this post we’re going to have some fun with a mind-breaking thing called Reverse State, and explore the limits of laziness in Scala along the way.

When I see something interesting implemented in a “foreign” programming language, I often have a desire to port it to Scala – just out of pure curiosity about how it would look. And sometimes using a familiar language also allows a deeper understanding of the concepts presented. Some time ago I did this with a great book called “Neural Networks and Deep Learning”: here are most of the exercises from the book, written in Scala.

This time a totally different thing caught my eye. It was a really nice article about the Reverse State monad and its implementation in Haskell. I had never heard about it before, so implementing it in Scala seemed like an exciting exercise.

And it didn't disappoint, although the outcome was not the one I expected 🙂 So I decided to make it a story instead of just plain code. I'll use the technique that Eugene Yokota applied in his “Learning Scalaz”: we'll follow the source (in Eugene's case it was Learn You A Haskell For Great Good) piece by piece, discussing and writing the code along the way. Let's go!

Prerequisites

To really follow along, the reader should be familiar with the Monoid and Traverse typeclasses, as well as with the State monad in Scala. Also, a quick read of the original article won't hurt.

Warmup

The original article has a big introductory part dedicated to different ways of scanning a data structure. Let's not skip this block and use it as a warmup.

Given a list of Ints, can you produce a cumulative sum of those integers? For example, if we had the list [2, 3, 5, 7, 11, 13], we want to have [2, 5, 10, 17, 28, 41].

There are actually many different ways to write this function. Depending on your taste on imperative programming, you can choose anywhere between highly imperative ST-based destructive updates and an idiomatic functional style.

This should be simple, there’s scanLeft in Scala std lib:
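Something like this:

  def cumulative(xs: List[Int]): List[Int] =
    xs.scanLeft(0)(_ + _).tail

  cumulative(List(2, 3, 5, 7, 11, 13))
  // List(2, 5, 10, 17, 28, 41)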

 

For example, what if you want to produce a cumulative sum that is accumulated from the right? So if we had the list [2, 3, 5, 7, 11, 13], we want to have [41, 39, 36, 31, 24, 13].

Pretty much the same thing here:
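  def cumulativeR(xs: List[Int]): List[Int] =
    xs.scanRight(0)(_ + _).init

  cumulativeR(List(2, 3, 5, 7, 11, 13))
  // List(41, 39, 36, 31, 24, 13)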

 

As a Haskell programmer, we have the instinct to generalize things.

Say no more. Scala developers love it too. We’ll use type classes from cats, but scalaz would work the same way:
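The generalized signatures we are after (implementations to follow):

  import cats.{Monoid, Traverse}

  def cumulative[F[_]: Traverse, A: Monoid](fa: F[A]): F[A] = ???
  def cumulativeR[F[_]: Traverse, A: Monoid](fa: F[A]): F[A] = ???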

But how might we implement such a function? Let’s consider cumulative first. What we really need is to keep track of the running sum as we traverse, and then returning the running sum as the new value. The State monad then becomes helpful.

State is a well-known concept in Scala, so we can easily follow up with our implementation of cumulative:
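  import cats.{Monoid, Traverse}
  import cats.data.State
  import cats.implicits._

  def cumulative[F[_]: Traverse, A: Monoid](fa: F[A]): F[A] =
    fa.traverse { a =>
      for {
        sum <- State.get[A].map(_ |+| a) // running sum including the current element
        _   <- State.set(sum)
      } yield sum
    }.runA(Monoid[A].empty).value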

Some comments are required here, I guess. “Traversing with State” is a powerful technique for going over a data structure while accumulating information along the way. Processing each next element allows you to modify the accumulated State “effect”. In this case we're just accumulating the running sum (according to the provided Monoid) and using that same sum as the result value of the state computation.

runA is analogous to evalState in Haskell – it evaluates the thing and returns just the result value (ignoring the accumulated state). And since, for stack safety reasons, State computations in cats are wrapped into Eval, we have to run it to get our value out.

OK, now, with the original article's help, let's try to implement cumulativeR with Reverse State.

Enter the reverse state monad

As mind-boggling as it is, let’s try to digest this definition:

The reverse state monad, on the other hand, has the same API, except that you can set the state, so that the last time you ask for it, you will get back the value you set in the future.


Oh man… Well, let's at least try to port the provided implementation to Scala. But I'll change two things in comparison to the original:

  1. I’ll swap the results in the signature of the runF function to be consistent with normal State in cats, where the state is returned on the left and the value on the right.
  2. To implement cumulativeR we only need an Applicative, so I won’t try to provide a Monad instance for ReverseState at this point.
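Here is a sketch of it (the ? syntax assumes the kind-projector plugin):

import cats.{Applicative, Eval}

final case class ReverseState[S, A](runF: Eval[S => Eval[(S, A)]]) {
  def run(s: S): Eval[(S, A)] = runF.flatMap(_(s))
}

object ReverseState {
  implicit def applicative[S]: Applicative[ReverseState[S, ?]] =
    new Applicative[ReverseState[S, ?]] {
      def pure[A](a: A): ReverseState[S, A] =
        ReverseState(Eval.now((s: S) => Eval.now((s, a))))

      // the state flows in reverse: the argument `fa` sees the incoming state,
      // and `ff` sees the state that `fa` produced
      def ap[A, B](ff: ReverseState[S, A => B])(fa: ReverseState[S, A]): ReverseState[S, B] =
        ReverseState(Eval.now { (s: S) =>
          fa.run(s).flatMap { case (s1, a) =>
            ff.run(s1).map { case (s2, f) => (s2, f(a)) }
          }
        })
    }
}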

So this is our ReverseState applicative. We’re drowning in Evals, but that’s the cost of stack safety: everything here is really similar to the original State in cats, except for the ap function.

And, actually, no “clever use of laziness” is happening in the ap. It seems like it will show up in flatMap, but so far we’re fine without it: the cumulativeR implementation already works:
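A sketch of cumulativeR built on top of it:

def cumulativeR[A: Monoid](xs: List[A]): List[A] =
  xs.traverse { a =>
      ReverseState[A, A](Eval.now { (acc: A) =>
        val sum = acc |+| a
        Eval.now((sum, sum))
      })
    }
    .run(Monoid[A].empty)
    .value
    ._2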

We can check that its output is equivalent to the scanRight-based implementation:
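For example:

val xs = List(2, 3, 5, 7, 11, 13)
cumulativeR(xs)             // List(41, 39, 36, 31, 24, 13)
xs.scanRight(0)(_ + _).init // List(41, 39, 36, 31, 24, 13)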

Of course, due to laziness, similar example in Haskell will not calculate anything until we explicitly ask for an element of the list or trigger the evaluation somehow else.

Let’s go ahead and implement the Scan generalization as presented in the original.

Can we do better? Can we generalize this to more kinds of “cumulative” operations? What if, instead of a simple running sum, what if we want a running average? Or a running standard deviation? Or some entirely new thing such as the running maximum multiplied by the minimum? The only difference between all of those tasks is that the specific state transforming function (the function that was passed to ReverseState) is different.

Since we don’t have proper universal quantification in Scala, I’ll just lift the x into the type parameters list and name it S (state). I could make it closer to the original using shapeless polys, but that’s not the topic of the post.
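A rough sketch of the generalization (the name and exact shape below are mine, not the original Scan encoding):

import cats.data.State

def scanWithStateR[S, A, B](step: A => State[S, B])(init: S)(xs: List[A]): List[B] =
  xs.traverse(a => ReverseState[S, B](Eval.now((s: S) => step(a).run(s))))
    .run(init)
    .value
    ._2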

Here, we simply unwrap a given state monad action, wrap it again in our ReverseState, do the traversal, then unwrap it again.

I find it beautiful! And it works, although I decided not to present standard deviation and max*min scans here. The former would require a lot of math and the latter needs proper composition abstractions for Scan, which fall out of the scope of this post.

So that’s it! We implemented everything introduced in original post, and we did it in Scala! Except for…

FlatMap! Where is my FlatMap ???

The true power of state lies in the ability to sequence stateful computations using bind (or flatMap as we know it in Scala). But does it work for ReverseState?

In Haskell it definitely does. Laziness of Haskell runtime allows bind to be a finite computation. Let’s take a closer look to the definition from the original article:

It’s clear that there’s a circular computational dependency between future and a: each of them is calculated in terms of the other. But that is fine: as long as we operate on finite data, at some point the next “future” state won’t be needed, and the Haskell runtime will evaluate only as much as required to produce the result.

So what about Scala?

I would be happy to be proven wrong, but after hours of thought and experiments, after trying to wrap pretty much every tiny thing in Eval, I came to the conclusion that there’s no way to implement flatMap for ReverseState in Scala.

Although there’s a way to encode a circular dependency in Scala, there has to be an explicit exit from the “loop”. In other words, such a circular computation will only terminate in Scala if, under some runtime condition, the dependency disappears. The reason is simple: the JVM runtime is strict, so it can’t suspend computations that are not needed right now.

This restriction still allows some pretty interesting laziness tricks, like the loeb function, for example. But let’s take a look at how an implementation of flatMap for ReverseState might look in Scala:
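Here is one way it could be written (a sketch: it compiles, but it cannot terminate):

// m needs the state produced by f(a), while f(a) needs the value produced by m
def flatMapRS[S, A, B](m: ReverseState[S, A])(f: A => ReverseState[S, B]): ReverseState[S, B] =
  ReverseState(Eval.now { (s: S) =>
    lazy val ma: (S, A) = m.run(future).value   // needs `future`...
    lazy val fb: (S, B) = f(ma._2).run(s).value // ...which needs the value produced by `ma`
    lazy val future: S  = fb._1
    Eval.now((ma._1, fb._2))
  })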

The circular dependency in the result is unconditional – the next leg of calculation is created regardless of any previous results.

Eval won’t help here, because to work inside Eval we need to sequence it with flatMap. So we won’t even be able to construct our Eval computation, since it would require circularly dependent flatMap calls on Eval. The flatMap calls themselves are eager, and there’s no way to avoid that.

So, depending on whether we wrap the result into Eval.defer, we either get an infinite loop or a stack overflow for programs that involve flatMap-ing ReverseState.

Seems like we reached the limits of laziness in Scala here.

Update

There’s one case though where flatMap for ReverseState will work properly in Scala. It’s when your state type S is a lazy data structure (a standard Stream, for example).

It may seem like some random exceptional fact, but actually it’s the same case of providing the runtime with a condition to stop evaluation and break the circular computational dependency. This time it’s just less explicit and takes the form of Stream‘s laziness.

Thanks to Oleg Nizhnik (@odomontois) for pointing me in this direction.

Conclusion

In this post we found out, that ReverseState is not a Monad in Scala. Again, I would really love to be proven wrong here, so if you happen to find a working instance – please, ping me!

It’s not a Monad, but it’s an Applicative, which means we still can use it in some meaningful computations 🙂

As an example of such, we looked at right-to-left stateful traversals. Big thanks to Zhouyu Qian from Capital Match for his post about ReverseState in Haskell, that served as a foundation for the post you just read.

Thanks for reading!

Code

Interactive playground with all of the presented code is available here.

Writing a simple Telegram bot with tagless final, http4s and fs2

Hello all!

Recently I’ve started diving in fs2 and http4s. They looked so awesome, that to properly introduce myself I decided to implement something interesting. A bot for Telegram messenger seemed like a nice idea. So here’s a small tutorial on how to implement a bot in a purely functional way using tagless final encoding, fs2 for streaming and http4s client to talk to telegram API.

Most of the code snippets here have comments, that explain in more detail what’s going on. I encourage the reader to not skip them — many small details are not covered in the text. Otherwise it would be twice as large.

Disclaimer 1: There are feature rich telegram bot libraries out there. If you need something to be done quickly and you don’t mind bringing akka to your dependencies — you’ll be better off with those solutions.

Disclaimer 2: I’m in no way an expert in either fs2 or http4s, so if you find a more optimal way to do something presented here, please leave me a comment! 🙂

“Gimme da code!”. Here’s the github repo.

So what are we going to build?

A todo-list bot. To not overload the tutorial with specifics of Telegram Bot API, we’ll make it very simple:

  • It will keep a separate todo-list for each chat (either personal or group one)
  • The bot will be “long polling” the messages that were sent to it. I chose long polling because it’s simpler to implement and, more importantly, it works in development mode.
  • The interface will look like this:
    • /show command will make the bot answer with the current todo-list.
    • /clear command will erase all the tasks for this chat
    • any other message will be interpreted as an instruction to add a new todo-list item (with message content used as list item content)

So in the end interaction with the bot should look like this:

User interface of a simple todo-list bot

 

Let’s proceed to designing the algebras.

Designing Algebras

Despite the simplicity of the bot, there’s quite a bunch of algebras we’re going to operate with:

  • A simplified Telegram Bot API algebra, that will only contain the requests we need
  • Some kind of storage algebra, that will handle the storage of our todo-lists
  • Logger algebra for purely functional logging
  • Higher-level todo-list bot algebra, that will be composed out of other algebras above

Logger algebra

We’ll start with the simplest one. Logging is quite a common task, and there are projects already that provide logging algebras out of the box. We’ll use log4cats.

For those who are wondering, a simplified example of Logger algebra would look like this:
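Something like this (a simplified sketch, not log4cats’ actual trait):

trait Logger[F[_]] {
  def info(message: String): F[Unit]
  def warn(message: String): F[Unit]
  def error(message: String): F[Unit]
}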

Log4cats provides several implementations, we’ll pick slf4j.

Storage algebra

Storage algebra should also be quite simple to digest:
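A sketch of it (ChatId and Item are simple aliases here; the exact names are mine):

type ChatId = Long
type Item   = String

trait TodoListStorage[F[_]] {
  def addItem(chatId: ChatId, item: Item): F[Unit]
  def getItems(chatId: ChatId): F[List[Item]]
  def clearList(chatId: ChatId): F[Unit]
}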

Basically, for each particular chat, it allows us to add items, query the whole list and erase it.

In a real world app, this algebra would be interpreted into some database storage service. But for our purposes an in-memory implementation will be enough.

For purely functional and asynchronous concurrent access to shared state, we’ll use fs2.async.Ref. We could use Ref from cats-effect 1.0.0-RC2, but at the moment of writing it was not possible due to http4s depending on incompatible version of cats-effect.

Ref will store a Map[ChatId, List[Item]], which is essentially what we want to store. Here’s the implementation:
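A sketch of it, assuming fs2 0.10’s async.Ref and its get/modify methods (the class name is mine):

import cats.Functor
import cats.syntax.functor._
import fs2.async.Ref

class InMemoryTodoListStorage[F[_]: Functor](ref: Ref[F, Map[ChatId, List[Item]]])
    extends TodoListStorage[F] {

  def addItem(chatId: ChatId, item: Item): F[Unit] =
    ref.modify(m => m.updated(chatId, item :: m.getOrElse(chatId, Nil))).void

  def getItems(chatId: ChatId): F[List[Item]] =
    ref.get.map(_.getOrElse(chatId, Nil))

  def clearList(chatId: ChatId): F[Unit] =
    ref.modify(_ - chatId).void
}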

Telegram Bot API

Telegram Bot API algebra will look this way:
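A sketch of it, together with a minimal cut of the update model (field names follow the Telegram Bot API JSON):

type Offset = Long

final case class Chat(id: ChatId)
final case class BotMessage(chat: Chat, text: Option[String])
final case class BotUpdate(update_id: Offset, message: Option[BotMessage])
final case class BotResponse[A](ok: Boolean, result: A)

trait BotAPI[F[_], S[_]] {
  def sendMessage(chatId: ChatId, message: String): F[Unit]
  def pollUpdates(fromOffset: Offset): S[BotUpdate]
}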

Yep, for this bot we’ll only need to poll incoming messages and post responses.

Notice that I introduced a separate effect S[_] for streamed results. The idea came from this recent article. While in this particular example we’ll only use fs2.Stream[F, ?] as the streaming effect, it makes sense to abstract it out anyway. As far as I understand, abstracting over streaming effects in Scala is an area of research, so in the future we may find a better way to work with streams in tagless final encoding.

Here goes one of the most exciting parts of this post. Let’s implement bot API algebra!

First let’s see what it is constructed from. There’s a comment for each constructor parameter.

Now let’s implement the logic using this toolkit. sendMessage is a simple one, so we’ll start with it. It’s going to call the endpoint with the same name:
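A sketch of it, assuming an http4s 0.18-style Client[F] named client and a botApiUri base Uri pointing at the bot endpoint (both live in the interpreter’s constructor):

def sendMessage(chatId: ChatId, message: String): F[Unit] = {
  val uri = (botApiUri / "sendMessage")
    .withQueryParam("chat_id", chatId)
    .withQueryParam("text", message)

  client.expect[Unit](uri)
}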

Here we’re using query parameters flavour of the API, which is not very convenient in general. For production bots there are other ways, including JSON.

Also, we don’t care about result body here, so we pass Unit  type to the client.expect  call.

Now to the juice — streaming updates.

Telegram Bot API  — stream updates

In plain words, what we’re going to do is to repeatedly call the getUpdates endpoint. Since getUpdates  returns a list of updates, we’ll also have to flatten the result of each call. Finally, our stream has to be aware of the last requested offset — we don’t want to receive any duplicates.

So let’s begin step by step.

repeatedly call
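A quick sketch of this step (the name is mine):

val pollTicks: Stream[F, Unit] = Stream.emit(()).repeat.covary[F]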

Here we create a stream of a single Unit value, repeat it and lift it into our effect F using covary.

Here we don’t add any delay between stream elements, because we’ll instead rely on long polling timeout for throttling, which is recommended in the API docs. Just in case you wonder — throttling an fs2 stream is very simple.

Next step.

call the getUpdates endpoint

and

be aware of the last requested offset

This feels like a stateful stream stage. Moreover, it also requires running side-effects to obtain the new state: the new offset can only be obtained from the batch of fresh updates.

Let’s dive a little deeper here. For pure stateless mapping fs2 defines a map  function on streams:

Then, if we need to run an effectful mapping, we use evalMap (notice the mapping function is now a Kleisli-like effectful function)

If we want to go from a simple map to a stateful map, there’s mapAccumulate. Now you need to specify initial state and a way to obtain the new state after mapping each element:

Notice also, that the resulting stream for each incoming element emits both the stream state after the mapping and the mapped element itself.

You probably see where this is going: we need both! Each of our polls needs the latest offset. But to update the offset for the next request, we have to execute the poll! So we want something like mapAccumulate, but with an effectful calculation step.

And there’s such a combinator. It’s (quite expectedly) called evalMapAccumulate:
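Roughly, its simplified signature is:

def evalMapAccumulate[S, O2](init: S)(f: (S, O) => F[(S, O2)]): Stream[F, (S, O2)]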

Now we’re ready to proceed with our stream of updates:
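Putting the pieces together looks roughly like this (requestUpdates is sketched just below):

pollTicks.evalMapAccumulate(fromOffset)((offset, _) => requestUpdates(offset))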

The requestUpdates stage has to do two things: get the new messages from the Telegram Bot API and calculate the new offset:
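A sketch of that stage (circe decoding of BotResponse is assumed to be in place, and the offset/timeout handling here is illustrative):

private def requestUpdates(offset: Offset): F[(Offset, List[BotUpdate])] = {
  val uri = (botApiUri / "getUpdates")
    .withQueryParam("offset", offset)
    .withQueryParam("timeout", 30) // long polling timeout, in seconds

  client.expect[BotResponse[List[BotUpdate]]](uri).map { response =>
    val nextOffset = response.result.lastOption.map(_.update_id + 1).getOrElse(offset)
    (nextOffset, response.result)
  }
}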

Good! One last step remains.

flatten the result

Each response contains a list of updates. For a better API user experience we’d like to have each update as a standalone stream element. This is quite simple to achieve with flatMap:
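Which completes the polling stream (a sketch):

def pollUpdates(fromOffset: Offset): Stream[F, BotUpdate] =
  pollTicks
    .evalMapAccumulate(fromOffset)((offset, _) => requestUpdates(offset))
    .flatMap { case (_, updates) => Stream.emits(updates) }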

Basically, we transform each polled bunch of updates into a separate stream of updates (using Stream.emits ), and concatenate them together into a single flattened stream.

And that’s it. The types line up, and our stream of updates is ready to be processed by some domain-specific algebras.

Todo-list bot logic

Now that we have all the lower level machinery in place, let’s develop the business logic.

First of all, let’s define all the possible inputs to our bot. We’ll also need a way to convert raw incoming messages into domain specific commands:
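A sketch (the command names are mine):

sealed trait BotCommand
object BotCommand {
  case class ShowTodoList(chatId: ChatId)         extends BotCommand
  case class ClearTodoList(chatId: ChatId)        extends BotCommand
  case class AddEntry(chatId: ChatId, item: Item) extends BotCommand

  def fromRawMessage(chatId: ChatId, text: String): BotCommand = text match {
    case "/show"  => ShowTodoList(chatId)
    case "/clear" => ClearTodoList(chatId)
    case other    => AddEntry(chatId, other)
  }
}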

And now we have everything we need to define todo-list bot algebra:
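A sketch of it (Stream[F, ?] assumes the kind-projector plugin; handleCommand is sketched further below):

import cats.effect.Sync
import fs2.Stream

class TodoListBot[F[_]: Sync](
  api: BotAPI[F, Stream[F, ?]],
  storage: TodoListStorage[F],
  logger: Logger[F]
) {
  def launch: Stream[F, Unit] =
    pollCommands.evalMap(handleCommand)

  private def pollCommands: Stream[F, BotCommand] =
    api.pollUpdates(0).collect {
      case BotUpdate(_, Some(message)) if message.text.isDefined =>
        BotCommand.fromRawMessage(message.chat.id, message.text.get)
    }

  private def handleCommand(command: BotCommand): F[Unit] = ??? // sketched below
}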

This is a higher level overview. The only interaction this algebra provides is to launch the bot process. The process itself is a simple stream that polls the updates (using the lower level bot API) and handles each update with an effect — hence the evalMap we’re already familiar with.

Let’s take a look at both stages.

pollCommands is quite simple: we start long polling from zero offset and map all non-empty messages into domain commands. Conversion involves some hoops since message.text is an option.

Also, in a real world implementation, after each processed command we’d persist the offset somewhere, so that when the bot is restarted, that offset would be used instead of zero. But, actually, it’s not that bad: once getUpdates has been called with some non-zero offset, the long polling API marks all the updates before that offset as read, and they are no longer served, even if you then call the API with a zero offset.

handleCommand  is the place where we’re going to invoke our storage algebra. This one should be pretty clear, despite the verboseness:
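A sketch of it (this lives inside TodoListBot; cats syntax imports are assumed):

import cats.syntax.all._

private def handleCommand(command: BotCommand): F[Unit] = command match {
  case BotCommand.ShowTodoList(chatId) =>
    for {
      items <- storage.getItems(chatId)
      _     <- api.sendMessage(chatId, ("Your todo-list:" +: items).mkString("\n"))
      _     <- logger.info(s"Listed todo-list for chat $chatId")
    } yield ()

  case BotCommand.ClearTodoList(chatId) =>
    storage.clearList(chatId) *>
      api.sendMessage(chatId, "Todo-list cleared") *>
      logger.info(s"Cleared todo-list for chat $chatId")

  case BotCommand.AddEntry(chatId, item) =>
    for {
      _     <- storage.addItem(chatId, item)
      reply <- addItemReply // the randomized reply, see below
      _     <- api.sendMessage(chatId, reply)
      _     <- logger.info(s"Added item to chat $chatId")
    } yield ()
}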

All commands are handled in a similar fashion: update the storage and trace the change to the log.

There’s one small caveat with addItem. I want to make the bot a little less boring than a stone, so why wouldn’t it have several different answers to choose from when an item is added?
This is the kind of case where we should carefully track and suspend side-effects. Random number generation is a side-effect, so we use Sync[F] to suspend it until the “end of the universe”.
Also, List.head is unsafe, so we use F.catchNonFatal to lift the error into F. It’s possible because Sync[F] extends MonadError[F, Throwable]:
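A sketch of that part (the reply texts are illustrative):

private val replies = List("Ok!", "Noted", "Carved in stone!")

private def addItemReply: F[String] =
  for {
    shuffled <- Sync[F].delay(scala.util.Random.shuffle(replies)) // suspend the randomness
    reply    <- Sync[F].catchNonFatal(shuffled.head)              // head is unsafe, lift the error into F
  } yield reply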

Wiring up and “launching” the bot process

Now we have everything to construct our bot.

Quite a lot going on here:

  • We derive the JSON decoder for API response using auto derivation from circe generic
  • Http client is created safely using Http1Client.stream. This produces a stream of a single element, and guarantees that all allocated resources (connection pool) are released when the stream either finishes or crashes.
  • We flatMap the http client stream into our todo-list bot process stream. To create it, we cook all the ingredients. Some of these “cooking” steps are also side-effectful: for example the Logger interpreter or the  Ref instance for our storage algebra.
  • Stream.force  in the end just allows to go from F[Stream[F, A]]  to a Stream[F, A] that we’d like to return

End of the universe

We can be really proud of ourselves — we managed to develop a nice useful computation without even specifying our effect type, let alone executing any side-effects! That’s cool.

Now we’re ready to pull the trigger: specify our effect type and actually run the computation. We’ll use fs2.StreamApp[IO]  helper so that we don’t even have to write “unsafe” with our own hands.
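A sketch of the entry point, assuming fs2 0.10’s StreamApp (TodoListBotProcess stands for the class that wraps the wiring described above, and its run method name is my assumption):

import cats.effect.IO
import fs2.{Stream, StreamApp}
import scala.concurrent.ExecutionContext.Implicits.global

object TodoListBotApp extends StreamApp[IO] {
  def stream(args: List[String], requestShutdown: IO[Unit]): Stream[IO, StreamApp.ExitCode] =
    new TodoListBotProcess[IO](sys.env("TODOLIST_BOT_TOKEN")).run
      .map(_ => StreamApp.ExitCode.Success)
}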

To test the bot out, you’ll have to create one for yourself. It’s a no-brainer, just follow the instructions here. Once you get a token, just plug it in and you’re ready to go!

To not expose my test bot token, the app grabs it from the TODOLIST_BOT_TOKEN env variable. If that doesn’t suit you, just put your token directly into the TodoListBotProcess constructor instead.

Conclusion

So it turns out that you don’t need akka or other complex frameworks to build a Telegram Bot. Moreover, you can do it in a purely functional way, and it actually looks beautiful! And as a bonus — the process of writing a tagless final program is a real joy, I definitely recommend to try it out 🙂

Some ideas to implement as an exercise for interested readers:

  • Allow to edit items (through editing original messages in the chat)
  • Wrap the bot with an administration http api, to send announcements for example

That’s all I have this time, thanks for reading!

Multiple faces of Scala Iterator trap

This post is about an issue with Scala collections that I encountered recently. Although it is purely a Scala standard library issue, I hadn’t faced it in almost 3 years of Scala development.

The consequences of not being familiar with the matter can lead to very painful results: unexpected stack overflow exceptions in your application.

The core

This issue can arise in a variety of different forms, so let’s first look at the core: the simplest reproducer. It involves folding over a standard Iterator:
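A sketch of it (the exact numbers don’t matter, only the depth of the resulting chain does):

val brokenIterator: Iterator[Int] =
  (1 to 100000).foldLeft(Iterator(0))((it, i) => it.map(_ + i))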

 

As any Scala developer knows from day one, foldLeft is stack safe. But don’t let the feeling of safety trick you here. This code blows up the stack:

 

And, of course, it is not foldLeft. The biggest “WTF?” moment here is that stack overflow is caused by this trivial-looking call:
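For example:

brokenIterator.isEmpty // java.lang.StackOverflowError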

 

How is that?

It appears that for Iterator, most transformation methods, like map, filter, etc., do not immediately evaluate the resulting collection. Instead, they build another Iterator on top of the original one.

Calling these transformations several times in a row builds a chain of Iterators. Each one delegates in some way to the respective underlying iterator. Let’s look for the evidence in the source:
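Abridged from the Scala 2.12 standard library, map just builds a new iterator on top of self:

def map[B](f: A => B): Iterator[B] = new AbstractIterator[B] {
  def hasNext = self.hasNext
  def next() = f(self.next())
}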

 

As you can see, hasNext of the new iterator just calls the hasNext of underlying one. At this moment the vulnerability should be pretty clear: it’s possible to build a chain of arbitrary depth, that can eventually implode.

What’s important to note is that it’s not the transformation itself that blows the stack. To make the iterator crash you have to call something that triggers the chain. It can be something as simple as .toString or .isEmpty. As we will see later, this can make debugging harder.

The faces

In hindsight, this issue looks straightforward and even obvious. However, it can take so many forms and shapes, that it can be a total riddle from the first look. Debugging experience can be very painful, especially when you are not familiar with the issue before it comes to the surface. I had that painful experience, so let’s dive into details.

I guess, this kind of behaviour is fine for Iterator by it’s very nature. Also, one can fairly say, that Iterator is not a frequent citizen of our code bases. I would agree — I can’t recall using it directly. But things are not so simple.

There are places, where underlying Iterator can leak out of well-known standard collections. I, personally, burnt my fingers with mapValues method on standard Map.

Let’s now take it to the next level. The foldLeft reproducer we’ve seen is very focused. Everything is in one place and, thus, quite simple to debug. But with advent of Reactive Programming, various distribution abstractions like actors and streams are used more and more often.

What it means, is that you may have the very same Iterator trap, scattered over multiple stages of your stream. Or some actor can silently build up an iterator chain and then message it to some other actor, which triggers evaluation and blows up the stack.

In my case, an akka-persistence-query stream crashed after several thousand replayed events. It’s quite mind-boggling to see a stack overflow error inside a stream stage where there are no signs of infinite recursion or complex calculations.

To illustrate, here’s a simplified version of my case. We build an event-sourcing application with CQRS. When streaming events to the query-side, we want to ensure, that each event is processed exactly once by each consumer.

For that purpose, there’s a small Deduplicator stage, which accumulates a map of last processed event numbers. This allows the stage to mark each event as an original or a duplicate separately for each consumer:
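The original was a custom akka-streams stage; here is a condensed sketch of the same idea, expressed with statefulMapConcat instead:

import akka.NotUsed
import akka.stream.scaladsl.Flow

final case class Event(seqNr: Long, payload: String)

def deduplicator(consumers: List[String]): Flow[Event, (Event, Map[String, Boolean]), NotUsed] =
  Flow[Event].statefulMapConcat { () =>
    var lastSeen: Map[String, Long] = consumers.map(_ -> 0L).toMap

    event => {
      // for which consumers is this event an original (not a duplicate)?
      val isOriginal = lastSeen.mapValues(last => event.seqNr > last)
      // the new state is built with another mapValues: lazily, on top of the old one
      lastSeen = lastSeen.mapValues(last => math.max(last, event.seqNr))
      List(event -> isOriginal)
    }
  }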

 

Although not trivial, the code is reasonably simple. And it is a time bomb.

Notice that the deduplicated value is constructed using mapValues. Then, after another mapValues, the result becomes the stage state for the next stream element.

So this is effectively the same fold, just an asynchronous one. After some number of events the Iterator chain becomes larger, than our stack can fit in. The bomb is ready.

The last ingredient for the explosion is the trigger. And our innocent Deduplicator just passes the trigger down the stream. The actual crash happens somewhere in the consumer code, which happens to access the Map.

The fix

One can fairly argue, that Deduplicator is not optimally implemented and can be more efficient. But for the sake of our study, let’s fix it without rewriting.

The solution to an overly long Iterator chain is to break it. First idea that comes to mind is to trigger evaluation of mapValues result immediately.

There are many ways to do that. For example, there are several good suggestions in this SO thread. Developers from Lightbend argued, that .view.force call is the best way to “evaluate” the collection.

Indeed, in our example, calling deduplicated.view.force produces a fresh map, without any ticking time bombs inside. Stack overflow threat is gone.

Conclusion

So this is it. Now any time you’ll see a stack overflow with Iterator involved, you know what to look for.

While debugging this issue, I was lucky to have access to a paid Lightbend subscription. The support was incredible: I saved tons of time using their experience and guidance while looking for the root cause and eliminating it.

That’s all I have today, thanks for reading!

Cross-platform polymorphic date/time values in Scala with type classes

At Evolution Gaming, Artsiom and I work on an internal scheduling application that has a huge ScalaJS frontend. We have to deal with lots of complex date/time logic, both on the backend and in the browser.

I quickly realised that sharing the same business logic code across platforms would be a massive advantage. But there was a problem: there was (and still is) no truly cross-platform date/time library for Scala/ScalaJS.

After a small research I settled with a type class-based solution that provides cross-platform java.time.* -like values with full TimeZone/DST support. In this post we will:

  • take a quick look at the current state of date/time support in Scala/ScalaJS;
  • see how to get cross-platform date/time values today with the help of type classes.

Described approach works quite well in our application, so I extracted the core idea into a library, called DTC. If you’re a “Gimme the code!” kind of person, I welcome you to check out the repo.

Prerequisites

I assume that the reader is familiar with ScalaJS and with how to set up a cross-platform project. Familiarity with type classes is also required.

The Goal

There’s no solution without a goal. Precise goal will also provide correct context for reasonings in this article. So let me state it.

My primary goal is to be able to write cross-platform code that operates on date/time values with full time zone support.

This also means that I will need implementation(s) that behave consistently across JVM and browser. We’re Scala programmers, so let’s choose JVM behaviour semantics as our second goal.

Current state of date/time in Scala and ScalaJS

So we have our goal, let’s see how we can achieve it.

In this section we’ll go over major date/time libraries and split them into 3 groups: JVM-only, JS-only and cross-platform.

JVM-only

We won’t spend too much time here, everything is quite good on JVM side: we have Joda and Java 8 time package. Both are established tools with rich API and time zone support.

But they can’t be used in ScalaJS code.

JS-only

We’re looking at JS libraries, because we can use them in ScalaJS through facades. When it comes to date/time calculations, there are effectively two options for JavaScript: plain JS Date and MomentJS library.

There’re things that are common for both and make them quite problematic to use for our goal:

  • values are mutable;
  • semantics are different from JVM in many places. For example, you can call date.setMonth(15) , and it will give you a same date in March next year!

There’s also a JS-Joda project, which is not so popular in JS world, but has much more value to JVM developers, because it brings Java8 time semantics to Javascript.

JS Date

JS Date is defined by ECMA Script standard and is available out of the box in any JS runtime. But it has several weaknesses. Major ones are:

  • quite poor API;
  • time zone support is not universal: behaviour depends on environment time zone, and you can’t ask for a datetime value in an arbitrary zone.

Since JS Date is a part of language standard, ScalaJS bindings for it are provided by ScalaJS standard library.

MomentJS

Despite the fact that MomentJS values are mutable and still have minor bugs in calculations, it’s quite a good library, especially if you need full time zone support.

It also has a ScalaJS facade.

JS-joda

JS-joda is implementation of a nice idea: porting java.time.* functionality to Javascript. Though I’ve never used this project, it looks like an established and well-maintained library.

ScalaJS facade is also in place, so you can definitely give it a try in your Scala project.

The only problem with regard to our goal is it still lacks proper DST support. But it’s already in progress, so you can expect it to be fully functional in observable future.

Cross-platform date/time libraries

After a small research, I found three options. Let’s see them in detail.

Scala-js-java-time

This library is the future of cross-platform date/time code. It’s effectively Java 8 time, written from scratch for ScalaJS.

At the time of writing this post, scala-js-java-time already provides LocalTime, LocalDate, Duration , and a handful of other really useful java.time.* classes (full list here).

It means, that you can use these classes in cross-compiled code and you won’t get linking errors: in JVM runtime original java.time.* classes will be used, and JS runtime will be backed by scala-js-java-time implementations.

The problem here is that we need LocalDateTime and ZonedDateTime in ScalaJS. And they are not there yet.

Spoiler: we’ll be using scala-js-java-time in our final solution for the problem.

Scala Java-Time

Scala Java-Time is a fork of ThreeTen backport project. So it’s main purpose is to provide java.time.* -like functionality on Java 6 & 7.

It is also compiled to ScalaJS, which means we can write cross-platform code with it. And we can even use (to some extent) LocalDateTime!

The only problem is it doesn’t support time zones for ScalaJS yet (providing this support is the main focus of the project now, though).

So this library is close, but still misses our goal by a tiny bit.

Soda time

Soda time is a port of Joda to ScalaJS.

It’s in early development stages and also doesn’t have time zones in ScalaJS, but I still added it to the list, because developers took an interesting approach: they are converting original Joda code with ScalaGen.

So the resulting code really smells Java, but I’m still curious of what this can develop into.

Idea? No, the only option

The reason I’ve given the overview of currently available libraries is simple: it makes clear that there’s only one possible solution to the problem.

There’s no cross-platform library with full time zone support. And for the JavaScript runtime, only MomentJS really fits our requirements. All this leaves us with nothing except the following approach:

  1. We define some type class, that provides rich date/time API. It’s a glue that will allow us to write cross-platform code.
  2. All code that needs to operate on date/time values becomes polymorphic over the date/time type.
  3. We provide platform-dependent type class instances: java.time.* -based for JVM and MomentJS-based for ScalaJS.
  4. We define common behaviour laws to test the instances against. This will guarantee, that behaviour is consistent across platforms.
  5. MomentJS API is powerful, but it has to be sandboxed and shaped to:
    • provide immutable values;
    • provide JVM-like behaviour;
  6. There’s a limitation that we can’t overcome without some manual implementation: neither JS library supports nanoseconds. So we’ll have to live with millisecond precision.

We won’t go over all of these points in this article. DTC library does the heavy lifting for all of them. In following sections we’ll just glance over the core implementation and example.

DateTime type class

Let’s just take a small subset of java.time.LocalDateTime API and lift it into a generic type class. We’ll use simulacrum, to avoid common boilerplate:
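A sketch of such a type class (the real DTC trait has many more methods):

import java.time.{Duration, LocalDate, LocalTime}
import simulacrum.typeclass

@typeclass trait DateTime[A] extends cats.kernel.Order[A] {
  def date(x: A): LocalDate
  def time(x: A): LocalTime
  def plus(x: A, d: Duration): A
  def plusDays(x: A, days: Int): A
}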

 

First of all, a total order for DateTime values is defined. So we can extend cats.kernel.Order and get all it’s goodies out of the box.

Second, thanks to scala-js-java-time, we can use LocalTime and LocalDate to represent parts of the value. Also, we can use Duration for addition operations.

For now, let’s just view it as a glue for LocalDateTime. We’ll get to time zone support a bit later.

Cross-compiled business logic

Having our new type class, let’s define some “complex” logic, that needs to be shared across JVM and browser.
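For example, something like this (the domain function is hypothetical, but it only relies on the type class above):

import DateTime.ops._

// reminder times for an event: generic in the concrete date/time type
def reminderTimes[A: DateTime](eventStart: A, count: Int): List[A] =
  (1 to count).toList.map(i => eventStart.plusDays(-i))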

 

With syntax extensions in place, the code looks quite nice.

Moreover, you can notice that nothing here says whether the time should be local or zoned. The code is polymorphic, and we can use different kinds of date/time values depending on the context.

Now let’s get to the flesh and bones: type class instances.

Type class instances

Let’s start with JVM instance, as it’s going to be quite simple. Follow comments in code for details.
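A sketch of it:

import java.time.{Duration, LocalDate, LocalDateTime, LocalTime}

implicit val jvmLocalDateTime: DateTime[LocalDateTime] =
  new DateTime[LocalDateTime] {
    // total order: just delegate to java.time
    def compare(x: LocalDateTime, y: LocalDateTime): Int = x.compareTo(y)
    def date(x: LocalDateTime): LocalDate = x.toLocalDate
    def time(x: LocalDateTime): LocalTime = x.toLocalTime
    def plus(x: LocalDateTime, d: Duration): LocalDateTime = x.plus(d)
    def plusDays(x: LocalDateTime, days: Int): LocalDateTime = x.plusDays(days.toLong)
  }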

 

With MomentJS it’s going to be much more interesting, because we’ve obliged ourselves to provide values, that are comfortable to work with for a functional programmer.

To enforce immutability, we won’t expose any moment APIs directly. Instead, we’re going to wrap moment values in a simple object, that will be immutable:
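A sketch of the wrapper (here Date stands for the MomentJS facade’s value type, and the method names follow MomentJS; the real DTC code differs in details):

import java.time.Duration

class MomentLocalDateTime private (private val underlying: Date) {

  // moment months are 0-based, JVM months are 1-based
  def month: Int      = underlying.month().toInt + 1
  def year: Int       = underlying.year().toInt
  def dayOfMonth: Int = underlying.date().toInt
  def hour: Int       = underlying.hour().toInt
  def minute: Int     = underlying.minute().toInt
  def second: Int     = underlying.second().toInt

  // every modification clones the underlying value, so the wrapper stays immutable
  def plus(d: Duration): MomentLocalDateTime =
    new MomentLocalDateTime(underlying.clone().add(d.toMillis.toDouble, "milliseconds"))

  // raw timestamps are used for comparison
  def compareTo(other: MomentLocalDateTime): Int =
    underlying.valueOf().compareTo(other.underlying.valueOf())
}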

 

Several notable things here:

  1. We make both constructor and underlying value private to make sure there’s no way to modify object internals. We’ll provide a custom constructor later.
  2. Notice month value adjustment to provide JVM-like behaviour. You will see much more of such things in DTC, I even had to write a couple of methods from scratch.
  3. To compare two moment values, we use their raw timestamps.

Now it’s trivial to define DateTime instance for our MomentLocalDateTime:
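And a sketch of the instance, built on the wrapper above:

implicit val momentLocalDateTime: DateTime[MomentLocalDateTime] =
  new DateTime[MomentLocalDateTime] {
    def compare(x: MomentLocalDateTime, y: MomentLocalDateTime): Int = x.compareTo(y)
    def date(x: MomentLocalDateTime): LocalDate = LocalDate.of(x.year, x.month, x.dayOfMonth)
    def time(x: MomentLocalDateTime): LocalTime = LocalTime.of(x.hour, x.minute, x.second)
    def plus(x: MomentLocalDateTime, d: Duration): MomentLocalDateTime = x.plus(d)
    def plusDays(x: MomentLocalDateTime, days: Int): MomentLocalDateTime = x.plus(Duration.ofDays(days.toLong))
  }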

 

Now we have everything to run our generic domain logic on both platforms. I’ll leave that as an exercise for my dear reader.

Now let’s discuss some aspects of making this thing work for zoned values as well.

Time Zone support

Not much time is needed to realise, that we need separate type classes for local and zoned values. Reasons are:

  • They obey different laws. For example, you can’t expect a zoned value to have same local time after adding 24h to it.
  • They have different constructor APIs. Zoned value needs time zone parameter to be properly constructed.
  • Zoned values should provide additional APIs for zone conversions.

On the other side, most of querying, addition and modification APIs are common to both kinds of date/time values. And we would like to take advantage of that in cases we don’t really care about a kind of the value and wish to allow using both.

This leads us to following simple hierarchy:

  1. LawlessDateTimeTC (which we initially called DateTime) that contains common methods, specific to all date/time values.
  2. LocalDateTimeTC and ZonedDateTimeTC will extend LawlessDateTimeTC and provide kind-specific methods (constructors, for example).

This -TC suffix is ugly, but name clash in JVM code is worse :).

We will also have to provide a cross-compiled wrapper for time zone identifiers, because java.time.ZoneId is not yet provided by scala-js-java-time, and we don’t really want to pass raw strings around.

Everything else is just an evolution of core idea. Full implementation and more examples are available in the DTC repo.

Note on polymorphism

A side-effect of this solution, is that all your code becomes polymorphic over the specific date/time type. While most of the time you’re going to use single kind of time (zoned or local), there are cases when polymorphism becomes useful.

For example, in an event-sourced system, you may require zoned values for most of the calculations within the domain, as well as commands. But, at the same time, it can be a good idea to store events to journal with UTC values instead.
With type class-based approach, you can use same data structures for both purposes, by just converting between type parameters of different kinds.

Conclusion

Though polymorphic code can look scary for some people, described approach give us following advantages:

  1. Truly cross-platform code, that operates on rich date/time APIs with time zone support.
  2. Polymorphism over specific kind of date/time values.

If you’re working with date/time values in Scala on a daily basis, please, give DTC a try and tell me what you think!

Thanks for reading! 🙂

Published slides from my “Recursive implicit proofs with Shapeless HLists” talk

Recently I was giving an introductory talk about constructing implicit proofs with Shapeless at Scala Riga Meetup.

Contents in brief

  • Analogy between Scala implicit resolution and Mathematical proofs.
  • Recursive implicit proofs (why “recursive” is important feature)
  • HList basics
  • Various implicit proofs for HLists with demos. Mathematical induction as a handy technique for writing HList proofs.
  • Expanding HList proof to all product-like types with shapeless.Generic
Slides (english, PDF)
Slides (english, Slideshare)

Implementing type-safe request builder: practical use of HList constraints

Hello, Scala developers!

In this post we will develop a simple type-safe request builder. Along the way, we will:

  • encode domain rules and constraints at compile time with implicits;
  • heavily use HLists;
  • use existing and implement a couple of new HList constraints.

The example in this post is a simplified version of a real request builder from my Scalist project. Implementation of intentionally omitted concepts, like response parsing and effect handling, can be found there.

The code from this article can be found on GitHub.

Prerequisites

I assume that you, Dear Reader, have a basic understanding of:

  • implicit resolution;
  • typeclasses;
  • what an HList is.

If some of those are missing, I encourage you to come back later: there’s a lot of good articles and talks on these topics out there in the web.

The problem

We will be developing a query request builder for an API that is capable of joining together multiple logical requests and returning all results in one single response. A great example is the Todoist API: you are allowed to say “Give me projects, labels and reminders”, and the API will return all of them in a single JSON response.

Such an optimization is great from performance perspective. On the other hand, it imposes difficulties for a type-safe client implementation:

  • you need to somehow track the list of requested resources;
  • return value type must reflect this list and not allow to query for something else.

Those problems are not easy to solve at compile time, but, as you might have heard, almost everything is possible with Shapeless 🙂

The task

The task here is to implement a request builder with following features:

  • allows to request single resource like this:
    builder.get[Projects].execute
  • allows to request multiple resources like this:
    builder.get[Projects].and[Labels].and[Comments].execute
  • doesn’t allow requesting duplicates. This will not compile:
    builder.get[Projects].and[Projects]
  • doesn’t allow requesting types that are not resources. This will not compile:
    builder.get[Passwords]
  • execute method will return something that will contain only requested resources and will not allow to even try getting something else out of it.

Simplifications

To focus on the main topic, I will cut off all other aspects of implementing a good HTTP API client, like sending the request, parsing, abstracting over effects and so on.

To keep things simple, let’s make  execute method just ask for an implicit MockResponse typeclass instance, that will supply requested values instead of doing everything mentioned above.

Let’s code!

Model

We’ll start with some simple things, that are required though. First let’s define a model to play with. Just some case classes from task management domain:
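A sketch of it:

case class Project(id: Long, name: String)
case class Label(id: Long, name: String)
case class Comment(id: Long, text: String)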

 

What we are actually going to request are lists of domain objects. But get[Projects] looks better than get[List[Project]], so let’s define some type aliases. Also, this is a good place to put an instance of our builder:
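For example:

type Projects = List[Project]
type Labels   = List[Label]
type Comments = List[Comment]

val builder = new Builder // Builder is defined below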

 

Good. We’ll get to the Builder later. Now let’s define

APIResource typeclass

An instance of APIResource[R] is a marker that R can be requested with our builder.
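A sketch of it:

sealed trait APIResource[R]

object APIResource {
  implicit val projects: APIResource[Projects] = new APIResource[Projects] {}
  implicit val labels:   APIResource[Labels]   = new APIResource[Labels] {}
  implicit val comments: APIResource[Comments] = new APIResource[Comments] {}
}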

 

A couple of comments here:

  • In a real case typeclass body will not be empty — it’s a good place to define some specific entity-related properties. A resource identifier, that is used to create a request can be a good example.
  • We mark APIResource as sealed here, because we know all the instances upfront and don’t want library users to create new ones.

Now we’re ready to implement the Builder.

Single resource request

Builder call chain starts with get method — let’s create it:
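A sketch of the entry point (SingleRequestDefinition is defined below):

class Builder {
  def get[R: APIResource]: SingleRequestDefinition[R] = new SingleRequestDefinition[R]
}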

 

Quite simple for now: we allow the method to be called only for types that have an implicit APIResource instance in scope. It returns a request definition that can be executed or extended further with and method.

We will solve the execution task first. A RequestDefinition trait defines execute method for all request definitions:
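A sketch of it:

trait RequestDefinition[Out] {
  def execute(implicit response: MockResponse[Out]): Out = response.value
}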

 

There’s the MockResponse thing I was talking about. It allows us to avoid implementing all the real machinery that is not relevant for the topic of this post.
Almost everything about this typeclass is simple and straightforward, but there’s an interesting thing that will show up later, so I have to put the implementation here to reference it.
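A sketch of the typeclass, with mock instances for our three resources:

trait MockResponse[A] {
  def value: A
}

object MockResponse {
  implicit val projects: MockResponse[Projects] =
    new MockResponse[Projects] { def value = List(Project(1, "Inbox")) }
  implicit val labels: MockResponse[Labels] =
    new MockResponse[Labels] { def value = List(Label(1, "home")) }
  implicit val comments: MockResponse[Comments] =
    new MockResponse[Comments] { def value = List(Comment(1, "Well done!")) }
}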

 

Ok, we’re able to execute requests! Let’s see how we can solve the chaining task with and method.

Chaining: 2 resources request

First, we have to decide what type of value should be returned by an executed multiple resource request. A standard List or Map could do the job in general, but not with our requirements: precious type information would be lost.

This is where HList comes in handy — it’s designed to store several instances of not related types without loss of any information about those types.

Next. When implementing a type-safe API, we have to put all domain constraints into declarations, so that they’re available for the compiler. Let’s write out, what is required to join two resources in one request:

  • both of them must have an APIResource instance in place, and
  • their types must be different

Good, we’re ready to make our first step to a multiple resource request definition:
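A sketch of that step:

import shapeless.{::, HNil, =:!=}

class SingleRequestDefinition[R: APIResource] extends RequestDefinition[R] {
  def and[RR](implicit
      AR: APIResource[RR],
      NEQ: RR =:!= R
  ): MultipleRequestDefinition[RR :: R :: HNil] =
    new MultipleRequestDefinition[RR :: R :: HNil]
}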

 

All our constraints are in place:

  • the context bound ensures that original resource type is valid.
  • AR: APIResource[RR] implicit parameter ensures that newly added resource type is valid.
  • NEQ: RR =:!= R implicit ensures that R and RR are different types.

Also, you can notice the HList in the result type parameter. It ensures the execution result to be precisely what we requested at compile time.

Now we’re all set up to dive into the most interesting and complex problem.

Chaining: multiple resources request

Here the task is to append a new resource request R to a list L of already requested ones. Again, let’s first define required constraints in words:

  1. every element of  L must have an APIResource instance in place;
  2. R must have an APIResource instance too;
  3. L must not contain duplicates. Let’s call such a list “distinct”;
  4. L must not already contain R. In other words, the result list must be distinct too.

That’s a lot. Let’s see, what implementation we can come up with here.
We will start from the MultipleRequestDefinition class itself:
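A sketch of it:

import shapeless.HList
import shapeless.ops.hlist.LiftAll

class MultipleRequestDefinition[L <: HList](implicit
    allAR: LiftAll[APIResource, L], // every element of L is a valid resource
    ID: IsDistinctConstraint[L]     // L has no duplicates
) extends RequestDefinition[L]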

 

Actually, we already have everything for our first requirement. A LiftAll typeclass from shapeless ensures that every element of an HList has a specified typeclass instance.
In our case, implicit allAR: LiftAll[APIResource, L] constraints L to have an APIResource for each element.

Implicit ID: IsDistinctConstraint[L] will ensure that all elements of L are different (requirement #3). There’s no IsDistinctConstraint in shapeless 2.3.0, so we will have to implement it ourselves. We’ll come to that later.

That’s it for the class definition. Let’s move on to the and combinator:
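A sketch of it (this method lives inside MultipleRequestDefinition):

def and[R](implicit
    AR: APIResource[R],
    NC: L NotContainsConstraint R
): MultipleRequestDefinition[R :: L] =
  new MultipleRequestDefinition[R :: L]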

 

Requirement #2 is trivial here. NotContainsConstraint for requirement #4 will have to be implemented by us too.

All right, so we have two HList constraints to implement. Let’s see how it’s done.

Implementing HList constraint

In general, a constraint is implemented as a typeclass, that provides instances for and only for objects, that meet the constraint.

Most of the time it can be done with a technique similar to mathematical induction. It involves two steps:

  1. Define a base case: implicit constraint instance for HNil or an HList of known length like 1 or 2.
    Base case depends on the nature of the constraint and can involve additional constraints for HList element types.
  2. Define the inductive step: implicit function, that describes how new elements can be added to an arbitrary HList, that already meets the constraint.

We will start with NotContainsConstraint. The typeclass definition is quite straightforward:
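A sketch of it:

import shapeless.HList

trait NotContainsConstraint[L <: HList, U]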

 

U  is the type that L must not contain to meet the constraint.

Let’s define the base case. Here it’s simple:

HNil doesn’t contain anything.

In general we want constraints to stay same under any circumstances, so it’s usual to define implicit rules right in the typeclass companion object:
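A sketch of the base case:

object NotContainsConstraint {
  // base case: HNil doesn't contain anything
  implicit def hnilNotContains[U]: NotContainsConstraint[HNil, U] =
    new NotContainsConstraint[HNil, U] {}
}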

 

Seems logical: for any type U we state that HNil doesn’t contain it. Heading over to inductive step, it can be expressed in words this way:

Given an HList that doesn’t contain U, we can add any non-U element to it and get a new HList, that still doesn’t contain U.

Let’s encode it.
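A sketch of the inductive step (it also lives in the companion object; shapeless.=:!= is assumed to be imported):

implicit def hlistNotContains[L <: HList, T, U](implicit
    ev: L NotContainsConstraint U,
    ev2: U =:!= T
): NotContainsConstraint[T :: L, U] =
  new NotContainsConstraint[T :: L, U] {}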

 

Here we require an HList L that doesn’t contain U (guaranteed by the implicit ev: L NotContainsConstraint U) and a type T that is not equal to U (ev2: U =:!= T). Given those evidences, we can state that T :: L doesn’t contain U. We do it by supplying a new typeclass instance: new NotContainsConstraint[T :: L, U] {}.

Some tests:

 

Nice, it works!
I hope it’s transparent here how implicit resolution can or cannot find a constraint instance for an HList: we start with the HNil base case and go up to the list head. If the implicit chain is not broken along the way by a duplicate element, we get a constraint instance for the whole list.

Now we’re going to implement IsDistinctConstraint in a similar manner. And our fresh NotContainsConstraint is going to help us here!

Base case is quite simple:

HNil is a distinct list.
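In code (a sketch):

trait IsDistinctConstraint[L <: HList]

object IsDistinctConstraint {
  // base case: HNil is distinct
  implicit def hnilIsDistinct: IsDistinctConstraint[HNil] =
    new IsDistinctConstraint[HNil] {}
}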

 

Inductive step is quite simple too:

If an HList L is distinct and it doesn’t contain type U, then U :: L is a distinct list too.
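In code (also in the companion object):

implicit def hlistIsDistinct[L <: HList, U](implicit
    distinct: IsDistinctConstraint[L],
    notContains: L NotContainsConstraint U
): IsDistinctConstraint[U :: L] =
  new IsDistinctConstraint[U :: L] {}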

 

Tests show that everything works as expected:

Wiring everything up together

Now, when we’ve done all the preparation work, it’s time to get our builder to work.
We’ll try it out in a REPL session. Single request case goes first:

 

Everything is OK, we’re getting the mocks defined in MockResponse. But a surprise awaits us if we try to get multiple resources:

 

There’s no implicit mock response for our HList! We will have to add some implicits into MockResponse companion to help it join our results:
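A sketch of those implicits:

implicit val hnilMockResponse: MockResponse[HNil] =
  new MockResponse[HNil] { def value = HNil }

implicit def hconsMockResponse[H, T <: HList](implicit
    head: MockResponse[H],
    tail: MockResponse[T]
): MockResponse[H :: T] =
  new MockResponse[H :: T] { def value = head.value :: tail.value }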

 

After all those constraint tricks this simple typeclass extension should be transparent to you. We basically supply a MockResponse instance for any combination of MockResponses.

Important note: although the problem we’re solving here looks artificial, it is not. In the real world we would have to propagate the requested types through all the network & parsing machinery that obtains the result. It is the only way to keep compile-time safety.
And, similarly to our example, some tools (probably implicit ones) will be required to join several results into an HList.

Finally, we get everything working! Notice the result types and how the HList allows selecting only the requested types.
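With the mock instances sketched above, it looks roughly like this:

val result = builder.get[Projects].and[Labels].and[Comments].execute
// result: Comments :: Labels :: Projects :: HNil

result.select[Labels]   // List(Label(1, "home"))
result.select[Projects] // List(Project(1, "Inbox"))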

 

All safety requirements are also met — there’s no room for programmer errors:

Shapeless 2.3.1

Shapeless 2.3.1 is coming out soon, and it will contain both constraints we implemented here.

Conclusion

Creating a library or an API is a great responsibility. Providing the end user with a type-safe interface that doesn’t allow running and deploying malformed definitions is a high priority.

HList constraints are a great tool in a Scala API developer’s toolbox. In this post we’ve seen them in action, applied to a practical example.

Thanks for reading! See you in future posts 🙂

Solid Type System vs Runtime Checks & Unit Tests (Scala example)

A month ago I gave a talk for my colleagues at our private developers meetup called QIWI Conf. It’s about how the Scala type system can help you release safer code.

The talk is mostly aimed at Scala beginners, yet some of the patterns covered are very powerful and not offered by the majority of other languages.

Contents in brief

  • Introduction
    • What are fail-fast options today?
    • Why compile-time checking is the best choice?
  • Patterns that can lift up your code to be checked at compile time
    • Options
    • Either and scalaz.\/
    • Sealed ADT + pattern matching
    • Tagging
    • Phantom types
    • Path dependent types
Slides (english)
Video (russian):

FPConf Notes

Notable talks from the FPConf conference.

Macros in Scala

A good introduction to scala macros from JetBrains scala plugin developer.

Macros is basically an AST transformation.

Simple macros are implemented as method invocations:
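A minimal def macro sketch (the macro and its usage must live in separate compilation units):

import scala.language.experimental.macros
import scala.reflect.macros.blackbox

object Macros {
  def hello: Unit = macro helloImpl

  def helloImpl(c: blackbox.Context): c.Tree = {
    import c.universe._
    q"""println("Hello from a macro!")"""
  }
}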

More complex things are achieved with implicit macros.

Macros can help with:

  • Typeclass generation (including generic implicit macros as a fallback)
  • DSLs
  • Typechecked strings (e.g. format-string)
  • Compiler plugins

IDE support is far from ideal, mostly because of the limitations of today’s Scala macro implementation. Scala.meta is coming to the rescue.

Speaker had a custom IDEA build on his laptop, with a super-awesome “expand macros” feature. Just press a magic shortcut and  examine expanded macro code.
Brilliant thing by JetBrains, can’t wait to use it.

Embedding a language in string interpolator

Great example of custom string interpolator use case.

Mikhail Limansky showed an example of creating MongoDB query language interpreter with a string interpolator.

Most of the time you don’t need such things, for example if you use a good ORM. But the speaker’s case is valid. He works with two projects, each using a different and verbose ORM to access a single mongo database. So he decided to implement a single expressive (and, importantly, well-known) language interpreter that would generate the ORM code of choice.

Even though it’s a plain string interpolator, it’s almost completely typesafe, thanks to extensive macro usage. The talk covers every step needed to implement such a thing.

Here’s the library.
https://github.com/limansky/mongoquery

Frontend with joy (DataScript database)

Slides

Introduction into DataScript database: https://github.com/tonsky/datascript

In a few words — it’s an immutable in-memory database for browser js applications.
Along with mentioned immutability it has several other advantages:

  • Powerful query language with pattern matching.
  • Well-defined and simple database changes description format. With an ability to query with such diffs it has event sourcing out of the box.
  • It builds indexes.

Scala performance for those who doubt

Slides (from jpoint)

My personal favourite of all the fpconf talks.

You can’t measure performance by hand on JVM, because of
  • Deadcode elimination
  • Constant folding
  • Loop unrolling
Tool to use: JMH
Micro-benchmark runner. Has sbt plugin (sbt-jmh)
To find root of some performance problems it’s useful to look at bytecode and even assembler code.
Tools here: JMH perfasm profiler, javap

Measurements

1) Pattern matching
Simple ADT match equals in speed to if-clause sequence.
Null-checks are much faster than Option-matching. In general, because of type erasure, pattern matching on parametrized types is slower (but it’s a fair price for such a feature).

2) Tail recursion
Basically is as fast as loops

3) Collections
Fold and map combinators have significant overhead for big arrays, and especially for primitives as elements.
That’s partly because HotSpot optimization heuristics are shaped for Java.

The problem for primitives is boxing. Scala collections are generics, so specialization doesn’t work for them.

There is an alternative collection library, called Debox, which has specialized Buffer, Set and Map.

Conclusion

  • Scala is slow:
    • it’s easy to write beautiful, but laggy code
    • collections are super-slow with primitives
    • scalac can generate strange code
  • Scala is fast:
    • with good internals knowledge, beautiful code can work as fast as java code
    • with a few hacks you can make collections be friends with primitives
    • JVM can optimize strange scalac generated code