Monday, October 17, 2011

Let's use Event Sourcing on the Cloud: another Amazon project with Whale in the name

So, I'm playing with Event Sourcing, and I cooked up my own implementation, mostly because I want to be able to explain it well to those who ask.

Let's explain in code with a really simple domain that everyone will understand -- bank accounts. You'd better have one!

How does one get a Bank Account? They open it, right? So I made this event.
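The original C# event class isn't reproduced in this post, but here's a minimal sketch of what it might look like, in Python rather than C#, with field names (`account_id`, `initial_deposit`, `opened_on`) that are my guesses, not WhaleES's actual ones:

```python
import datetime
import uuid

class AccountOpened:
    """Event recording that an account was opened, with its opening deposit."""
    def __init__(self, account_id, initial_deposit, opened_on=None):
        self.account_id = account_id
        self.initial_deposit = initial_deposit
        self.opened_on = opened_on or datetime.datetime.now()

event = AccountOpened(uuid.uuid4(), 100)
```

Note that the event is just dumb data: it records what happened and when, and nothing else.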


Now let's make a quick POCO domain object for an account. The ctor will take this event and do some cool stuff with state. It looks like this.
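The C# isn't shown here, so here's the shape of it as a Python sketch; `uncommitted_events` is my stand-in name for the UncommittedEvents property:

```python
class AccountOpened:
    def __init__(self, initial_deposit):
        self.initial_deposit = initial_deposit

class Account:
    """Aggregate root: state is only ever mutated inside Apply."""
    def __init__(self, opened=None):
        self.balance = 0
        self.uncommitted_events = []
        if opened is not None:
            self.apply(opened)  # the ctor just routes the event through Apply

    def apply(self, event, is_replaying=False):
        # Mutate state here, not in the ctor, so a repository can call
        # Apply again later to rebuild the account from its event stream.
        if isinstance(event, AccountOpened):
            self.balance += event.initial_deposit
        if not is_replaying:
            self.uncommitted_events.append(event)
```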

Two interesting things to note here. First, I'm mutating my state in the Apply(AccountOpened) method, not just in the ctor. That's because the method will be called again later, when we get the Account back from a repository. Yeah, it's a bit different, but honestly the benefits WAY outweigh this single "drawback" to me. What benefits, you ask? Glad you did... we'll get there in a minute.
The second difference is that UncommittedEvents object... we'll also get to that in a minute. For now, let's write a quick integration test and watch WhaleES in action. First, we bootstrap WhaleES, like so.

I'm just parsing a text file that has my Amazon info in it, then calling WhaleES's fluent config API to get an instance of Repository.
Next I'll create an account and save it.
Like so.

Kinda boring, huh? Let's go look at the guts of Repository, cause I know that's what you wanna see.

Just a little reflection magic here. I really don't wanna make people reference my library and ruin the POCO-ness of their AR objects, so I just let them configure the name of the member that holds the list of uncommitted events. That's pretty much the convention everyone is using for ES, so I'm sure we'll be OK with that. Anyway, I'm calling that getter, then persisting the events it returns (using the object I blogged about last time).
The Get method uses similar logic. It simply pulls the event stream for the AR with that Id, then uses reflection to call Apply(event, true) for every event, rebuilding state. The extra true parameter is isReplaying, so the AR won't add the event to its UncommittedEvents property. Here it is.
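Since the C# Repository isn't shown, here's the whole save/get round trip as a Python sketch. An in-memory dict stands in for S3, `getattr` stands in for the reflection call, and all names are mine, not WhaleES's:

```python
class AccountOpened:
    def __init__(self, amount):
        self.amount = amount

class Account:
    def __init__(self):
        self.balance = 0
        self.uncommitted_events = []

    def open(self, amount):
        self.apply(AccountOpened(amount))

    def apply(self, event, is_replaying=False):
        self.balance += event.amount
        if not is_replaying:
            self.uncommitted_events.append(event)

class Repository:
    """Persists and replays event streams per aggregate root."""
    def __init__(self, factory, uncommitted_member="uncommitted_events"):
        self._store = {}                    # aggregate id -> list of events
        self._factory = factory             # builds an empty aggregate
        self._member = uncommitted_member   # configurable, keeps the AR a POCO

    def save(self, aggregate_id, aggregate):
        # "Reflection": read the uncommitted-events list by its configured name.
        events = getattr(aggregate, self._member)
        self._store.setdefault(aggregate_id, []).extend(events)
        events.clear()

    def get(self, aggregate_id):
        # Rebuild state by replaying every stored event through Apply,
        # passing is_replaying=True so the AR doesn't re-collect them.
        aggregate = self._factory()
        for event in self._store.get(aggregate_id, []):
            aggregate.apply(event, is_replaying=True)
        return aggregate
```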

Not too horribly complicated, actually. Let's see if it works.

Sweet! The console outputs 100, just like I opened the account with. Let's play with this a little more. Make a couple deposits and withdrawals.
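The deposit/withdrawal code isn't shown in the post, so here's the gist as a Python sketch with made-up event names (`MoneyDeposited`, `MoneyWithdrawn`): each event type gets its own bit of Apply logic, and replaying the full stream rebuilds the same balance.

```python
class AccountOpened:
    def __init__(self, amount):
        self.amount = amount

class MoneyDeposited:
    def __init__(self, amount):
        self.amount = amount

class MoneyWithdrawn:
    def __init__(self, amount):
        self.amount = amount

class Account:
    def __init__(self):
        self.balance = 0
        self.uncommitted_events = []

    def apply(self, event, is_replaying=False):
        # The C# version has one Apply overload per event type;
        # here a simple dispatch does the same job.
        if isinstance(event, (AccountOpened, MoneyDeposited)):
            self.balance += event.amount
        elif isinstance(event, MoneyWithdrawn):
            self.balance -= event.amount
        if not is_replaying:
            self.uncommitted_events.append(event)

# Replaying the stored stream rebuilds state from scratch:
events = [AccountOpened(100), MoneyDeposited(45), MoneyWithdrawn(17)]
account = Account()
for e in events:
    account.apply(e, is_replaying=True)
```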



This works great, test passes, and that proves that we're actually rebuilding state. "So what Elliott! I could do that by just saving state, you twit!" you say.
Yes, 'tis true, my dear friend, you certainly could. However, requirements change. Let's say they did, and now we want a list of all activity that has ever occurred on the account. In good ol' state land, we'd just cross our fingers that when we originally designed the system we had the good sense to save Deposit and Withdraw objects to some persistence -- maybe we did, maybe we didn't. The point is, working this way, it's very easy to add an "ActivityList" property to our Account object even though we never planned for it. Just modify the Apply methods for the appropriate events... like so.
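Continuing the Python sketch (again, with my own made-up event and property names): the only change is that each Apply method now also appends a line of activity, and replaying the existing stream back-fills history we never explicitly stored.

```python
class AccountOpened:
    def __init__(self, amount):
        self.amount = amount

class MoneyDeposited:
    def __init__(self, amount):
        self.amount = amount

class MoneyWithdrawn:
    def __init__(self, amount):
        self.amount = amount

class Account:
    def __init__(self):
        self.balance = 0
        self.activity_list = []  # the new requirement: a running activity log

    def apply(self, event, is_replaying=False):
        # Each Apply now records activity as well as mutating balance.
        if isinstance(event, AccountOpened):
            self.balance += event.amount
            self.activity_list.append(f"Account Opened with ${event.amount}")
        elif isinstance(event, MoneyDeposited):
            self.balance += event.amount
            self.activity_list.append(f"Deposit made ${event.amount}")
        elif isinstance(event, MoneyWithdrawn):
            self.balance -= event.amount
            self.activity_list.append(f"Withdraw ${event.amount}")

# Replay the old stream: the activity list appears retroactively.
account = Account()
for e in [AccountOpened(1000), MoneyDeposited(45), MoneyWithdrawn(17)]:
    account.apply(e, is_replaying=True)
```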

Add a quick test like this.

And we see this....


Account Opened On 10/5/2011 2:35:43 PM with $1000
Deposit made on 10/5/2011 2:35:43 PM $45
Withdraw on 10/5/2011 2:35:43 PM $17

Anyway, I know there's a ton written on this stuff, but I hope this helps someone somewhere.
Feel free to yank the code down from github at https://github.com/elliottohara/WhaleES.

Have fun!

Monday, October 10, 2011

Quick and Dirty Event Source using Amazon S3 and JSON

https://github.com/elliottohara/WhaleES

So yeah, I'm doing it too: writing an event store. Not because I think Jonathan Oliver's isn't good, just because I wanna wrap my head around it. Lemme break down Event Sourcing into as small a nutshell as possible.

Event Sourcing is the storing of a "stream" of events that represents the history of everything that has happened in a system, and using that stream to build state instead of simply storing the state.
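In pseudo-ish Python terms, that nutshell is just a left fold: current state is the result of applying every event in history, in order, to an initial state. (The events here are toy lambdas, purely for illustration.)

```python
from functools import reduce

# Each event is "something that happened" that transforms state.
# Here: open with $100, deposit $45, withdraw $17.
events = [
    lambda balance: balance + 100,
    lambda balance: balance + 45,
    lambda balance: balance - 17,
]

# State is never stored directly; it's the fold of all events so far.
balance = reduce(lambda state, apply_event: apply_event(state), events, 0)
```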

If that doesn't make sense to you, well, that's not the purpose of this blog. Go google it a bit, read some more, then come back. This particular post is about the quick and dirty ES implementation I'm writing. If you're too lazy for Google, here are a few great links.

http://cqrsinfo.com/documents/events-as-storage-mechanism/
http://martinfowler.com/eaaDev/EventSourcing.html
http://codebetter.com/gregyoung/2010/02/20/why-use-event-sourcing/


Here's the idea: events are raised by Aggregate Roots, and I'll create one stream for each AR. I'll serialize the stream of events (for now using JSON) to an Amazon S3 bucket with a key of the type name of the AR. Seems pretty simple to me... let's give it a whirl.

First, I'll create some test dummies for my AR and an Event. Like so

I think I'll make an EventSource&lt;T&gt; object where T is the AR type... Here's a quick integration test -- starting with actual integration because, well, I don't wanna get bit by Amazon quirks that I'd miss in a unit test.

Not really a test -- I'll just look in S3 for the file for now... like I said, this is quick and dirty! Let's go make this happen. First, we're gonna need a way to tell the serializer what types the actual events are, so we can't just dump them all in a file and persist it. I created a quick little EventEnvelope class that wraps that. Like so.
We'll just serialize a list of these to S3. OK, let's do this.
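The envelope class isn't shown here, so this is a Python sketch of the idea (property names are mine): the envelope carries the event's type name alongside its data, and the list of envelopes serialized to JSON is the blob that would be PUT to S3.

```python
import json

class TestEvent:
    def __init__(self, What):
        self.What = What

class EventEnvelope:
    """Wraps an event with its type name so it can be deserialized later."""
    def __init__(self, event):
        self.event_type = type(event).__name__
        self.payload = vars(event)  # the event's fields as a plain dict

# Serialize a list of envelopes -- this is what would land in the S3 bucket.
envelopes = [EventEnvelope(TestEvent("Blah"))]
blob = json.dumps(
    [{"EventType": e.event_type, "Payload": e.payload} for e in envelopes]
)
```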
Looks pretty simple... Run the test. No exceptions, and yep, I see a file there.

The contents look like so...

Cool, looks OK so far. Now let's make that GetEventStream method work. I'll add a test that just writes out the TestEvent.What value I put up there.

Pretty simple... Now let's make it pass.


Pretty simple, huh? We're just calling that ExistingEvents method, which yanks the file from S3 for that Id, then deserializing each envelope back into its event type.
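Here's the read side of the round trip as a Python sketch (the type-name-to-class map and function names are my own illustration, not WhaleES's API):

```python
import json

class TestEvent:
    def __init__(self, What):
        self.What = What

# Map stored type names back to classes so we can rehydrate events.
EVENT_TYPES = {"TestEvent": TestEvent}

def get_event_stream(blob):
    # In the real thing, "ExistingEvents" would fetch this blob from S3
    # by the AR's key; here we take the JSON string directly.
    return [
        EVENT_TYPES[envelope["EventType"]](**envelope["Payload"])
        for envelope in json.loads(blob)
    ]

blob = '[{"EventType": "TestEvent", "Payload": {"What": "Blah"}}]'
stream = get_event_stream(blob)
```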
Run the test, and yep, I see "Blah" in my console.

OK, so I'm not sure if this code will actually get used, because Jonathan Oliver did some really great ES stuff and he's working on S3/SimpleDB implementations now; however, I really wanted a quick implementation for some POC stuff, and I was surprised at the simplicity of this. Next up, let's see if I can write something to rebuild AR state off a stream.

Monday, October 3, 2011

Givin' Amazon cloud services some luv... Why you should consider using Amazon for SOA needs

So the Los Techies Open Spaces event was amazing. I have never been in an environment where I was surrounded by so many people who just "got it". It was just awesome.

There was a recurring theme (besides Fubu and JavaScript) that I found myself gravitating to: Event Sourcing and messaging patterns (surprise, huh?). The last session was on messaging systems, and I was surprised to be kinda the lone voice in the room mentioning Amazon services. I figured I'd make a quick blog post showing the actual pub/sub guts of WhaleBus, and how simple it is to get messaging working on the Amazon cloud.

First of all, go sign up for AWS, it's free to sign up, and the free tier allows 100K SNS and SQS requests and up to 1K email notifications.

Done? Ok cool, let's get to coding.

Let's publish a message to the cloud real quick like. Spin yourself up a project and get AWSSDK.dll (it's on NuGet -- just search for Amazon; it's the first result).

So, first, let's create a "Topic" to publish our messages to. Amazon gives us a web interface to do this, but who wants that? Let's do it programmatically...

Simple enough; I see the topic ARN on my console when I run it, and I can see the new topic in my AWS console.
Next up, let's publish a message to that topic. First, let's just set up an email endpoint through the AWS console (we could do it programmatically just as well, but I wanna publish real quick like).

Creating an email endpoint in the AWS console UI


Now Amazon sends me an email to the address I specified and makes me opt in (otherwise this would be a really cool way to piss people off, huh?). I follow the link in the email, and the subscriber is done.



Simple enough, right?
Now, when I publish a message to this topic, I SHOULD get an email with the message JSON-serialized. Let's try it out.


Run the test, and check my email, and booya!


So, there we go: I just created a topic, then published a message to it. However, I'm having a hard time seeing how useful getting emails of published messages is. I mean, I could just send myself an email with a subject of "Elliott has an awesome blog" and a body of "Hello from the cloud" and get the same result, right?

So let's now go set up a subscriber with a little more value. SNS supports HTTP endpoints, but let's not do that; let's use an SQS queue. Yeah, we can create one through their UI, but let's do it with code.
OK, OK, lots of code there, but it's almost all security stuff that you'll only do once per SQS queue.
Everything from line 21 to 30 is simply setting the security policy that allows SNS to publish messages to the newly created SQS queue.
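That security stuff boils down to attaching a policy document to the queue that lets the topic (and only that topic) send it messages. Here's a sketch of that document built in Python; the ARNs are placeholders, and in the .NET code this JSON ends up as the queue's Policy attribute:

```python
import json

# Placeholder ARNs -- substitute the ones your create calls return.
topic_arn = "arn:aws:sns:us-east-1:123456789012:my-topic"
queue_arn = "arn:aws:sqs:us-east-1:123456789012:my-queue"

# Allow the SNS topic, and only that topic, to send messages to the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }
    ],
}
policy_json = json.dumps(policy)  # set this as the queue's "Policy" attribute
```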

Now let's go publish that message again and write a quick little test that pulls messages from the SQS queue. I run the publish_a_message test and see the email arrive, so I know the message made it to SNS. Now let's write the code to pull from the SQS queue.


All this code is doing is looping for 5 seconds calling ReceiveMessage, then writing the contents of each message to the console. Here's what I see.
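The shape of that loop, sketched in Python with the SQS call swapped out for an injected `receive_message` function (so this runs without AWS):

```python
import time

def drain_queue(receive_message, seconds=5.0):
    """Poll for up to `seconds`, printing each message body that arrives."""
    bodies = []
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        # receive_message() stands in for the SQS ReceiveMessage call;
        # it returns a (possibly empty) batch of message bodies.
        for body in receive_message():
            print(body)
            bodies.append(body)
        time.sleep(0.05)  # brief pause so we don't hammer the queue
    return bodies
```

Using it with a fake queue that hands back one message:

```python
batches = iter([["Hello from the cloud"]])
received = drain_queue(lambda: next(batches, []), seconds=0.3)
```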
So, yeah, it works, and it's simple. So I like it. Whatta ya think?