Paul Rayner

| Comments

Domain-Driven Design in Ruby at DDD Exchange 2013 in London

Last week, while wandering the National Gallery in London, I came across Leonardo da Vinci’s drawing, The Virgin and Child with St Anne and St John the Baptist. It’s also known as The Burlington House Cartoon because drawings of this sort were usually transferred to a board for painting by pricking or incising the outline. With this cartoon, this has not been done, suggesting that the drawing has been kept as a work of art in its own right.

I see a sample app as functioning like this cartoon. It’s not a complete system, and is not intended to be prescriptive in any way. Rather, it is an along-the-way artifact created to learn. It’s a point-in-time snapshot of a much deeper, largely invisible, learning process, and thus is incomplete in that way too. When the sample app is done, it can function as a teaching tool, as a guide on the way to building something real. As a conversation starter and tradeoff clarifier.

Last Friday I presented at the DDD Exchange 2013, hosted by Skills Matter in London, on what I’ve learned recently exploring domain-driven design (DDD) in Ruby. The way I approached this exploration was to start porting the DDD sample app from Java and C# to Ruby. I wanted to do this because it would give me an opportunity to go much deeper in Ruby than ever before, while applying DDD concepts and techniques I was familiar with in some unfamiliar ways using new tools.

I started the port to Ruby back in early May, and presented my early findings to an encouraging audience at DDD Denver on May 13. At that point I had only the domain model objects and some of the RSpec tests in place. Much of my time had been taken up with investigating how best to implement value objects, different possible approaches for the UI, and how to tackle enabling eventual consistency between aggregates. I had only begun to work out how to handle persistence with MongoDB.

When I gave the DDD Denver presentation, I was very nervous about presenting such an incomplete effort in public. But I found everyone to be very supportive and it inspired me to keep going. In the next four weeks I was able to solve all the big issues and prepare a presentation for DDD Exchange 2013. If you are interested, Skills Matter did an excellent job of recording the presentation and getting it online. See below for links to resources.

My hope is that this - currently very unfinished and unpolished - effort sparks interesting conversations about options, tradeoffs and possibilities, and helps others get to grips with the details of how to make DDD real on their projects.

Here’s a list of resources related to my presentation:

A big thank you again to Skills Matter for hosting such a wonderful event and making it such a special day (for both me and my son).


And I just have to include this one…


Succeeding With DDD - Documentation

I’m often asked about what teams doing Domain-Driven Design (DDD) should do in the way of documentation. The question “What types of Written Design Documents are used in DDD projects?” came up on Stack Overflow and I started to write a response, but realized it was getting way too long to post there. So here it is.

When it comes to documentation, we need to begin with the end in mind. We need to understand why we are writing it in the first place: What purpose is each document intending to serve? The problem with a lot of documentation is that it is seen as an end in itself, rather than a means to an end, which is to deliver a quality product that meets an important customer need. This is why agile teams adopt the value of “working software over comprehensive documentation.”

However, documentation serves a number of important, and different, purposes. For each documentation artifact, ask: “Is this artifact to support the team now as it develops the software, or is it to support future development?” Depending on the answer to this question, approach the documentation in a different way. Let’s start with supporting future development.

The Problem of Tribal Mythology

Jason Smith in Elemental Design Patterns says the following about kinds of documentation supporting future development:

We know we should document our software; we know we should keep it up to date; we know we should commit to pen or screen the whys, the hows, and the reasons; but we also know it is a pain. It really is, so we don’t do it. What we have instead is a body of knowledge that is locked within the heads of developers, that is passed along in fits and spurts, when prompted and only where necessary, frequently without any comprehensive framework of common understanding among stakeholders.

As Jason points out, Grady Booch has popularized the phrase “tribal knowledge” for this kind of information artifact. Documenting for the future preserves the oral tradition by encoding knowledge that already exists. It supports the later transmission, socializing and sustainability of the “tribal knowledge” of the team.

So one type of documentation we create supports future development by preserving the oral tradition that teams develop along with the software. Without this kind of documentation, “…the collected tribal knowledge degrades into ‘tribal mythology’” (Booch). When this happens, no one really knows how the system ended up the way it has, and the knowledge is lost.

This kind of supporting, future-facing documentation is particularly relevant where such knowledge is not immediately apparent by reading the code, supporting tests and other artifacts. Such documentation is typically written after features/modules are implemented/delivered. It can be produced as the software is being built, but then there is the additional maintenance cost of keeping it up-to-date as things change.

Preserving Tribal Wisdom

So we want to avoid tribal mythology by documenting our systems as necessary. We want to capture and preserve, for those who come after us, the “tribal wisdom” that has been gained in the rough-and-tumble of developing the system. As Jason points out:

Tribal wisdom, however, is the virtuous flip side of this tribal mythology. It is prescribed action with understanding, how accompanied by why, and is adaptable to new environments, new situations, and new problems. It transcends rote copying, and provides illumination through a comprehensive discussion of the reasons behind the action.

At some point in the past, for almost every action or decision in a system, someone knew why it was done that way. Carrying those decisions forward, even the small ones, can be critical. Small decisions accrete into large systems, and small designs build into large designs. By ensuring that we have a strong tradition of knowledge retention that facilitates understanding, we build a tradition of tribal wisdom.

Favor Documenting over Documentation

So we support future development by preserving tribal wisdom through documentation, but what about supporting the team as they develop the product?

In the same sense that agile teams favor planning over following a plan, they tend to favor documenting (as an ongoing, just-in-time activity) over creating a (once-and-for-all) document. And in the same manner that their planning is focused around high-fidelity communication, customer collaboration and team interaction, any documenting they do tends to have the same goals and characteristics.

A plan is only useful until it needs to change, which is why agile teams focus on enabling and responding to change. The intention is the same with any documentation they create in service to building a software solution - it should not be painful, but rather serve the team in better understanding the problem space, and helping the team grasp what the solution needs to look like. Let’s look at some important characteristics of this style of documentation:

Characteristics of Useful Documentation


Trustworthy

This should go without saying but, like comments in code, much of the documentation that exists cannot be trusted. If you have documents that are supporting your development, make them living documents by keeping them up to date. They must be correct. They must speak the truth about the software and the business domain.


Malleable

Part of keeping documents trustworthy is enabling change. Documentation must be malleable - make it as easy to change as possible. Reduce the friction of having to change it. Documentation that is burdensome to change is less likely to be kept up-to-date.

Making it malleable typically means making it as lightweight and informal as possible. Prefer hand-drawn diagrams over diagrams created in a tool (such as Visio), and prefer electronic over hard-copy. Only include the pertinent details. Indicate which things are tentative, and which may be harder to change.

The important thing is to understand the purpose of each document, and ensure that it is kept up to date. As much as possible, push the knowledge into the code and the tests.


Accessible

Documentation must be as accessible as necessary. Things the team is working on right now I would expect to see on the walls of the team area. Just as many teams use information radiators such as burndown charts and task boards to track their delivery progress, I like to see sketches of design diagrams on the walls too.

I like to see a context map on the wall, showing the terrain the team is dealing with. I’ve worked with many teams that were not co-located, so we would put the documents in shared folders, and on the wiki. Sometimes we would sketch on a whiteboard, and then take a photo of the diagram and put it on the team wiki.

Don’t let your wiki fall prey to the Tragedy of the Commons. Appoint a curator for your documents if necessary. But strive for team-ownership of the documentation, just as you strive for team ownership of the code.

Documentation and Doing DDD

DDD teams often find they have a leg-up with documentation, because they devote so much effort to distilling domain knowledge into the software itself via the domain model. Teams doing DDD are focused on capturing the essence of the critical concepts of the core domain in the domain model itself. With DDD the rules, reasoning, assumptions and key business concepts are embedded in the software.
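To make this concrete, here is a minimal Ruby sketch of a business rule embedded in the domain model. The Voyage class and its 10% overbooking policy are invented for illustration (not taken from any particular project); the point is that the code itself documents the domain:

```ruby
# Hypothetical shipping-domain model: the overbooking policy is a business
# rule expressed in the ubiquitous language, rather than buried in UI or
# persistence code.
class Voyage
  attr_reader :capacity, :booked

  def initialize(capacity:)
    @capacity = capacity
    @booked = 0
  end

  # Business rule: bookings may exceed physical capacity by up to 10%.
  def overbooked_by?(quantity)
    booked + quantity > capacity * 1.1
  end

  def book(quantity)
    raise 'overbooking policy violated' if overbooked_by?(quantity)
    @booked += quantity
  end
end
```

A reader encountering overbooked_by? learns the policy directly from the model, with no separate document to consult (or let go stale).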

When I start with a team, the first thing we draw together is a context map. This diagram helps set them up for success in terms of knowing what context they are working in, how it relates to their core domain and the other contexts they need to interact with.

For DDD teams, and for software teams in general, the important thing should be not that the domain is documented, it is that it is understood, and that this understanding is shared among everyone connected with developing the software. Good documentation engenders a shared understanding of the business domain. Good documentation for a DDD team captures the essence of the reasoning around the domain model: a rich, expressive software model that enables significant business capabilities in the core domain, supporting the strategic goals of the business. Teams doing DDD accomplish this by simplifying domain complexity enough to provide a shared language and understanding, and embedding it in the code.

DDD is not prescriptive about documentation. What documents are produced usually has more to do with the team’s existing process than doing DDD. However, there are certain kinds of documentation that teams doing DDD do find very helpful. Let’s look at some of these.

Requirements Specification?

Many teams opt for user stories as items in a feature queue, prioritized by value to the business (i.e. “Product Backlog”, in Scrum terms). See my earlier blog post on user stories and DDD.

A team doing DDD could use a requirements specification document. But the trap with heavyweight, detailed specification documents is that they tend to separate design from implementation. As Mary Poppendieck writes:

The theme running through all of my experience is that the long list of things we have come to call requirements – and the large backlog of things we have come to call stories – are actually the design of the system. Even a list of features and functions is design. And in my experience, design is the responsibility of the technical team developing the system.


I suggest we might get better results if we skip writing lists of requirements and building backlogs of stories. Instead, expect the experienced designers, architects, and engineers on the development team to design the system against a set of high-level goals and constraints – with input from and review by business analysts and product managers, as well as users, maintainers, and other stakeholders.

Agile teams tend to eschew producing detailed requirements specifications, preferring a more light-weight approach to describing what the system needs to do. The problem with such documents is that design decisions are made too early, with insufficient domain and technical knowledge, and writing them up in a specification tends to set that ignorance in concrete.

All too often, detailed requirements lists and backlogs of stories are actually bad system design done by amateurs.

The risk in this approach is that:

Separating design from implementation amounts to outsourcing the responsibility for the suitability of the resulting system to people outside the development team. The team members are then in a position of simply doing what they are told to do, rather than being full partners collaborating to create great solutions to problems that they care about.

Most teams I coach are following some form of agile process (Scrum, XP etc.) and thus tend to focus more on rapid feedback loops and incremental development than on producing copious amounts of documentation first. This tends to aid with modeling, as the documentation is produced as-needed, rather than to get through some “gate” in a prescribed SDLC process. To paraphrase Jack Reeves, the code itself is the design.

Some teams find it helpful to develop a list of use cases, a list of tasks the program is able to perform or some combination of both. I would experiment with what you find most useful for your team. Use cases have fallen out of vogue recently, but I am still a big fan of them.

Note that I am not against specifying requirements in written form, but rather entombing those requirements (i.e. what features the system should provide to meet the customer’s needs) in a large tome that locks-in the details of how the system should behave. I have utilized use cases in a lightweight, just-in-time way and found them very useful. See Alistair Cockburn’s article on Why I still use use cases for similar reasons to mine.

I would also strongly recommend using mockups and prototypes as much as possible.

Core Elements

I typically create a short document that captures the core domain vision statement and the context map.


Architecture

Architecture is largely orthogonal to, but supportive of, DDD. I find the “4+1 architecture” approach to be the most useful. It is useful to keep in mind that, as Grady Booch declared in 2009, architecture is a shared hallucination:

Architecture is just a collective hunch, a shared hallucination, an assertion by a set of stakeholders on the nature of their observable world, be it a world that is or a world as they wish it to be. Architecture therefore serves as a means of anchoring an extended set of stakeholders to a common vision of that world, a vision around which they may rally, to which they are led, and for which they work collectively to make manifest.

Notice that in Kruchten’s approach, scenarios are the unifying element. Reference scenarios are a more specific form of this. See my presentation on domain scenarios at the DDD Exchange 2012 for a walkthrough of using reference scenarios. In DDD, reference scenarios describe the key business problems that the model needs to solve.

Reference scenarios capture the core domain business capabilities that the software, and in particular the domain model, will enable. They often take the form of a short narrative with a supporting diagram. They may not start out that way, but the key is to capture the significant details that make the problem worth solving for the business.

George Fairbanks’ book, Just Enough Software Architecture, is the best book I’ve found on characterizing, describing and documenting software architectures. I love the pragmatic, risk-driven approach to architecture that this book takes (the sections on modeling alone are excellent, though it defines DDD too narrowly for my taste). If you are looking for something more comprehensive in the software engineering tradition, then it’s hard to beat the definitive tome: Documenting Software Architectures.

Ubiquitous language

It can be helpful to have a document that explains the Ubiquitous Language. Many teams develop a dictionary of significant business terms early on, and for a team with a business analyst this can be a very significant contribution. However, the same caveats mentioned above relating to separating design from implementation are particularly relevant:

In most software development processes I have encountered, a business analyst or product owner has been assigned the job of writing the requirements or stories or use cases which constitute the design of the system. Quite frankly, people in these roles often lack the training and experience to do good system design, to propose alternative designs and weigh their trade-offs, to examine implementation details and modify the design as the system is being developed.

So as with all the documents described here, the dictionary must be kept up to date to be useful. Such a dictionary can be an important start, but it shouldn’t be the end. I like to see it developed into a document that has diagrams showing important states of the model, and how the terminology of the domain model is used.

As terms change over time, such a document can be a good place to explain why these changes in language were made, since that kind of historical information won’t be obvious by looking at the code etc.

Informal UML diagrams

I am always sketching UML diagrams on whiteboards. It saddens me that many teams don’t see the value in this. I find instance diagrams particularly useful in walking through scenarios with domain experts. I find that when the domain experts see the concrete, pertinent business data values in the “little boxes” in the diagram, it really helps with understanding what the model is expressing.

Many times when I work with a team that has an existing model, one of the first things I will have the developers do is walk me and the domain expert through a reference scenario on the whiteboard, explaining how the model supports solving the important business problem. This activity alone is often enough to show strengths and weaknesses of the domain model. Instance diagrams also really help with understanding aggregate boundaries, since aggregates are runtime artifacts.

Sequence diagrams can be very helpful for understanding the application flow from the UI, API, or context boundary down to the domain model. And also in understanding interactions between sagas, objects, domain services or aggregates (such as via application services or other infrastructure responsible for eventual consistency between aggregates).

To create electronic versions I often use light-weight UML sketch tools such as Web Sequence Diagrams and yUML. I like the way these tools produce diagrams that look hand-drawn, which lends them towards being viewed as transient and gives the team permission to change them. One of the problems with producing high-quality UML diagrams is that it tends to communicate that they are “done” and finished, and shouldn’t be changed.

Anything else?

I’m a big fan of using a BDD tool such as Cucumber to create living documentation for the system, if the team has the skills and experience with such a tool. For example, feature files can help support the ubiquitous language, expressing the underlying conceptual model represented in the domain model in business terms.
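As a hypothetical illustration (the cargo-booking wording below is invented, not taken from a real project), a Cucumber feature file expresses a scenario purely in the language of the business:

```gherkin
Feature: Booking cargo onto a voyage
  Scenario: Booking within the overbooking policy
    Given a voyage with capacity for 100 containers
    And 60 containers already booked on the voyage
    When I book 45 more containers onto the voyage
    Then the booking is confirmed under the overbooking policy
```

The step definitions that automate these steps live in separate Ruby files, which is where the technical detail belongs.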

I’m biased towards Cucumber as a tool because I like the way the separation of steps between feature files and step definitions encourages the separation of the ubiquitous language from the technical implementation. The business terminology goes in the feature files, and should be refactored as the ubiquitous language is refined over time.

I am co-authoring the book BDD with Cucumber for Pearson/Addison Wesley. The book will cover doing BDD using Cucumber (Ruby), Cucumber-JVM and SpecFlow.

But it’s not the tool that’s most important; the same thing could be done with other acceptance testing frameworks such as Concordion, FitNesse or Robot Framework. There’s an interesting discussion going on right now on the Agile Alliance Functional Testing Tools (AA-FTT) mailing list about these frameworks and the various tradeoffs they provide. The important thing is the improvement I see in communication and collaboration when teams use these tools to refine acceptance criteria for user stories.

Standalone vs. Combined Documents

No preference for this. Most teams work this kind of thing out on their own over time. I’m not even sure what the factors are for deciding whether to combine documents or not. My preference is to keep documents short and focused. I find they are more likely to be read and used if they are concise and cohesive - maybe principles of good software module design could be pertinent in structuring documents too.

My preference is for diagrams surrounded by text. If a picture is worth a thousand words, supporting text that explains the critical aspects of the diagram is a multiplier for this in terms of utility.

Respect Your Audience

Finally, and most importantly, when writing any software documentation consider your audience. Will the readers be coders? testers? domain experts? all of the above? Is this technical documentation, or business-facing documentation? How you answer these questions should factor strongly in terms of what kinds of information you include in the document, particularly how much technical detail you incorporate.

There’s probably a lot of things I’ve missed here. What has been your experience with doing DDD in terms of documentation?


Agile User Stories and Domain-Driven Design (DDD)

On Monday night at our DDD Denver meetup we ended up having a valuable and lively group discussion using a modified “Lean Coffee” format. The four questions we covered (in order) were:

  1. Where to start in developing a domain model?
  2. What is the biggest hurdle for a team adopting DDD?
  3. What is the intersection of DDD & agile user stories?
  4. Techniques for implementing DDD across geographically dispersed teams

As we discussed the intersection of DDD and user stories, I mentioned a quick reference guide that I have used for my own coaching and training over the years. There seemed to be a lot of interest in having me share the resource more widely.

So I am now making my “Stories for Design and Delivery” reference freely and publicly available.

This double-sided quick-reference provides a wealth of distilled content about how to integrate stories and design, including making decisions about splitting based on business subdomain.

Click on either thumbnail below to download the full-size PDF version.




I put the first version of the guide together back in 2009 as a quick reference guide for Mike Cohn’s User Stories Applied. I needed an easier way to get the whole team to understand user stories without forcing them all to read the book (as good as it is, this wasn’t going to happen). I printed and laminated a bunch of copies and distributed them to the team.

Iteration two was several years ago when Richard Lawrence published his excellent Patterns for Splitting User Stories post (also referenced as a footnote on the guide’s second page). At that time I incorporated Richard’s material into the second page, greatly improving the guidance around splitting stories.

I’ve used this guide many times over the years in my classes and coaching, teaching teams how to collaboratively and creatively decompose their functionality into manageable increments.

Iteration three was almost a year ago, when I decided to move away from the conventional agile community’s terminology and emphasis on process, and focus on how stories can support design, rather than fragment it. I tried to approach it first and foremost as a DDD practitioner, concerned about putting design first. So I incorporated my current understanding of how domain modeling, tactical design practices and strategic design (i.e. mainly subdomain distillation) fit with how most teams manage their work items.

Note: I’ve deliberately defied convention by not calling them user stories. Stories - as I conceive of them - may relate directly to customers, users, stakeholders and even predominantly technical considerations, not just end users. Some heavily design-focused stories, such as building an anti-corruption layer in front of a back-end system, might only be exposed to users tangentially via seemingly unrelated functionality (from their perspective).


Just recently, Richard has reworked his story splitting guide into an excellent flow chart: How to Split a User Story, which I highly recommend as a more process-focused complement to mine.

I hope this is as helpful to others as it has been to me. Let me know in the comments if you do find this useful. And please let me know any ways I might improve the quick reference guide.

The document is designed to be printed double-sided. I recommend laminating your copies before you hand them out so they last longer and are less likely to get lost in a pile of paper.


Word Document to Asciidoc Conversion

I had content in Word documents that I needed to convert to Asciidoc for our book. Here are the steps I found to work best:

  1. Save Word doc as HTML
  2. Encode as UTF-8
  3. Use pandoc to convert from HTML to AsciiDoc
  4. Use Sublime Text 2 search and replace (using some regular expressions) to strip out crazy things
  5. Use Sublime Text 2 to perform any remaining formatting

Save Word doc as HTML

Open the document in Word, and then save as a web page. Select the “Save only Display Information into HTML” option when saving. Exit from Word (and wave it goodbye as you do!).

Encode as UTF-8

Open the html file in Sublime Text 2. Avert your eyes at the horror that is Word-formatted HTML. Reopen with encoding UTF-8 and save the file:

"Sublime Text 2 Reopen with Encoding"

If I don’t recode as UTF-8, then the next step will fail with the error:

pandoc: Cannot decode byte '\x6f': Data.Text.Encoding.decodeUtf8: Invalid UTF-8 stream
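The re-encoding step can also be scripted. Here is a small Ruby sketch (assuming the Word export is in Windows-1252, Word’s usual Western encoding) that rewrites the file as UTF-8:

```ruby
# Re-read the exported HTML in Word's typical Windows-1252 encoding and
# rewrite it as UTF-8, so pandoc doesn't choke on an invalid byte stream.
def reencode_as_utf8(path, source_encoding = 'Windows-1252')
  html = File.read(path, encoding: source_encoding)
  File.write(path, html.encode('UTF-8'))
end

# e.g. reencode_as_utf8('ConventionSheet.htm')
```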

Use Pandoc to convert from HTML to AsciiDoc

Run pandoc. For example, the following command takes ConventionSheet.htm and converts it to the AsciiDoc file file.asc:

pandoc -f html -t asciidoc -o file.asc ConventionSheet.htm

Use Sublime Text 2 search and replace (using some regular expressions) to strip out crazy things

Weird single quotes need to go:

"Sublime Text 2 Replace backtick with single quote"

If you had reviewing turned on in Word, then reviewer comments and changes will likely be present in the HTML. Remove these using a search and replace with the following Regex in the search field:


When matched lines cross line breaks then you can use the single line option (?s) in your regex for search and replace:


Use Sublime Text 2 to perform any remaining AsciiDoc formatting

Monospace any regex or other special characters (these will cause problems for the AsciiDoc parser) in the document.

Edit the AsciiDoc document as you wish! Note that GitHub now natively displays AsciiDoc files (using AsciiDoctor behind the scenes), just as it does for Markdown.


Colors When Viewing Folders in Terminal

Saw directory listing coloring at Golden Ruby Users Group this week, and needed to have it!



Customize Your Colors

The values in LSCOLORS are codes corresponding to different colors for different types of files. The letter you use indicates which color to use, and the position in the string indicates what type of file should be that color. The colors are specified in pairs – a foreground color followed by a background color. Here is a list of color values:

  • a = black
  • b = red
  • c = green
  • d = brown
  • e = blue
  • f = magenta
  • g = cyan
  • h = grey
  • A = dark grey
  • B = bold red
  • C = bold green
  • D = yellow
  • E = bold blue
  • F = bold magenta
  • G = bold cyan
  • H = white
  • x = default

And here is a list of the positions in LSCOLORS:

  • directory
  • symbolic link
  • socket
  • pipe
  • executable
  • block device
  • character device
  • executable with setuid set
  • executable with setgid set
  • directory writable by others, with sticky bit
  • directory writable by others, without sticky bit

Colors for Dark Terminal Themes:

export CLICOLOR=1
export LSCOLORS=GxFxCxDxBxegedabagaced
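As a cross-check on the tables above, here is a small Ruby sketch that decodes an LSCOLORS string into readable foreground/background pairs (the color table mirrors the list above):

```ruby
# Color codes used by BSD/macOS ls, as listed above.
COLORS = {
  'a' => 'black', 'b' => 'red', 'c' => 'green', 'd' => 'brown',
  'e' => 'blue', 'f' => 'magenta', 'g' => 'cyan', 'h' => 'grey',
  'A' => 'dark grey', 'B' => 'bold red', 'C' => 'bold green',
  'D' => 'yellow', 'E' => 'bold blue', 'F' => 'bold magenta',
  'G' => 'bold cyan', 'H' => 'white', 'x' => 'default'
}.freeze

# File types, in the order their color pairs appear in the string.
POSITIONS = [
  'directory', 'symbolic link', 'socket', 'pipe', 'executable',
  'block device', 'character device', 'executable with setuid set',
  'executable with setgid set',
  'directory writable by others, with sticky bit',
  'directory writable by others, without sticky bit'
].freeze

# Each pair of characters is foreground then background, one pair per type.
def decode(lscolors)
  lscolors.chars.each_slice(2).zip(POSITIONS).map do |(fg, bg), position|
    "#{position}: #{COLORS[fg]} on #{COLORS[bg]}"
  end
end

puts decode('GxFxCxDxBxegedabagaced')
```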


Array Slicing in Ruby

I’ve found the Ruby Koans to be brilliant for exposing a learner to aspects of the language that are not obvious, or even weird, at first glance.

Indexing Ruby Arrays

For example, let’s look at a koan for accessing array elements:

def test_accessing_array_elements
  array = [:peanut, :butter, :and, :jelly]

  assert_equal :peanut, array[0]
  assert_equal :peanut, array.first
  assert_equal :jelly, array[3]
  assert_equal :jelly, array.last
  assert_equal :jelly, array[-1]
  assert_equal :butter, array[-3]
end

This is my first time seeing negative array references in any language. I was able to surmise (correctly) that they refer to entries counting backwards from the end of the array.

A negative index is assumed to be relative to the end of the array—that is, an index of -1 indicates the last element of the array, -2 is the next to last element in the array, and so on.

According to the Core API docs, indexing an array can also give us nil:

ary[index] → obj or nil

Getting a nil would seem to be the likely behavior if we try to index beyond the boundary of the array. Let’s try it:

> array[4]
=> nil 

As expected, we get nil.

So far, so good. Indexing seems to work in a way mostly familiar from past experience in other languages.

Slicing Ruby Arrays

Now let’s try slicing, not indexing, arrays. The call, according to the Core API docs, is of the form:

ary[start, length] → new_ary or nil

So the array[s, n] syntax means: retrieve n elements from the array starting at position s, unless there is some reason to return nil.

Let’s use the same array as before, entering it in IRB:

> array = [:peanut, :butter, :and, :jelly]
 => [:peanut, :butter, :and, :jelly] 

Let’s get the first array element:

> array[0, 1]
=> [:peanut] 

Which says, get me the relevant slice of the array starting at position zero, with a length of one. No difficulties so far.

If you slice from any in-range starting position with a length of n=0, you will get [] as a result.
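For example:

```ruby
array = [:peanut, :butter, :and, :jelly]

# A zero-length slice at any in-range starting position returns an empty array:
array[0, 0]   # => []
array[2, 0]   # => []
```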

Now, let’s try slicing (instead of accessing via index) beyond the end of the array:

> array[5,0]
=> nil

> array[6,10]
=> nil

No matter what starting point of 5 or greater we try, or what length we specify, we get nil. Once again, straightforward and expected behavior.

Here’s where it got a little weird for me:

> array[4,0]
=> []

> array[4,1]
=> []

> array[4,100]
=> []

When we specify a starting point of 4, we get an empty array, regardless of how many elements we request. The semantics are subtly different at this boundary point. According to the Core API docs, it’s a special case.

The issue arises because I’m used to thinking about accessing arrays, but this is slicing. The way to think about slicing needs to be different. As a response to this question on Stack Overflow points out, when slicing, treat the first number not as identifying an element, but as identifying the places between elements, so that it defines spans (and not the elements themselves):

  :peanut   :butter   :and   :jelly
0         1         2      3        4

What this means is that 4 is still within the array, from a slicing perspective: if you request 0 elements, you get the empty end of the array. But since there is no position 5, it is outside the bounds of the array, so you can’t slice from there. Indexing, of course, refers to the elements themselves.
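A short sketch that contrasts the boundary cases directly:

```ruby
array = [:peanut, :butter, :and, :jelly]

# Position 4 is the final "fence post", so slicing from it is legal...
p array[4, 0]    # => []   the empty end of the array
p array[4, 100]  # => []   the requested length doesn't matter here

# ...but there is no position 5 to slice from, and no element at index 4
p array[5, 0]    # => nil
p array[4]       # => nil
```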

One final example, using assignment:

> t = 'hi'
> t[0, 0] = '('
> t[3, 0] = ')'
> t
=> "(hi)"
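The same trick works on arrays: assigning to a zero-length slice inserts elements at that position without replacing anything (a minimal sketch):

```ruby
a = [:peanut, :jelly]

# Insert two elements at position 1; the zero length means
# nothing is overwritten
a[1, 0] = [:butter, :and]
p a  # => [:peanut, :butter, :and, :jelly]
```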

In Ruby Koans these are the tests that highlight the differences:

def test_slicing_arrays
  array = [:peanut, :butter, :and, :jelly]

  assert_equal [:peanut], array[0, 1]
  assert_equal [:peanut, :butter], array[0, 2]
  assert_equal [:and, :jelly], array[2, 2]
  assert_equal [:and, :jelly], array[2, 20]
  assert_equal [], array[4, 0]
  assert_equal [], array[4, 100]
  assert_equal nil, array[5, 0]
end

Thanks also to the My Brainstormings blog for additional help with understanding how arrays work in Ruby.


Object Ids in Ruby

In my effort to master Ruby this year, I started this morning working through Ruby Koans. I just completed these tests and was intrigued by the comment in the second koan:

def test_some_system_objects_always_have_the_same_id
  assert_equal 0, false.object_id
  assert_equal 2, true.object_id
  assert_equal 4, nil.object_id
end

def test_small_integers_have_fixed_ids
  assert_equal 1, 0.object_id
  assert_equal 3, 1.object_id
  assert_equal 5, 2.object_id
  assert_equal 201, 100.object_id

  # What pattern do the object IDs for small integers follow?
end

To put it another way:

>> (0..50).each { |i| print i.object_id, ' ' }

What would you expect to see as output? You can find the answer in Fixed Object Id for System Objects and Small Integers in Ruby.

But why does this happen? I did a little digging and found two 2006 articles by Caleb Tennis: The Ruby VALUE and Ruby Values and object_ids. In the first article he points out that:

The first point of interest is the VALUE - Ruby’s internal representation of its objects. In the general sense, a VALUE is just a C pointer to a Ruby object data type. We use VALUEs in the C code like we would use objects in the Ruby code.

…for performance purposes, Ruby doesn’t use the VALUE as a pointer in every instance. For Fixnums, Ruby stores the number value directly in the VALUE itself. That keeps us from having to keep a lookup table of every possible Fixnum in the system.

There is also a good Stack Overflow answer to the question of how object_id works in Ruby:

In MRI the object_id of an object is the same as the VALUE that represents the object on the C level. For most kinds of objects this VALUE is a pointer to a location in memory where the actual object data is stored. Obviously this will be different during multiple runs because it only depends on where the system decided to allocate the memory, not on any property of the object itself.

However for performance reasons true, false, nil and Fixnums are handled specially. For these objects there isn’t actually a struct with the object’s data in memory. All of the object’s data is encoded in the VALUE itself. As you already figured out the values for false, true, nil and any Fixnum i, are 0, 2, 4 and i*2+1 respectively.

The reason that this works is that on any systems that MRI runs on, 0, 2, 4 and i*2+1 are never valid addresses for an object on the heap, so there’s no overlap with pointers to object data.
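The Fixnum pattern is easy to verify in MRI. (Note: the specific ids the koans assert for nil, true and false come from the MRI of that era; later versions use different values for those three, but the integer pattern below is stable.)

```ruby
# In MRI, a small integer's value is encoded directly in its VALUE,
# so object_id follows the pattern i * 2 + 1
(0..4).each do |i|
  raise "unexpected object_id" unless i.object_id == i * 2 + 1
end

p 100.object_id  # => 201
```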

I highly recommend using a koan-based approach to learning the details of a new language. There’s a good list of links for various languages in the “Koan-a-Copia!” article by Nola Stowe.


Blogging With Octopress and Github Pages

Why Octopress?

Here are 4 good reasons from AlBlue’s blog to consider using Octopress for a technical blog:

  • Jekyll-based
  • Markdown content
  • Stylish
  • Plugins

See also Octopress Is Pretty Great, which has a great description of not only the positives of Octopress, but also a good step-by-step summary of how to configure Octopress for your environment.

The Jekyll-style approach, in which content is written in an author-friendly text format (i.e. Markdown) and then translated to HTML and served as static pages, is gaining more and more momentum over the more common CMS-style approach. Tom Preston-Werner wrote Jekyll back in 2008, and it is now used to serve content on Github Pages.

With my previous blog I had been using BlogEngine.NET, which is a nice full-featured .NET-based blogging engine that didn’t require me to install and configure SQL Server on my virtual host. I generally liked BlogEngine.NET, but found the authoring experience tedious, and the updates a hassle.

In writing blog posts on my MBP I struggled with using OSX-based HTML editors, finding they only got in the way of the writing process. I really wanted to move to tools I am either more comfortable with already, or interested in mastering: Ruby, Rake, SASS, Jekyll, Markdown, Sublime Text 2, Byword, Marked, and Git. As Joel Hooks says on his “Fresh Start” blog post:

[Octopress]…falls well into the breakable toy category of things, and that is something I can use right now as I learn new tools. I’m looking forward to improving this space with quality content about modern standards-based web development with open source tools.

The Jekyll-based toolset really suits my workflow. I can write my posts in the Markdown lightweight plain text format with Sublime Text 2 or Byword, manage all my changes in Git with full support for lightweight branching and additive changes, preview the result with rake preview, and serve it all up with rake deploy.

Potential Downside?

To be fair, AlBlue’s blog also lists some possible disadvantages of Octopress. Part of what he mentions is the lack of separation between content and plumbing, in that there are really five sets of things to manage:

  1. The source posts (in Markdown format)
  2. The layout and supporting scaffolding (in HTML/Liquid templates)
  3. The plugins that tell Jekyll how to process the Liquid templates
  4. The Octopress supporting management code
  5. The published HTML

His point is that the first two items (the source posts, layout and templates) should be in source control, but the:

plugins and octopress management code really need to live in a different Git repository, though, so that they can be upgraded independently.

This makes a lot of sense, but I don’t feel bothered by this at the moment, and fully expect the gap here to be addressed sometime in the near future.

Getting a new theme installed was certainly a very straightforward exercise.

Getting Started with Octopress

Thanks to the following blog posts for help with getting this set up:

and thanks also for very timely Twitter help from GitHubber Matthew McCullough:

Twitter Conversation with @matthewmccull

More Cool Octopress Resources

In the process of researching Octopress, I stumbled across some other helpful resources, so I’ve put them here in the hope they might be helpful to others too:

Github pages aren’t the only game in town, as @aeroplanesoft pointed out:

Hosting Octopress on Amazon S3

If you’re using rbenv, as I was initially, then start with Blog with Octopress and Github Pages for a good description of working with rbenv.

Customize your Blog!

As Wynn Netherland points out, Octopress Classic is the new Kubrick, so customize your blog. Make it your own. Take a look at some of the Octopress “hidden” features and get creative (I really like the Octopress theme fellow DDD Denver member Leo Gorodinski is using on his blog). Or, if you are front-end design-challenged like me, get an expert like Jordan McCullough to develop a theme for you. Friends don’t let friends stick with the Octopress default template.

This started out as a post about installing Octopress, and became something much more for me. I’ve found since I stopped blogging that I have amassed a huge amount of notes and sites captured in Evernote, but I haven’t been sharing my discoveries back with the wider community. So I’ve resolved to write a blog post instead of stashing away things I find into my own private area, add commentary when I can, and write substantial informational posts of my own from time to time. That way others can benefit from what I’m learning.

Serve the Community

As Scott Hanselman said in Your Blog is the Engine of Community:

I would encourage you all to blog more. Tweet less. Blogs are owned by you. They are easily found, easily linked to, and great conversations happen with great blog posts. The river of social media rushes on and those conversations are long forgotten. A great blog post is forever. Today’s real-time social media is quickly forgotten.

I’ll never even come close to being as prolific as Scott Hanselman or Ayende Rahien (I don’t know how they do it), but I’ll aim for something of a sustainable cadence to my posts. Don’t make the same mistake I’ve been making by stashing useful content and your valuable knowledge in a private location. Own your own content, and don’t be afraid to share it widely so that others can learn.

Don’t be a meme, be a movement. - Scott Hanselman


Book Review: Implementing Domain-Driven Design

This is a review of the book Implementing Domain-Driven Design by Vaughn Vernon, based on the Safari Books Online rough cut edition. The book is also currently available for preorder, with a scheduled release date of February 14, 2013. Rather than try to cover everything, I’ll be focusing on the parts of the book that I found most interesting and helpful: highlighting the things that stood out to me.

I have been a certified Domain Language DDD instructor for over two years now, and the most common question I am asked is where to find solid, pragmatic advice on how to actually implement DDD using the frameworks and tools with which developers are already familiar. The good news is that Implementing Domain-Driven Design more than adequately fills this lacuna. The book is up-to-date, easily comprehensible, and free from dogma, and the advice it gives is firmly grounded in the real-world experience of one of DDD’s foremost practitioners.


Duplicate Entries Using ‘Open With’ on OS X

I noticed today on my MBP running Mountain Lion that using “Open With” (control+click on a file in Finder) showed duplicate entries for the file.

I’m unsure as to what I did to cause this problem, though a recent thread on Apple Support Communities indicates it has something to do with clone images. I recently updated Xcode through the App Store, and now have a duplicate entry for Xcode when I do “Open With,” so perhaps it happens when I update an application.

As one of the responses in the thread suggested, the solution for me was to run the following command in my Terminal window:

/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/Support/lsregister -kill -r -domain local -domain system -domain user

I didn’t need to logout. I just relaunched Finder (control+option+click on Finder icon in the Dock) and the problem went away.