Still learning MongoDB

Some days ago I wrote a blog post about learning Mongo. Of course I did not stop learning. As a good habit, I wrote down the next things I learned and played around with. That is what this blog post is about: the next steps in the process of learning Mongo. This post mainly focuses on replica sets, object identity, WriteConcern and a bit about sharding.

The case for most of the code used in this blog post is creating an EventStore for the Axon framework.

 

WriteConcern and batch inserts

In my previous blog post I already wrote about WriteConcern and batch inserts, and I gave some example code for the batch insert. I also mentioned the difference between WriteConcern SAFE and REPLICA_SAFE. To overcome the difference between test and production, I played around with an environment setting that changes the WriteConcern from SAFE to REPLICA_SAFE when running in a production environment. That way it is easy to test your application in a test environment and go into production without too much hassle.
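To make the switch concrete, here is a minimal sketch of picking the WriteConcern from such an environment flag. This is my own illustration, not code from the Axon sample; the numeric WriteConcern constructor is used so the sketch does not depend on the exact constant names of your driver version.

import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;
import com.mongodb.WriteConcern;

import java.util.List;

public class EventWriter {
    private final DBCollection domainEvents;

    public EventWriter(Mongo mongo, boolean testContext) {
        this.domainEvents = mongo.getDB("axonframework").getCollection("domainevents");
        // SAFE-like behavior (w=1) against the single test node,
        // REPLICA_SAFE-like behavior (w=2) against the production replica set
        domainEvents.setWriteConcern(testContext ? new WriteConcern(1) : new WriteConcern(2));
    }

    public void appendEvents(List<DBObject> events) {
        // batch insert; the configured WriteConcern applies to the whole batch
        domainEvents.insert(events);
    }
}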

Configuring the Mongo Replica Set or the single instance

To do this switching magic, I used a Spring factory bean. This factory bean creates a Mongo instance. If a test.context parameter is provided with the value true, a single Mongo instance with the default settings is assumed. If not, a replica set of three servers is expected. The values are specified in a property file. The factory bean and the Spring configuration are shown in the next two code blocks. The factory bean is not complete; only the important parts are shown.

import java.util.List;

import com.mongodb.Mongo;
import com.mongodb.ServerAddress;
import org.springframework.beans.factory.FactoryBean;
import org.springframework.beans.factory.InitializingBean;

public class MongoFactory implements FactoryBean<Mongo>, InitializingBean {
    private boolean testContext;
    private List<ServerAddress> mongoAddresses;
    private Mongo mongo;
     
    ...
    @Override
    public void afterPropertiesSet() throws Exception {
        if (testContext) {
            // test environment: one local instance with default settings
            this.mongo = new Mongo();
        } else {
            // production: expect a replica set of at least three servers
            if (mongoAddresses == null || mongoAddresses.size() < 3) {
                throw new IllegalStateException(
                        "Please configure at least 3 instances of Mongo for production.");
            }
            this.mongo = new Mongo(mongoAddresses);
        }
    }
}
<bean id="mongoDb" class="org.axonframework.samples.trader.app.eventstore.mongo.MongoFactory">
    <property name="testContext" value="${test.context}"/>
    <property name="mongoAddresses">
        <list value-type="com.mongodb.ServerAddress">
            <bean class="com.mongodb.ServerAddress">
                <constructor-arg index="0" value="${server1.host}"/>
                <constructor-arg index="1" value="${server1.port}"/>
            </bean>
            <bean class="com.mongodb.ServerAddress">
                <constructor-arg index="0" value="${server2.host}"/>
                <constructor-arg index="1" value="${server2.port}"/>
            </bean>
            <bean class="com.mongodb.ServerAddress">
                <constructor-arg index="0" value="${server3.host}"/>
                <constructor-arg index="1" value="${server3.port}"/>
            </bean>
        </list>
    </property>
</bean>

All nice, but in the end not very flexible. What if we want more than three nodes?

Well, that is not hard. These three nodes are not all the nodes known to the Java driver; they are just a starting point. Using these seed nodes, the driver will discover the other nodes. Only one of them needs to be available when we initialize the Mongo connection. To prove this, I provided only one node to the Java driver. Closing this node does not stop my application from functioning. The following line from the application log shows that a new master is selected for the connection, due to a connection problem with the old master.

paired mode, switching master b/c of: java.io.IOException: couldn't connect to [localhost/10.0.1.7:27017] bc:java.net.ConnectException: Connection refused

Now we create a new document and watch the logs of the master node. We can see that a connection is created.

Sun Sep 26 09:56:28 [initandlisten] connection accepted from 10.0.1.7:49424 #13

As long as the replica set is in a healthy state, the Java driver can use it. At startup, though, at least one of the configured seed servers must be reachable for the driver to discover the complete replica set.
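To illustrate how little is needed, the following sketch connects with a single seed address (host and port are just my local test values):

import com.mongodb.Mongo;
import com.mongodb.ServerAddress;

import java.net.UnknownHostException;
import java.util.Arrays;

public class SingleSeedConnection {
    public static Mongo connect() throws UnknownHostException {
        // One seed is enough: the driver asks this node for the replica set
        // configuration and discovers the other members by itself. The seed
        // does need to be reachable at the moment we connect.
        return new Mongo(Arrays.asList(new ServerAddress("localhost", 27017)));
    }
}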

Majority

Within a replica set, a majority of the members must be available for the set to function normally. If no majority can be found, the set reaches an unworkable state, which is a good reason for the Java driver to fail. In the log of the only server that is still up you find these lines:

Mon Sep 27 09:40:14 [ReplSetHealthPollTask] replSet info localhost:27018 is now down (or slow to respond)
Mon Sep 27 09:40:14 [rs Manager] replSet can't see a majority of the set, relinquishing primary
Mon Sep 27 09:40:14 [rs Manager] replSet info relinquished primary state
Mon Sep 27 09:40:19 [rs Manager] replSet can't see a majority, will not try to elect self

Majority is important to Mongo. If no majority can be found within a set, all nodes remain in secondary state and the set stops accepting writes. So what is this majority thing?

Things became a lot clearer to me after reading the blog post What are Replica Sets?

Becoming the primary node has to do with voting: the node that receives a majority of YES votes becomes the primary. By default each node has one vote. This explains why no majority could be found in the three-server case with two servers down: one vote out of three is not a majority. Also think about the implications of having four servers instead of three: if two go down, the two remaining votes out of four are not more than half, so by default no majority is available.

If you have one big stable server and a few lesser servers, you might want to give the big server more votes. But be careful not to give it too many: if that server holds too large a share of the votes, its failure alone can take the complete set down with a majority problem.
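As a sketch of what that looks like (my own example with made-up host names, sent through the Java driver; the same configuration document also works from the shell), votes is simply a field on the member documents in the replica set configuration:

import java.util.Arrays;

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import com.mongodb.Mongo;

public class VotesConfiguration {
    public static void initiate(Mongo mongo) {
        DBObject big = new BasicDBObject("_id", 0).append("host", "big:27017").append("votes", 2);
        DBObject small1 = new BasicDBObject("_id", 1).append("host", "small1:27017");
        DBObject small2 = new BasicDBObject("_id", 2).append("host", "small2:27017");
        DBObject config = new BasicDBObject("_id", "axon")
                .append("members", Arrays.asList(big, small1, small2));
        // note: 4 votes in total. If the big server goes down, the two small
        // ones hold only 2 of the 4 votes, which is not a majority; exactly
        // the risk described above.
        mongo.getDB("admin").command(new BasicDBObject("replSetInitiate", config));
    }
}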

This is the situation where the arbiter steps in.

The Arbiter

So what if you need just two nodes, in two different data centers? One will be the primary node and the other a secondary. What if the connection between the two temporarily fails? Then your set becomes unusable: no majority can be found, so the set becomes unavailable. This is where an arbiter can help. An arbiter only has a vote; it cannot and will not contain data. It is just a way to make sure one of the servers becomes primary when there would otherwise be no majority. Adding an arbiter is very easy. The following code shows how to add an arbiter and how this looks in the replica set configuration.

> rs.addArb("localhost:27020");
{ "ok" : 1 }
> rs.isMaster()                
{
	"setName" : "axon",
	"ismaster" : true,
	"secondary" : false,
	"hosts" : [
		"localhost:27019",
		"localhost:27018",
		"localhost:27017"
	],
	"arbiters" : [
		"localhost:27020"
	],
	"ok" : 1
}

To see it in action we need a different replica set. Therefore I created one with two servers. The following block shows the status after adding the two servers without an arbiter and taking one down. Then we add an arbiter to the game and do the same thing again.

> cfg={_id:"axon",members : [{_id:0,host:"localhost:27017"},{_id:1,host:"localhost:27018"}]}
> rs.initiate(cfg)
> rs.status()
{
	"set" : "axon",
	"date" : "Mon Sep 27 2010 10:58:56 GMT+0200 (CEST)",
	"myState" : 1,
    ...
    "ok" : 1
}

Close one of the servers. The log of the other server shows:

Mon Sep 27 11:01:43 [conn2] end connection 127.0.0.1:49424
Mon Sep 27 11:01:43 [rs_sync] replSet syncThread: 10278 dbclient error communicating with server
Mon Sep 27 11:01:44 [ReplSetHealthPollTask] replSet info localhost:27017 is now down (or slow to respond)
Mon Sep 27 11:01:44 [rs Manager] replSet can't see a majority, will not try to elect self

Now we start the stopped server again and add an arbiter. After that we close one of the servers again and check the logs.

> rs.status()  
{
	"set" : "axon",
	"date" : "Mon Sep 27 2010 11:10:50 GMT+0200 (CEST)",
	"myState" : 1,
    ...
    "ok" : 1
}

Mon Sep 27 11:10:00 [conn2] end connection 127.0.0.1:49677
Mon Sep 27 11:10:02 [ReplSetHealthPollTask] replSet info localhost:27018 is now down (or slow to respond)
Mon Sep 27 11:10:02 [rs Manager] replSet info electSelf 0
Mon Sep 27 11:10:02 [rs Manager] replSet PRIMARY

Experiment with WriteConcern

Using REPLICA_SAFE in a test environment with only one node is not wise. Therefore my configuration is environment specific: with only one node I use SAFE. The easiest way to test this is to configure the production setup but provide only one node; the write will then wait forever. But what if you do use REPLICA_SAFE and at a certain moment an action breaks? That is harder to test. This is where I took the debugging approach: I paused a write action in a debug session and closed three out of four nodes. Curious what happens? I cannot tell, since shutting down almost all nodes leaves the one available node in a state where no primary is available. It is not possible to get a majority, so no primary node or master is selected. I am looking for other ways to test this, but I did not really find one. If you did, please post a comment.
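One direction that might work, sketched below but not verified: keep the majority alive with an arbiter so the primary stays up, take the data-bearing secondary down, and give the WriteConcern a wtimeout so the driver reports the unreachable replica instead of blocking forever. The collection and document are just placeholders.

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoException;
import com.mongodb.WriteConcern;

public class ReplicaSafeTimeoutExperiment {
    public static void tryWrite(DBCollection domainEvents) {
        // w=2 with a 5 second wtimeout: the write is accepted by the primary,
        // but with the only data-bearing secondary down it can never reach a
        // second server, so the timeout should kick in
        domainEvents.setWriteConcern(new WriteConcern(2, 5000));
        try {
            domainEvents.insert(new BasicDBObject("test", "write"));
        } catch (MongoException e) {
            // depending on the driver version the unmet 'w' surfaces as an
            // exception or in the getLastError result; the wtimeout makes it
            // visible either way
        }
    }
}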

Identification

MongoDB creates a field _id and generates a unique identifier for it. It also creates an index on that field. If you have a good reason to provide your own identifier, you can easily do so by adding a field with the name _id. Be aware that the value of this field must be unique. The following code shows the creation of a DBObject and the result as seen through the Mongo shell. It is an example from the MongoDB implementation of the Axon event store. One could use the aggregateIdentifier as the identifier for Mongo as well. In the sample code I add the UUID of the aggregateIdentifier twice; in the real solution you would of course not do this.

return BasicDBObjectBuilder.start()
        .add("_id", entry.getAggregateIdentifier().toString())
        .add(AGGREGATE_IDENTIFIER, entry.getAggregateIdentifier().toString())
        .add(SEQUENCE_NUMBER, entry.getSequenceNumber())
        .add(TIME_STAMP, entry.getTimeStamp().toString())
        .add(TYPE, entry.getType())
        .add(SERIALIZED_EVENT, entry.getSerializedEvent())
        .get();
{ "_id" : "35934d42-1943-4a75-867a-fccf86757624", "aggregateIdentifier" : "35934d42-1943-4a75-867a-fccf86757624", "sequenceNumber" : NumberLong( 0 ), "timeStamp" : "2010-09-23T18:24:20.013", "type" : "OrderBook", "serializedEvent" : BinData(2,"tgEAADxvcmcuYXhvbmZyYW1ld29yay5zYW1wbGVzLnRyYWRlci5hcHAuYXBpLk9yZGVyQm9va0NyZWF0ZWRFdmVudD48dGltZXN0YW1wPjIwMTAtMDktMjNUMTg6MjQ6MjAuMDEzPC90aW1lc3RhbXA+PGV2ZW50SWRlbnRpZmllcj5kNmU1MWZmOC0wMWRmLTQ5NDktOTBhZC1iMzJjNDg2OTM5N2Q8L2V2ZW50SWRlbnRpZmllcj48c2VxdWVuY2VOdW1iZXI+MDwvc2VxdWVuY2VOdW1iZXI+PGFnZ3JlZ2F0ZUlkZW50aWZpZXI+MzU5MzRkNDItMTk0My00YTc1LTg2N2EtZmNjZjg2NzU3NjI0PC9hZ2dyZWdhdGVJZGVudGlmaWVyPjx0cmFkZUl0ZW1JZGVudGlmaWVyPjVmYjY2MDllLTZhYjEtNGFmNi05YTJmLWU0MjhmZjhjZDQ0ZTwvdHJhZGVJdGVtSWRlbnRpZmllcj48L29yZy5heG9uZnJhbWV3b3JrLnNhbXBsZXMudHJhZGVyLmFwcC5hcGkuT3JkZXJCb29rQ3JlYXRlZEV2ZW50Pg==") }

You can see that the _id and the aggregateIdentifier are the same. This is not the case if we omit the _id field, as the following line shows.

{ "_id" : ObjectId("4c9b813a032447e454de036b"), "aggregateIdentifier" : "cbc79934-8cb6-4062-aa6e-205b3960901e", "sequenceNumber" : NumberLong( 0 ), "timeStamp" : "2010-09-23T18:32:58.738", "type" : "OrderBook", "serializedEvent" : BinData(2,"tgEAADxvcmcuYXhvbmZyYW1ld29yay5zYW1wbGVzLnRyYWRlci5hcHAuYXBpLk9yZGVyQm9va0NyZWF0ZWRFdmVudD48dGltZXN0YW1wPjIwMTAtMDktMjNUMTg6MzI6NTguNzM4PC90aW1lc3RhbXA+PGV2ZW50SWRlbnRpZmllcj5hNTEyNGI5Yi0zZDYyLTQ1OTQtODBjOS1jYmE0YzI2OGZkZjM8L2V2ZW50SWRlbnRpZmllcj48c2VxdWVuY2VOdW1iZXI+MDwvc2VxdWVuY2VOdW1iZXI+PGFnZ3JlZ2F0ZUlkZW50aWZpZXI+Y2JjNzk5MzQtOGNiNi00MDYyLWFhNmUtMjA1YjM5NjA5MDFlPC9hZ2dyZWdhdGVJZGVudGlmaWVyPjx0cmFkZUl0ZW1JZGVudGlmaWVyPjQ3MzAyMzIwLTRhZjktNDVjZS1hMWI0LTk2MDk0ZGRjNGJkMDwvdHJhZGVJdGVtSWRlbnRpZmllcj48L29yZy5heG9uZnJhbWV3b3JrLnNhbXBsZXMudHJhZGVyLmFwcC5hcGkuT3JkZXJCb29rQ3JlYXRlZEV2ZW50Pg==") }

Back to the situation where we do provide our own _id field. MongoDB creates the index even when we provide our own value. The following query shows the index:

> db.domainevents.getIndexes()
[
	{
		"name" : "_id_",
		"ns" : "axonframework.domainevents",
		"key" : {
			"_id" : 1
		}
	}
]

Because the _id index is unique, you are immediately assured that the content of that field is unique. You can enforce the same thing on other fields by creating a unique index. That is the topic of the next section.

Unique documents

The following line shows how to create an index. In our own example we will not use the aggregateIdentifier as _id, but we do want it to be unique and we want to search on it fast; therefore we need an index on it. We will have a look at that next, but as promised, first a general sample of adding an index.

db.things.ensureIndex({firstname: 1, lastname: 1}, {unique: true});
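In the real application you want to create such an index at startup rather than through the shell. The following is a sketch of the same statement through the Java driver, using the ensureIndex variant that takes the keys, an index name and the unique flag (the collection and index name are made up):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;

public class IndexCreation {
    public static void createNameIndex(DBCollection things) {
        // unique compound index on firstname and lastname, ascending
        things.ensureIndex(new BasicDBObject("firstname", 1).append("lastname", 1),
                "firstname_lastname", true);
    }
}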

Now let us have a look at our own example. First the proof that there is no index on the aggregateIdentifier yet:

> db.domainevents.getIndexes()
[
	{
		"name" : "_id_",
		"ns" : "axonframework.domainevents",
		"key" : {
			"_id" : 1
		}
	}
]

Now we add a unique index on the aggregateIdentifier:

> db.domainevents.ensureIndex({aggregateIdentifier:1},{unique:1})
> db.domainevents.getIndexes()                                   
[
	{
		"name" : "_id_",
		"ns" : "axonframework.domainevents",
		"key" : {
			"_id" : 1
		}
	},
	{
		"_id" : ObjectId("4c9b900dbfe62c324f170e85"),
		"ns" : "axonframework.domainevents",
		"key" : {
			"aggregateIdentifier" : 1
		},
		"name" : "aggregateIdentifier_1",
		"unique" : 1
	}
]

Notice that we have two indexes now. The one for _id is a special one; the one for aggregateIdentifier is just a document in Mongo as well, and therefore has its own ObjectId. The _id_ index does not carry the unique parameter, but because it is special, its values still need to be unique. That is all nice, but a bit hard to test with the Axon framework sample I am creating. Another thing I need in the sample is a unique username. Therefore we create a special collection with the _id equal to the username. Let us watch what happens. For now I do it in the shell.

> db.users.insert({_id:"jettro"})
> db.users.getIndexes()          
[
	{
		"name" : "_id_",
		"ns" : "axonframework.users",
		"key" : {
			"_id" : 1
		}
	}
]
> db.users.insert({_id:"allard"})
> db.users.insert({_id:"jettro"})
E11000 duplicate key error index: axonframework.users.$_id_  dup key: { : "jettro" }

Ok, so I proved that. Don’t believe me? Try it out yourself, it is not hard.
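From Java the same check looks roughly like the sketch below; with a write concern of SAFE the duplicate key shows up as an exception instead of being silently dropped (the collection and username are placeholders):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoException;
import com.mongodb.WriteConcern;

public class UniqueUsernameCheck {
    public static boolean register(DBCollection users, String username) {
        users.setWriteConcern(WriteConcern.SAFE);
        try {
            // _id doubles as the username, so the _id index enforces uniqueness
            users.insert(new BasicDBObject("_id", username));
            return true;
        } catch (MongoException e) {
            // the same E11000 duplicate key error the shell session shows above
            return false;
        }
    }
}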

There is a lot to tell about indexes; if you want to know everything, check the online manual (see the link below). Some of the things that I think are interesting:

  • You can give an index an order (ascending or descending), which is especially interesting for doing range queries on indexes with multiple fields.
  • You can add uniqueness to an index, as already shown above.
  • Indexes are case sensitive.
  • Preferably do not sort on a field that does not have an index.

To learn more about indexes and MongoDB, go here: http://www.mongodb.org/display/DOCS/Indexes

MapReduce

For now I just point you to the online manual:

http://www.mongodb.org/display/DOCS/MapReduce

Sharding

Sharding is about horizontal scalability. If you have a large ordering website like Amazon, shards for the different locations seem logical: order data for users in Europe lives on a different shard than the data for American customers. Mongo has two clear requirements for sharding: it must be able to balance the load over the shards, and it must support good failover within a shard. Failover is achieved by creating a shard out of a replica set.

Let us have a look at running a server farm with MongoDB sharding, on one local dev machine :-). You might want to have a look at the official documentation for sharding configuration.

To set up the environment, we create two shards with one server each. We also need a config server and a router. I’ll show you the aliases I use on my Mac to start them.

alias mongoshard1="mongod --shardsvr --port 27017 --dbpath /data/s0 --rest"
alias mongoshard2="mongod --shardsvr --port 27018 --dbpath /data/s1 --rest"
alias mongoconfig="mongod --configsvr --port 27019 --dbpath /data/config --rest"
alias mongorouter="mongos --configdb localhost:27019 --port 27020"

Then we need to configure the shards, the config server and the router. That is done in the next few lines:

mongo localhost:27020
> use admin
switched to db admin
> db.runCommand( { addshard : "localhost:27017" } )
{ "shardAdded" : "shard0000", "ok" : 1 }
> db.runCommand( { addshard : "localhost:27018" } )
{ "shardAdded" : "shard0001", "ok" : 1 }
> db.runCommand( { enablesharding : "axonframework" } )
{ "ok" : 1 }
> db.runCommand( { shardcollection : "axonframework.type", key : {name : 1} } )
{ "collectionsharded" : "axonframework.type", "ok" : 1 }

Now you are done. Ok, this is very basic. Do check the documentation I mentioned before. In the future I’ll give more info on this topic, but for now this is it.
