Saturday, August 22, 2009

Querying Java Objects stored in Terracotta's NAM Part 3

In the first and second parts of this series I talked about querying data structures in Java. The first part covered existing frameworks like JoSQL, JXPath and Quaere and discussed the indexing problem. The second part covered Lucene and Jofti as indexing frameworks; I wrote a small framework to test them and gather performance numbers, and that test showed that a B-tree index (in memory, possibly disk-backed) is the right fit, not a Lucene index. In this third part I discuss my own small framework, tcquerymap, which is a rewrite of Jofti. Jofti is old and unmaintained and uses JDK 1.4 libraries; by now even its documentation link does not work:

querymap documentation link : Getting Started With Querymap
querymap source download : Complete Source Code with Eclipse Project
querymap TIM : Copy this TIM to terracotta modules directory to use it
querymap terracotta sample app : Download sample eclipse App. You can use Terracotta eclipse plugin to launch it within eclipse

So the basic idea is to maintain in-memory indexes that map "Comparable" keys to String ids. All primitive wrappers in Java implement Comparable, so no special conversion is needed.

How it works

It scans Java objects and builds a Comparable key for each property marked for indexing, then inserts those keys into its own tree against the String id assigned to the Java object. So in effect it is nothing more than maintaining many in-memory sorted maps. The implementation that I wrote actually uses the JDK TreeMap and not a B-tree map, but in future it will be replaced by a more performant B-tree.
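The per-property index described above can be sketched with the JDK alone. This is only an illustration of the idea, not the actual tcquerymap source; the class and method names are assumed. A range predicate such as BETWEEN becomes a single subMap() call on the sorted tree instead of a full scan.

```java
import java.util.*;

// Sketch: one index per indexed property, mapping a Comparable key to the
// set of String ids stored under that key.
public class PropertyIndex {
    private final TreeMap<Comparable, Set<String>> tree =
            new TreeMap<Comparable, Set<String>>();

    public void put(Comparable key, String id) {
        Set<String> ids = tree.get(key);
        if (ids == null) {
            ids = new HashSet<String>();
            tree.put(key, ids);
        }
        ids.add(id);
    }

    // BETWEEN lo AND hi with exclusive bounds, i.e. gt(lo) AND lt(hi):
    // the red-black tree answers this with one subMap() range view.
    public Set<String> between(Comparable lo, Comparable hi) {
        Set<String> result = new HashSet<String>();
        for (Set<String> ids : tree.subMap(lo, false, hi, false).values()) {
            result.addAll(ids);
        }
        return result;
    }
}
```

Swapping TreeMap for a B-tree later only changes this class; the callers keep the same put/range interface.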

One needs to understand that with in-memory object indexes only a subset of SQL is possible, and no join queries. To start with, the framework implements the following operations. Comparison operators: <, >, <=, >=, ==. Logical operators: AND, OR. Range: BETWEEN. Set: IN.

To support full querying an SQL parser would be needed. I chose to avoid writing a parser and implemented a direct query API similar to Quaere. I find domain-specific languages more intuitive: when a developer writes the code, he knows exactly what query he is writing. So the interface looks like the snippet below. It is not as good as Quaere, which is a real DSL; this API is just a few interfaces. Here execute() returns the list of ids stored in the index for the matching properties.

import static querymap.Query.*; // static import of the query builder (package and class name assumed)

Collection col = from(Domain.class).where(
        gt("inner.property2", 60),
        lt("inner.property2", 100)).execute();

So basically it is the equivalent of the following SQL:

select id from Domain
where inner.property2 > 60
and inner.property2 < 100

How it performs
Naturally it is not as performant as Jofti, since it uses the JDK TreeMap, which is a red-black tree. Once I finish writing my own B-tree implementation I expect it to perform as well as Jofti. Jofti also implemented node-level locking so that multiple concurrent insert operations can run in parallel; that can be added too. Query performance is not bad, though, and I expect it to improve with the B-tree implementation.

Integration with Terracotta

Since it uses TreeMap it is easily cluster-able. The attached tc-config.xml has all the correct declarations for it. Another advantage with Terracotta is that object identifiers are readily generated by Terracotta; see the TerracottaQueryMap implementation. Please don't compare the performance of TerracottaQueryMap against HashMap, ConcurrentHashMap or Terracotta's Distributed Map (formerly Concurrent String Map), since those are all single-index maps and thus easy to stripe or guard with multiple locks.

For using it as TIM you need to add following lines to tc-config.xml

<module name="querymap-1.0" version="1.0.0"/>

In code you can use it as a queryable map as follows. Here propList is the list of properties to be indexed.

TerracottaQueryMap map = new TerracottaQueryMap(Domain.class, propList);


To query:

Collection col = map.entrySet(gt("inner.property2", 60));

Future directions planned
Java has a huge limitation for memory-intensive applications. To achieve scale there are two approaches: partition the index and merge results, or spill index pages over to disk. Another improvement I see: when a query selects random elements, those elements must be faulted in from the Terracotta server, degrading performance; the same elements could instead be read from a local store as well, like EHCACHE. More on this topic separately later.
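The partition-and-merge approach mentioned above can be sketched as follows: run the same predicate against each partition's index and union the id sets. This is a hypothetical sketch with assumed names, not part of tcquerymap.

```java
import java.util.*;

// Sketch: the index is split into partitions; a query runs against each
// partition locally and the matching entries are merged into one answer.
public class PartitionedQuery {
    public static Set<String> greaterThan(
            List<NavigableMap<Integer, String>> partitions, int lowerBound) {
        Set<String> merged = new HashSet<String>();
        for (NavigableMap<Integer, String> partition : partitions) {
            // each partition answers the range query on its own sorted tree...
            merged.addAll(partition.tailMap(lowerBound, false).values());
        }
        // ...and the partial results are merged
        return merged;
    }
}
```

With partitions on separate JVMs the per-partition queries could run concurrently, keeping each heap small.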

If you think this framework is useful, please let me know. You can download its source as an Eclipse project (sole dependency on tc.jar) and as a Terracotta Integration Module here.


Friday, July 10, 2009

Erlang and Concurrency

Here I write a lot about my experiments with Terracotta, which is shared durable memory for multi-threaded Java applications. But some days ago I came across Erlang, a language created at Ericsson for running fault-tolerant telecom applications. What's so special about it? If you Google (or rather "Bing") it you will find plenty about how cool and scalable it is. If you read this you will even learn how Goldman Sachs used it to gain a significant advantage in program trading over its competitors, and how others tried to steal the leaked source code.

Erlang is not a procedural programming language; it is a functional one. Frankly, I still need to understand what is so different about it. But I saw this presentation on the InfoQ site about Erlang concurrency and was amazed. I had always pictured the following graph: throughput increases as a function of the incoming request rate up to some point, then stabilizes, and then drops. It drops because of system overload; in a perfectly CPU-intensive, lock-contention-free application this happens because of CPU context switching.

But in Erlang throughput stays constant; instead your latency (response time) increases. This follows from Little's Law, which says the number of users in the system equals throughput times response time: N = X * R. For example, at a throughput of 500 requests/sec and a response time of 0.2 sec there are N = 500 * 0.2 = 100 requests in the system; if throughput is capped, more users can only mean higher latency.

Above is the famous benchmark graph of YAWS (an HTTP server written in Erlang) against Apache, and you can see how early Apache saturates and dies. You can read the details here, but the explanation given is:

"The problem with Apache is not related to the Apache code per se but is due to the manner in which the underlying operating system (Linux) implements concurrency. We believe that any system implemented using operating system threads and processes would exhibit similar performance. Erlang does not make use of the underlying OS's threads and processes for managing its own process pool and thus does not suffer from these limitations."

So basically all the magic is Erlang's concurrency model: no shared state, only message passing between light-weight processes. Erlang processes are far lighter than Java threads because they are logical entities, not tied to user-level or kernel threads. Erlang shows that the shared-nothing concurrency model scales well. Since the JVM is now touted as a platform, I look forward to an Erlang implementation on the JVM and to seeing how it fares against other concurrent JVM languages such as Scala. This is a great post on why the JVM is unfit for such a port; maybe Java 8 (I think closures are not part of Java 7). This is also an interesting read: Erlang Concurrency model on JVM. There is also some work on writing OTP (Erlang's SDK for writing applications) for Scala.
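The shared-nothing, message-passing shape can be imitated in Java with queues as mailboxes. This is only an analogy sketch (names are mine): Java threads are far heavier than Erlang processes, so it shows the shape of the model, not its scalability.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Rough Java analogue of Erlang's model: each "process" owns its state and
// communicates only through queues (mailboxes), never shared mutable state.
public class Mailbox {
    // send msg to a worker "process" and block only for its reply
    public static int roundTrip(final int msg) {
        final BlockingQueue<Integer> inbox = new LinkedBlockingQueue<Integer>();
        final BlockingQueue<Integer> replies = new LinkedBlockingQueue<Integer>();
        new Thread(new Runnable() {
            public void run() {
                try {
                    replies.put(inbox.take() * 2); // receive, compute, reply
                } catch (InterruptedException ignored) { }
            }
        }).start();
        try {
            inbox.put(msg);        // send: no locks, no shared fields
            return replies.take(); // await the reply message
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("reply = " + roundTrip(21)); // prints "reply = 42"
    }
}
```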

I have already got the Programming Erlang book and am now looking forward to writing my first OTP program.


Thursday, July 9, 2009

Links : Java Sample Apps

Many times you hear or read something cool about a framework or tool and want a sample application written for it, just to play with it and browse the source code to find out how to use it.

Here is a list of great sample apps that I stumbled upon while reading this great blog about Tomcat clustering.

Link :

List is

To the list above I will add the following, which I know about:

Terracotta Samples Application written by Team

Monday, June 29, 2009

Terracotta's Hibernate Integration

This post is a re-post of my earlier write-behind post, but from a different perspective: Terracotta's Hibernate integration 3.1.

With version 3.1, Terracotta has implemented its own second-level cache provider for Hibernate. Earlier, Terracotta's Hibernate integration approach was to cluster EHCACHE: with its JVM clustering ability, Terracotta can easily cluster any POJO structure, so before 3.1 you would use EHCACHE as the Hibernate second-level cache provider together with tim-hibernate and tim-ehcache for clustering the cache. From 3.1 onwards Terracotta has its own cache backed by a map evictor and a concurrent string map. Apart from this, the new Hibernate integration has lots of additions like a cache admin console and a read-write cache. The cache is always up-to-date and coherent.

But I feel the Terracotta platform is capable of much more, and the following features could be added to make applications more scalable. These are just cool ideas.

Cache Warm-up feature
It would be a nice feature to refresh or load the cache whenever the application or application cluster starts up. This could easily be implemented with some sort of CacheLoader interface that Terracotta calls back when faulting cache objects from the Terracotta server on first access. Such warm-up is only required on a full cluster restart; otherwise a lot of meaningful cache entries would get overwritten.
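A CacheLoader callback of the kind described might look like the sketch below. This is a hypothetical interface, not an existing Terracotta API; the names and method shapes are assumptions.

```java
import java.util.Map;

// Hypothetical warm-up callback: on a full cluster restart the platform
// would invoke the loader to pre-populate the second-level cache before
// traffic arrives, or fault in single entries on first miss.
public interface CacheLoader {
    // load everything eagerly at cluster start-up
    Map<Object, Object> loadAll();

    // load one entry on a cache miss
    Object load(Object key);
}
```

An application would register one implementation per cached region, and the platform would decide when (full restart vs. single miss) to call which method.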

Write-Behind Caching
When you think of caching you arrive at these strategies: read-through, write-through, and write-behind. The Hibernate second-level cache is a read/write-through cache: on a cache miss the entity is read from the database and handed to the cache for subsequent access. But the H2LC is not a write-behind cache. With Terracotta's disk persistence and asynchronous module it would be really efficient, for certain use-cases, to implement write-behind. Currently Hibernate writes directly to the database; if it were modified to write to the second-level cache and to a persistent async database queue instead, latency would drop and throughput would increase dramatically. Imagine scheduling all your database writes for non-business hours using tim-async. I find write-behind is certainly the best way to reduce pressure on the database, and with Terracotta's cluster-wide coherent persistent datastore it is practically possible. Terracotta would be your database guard, taking all your queries as well as database inserts on its shoulders.
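The write-behind idea can be sketched in a few lines. This is a sketch with assumed names, not Terracotta's tim-async API: writes hit the cache immediately and are queued; a background drainer applies them to the database later.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of write-behind: puts are visible in the cache at once while the
// database write is deferred onto a queue drained asynchronously.
public class WriteBehindCache {
    private final ConcurrentMap<String, Object> cache =
            new ConcurrentHashMap<String, Object>();
    private final BlockingQueue<String> pendingWrites =
            new LinkedBlockingQueue<String>();

    public void put(String key, Object value) {
        cache.put(key, value);    // readers see the new value immediately
        pendingWrites.offer(key); // the database write is deferred
    }

    public int pending() {
        return pendingWrites.size();
    }

    // called by a background thread, possibly only in non-business hours
    public void flushOne() {
        String key = pendingWrites.poll();
        if (key != null) {
            writeToDatabase(key, cache.get(key));
        }
    }

    void writeToDatabase(String key, Object value) {
        // stand-in for the real JDBC/Hibernate write
        System.out.println("persist " + key + " = " + value);
    }
}
```

Because the queue holds keys rather than values, the drainer always persists the latest cluster-wide state of the object, which matches the AsyncProcessor behavior described below.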
But this model would require certain changes in the way Hibernate works, especially the query cache. Since Terracotta would now hold the latest snapshot of your System of Record, queries would have to be executed against the cache and not the database. Thus it cannot be a generic solution; you can implement write-behind only where your business use case permits it. On the other hand, to solve the query problem, the QueryMap that I discussed in my previous posts can be used to query certain types of data. So if your business use case permits it, write-behind plus QueryMap can give you a very fast database accelerator. In one of my previous jobs I worked on a financial application where a certain set of objects was modified at a very high rate and queried against just as often. For such an application a classic replicated H2LC brings no value; instead it degrades performance due to the overhead of frequent cluster-wide updates. But Terracotta makes it scalable, forwarding updates only to the nodes on which the cache entry exists and updating the object cluster-wide, so when the AsyncProcessor picks it up it contains all the changes made. That is Terracotta's DSO magic.

The advantage here is that you don't have to make the religious shift of killing your database entirely. The database remains your System of Record; with the Terracotta Hibernate accelerator you are only delaying updates to the SOR, not replacing it.

Currently I am going through the Hibernate source code and learning how the Hibernate event mechanism works. My guess is that write-behind can be implemented with Hibernate events; if not, I may try to modify the source code to add write-behind and H2LC query capability. Hibernate Search is similar: instead of the classic session you get an indexing-aware session.

With Terracotta FX (assuming your application requires more than 4,000 write operations per second, the average throughput of one un-tuned Terracotta server) your write throughput will scale linearly, which is not possible with any RDBMS on any type of hardware.

I hope Terracotta adds these features in coming versions. The 3.1 Hibernate integration is just the start.

Monday, May 25, 2009

Querying Java Objects stored in Terracotta's NAM Part 2

In the first part of this series I talked about existing frameworks, and what I found is that they lack indexing and hence are rarely useful for large data-sets. So I tried to find out how to do indexing. My idea was simple: index the objects and store references to them in the index; with Terracotta you can then cluster the objects and the index as well, giving you a "queryable" datastore. My first attempt was to learn how indexing is done; by the book, it is a B-tree index. I found jdbm, a framework building a persistent DB in Java by implementing B-tree indexes on disk. I took only the B-tree and implemented a simple query parser that traverses the tree, finds the matching tuples and returns them.

After this first attempt I experimented with Lucene. Lucene is not a tree index but an inverted index; it has a lot of capability, it is fast, and it supports both in-memory and disk-based indexes.

So here is my little framework for queryable datastore :

public interface TCQueryMap extends Map {
    void init();
    Map query(String query);
}

Naturally it is an extension of Map, which is a single index. My implementation wraps a HashMap with read-write locks and a Lucene RAMDirectory index, so all gets/puts hit the index within lock boundaries, and then you can query the index. This is very simple; I have not gone into complexities like spill-over of the index onto disk.
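The wrapper pattern can be sketched with the JDK alone, with a plain term-to-ids map standing in for the Lucene RAMDirectory (the class and method names here are mine): every put updates the backing map and the index inside the same write-lock boundary, so queries never see them out of sync.

```java
import java.util.*;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the lock-guarded indexed map; a real version would update a
// Lucene index where this updates the term map.
public class IndexedMap {
    private final Map<String, Object> map = new HashMap<String, Object>();
    private final Map<String, Set<String>> index =
            new HashMap<String, Set<String>>(); // term -> matching ids
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public void put(String id, Object value, String term) {
        lock.writeLock().lock();
        try {
            map.put(id, value); // map and index change atomically
            Set<String> ids = index.get(term);
            if (ids == null) {
                ids = new HashSet<String>();
                index.put(term, ids);
            }
            ids.add(id);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public Map<String, Object> query(String term) { // e.g. "person.age:21"
        lock.readLock().lock();
        try {
            Map<String, Object> result = new HashMap<String, Object>();
            Set<String> ids = index.get(term);
            if (ids != null) {
                for (String id : ids) {
                    result.put(id, map.get(id));
                }
            }
            return result;
        } finally {
            lock.readLock().unlock();
        }
    }
}
```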

LuceneIndexingConfig config = new LuceneIndexingConfig();
List propList = new ArrayList();
// index three properties only

TCQueryMap indexer = new LuceneQueryStore(config);

// add object
Account account .....

// query object
Map col = indexer.query("person.age:21");

I also came to know about Jofti from one of the comments. Jofti is what I would eventually like to write. I don't know why it isn't used by more people; one reason could be that it is not maintained. I found it pretty useful, so I plugged Jofti into my framework as well.

Now let's compare it with a simple Hibernate/JDBC solution. Obviously the comparison is not perfect: SQL is a far more complex and expressive language. But we are talking about cached data here, and I am sure that once data is cached (i.e. objectified from a join query on the relational DB) you will rarely need joins; it is mostly a "where clause" of one or more conditions, and Lucene does that very well.

So let's see the numbers. I have not done any tuning apart from the standard Lucene settings. One of the main parameters is how many properties you index; this determines index size, memory use and speed.

Below is a small benchmark inserting 60K objects with three properties indexed, followed by random queries on those three properties:

Lucene Inserts/Sec = 1000
Jofti Inserts/Sec = 8793.78
Lucene Queries/Sec = 5172
Jofti Queries/Sec = 13636.36

Results for 14 properties indexed :

Lucene Inserts/Sec = 740
Jofti Inserts/Sec = 4866.96
Lucene Queries/Sec = 3750
Jofti Queries/Sec = 12500

Since Jofti is a tree index, it outperforms the Lucene index. The problem with Lucene is that once the index gets bigger, insert performance slows down. Also, these numbers are taken with one commit per put operation; if you index a lot of objects together and then commit, Lucene is fast too. That is how it is meant to be used, as a batch API. On the other hand Jofti is fast, but I could not find any details about thread-safety and other concurrency issues, so I wrapped it in a lock.

Also, what if you could run Hibernate/JPA queries on a Map? That would be great. It is already done by the Hibernate team: they run queries against the second-level cache, but it would be a big task to extract the idea out of it. Just a thought. The second-level query cache gets invalidated when you modify a single entity; imagine if we instead updated the same object in a QueryMap cache: you would not need the query cache at all, though of course the querying capability is not as great.

Another thought that comes to mind is clustering in-memory databases like H2 or HSQLDB. Imagine the benefits. But then it is anti-Terracotta. Why? Because it would be a relational DB with the baggage of the ORM mismatch.

You can download the entire source code from here. The tar file is just a bunch of Java files and a very early prototype. Stay tuned to the project; I will update it once I finish proper integration with Terracotta.

So if you find it useful, please leave comments; I would love to hear from you.

Monday, May 4, 2009

Got one!!

Finally I got my own I Love Linux T-shirt; here is a snap.
You too can create your own T-shirt with geeky quotes, any other quotes, or a picture. I am planning to make another with Ubuntu, but I am still searching for a good image apart from the standard "Linux for Human Beings".

Wednesday, April 15, 2009

Portable Ubuntu Rocks

Years ago (literally 2.5 years ago) I tried coLinux. At that time it was in its initial stages but worked perfectly in text mode. If you don't know what coLinux is, it is a Linux distribution that runs like a Windows binary: no need to set up a virtual machine emulator or install virtual images. This was before I had ever heard of virtualization, and I was really amazed by the idea. Back then coLinux had managed some elementary GUI drawing, mainly KDE applications (at least I had seen screenshots), though it did not work on my machine. Just minutes ago I downloaded Portable Ubuntu after reading this post from Lifehacker, and it works just as described. Who needs VMware and the like when the Linux shell and Firefox are just an Alt-Tab apart? Here is a screenshot of Portable Ubuntu in action.

Saturday, April 11, 2009

Links : List of Geeky Quotes

Here :
My favourite is: "I would love to change the world, but they won't give me the source code."

Wednesday, April 8, 2009

Links : Distributed Hash Tables

Nice List of Distributed Key-Value Stores :

Querying Java Objects stored in Terracotta's NAM

This post is inspired from : . Nothing new, just another word in the blogosphere.

Terracotta is a great clustering solution; in fact it is a platform-level service and hence has a large number of uses. One of them is using it as a database. Terracotta can never replace a database, but it can play the role of a data storage medium very well. One major disadvantage is the lack of data querying: the only way to query data is a Map. A Map is like a single index, so if you want the list of objects satisfying some criteria, you are required to iterate through the entire collection. There are already APIs written for querying Java collections, and you can use them with Terracotta NAM.
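The full-scan problem described above looks like this in code (a minimal illustration; the data and method names are mine): without an index, every criteria query visits every entry, O(n) per query.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// "select where age > min" over a plain Map degenerates into a full scan:
// every entry is examined regardless of how few match.
public class FullScan {
    public static List<String> namesOlderThan(Map<String, Integer> ages, int min) {
        List<String> result = new ArrayList<String>();
        for (Map.Entry<String, Integer> e : ages.entrySet()) {
            if (e.getValue() > min) {
                result.add(e.getKey());
            }
        }
        return result;
    }
}
```

This is exactly what the query frameworks below do internally, which is why their response times grow linearly with collection size.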

When you think about querying there are a lot of factors: the query language, its performance (optimizers), and the operations supported: select, update, delete, joins.


JoSQL is a good API for querying Java collections, with good documentation. I did a small test with 1 million objects, and a random query took around 800 ms, which is way too much. Again, it is a simple iteration through the collection due to the lack of indexes and a query execution plan. The problem with indexes is that object graphs can change, and on every change you must recompute the index, which would be difficult to do: as complex as Terracotta's bytecode instrumentation. You can find the test code here.

Query language: moderately good. Performance: not good for large collections. No update or delete, only select/projection queries. No joins.


Quaere is a very flexible DSL that lets you perform a wide range of queries against any data structure that is an array or implements java.lang.Iterable. It is a sort of port of .NET's LINQ; I think LINQ is the next-generation data-querying tool, cleaner. Quaere is not a query language but a query API, just like Hibernate's Criteria queries but more elegant. I really liked Quaere: it is really powerful and it supports join operations. You can read this post for details: Solving Puzzles with Quaere. It is still at beta level and not released. One of Quaere's sister projects is its JPA integration; imagine writing a standard JPA application with Quaere as the query language and Terracotta as the persistent store, with no need for a database. But as with JoSQL, Quaere is slow: slower than an RDBMS. I did a small test with 1 million objects similar to the JoSQL test, and response times were similar. You can download the test code here.

This post also discusses jmap's OQL implementation, which uses the Rhino JavaScript engine behind hashtables. I did consider porting it to Terracotta, but it is custom-written for object heap dumps. JxPath is another tool that lets you query Java collections using XPath expressions. I did not evaluate JxPath, since I felt it would be along similar lines to JoSQL and Quaere, just a different flavor; if you have used XPath before, it is much easier to pick up.

GlazedLists is an event-driven list API specially designed for Swing applications displaying table and list data. If you consider a List of objects as a table (each object a row, its properties the columns), a proper in-memory index can be maintained for querying. But this applies only at the root object level: what if an inner object in the graph stored in your container changes? You would then need to update the container whenever the object changes. So I guess maintaining an in-memory index for Java objects is a pretty difficult thing to do.

With such tools I think you can easily query moderately sized Java collections stored in Terracotta's durable memory with acceptable response times.


Tuesday, April 7, 2009

Maven : Java Profiling

At my workplace I use Maven extensively. In fact I like Maven so much that I am slowly converting a lot of Java Eclipse projects into Maven projects. Maven has a very good Eclipse plugin which makes integration with Eclipse very easy; you can run Maven from within Eclipse. Maven is a really good tool, and if you have specific needs you can write your own plugin. At my previous workplace, a senior engineer on my project automated everything: starting JBoss servers on remote machines in various environments (testing, perf-testing, integration testing), database population, schema drop/rebuild, and so on.

But now I wanted to profile a Java application that I normally run from Maven. One of the best parts of Maven is dependency management and the repository: it builds the classpath automatically for you. But that is also a pain: if you want to run the application, you have to go through Maven. There is the exec plugin, with which you can run any application or shell script, and exec:java, with which you can run a Java main class. The problem is that exec:java runs in the same JVM, so you cannot pass JVM agents (-agentlib) or other options; specifically, you cannot do Java profiling. You should also make sure you run the application with the same environment and settings.

So my first task was to get the complete classpath and then launch java with the profiler options. I am using JProfiler, which uses a JVMTI agent, so you need to append "-agentlib:jprofilerti=port=31757 -Xbootclasspath/a:/Applications/jprofiler5/bin/agent.jar" to the java command line.

Here is the little shell script through which I managed to profile a Maven project (it will work only for J2SE applications, though!). The frustrating part was the DYLD_LIBRARY_PATH variable: I was new to Mac OS and was trying the usual LD_LIBRARY_PATH and the -agentpath JVM option. Surprisingly, -agentpath should work on Mac OS but didn't; I guess some problem with the JProfiler binary. But in the end I managed to profile my application properly.

mvn dependency:build-classpath -Dmdep.outputFile=mycp.txt
export CLASSP=`cat ./mycp.txt`
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/Applications/jprofiler5/bin/macos
$JAVA_HOME/bin/java -cp $CLASSP:./target/classes -agentlib:jprofilerti=port=31757 -Xbootclasspath/a:/Applications/jprofiler5/bin/agent.jar $*

This is the first time I have had to do away with the Maven command. I wish somebody had written a Maven plugin for launching JProfiler-enabled apps. There is a Maven plugin for YourKit Java Profiler, but it did not work.

Monday, March 9, 2009

Links : What is REST?

I needed to learn about REST interfaces and wanted a good article covering the basics of REST: REpresentational State Transfer. This article, A Brief Introduction to REST, is really the best one I found, so I thought of sharing it with you. REST is a really nice idea for further simplifying the modelling of web applications. I implemented my REST program with JSON, and my client is in JavaScript, which is really cool (people are writing entire web-based OSes and here I am playing with simple JavaScript :-) ). With the help of JavaScript libraries you can easily achieve data binding, like Flex and other RIAs. I really enjoyed working with this totally different stuff and will add snippets in the coming days.


Thursday, February 26, 2009

Links : Things I Wish I’d Been Told

When I look back at my engineering days and now, I have certainly figured out one thing: whatever they teach in class hardly matters; the syllabus is totally outdated. What matters is how quickly you learn new things and stay updated in this fast-moving world of technology.

Here I am sharing a link to things I wish I'd been told when I graduated with a computer science engineering degree: Tips For Students with a Bachelors in Computer Science.

Saturday, February 14, 2009

Funny Tech Videos : Part 5

A geeky Valentine, for bloggers and twitterers.

Happy Valentine's Day! May you all get the link-love you deserve.


Download Your Favorite Youtube Videos in Batch

Last week SRGMAP-LC (Sa-Re-Ga-Ma-Pa Marathi Little Champs), a singing contest for children, finished. These children are really talented and, boy, the crowd loved every performance. For a long time this was the only show I wanted to watch every episode of, and initially I did not miss much. But when I moved out for my job I was missing it; then this guy "ahonkan" made sure I at least got to see the performances of these god-gifted children. Thanks a million, ahonkan; there will be countless others like me. Now that the competition is over, I thought of downloading all the videos and keeping a copy as my personal collection. Initially I thought I would write a small Java program to download the user's feeds, playlists and channel, and then fetch all the FLV files via the YouTube API. But then I came across this great program: 1-Click YouTube Downloader. It allows you to download a whole batch of videos from YouTube.

Below is the video from 1-Click which shows you how to download videos using this program.

Great stuff. It may be older, but still good enough for another word in the blogosphere.

Note: videos uploaded on YouTube may be subject to copyright, and downloading them may be illegal. Here I am only writing about a useful computer program; make sure you know what you are downloading. Thanks.


Monday, January 26, 2009

Funny Videos Part 4

Here is another tech video: She's An Engineer, a soft song.

I could not grasp the entire lyrics, but here is part of the first verse.

Ah the way I'm feeling now we could lick the world
cause you know I'm always dreaming about you girl
I've been testing out your structure and found it sound
Been installing all our circuits on solid ground
Ah the way I'm feeling now we could take it on
Turn it in our favor and get it on
Generating answers and getting speed
You've got to run it with the fun of it and take it cause
She's an engineer
We don't have much to fear
Ghost in the computer
Ghost goes in sie puter

Friday, January 23, 2009

Free Collaboration Software : Mikogo

I just used the free collaboration software Mikogo. At my workplace I use WebEx, the default choice, but at home I needed to collaborate: basically share my screen and show some demos and slides. Along with content sharing you also need to speak; I used Skype for that. For screen sharing I searched Google and Mikogo was the first result. It is one of the best free programs I have encountered: no hidden features, and nearly all the features you want, like giving control to other participants and seeing other participants' screens, are present.
It also integrates with Skype, which I did not use this time but will for my next meeting. I initially assumed there would be some cap on either duration or participants, but our meeting lasted a good two and a half hours. In fact we spent ten minutes finding and testing Mikogo's features.

Who needs WebEx? For personal use, Mikogo is certainly the best.


Sunday, January 4, 2009

CountDownLatch for Terracotta

With Java 5, Java got an inbuilt concurrency library, java.util.concurrent, with classes like CountDownLatch, CyclicBarrier, FutureTask, ExecutorService, LinkedBlockingQueue, ConcurrentHashMap and ReentrantReadWriteLock, which greatly simplify writing multi-threaded applications. With the increasing number of cores, you need to write multi-threaded applications, and Terracotta makes it really simple to run such applications across more than one JVM, effectively giving you more threads with a slight degradation in performance but near-linear scalability. Terracotta supports some important java.util.concurrent data structures out of the box, mainly LinkedBlockingQueue, ExecutorService, CyclicBarrier, FutureTask and, of course, locks.

Below I present one more addition to this library: CountDownLatch. CountDownLatch is used to coordinate between threads. You pass the number of parties to the constructor, and each thread calls countDown() when it is done. When you want to be notified that all threads have finished their work, you call await(), which blocks until all parties have called countDown(). To write such code in a Terracotta-enabled application today you have to use CyclicBarrier, where each thread calls await(); but that causes finished threads to block unnecessarily on the barrier. With CountDownLatch a thread can "count down" and exit, so only the master or coordinator thread needs to block.

The logic is very simple: initialize with the number of parties; countDown() decrements the counter and, on reaching zero, notifies all waiting threads; await() wait()s on the object until the count reaches zero.

Below is the source code. You need to put the MyCountdownLatch class in the instrumented-classes section and define a write lock for the await() and countDown() methods. You can download the source code here.

Main Class

public class MyCountdownLatch {

    private int count;

    public MyCountdownLatch(int count) {
        this.count = count;
    }

    public synchronized void countDown() {
        if (count > 0) {
            count--;
        }
        if (count == 0) {
            notifyAll();
        }
    }

    public synchronized void reset(int count) {
        this.count = count;
    }

    public synchronized void await() throws InterruptedException {
        while (count > 0) {
            wait();
        }
    }
}

Test Class TestCountDownLatch

import java.util.Random;

public class TestCountDownLatch {

    public static int N = 10;

    public static MyCountdownLatch startSignal = null;
    public static MyCountdownLatch doneSignal = null;

    private static Object lock = new Object();

    public static void main(String[] args) {

        Runnable runs[] = new Runnable[N];

        synchronized (lock) {
            startSignal = new MyCountdownLatch(1);
            doneSignal = new MyCountdownLatch(N);
        }

        for (int i = 0; i < N; i++) {
            runs[i] = new Worker(startSignal, doneSignal);
            new Thread(runs[i]).start();
        }

        startSignal.countDown(); // release all workers at once
        try {
            doneSignal.await(); // block until every worker has counted down
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        System.out.println("All threads finished ...");
    }

    public static class Worker implements Runnable {
        MyCountdownLatch startSignal = null;
        MyCountdownLatch doneSignal = null;

        public Worker(MyCountdownLatch startSignal, MyCountdownLatch doneSignal) {
            this.startSignal = startSignal;
            this.doneSignal = doneSignal;
        }

        public void run() {
            System.out.println("Waiting for start signal...");
            try {
                startSignal.await();
                doWork();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            doneSignal.countDown(); // count down and exit; only main blocks
        }

        public void doWork() {
            System.out.println("Starting to work now");
            Random random = new Random();
            try {
                Thread.sleep(random.nextInt(1000));
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("Completed work");
        }
    }
}


Config file tc-config.xml

<?xml version="1.0" encoding="UTF-8"?>
<tc:tc-config xsi:schemaLocation="" xmlns:tc="" xmlns:xsi="">
<server host="localhost" name="tc-srv01" bind="">
<method-expression>* MyCountdownLatch.countDown(..)</method-expression>
<method-expression>* MyCountdownLatch.await(..)</method-expression>