Domain-Specific Languages


Half the work I’m currently doing here at Video Stream Networks is designing a DSL runtime engine, the agent that drives all the flow of data through our services. In fact, our current implementation is not at heart a true DSL, as it doesn’t completely free you from more mundane tasks like parsing data or solving some library/infrastructure matters (we are always trying to improve this point, though).

It was a response to our need to provide customers (and ourselves) a tool for delivering custom solutions, as our customers often need a specific workflow, which used to require modifications to our “traditional” software. Building a server-side scripting system allows this, and it also lets third-party enterprises join the game and develop custom tools that operate on top of our solution, further improving integration.

Our scripting language of choice is Boo, a statically typed programming language for the CLR with type inference, optional duck typing and some interesting extensibility capabilities. By using macros, some code-behind and time, one can develop a kind of DSL that helps the user concentrate on solving domain problems instead of wasting time on technical problems related to the architecture of choice.

The source of inspiration was DSLs in Boo: Domain-Specific Languages in .NET, a not-so-structured book that guides the reader through each step involved in implementing a DSL based on Boo and Rhino.DSL (the author’s own DSL script factory library). It’s worth reading, though.

Our implementation has several key features:

  • Persistence through well-defined persistence points; the flow can be resumed later.
  • Waiting for external events.
  • Not tied to any specific domain object.
  • Parallel execution of scripts.
  • Service-oriented architecture.

The script flow moves from step to step (stages), which are the main code blocks within a script. These stages also define where the flow is persisted, and subsequent runs won’t re-enter already executed stages.
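As a rough illustration of the stage mechanics (the names and types here are hypothetical, not our actual runtime), a resumable runner can persist the set of completed stages and skip them on later runs:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: a workflow is an ordered list of named stages;
// completed stage names are persisted so a resumed run skips them.
public class WorkflowRunner
{
    // Stands in for the persistence service; a real runtime would store
    // this set (and any script state) in a database at each persistence point.
    public HashSet<string> CompletedStages { get; } = new HashSet<string>();

    public List<string> Log { get; } = new List<string>();

    public void Run(List<KeyValuePair<string, Action>> stages)
    {
        foreach (var stage in stages)
        {
            if (CompletedStages.Contains(stage.Key))
            {
                Log.Add("skipped " + stage.Key); // already executed in a previous run
                continue;
            }
            stage.Value();                  // execute the stage body
            CompletedStages.Add(stage.Key); // persistence point
            Log.Add("ran " + stage.Key);
        }
    }
}
```

On a resumed run the runner would be rebuilt with the persisted set of completed stages, so only the pending stages execute.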

The input of the script is open. It is defined by the script itself and its clients, which are responsible for feeding the correct data. To improve data handling we share a set of libraries between clients and the server, where the input is defined for both parties.

Our runtime is capable of handling concurrent execution of scripts over the same (shared) underlying services (like persistence, tracking or any service that we want to publish to the script side).

We call these scripts “workflows”; the script runtime is one service of the new SOA platform we have built to deliver a product that can be easily adapted to our customers’ needs.


Tracking the flow in the mesh of services

I’m currently worried about how our embrace of SOA and REST can impact our support team. In a world of closed entities, where the boundary of the underlying operation is well defined, a simple logging system is enough to track down the source of a problem. The failed operation is well confined within an application scope, but in a service-oriented architecture the message could have been propagated to other services. Tracking down the source of the problem definitely doesn’t seem trivial.

A central logging service is a good start. All message outcomes can be tracked in this service, which becomes a central shared point to look for answers. But what if your architecture is a large SOA implementation with a mesh of services? If a call to a service generates smaller calls to secondary services, the logging system is still insufficient. One could read log after log in the hope of understanding the outcome of each service call and getting the whole picture, but the always-busy support team is not going to be happy with this.

I can think of a couple of simple solutions. One is to propagate the id of the root message, the one that triggered everything, so that a query with this id to the logging system returns a list of all related messages. Or we can scale this up and propagate, embedded in each message, not only the root message id but also the id of the previous message in the chain. A chain of messages can then easily be looked up in our logging system as a list of log entries. This information should also be embedded into the event-driven part of the system, so all events should carry this chain of ancestor messages as well. With this approach we can easily visualize which message triggered a resource operation in another service, allowing the technical support team to look into and better comprehend the flow of that particular execution path.
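A minimal sketch of the chained-id idea in C# (the ServiceMessage type and its members are made up for illustration): every message carries the ids of all its ancestors, so any single log entry is enough to reassemble the whole call chain.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: each message embeds the ids of the messages that
// led to it, oldest first, so the root id is always Ancestors[0].
public class ServiceMessage
{
    public Guid Id { get; } = Guid.NewGuid();
    public List<Guid> Ancestors { get; } = new List<Guid>();

    // Create the message a service emits while handling 'parent'.
    public static ServiceMessage ChildOf(ServiceMessage parent)
    {
        var child = new ServiceMessage();
        child.Ancestors.AddRange(parent.Ancestors); // inherit the full chain
        child.Ancestors.Add(parent.Id);             // and append the direct parent
        return child;
    }

    // The root id is the first ancestor (or the message itself, if it is the root).
    public Guid RootId
    {
        get { return Ancestors.Count > 0 ? Ancestors[0] : Id; }
    }
}
```

A query by RootId returns every message in the tree; the Ancestors list additionally gives the exact path that led to each one.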

Or we could move to the next level: some sort of exception handling (the SOA way)…

Harmful dependency cycles

Rushing things is bad. You hurry to the supermarket just to find, once you are there, that you forgot the shopping list at home. When developing applications, rushing things without a proper background design leads to spaghetti code. Your module A depends on B, B depends on C, and C depends on A. There is a hidden master component, the sum of all of them, which is hard to notice, maintain and reuse.


Dependency structure matrix example


A dependency structure matrix can help you find those harmful dependencies. The key to this tool is the way the important data is presented to the user (you). The matrix organization lets you easily spot dependencies, whereas graphs tend to be hard to visualize.
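To make the idea concrete, here is a small hypothetical sketch: treating the dependency structure matrix as an adjacency matrix, a depth-first search can report whether any module participates in a cycle like the A→B→C→A one above.

```csharp
using System;

// Hypothetical sketch: dep[i, j] == true means module i depends on module j.
// A depth-first search finds a cycle by detecting a "back edge" to a module
// that is still on the current search path.
public static class DependencyMatrix
{
    public static bool HasCycle(bool[,] dep)
    {
        int n = dep.GetLength(0);
        var state = new int[n]; // 0 = unvisited, 1 = on current path, 2 = done
        for (int i = 0; i < n; i++)
            if (state[i] == 0 && Visit(dep, state, i))
                return true;
        return false;
    }

    static bool Visit(bool[,] dep, int[] state, int node)
    {
        state[node] = 1; // entering: node is on the current path
        for (int j = 0; j < dep.GetLength(1); j++)
        {
            if (!dep[node, j]) continue;
            if (state[j] == 1) return true;            // back edge: cycle found
            if (state[j] == 0 && Visit(dep, state, j)) return true;
        }
        state[node] = 2; // leaving: node fully explored, no cycle through it
        return false;
    }
}
```

Commercial DSM tools do much more (partitioning, layering), but the cycle check is the heart of spotting those hidden master components.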

Dependency diagram

This is one of the tools we are currently using to deliver high-quality software to our clients. You can find more info in the following links:

Reservation pattern

I was reading about the SOA pattern Reservations from Arnon Rotem (blog), and I found it quite interesting. You can see it as a mix of optimistic resource locking and an expiration time, where the service itself can also revoke the lock in the first place for some internal reason.

Reservation pattern. Image taken from Arnon Rotem’s blog.

This pattern may not be applicable in all scenarios, because the reservation may no longer be effective by the time the transaction is committed. Making the guarantee more restrictive (resources are guaranteed until the expiration time) makes the pattern more widely usable.

By providing several levels of guarantee, the service can offer its resources to more consumers if some of them don’t require a high level of guarantee because they can compensate for ineffective reservations. This adds another order of complexity to the logic, though.
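Here is a small hypothetical sketch of a reservation service with an expiration time (the type and its members are made up; the current time is passed in explicitly so expiration is easy to exercise, whereas a real service would read the clock):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: a service hands out reservations on a finite pool of
// resources; a reservation is only honored until its expiration time, and
// the service may also revoke it explicitly for internal reasons.
public class ReservationService
{
    readonly Dictionary<Guid, DateTime> _reservations = new Dictionary<Guid, DateTime>();
    readonly int _capacity;

    public ReservationService(int capacity) { _capacity = capacity; }

    // Try to reserve one resource for 'ttl'; returns a token, or null if full.
    public Guid? Reserve(TimeSpan ttl, DateTime now)
    {
        Purge(now);
        if (_reservations.Count >= _capacity) return null;
        var token = Guid.NewGuid();
        _reservations[token] = now + ttl;
        return token;
    }

    // Committing only succeeds while the reservation is still alive.
    public bool Commit(Guid token, DateTime now)
    {
        Purge(now);
        return _reservations.Remove(token);
    }

    // The service can take back a reservation before it expires.
    public void Revoke(Guid token) { _reservations.Remove(token); }

    void Purge(DateTime now)
    {
        var expired = new List<Guid>();
        foreach (var pair in _reservations)
            if (pair.Value <= now) expired.Add(pair.Key);
        foreach (var key in expired) _reservations.Remove(key);
    }
}
```

Under the stricter guarantee, Commit always succeeds before the expiration time; consumers that can compensate for a failed Commit could be given cheaper, weaker reservations instead.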

Job batch controller

I would like to share a small pattern I came up with some time ago, which can be handy when launching a finite number of asynchronous jobs and waiting for all of them to finish.

Job batch controller

The basic idea is to execute a finite number of parallel jobs using a thread pool and let each one notify its parent controller after the job has finished successfully or aborted. The controller is awakened when a new notification arrives, collects it, does something with the data and goes back to sleep until all jobs have been executed or some condition is met. After all jobs have finished, a new batch of jobs can be executed.

A sample implementation in C# follows:


using System;
using System.Collections;
using System.Collections.Generic;
using System.Threading;

namespace BatchController
{
	public interface IThreadJob
	{
		void Execute();
		void DoSomething();
		void SetParams(IDictionary parameters, Queue jobs_queue);
		void SetEventHandlers(ref EventWaitHandle waitHandle,
		                      ref EventWaitHandle toSignalHandle);
	}

	public class DummyJob : IThreadJob
	{
		EventWaitHandle _waitHandle, _toSignalHandle;
		Guid _guid;
		Queue _jobsQueue;
		IDictionary _parameters;

		public void Execute()
		{
			// Do your stuff here
			// ...
			int wait = 400;
			Console.Write("Job " + _guid + " is waiting " + wait + " millisecs.\n");
			Thread.Sleep(wait);
			Console.Write("Job " + _guid + " executed.\n");
			// Wait until the controller is idle
			_waitHandle.WaitOne();
			// Add ourselves to the do-something queue
			lock (_jobsQueue)
			{
				_jobsQueue.Enqueue(this);
			}
			// Let the controller know we are done
			_toSignalHandle.Set();
		}

		public void DoSomething()
		{
			Console.Write("Job " + _guid + " has done something.\n");
		}

		public void SetParams(IDictionary parameters, Queue jobs_queue)
		{
			_guid = Guid.NewGuid();
			_jobsQueue = jobs_queue;
			_parameters = parameters;
		}

		public void SetEventHandlers(ref EventWaitHandle waitHandle,
		                             ref EventWaitHandle toSignalHandle)
		{
			_waitHandle = waitHandle;
			_toSignalHandle = toSignalHandle;
		}
	}

	public class Controller
	{
		Queue jobsQueue;
		ArrayList jobsList;
		EventWaitHandle jobNotificationEvent;    // signaled by a job when it is done
		EventWaitHandle readyToDoSomethingEvent; // signaled by the controller when idle

		public Controller()
		{
			jobsQueue = new Queue();
			jobNotificationEvent =
				new EventWaitHandle(false, EventResetMode.AutoReset);
			readyToDoSomethingEvent =
				new EventWaitHandle(false, EventResetMode.AutoReset);
		}

		public void SetJobList(ArrayList jobs)
		{
			jobsList = jobs;
		}

		public void ExecuteJobs()
		{
			foreach (IThreadJob job in jobsList)
			{
				job.SetParams(null, jobsQueue);
				job.SetEventHandlers(ref readyToDoSomethingEvent,
				                     ref jobNotificationEvent);
				ThreadPool.QueueUserWorkItem(Callback_DoJob, job);
				Thread.Sleep(200); // Do not launch too many jobs at once
			}
		}

		public void DoSomethingLater()
		{
			int num = 0;
			while (num < jobsList.Count)
			{
				// Let the jobs know we are idle
				readyToDoSomethingEvent.Set();
				// Go to sleep while waiting
				jobNotificationEvent.WaitOne();
				// We got something to do
				IThreadJob job;
				lock (jobsQueue)
				{
					job = (IThreadJob)jobsQueue.Dequeue();
				}
				job.DoSomething();
				num++;
			}
			// All jobs are done
		}

		static void Callback_DoJob(object job)
		{
			// Cast back from object and run the job on the pool thread
			IThreadJob thread_job = (IThreadJob)job;
			thread_job.Execute();
		}
	}
}


using System;
using System.Collections;

namespace BatchController
{
	class MainClass
	{
		public static void Main(string[] args)
		{
			Console.WriteLine("***Starting demo...\n\n");
			Controller controller = new Controller();
			ArrayList arrayList = new ArrayList();
			for (int i = 0; i < 10; i++)
			{
				arrayList.Add(new DummyJob());
			}
			controller.SetJobList(arrayList);
			controller.ExecuteJobs();
			// Do some stuff here with this thread
			// ...
			controller.DoSomethingLater(); // blocks until all jobs are done
			Console.WriteLine("\n***Demo finished.");
		}
	}
}