
2010-02-22

Unit Testing Linq Queries in Moq

After some Google research and experimentation I found that it is not worth mocking methods that return IQueryable or IQueryable&lt;T&gt;, because in order to consume them programmers have to use extension methods, and this kind of method is not supported by Moq ( a minimalistic mock framework ). This is the DAO interface I want to test.
public interface IDAOFactory
{
  IQueryable<Order> Query();
}
This is the Moq unit test that fails, since I can't use LINQ directly in the setup.
[Test]
public void LinqQueryTest()
{ 
  // This Moq configuration will trigger an exception: extension methods cannot be set up
  daoFactoryMock.Setup(d => (from o in d.Query()
    where o.Id >= 0
    select o.Id).ToList())
    .Returns( new List<long>() { 1, 2 } );
}
It turns out that the problem is easily solved by using an in-memory collection as the data source.
[Test]
public void LinqQueryTest()
{
  // Creates an IQueryable<Order> from an in-memory collection
  IList<Order> lstOrders = new List<Order>() { 
    orderMock1.Object, 
    orderMock2.Object, 
    orderMock3.Object };
  IQueryable<Order> orderQuery = lstOrders.AsQueryable();

  // Configures the Query to return IQueryable implementation
  daoFactoryMock.Setup(d => d.Query()).Returns(orderQuery);

  // Now LINQ queries can be used naturally 
  IList<Order> lstResult = (from o in daoFactoryMock.Object.Query() where o.Id >= 0 select o).ToList();

  // Checking output results
  Assert.AreEqual(3, lstResult.Count);
}
It is important to notice that the collection elements, which should also be mock objects, must contain all the data necessary to make the test meaningful.
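
For reference, here is a hedged sketch of how those order mocks could be prepared. It assumes Order is an interface, or a class whose Id property is virtual, so that Moq can override it; the Id values are illustrative.

// Each element of the collection is a Moq mock whose data drives the query result
Mock<Order> orderMock1 = new Mock<Order>();
orderMock1.SetupGet(o => o.Id).Returns(1);

Mock<Order> orderMock2 = new Mock<Order>();
orderMock2.SetupGet(o => o.Id).Returns(2);

Mock<Order> orderMock3 = new Mock<Order>();
orderMock3.SetupGet(o => o.Id).Returns(3);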

2009-12-26

Model View Controller with Events in .NET

This design pattern is often misunderstood. Its main purpose is to separate objects that assume different roles in a piece of software.
These roles are:
  • models - objects that actually execute the system tasks
  • views - objects that display the system data
  • controllers - objects that capture the user's intentions from the view and route them to the right actions
Usually views hold a reference to the controller; the approach below, however, shows how to decouple the views from the controllers.
Views can be implemented in several ways depending on the UI library. For that reason, views are better represented as interfaces. Moreover, to reduce the coupling between views and controllers, the view interfaces can expose events.
The example below shows a client registration view:
using System;
namespace MyController
{
  public interface IClientRegistrationView
  {
    long Id { get; set; }
    string Name { get; set; }
    string Registration { get; set; }
    event ClientEventHandler InsertRequested;
    event ClientEventHandler UpdateRequested;
    event ObjectIdEventHandler<long> RemoveRequested;
    event ObjectIdEventHandler<long> RetrieveRequested;
  }
}
Specific event arguments were also created for the Client Registration View:
  • ClientEventArgs - contains the client fields so that they can be sent to the underlying layer.
  • ObjectIdEventArgs - contains a generic object id, used for deletions and queries.

See event argument classes below:

using System;
namespace MyController
{
  public delegate void ClientEventHandler(object sender, ClientEventArgs e);
  public class ClientEventArgs : EventArgs
  {
  public long Id { get; set; }
  public string Name { get; set; }
  public string Registration { get; set; }
  }
}

using System;
namespace MyController
{
  public delegate void ObjectIdEventHandler<T>(object sender, ObjectIdEventArgs<T> e);
  public class ObjectIdEventArgs<T> : EventArgs
  {
    public T Id { get; set; }
  }
}


The controller has a reference to a view (the interface) and accesses the view's data fields for the client, which are Id, Name and Registration.
Besides that, the controller is told to trigger actions by listening to the view's events.
In this example, the service acts as the model of the system.

using System;
using MyService;
namespace MyController
{
  public class ClientRegistrationController
  {
    private IClientRegistrationView View { get; set; }
    private ClientRegistrationService Service { get; set; }
    public ClientRegistrationController(IClientRegistrationView view)
    {
      View = view;
      View.InsertRequested += new ClientEventHandler(View_InsertRequested);
      View.UpdateRequested += new ClientEventHandler(View_UpdateRequested);
      View.RemoveRequested += new ObjectIdEventHandler<long>(View_RemoveRequested);
      View.RetrieveRequested += new ObjectIdEventHandler<long>(View_RetrieveRequested);
      Service = new ClientRegistrationService();   
    }
    void View_InsertRequested(object sender, ClientEventArgs e)
    {
      ClientDTO dto = new ClientDTO() { Id = e.Id, Name = e.Name, Registration = e.Registration };
      Service.Insert(dto);
      this.View.Id = dto.Id;
    }
    void View_UpdateRequested(object sender, ClientEventArgs e)
    {
      Service.Update(new ClientDTO() { Id = e.Id, Name = e.Name, Registration = e.Registration });
    }
    void View_RemoveRequested(object sender, ObjectIdEventArgs<long> e)
    {
      Service.Remove(e.Id);
    }
    void View_RetrieveRequested(object sender, ObjectIdEventArgs<long> e)
    {
      ClientDTO dto = Service.Retrieve(e.Id);
      this.View.Id = dto.Id;
      this.View.Name = dto.Name;
      this.View.Registration = dto.Registration;
    }
  }
}
As can be seen above, the view doesn't need a reference to the controller. The view is totally decoupled from the controller, yet it can still communicate with it through the events.
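
To show how a concrete view plugs in without ever referencing the controller, here is a hedged sketch of a minimal fake view. The class name and its RaiseInsert helper are illustrative, not part of the original post; such a fake could also be used in unit tests of the controller.

using System;
namespace MyController
{
  // A minimal fake view: it implements the interface and raises events on demand
  public class FakeClientRegistrationView : IClientRegistrationView
  {
    public long Id { get; set; }
    public string Name { get; set; }
    public string Registration { get; set; }

    public event ClientEventHandler InsertRequested;
    public event ClientEventHandler UpdateRequested;
    public event ObjectIdEventHandler<long> RemoveRequested;
    public event ObjectIdEventHandler<long> RetrieveRequested;

    // Simulates the user pressing the "Insert" button
    public void RaiseInsert()
    {
      if (InsertRequested != null)
        InsertRequested(this, new ClientEventArgs { Id = Id, Name = Name, Registration = Registration });
    }
  }
}

A test would create the fake view, pass it to a ClientRegistrationController and call RaiseInsert() to simulate the user action.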

2009-09-27

Extreme Programming Impressions

When I first read about Extreme Programming (XP) in 2002 ( http://www.extremeprogramming.org/ ), one of the agile methodologies for software development, I didn't take it seriously.
At that time the authors of this methodology were saying that software didn't need to be documented, that models were not necessary or useful at all, that people should be the documentation of the software, etc.
It immediately came to my mind that this couldn't work for many small (and big) software companies, due to several problems:

– Software companies are constantly losing and hiring workforce, so how can they work if they keep losing "documentation" that lives only in people's minds?

– How can they know the "what", "where" and "how" in the source code ?

Some years have passed, this methodology has matured, and other good methodologies from the same family, like Scrum, have come up too.
It caught my attention that many state-of-the-art tech companies like Google and Yahoo! were working with Scrum, and I became curious to know what it was about.

Five years later I decided to attend a presentation about XP in order to get a broader picture of it. It helped me remove some myths I had, such as the supposed lack of documentation. Agile methodologies do not actually remove the activity of producing documentation; they just give documentation a different meaning. Documentation should be provided when it is relevant for developers. It doesn't have to include fancy diagrams, only the necessary information such as what the system is about, how to compile the source code, or other information that is not self-explanatory from the system itself.

After reading "The Toyota Way" I noticed that agile methodologies were greatly inspired by this administration model. The model is basically driven to reduce waste; in other words, we should do only what is necessary to accomplish our objectives, no more and no less. By reducing waste we are also reducing unnecessary work, which can mean different things depending on the project, such as no documentation, little documentation, no models, etc.

Therefore, to be lean (and consequently agile), one must think about which tasks are being carried out and which tasks in the process should be eliminated because they add no value. Read the book above to get a good idea of the process.

2009-08-27

DynamicProxy: An Elegant Solution for Session/Transaction/Exception Management in NHibernate (or any other ORM)

Session management is a well-solved problem for web applications, and many detailed solutions can be found on the internet. The same is not true for WinForms applications. Although there are solutions available, many of them are theoretical or just "complicated" for the average programmer. Besides that, it is difficult to find a solution (I have never found one) that works for both web and WinForms applications.

After a while (days), I came up with the idea of wrapping services with Castle DynamicProxy proxies. It turned out to be the easiest and cleanest approach I could think of, because it can inject behaviour (aspects) around the service methods.

The idea can be coded in the following way:
  • Service classes with standard namespace and virtual methods


namespace Sample.Service
{
  public class SystemLogRegistrationService
  {
    public virtual void Modify(long codLogSistema)
    {
      SystemLog systemLog = Repository.Get().Load(codLogSistema);            
      systemLog.SetMachine = "MAQUINA" + DateTime.Now;
      systemLog.SetUserName = "PESSOA" + DateTime.Now;            
      systemLog.SetSystemName = "SISTEMA" + DateTime.Now;
      Repository.Get().Save(systemLog);            
    }
  }
}


Do not get distracted by the service code. The important thing to notice above is that the service does not contain anything other than processing the domain classes (in this case, SystemLog). Also note that all service methods must be virtual. Without that, the dynamic proxy won't work for these methods. The details of the Repository implementation are out of the scope of this article; that subject is covered in enough detail in several articles on the internet. (You can also send me a comment or email if you need information about that.)

  • Usage Example


In order to make use of proxified services, one must create some kind of generator, whose creation is explained next. The ProxyGenerator below is a simple static class, for didactic purposes, that is responsible for dynamically generating proxies for a given type, injecting the necessary aspects such as session/transaction management and exception handling, or any other aspect you might think of.

SomeService serv = ProxyGenerator.InjectSessionTransactionExceptionAspects<SomeService>();
serv.Modify(12048); // <= the Modify method now has session/transaction/exception management
  • Creating a proxy service factory
The proxy generator can be implemented using Castle Dynamic Proxy API.
using System;
using Castle.DynamicProxy;

namespace Sample.Persistence
{
  public static class ProxyGenerator 
  {
    // Fully qualified to avoid a name clash with this static wrapper class
    private static Castle.DynamicProxy.ProxyGenerator _generator = new Castle.DynamicProxy.ProxyGenerator();
    public static TService InjectSessionTransactionExceptionAspects<TService>()
    {
      return (TService)_generator.CreateClassProxy(
        typeof(TService),
        new SessionTransactionExceptionAspect());    
    }
  }
}
  • An interceptor for the service class methods
using System;
using Castle.DynamicProxy;
using NHibernate;
using NHibernate.Context;

namespace Sample.Persistence
{
  /// <summary>
  /// Intercepts service methods (they must be virtual) and injects
  /// session / transaction and exception aspects
  /// </summary>
  public class SessionTransactionExceptionAspect: IInterceptor
  {
    /// <summary>
    /// Intercepts service methods and adds the following behaviour:
    /// >>> Before executing a method:
    ///     * opens a session
    ///     * begins a transaction
    /// >>> After executing the method:
    ///     * commits the transaction
    /// >>> In case there is an exception:
    ///     * rolls back the transaction
    ///     * handles the exception
    /// >>> At the end:
    ///     * closes the session
    /// </summary>
    public object Intercept(IInvocation invocation, params object[] args)
    {
      object retorno = null;
      ITransaction tx = null;
      try
      {          
        CurrentSessionContext.Bind(SessionFactory.Instance.OpenSession());
        tx = SessionFactory.Instance.GetCurrentSession().BeginTransaction();
        retorno = invocation.Proceed(args);
        tx.Commit();
      }
      catch (Exception exception)
      {
        if (tx != null) { tx.Rollback(); }
        throw; // rethrow without resetting the stack trace
      }
      finally
      {
        ISession s = SessionFactory.Instance.GetCurrentSession();
        s.Close();
        CurrentSessionContext.Unbind(s.SessionFactory);
      }
      return retorno;
    }
  }
}
The class above is the centre of the whole idea. The interceptor captures only the service methods and ignores the rest. When a service method is called, the following tasks are executed inside a try/catch/finally block:
  • A session is created
  • A transaction is initialized
  • The method itself is executed
  • If the method succeeds, the transaction is committed
  • If there is an exception, the transaction is rolled back and the exception is handled
  • Finally, the session is closed
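
The interceptor relies on a SessionFactory helper that is not shown in the original post. A minimal sketch could look like the one below, assuming NHibernate is configured with a current_session_context_class so that CurrentSessionContext.Bind/Unbind work; it is only one possible implementation.

using NHibernate;
using NHibernate.Cfg;

namespace Sample.Persistence
{
  // Minimal sketch: a singleton wrapper around NHibernate's ISessionFactory
  public static class SessionFactory
  {
    private static readonly ISessionFactory _factory =
      new Configuration().Configure().BuildSessionFactory();

    public static ISessionFactory Instance
    {
      get { return _factory; }
    }
  }
}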

2009-08-21

Avoid "Tall" DAO Factories

A "tall" DAO factory can be defined as a big class that contains too much methods for each business class that compounds your domain model.

public class DAOFactory
{
IClass1DAO GetClass1DAO() { ... }
IClass2DAO GetClass2DAO() { ... }
IClass3DAO GetClass3DAO() { ... }
IClass4DAO GetClass4DAO() { ... }
IClass5DAO GetClass5DAO() { ... }
IClass6DAO GetClass6DAO() { ... }
IClass7DAO GetClass7DAO() { ... }
IClass8DAO GetClass8DAO() { ... }
IClass9DAO GetClass9DAO() { ... }
IClass10DAO GetClass10DAO() { ... }
: : : :
}


Besides being big, this kind of class must be modified every time a new domain class is added to your system.
To avoid that, one good option is to use a single generic method for all DAO interfaces.

public class DAOFactory
{
I GetDAO<I>() where I : ICommonDAO { ... }
}


Finding the corresponding DAO interface implementation can easily be achieved with .NET reflection support for assemblies and types, as sketched below.
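
A hedged sketch of that reflection lookup follows. ICommonDAO comes from the snippet above; scanning the executing assembly and using Activator are illustrative assumptions, not a prescribed implementation.

using System;
using System.Linq;
using System.Reflection;

public class DAOFactory
{
  // Scans the current assembly for a concrete class implementing the requested DAO interface
  public I GetDAO<I>() where I : ICommonDAO
  {
    Type implementation = Assembly.GetExecutingAssembly()
      .GetTypes()
      .FirstOrDefault(t => t.IsClass && !t.IsAbstract && typeof(I).IsAssignableFrom(t));

    if (implementation == null)
      throw new InvalidOperationException("No DAO implementation found for " + typeof(I).Name);

    return (I)Activator.CreateInstance(implementation);
  }
}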

2009-06-13

Agile Modeling in Software Projects

Recently Jeff Sutherland mentioned another certification for software programmers, since Scrum does not include the software engineering techniques that are very present in XP (Extreme Programming). That is probably the reason why many software developers work with the Scrum and XP methodologies together.

However, although XP is very programming oriented, it is still not enough to guarantee a good software design in large system projects. Additionally, in many organizations it is very difficult to find a product owner who fully understands the business rules and can manage the software functionality.

In order to use Scrum efficiently, there must be someone responsible for understanding the business. If there is no product owner, one employee must be chosen to study and logically model the business. That is exactly why good business modeling is imperative before any large software development effort.

Good software design and business understanding prevent, or significantly reduce, re-work. Re-work is considered waste since it does not deliver anything useful to the client, and it often happens when developers did not capture the business rules well.

Thus the following software development process is proposed to combine DDD with the agile approach. In this process there are product owners, developers and scrum masters, just like in original Scrum; the difference is that before the sprints (see the Scrum reference) can start, a long DDD session is necessary in order to produce a good business model.

Briefly, the following steps should be taken:

  1. A selected person assumes the role of Product Owner
  2. Product Owner becomes responsible for studying and building a business model
  3. Product Owner writes all the system features using User Stories (from XP)
  4. Product Owner schedules a Planning Meeting with the Scrum Master and Developers to present User Stories and the Business Model
  5. Scrum Master schedules a Sprint Meeting with developers to plan the Next Sprint based on the Stories
  6. Developers begin the Sprint (from 1 to 2 weeks)
  7. Scrum Master Organizes Daily Meetings with Developers (just like Scrum)
  8. At the end of the Sprint, the Scrum Master schedules a Weekly Meeting to present the system to the Product Owner; it also includes the developers of the project, who make remarks about the system presented
  9. The Scrum Master organizes a Retrospective Meeting with the developers to discuss what went wrong or right with the Sprint, and then they start planning the next Sprint.
  10. Go back to step 6 until the Product Owner is satisfied

2009-06-10

How does the repository pattern work?

The classes that represent the elements of a domain must contain all the business logic, such as tax calculation, name validation, etc. However, in many circumstances these classes also need to access data in order to complete their business logic.

Take the example below:

Suppose I want to create an instance of the Client class, and that clients must have a name and an address (there may be more information, but let's stay with those two pieces of data for simplicity).
So, a client could be instantiated like this (C# code):

// Open database connection (and Begin Transaction)
SessionManager.Open( );
: : :
// Parameters are: name, zipcode, address number, address complement, country
Client client = new Client("New Client", "12500", 12, "Room 14", Country.US);
: : :
// Close database connection (and Commit Transaction)
SessionManager.Close( );

Although simple, the line above hides many steps, such as:
  • Check if the client name is valid
  • Check if the zipcode exists in the country (US)
  • Check if the address number is correct
  • Check if the address complement is correct
  • Check if there are clients with the same name and address
  • Proceed with the client creation
However, in order to complete some of these steps, the Client object must be able to access the data layer, and that is the responsibility of the repositories. According to Martin Fowler's website, a repository "mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects".
Reference: http://www.martinfowler.com/eaaCatalog/repository.html

In order to make the code line above possible, an Address and a Client class must be implemented.

See full code listing below:

// Address is a value object used by the Client class
public class Address
{
  public Address(string zipCodeNumber, int number, string addrComplement, Country country)
  {
    // Check if the zipcode exists in the country (the ZipCode repository uses SessionManager inside it)
    IZipCodeRepository zipCodeRepository = RepositoryManager.GetRepository<IZipCodeRepository>();
    ZipCode zipCode = zipCodeRepository.Get(zipCodeNumber, country);
    if (zipCode == null) { throw new NonExistentZipCodeException(zipCodeNumber, country); }

    // Check if the address number is correct
    if (number <= 0) { throw new InvalidAddressNumberException(number); }

    // Check if the address complement is correct
    if (addrComplement.Trim() == string.Empty) { throw new InvalidAddressComplementException(addrComplement); }

    // Sets values
    this._zipCode = zipCode;
    this._number = number;
    this._complement = addrComplement;
  }

  private ZipCode _zipCode = null;
  public ZipCode ZipCode { get { return _zipCode; } }

  private int _number = 0;
  public int Number { get { return _number; } }

  private string _complement = string.Empty;
  public string Complement { get { return _complement; } }
}

// Now the Client class
public class Client
{
  private long _id;
  public long Id { get { return _id; } set { _id = value; } }

  private string _name = string.Empty;
  public string Name
  {
    get { return _name; }
    // Check if the name is valid
    set
    {
      if (value.Trim() == string.Empty) { throw new InvalidNameException(); }

      // Check if there are clients with the same name and address
      IClientRepository clientRepository = RepositoryManager.GetRepository<IClientRepository>();
      bool exists = clientRepository.ClientExists(value, _address);
      if (exists) { throw new ClientExistsException(); }

      this._name = value;
    }
  }

  private Address _address;
  public Address Address { get { return _address; } }

  public Client(string name, string zipCodeNumber, int number, string addrComplement, Country country)
  {
    // Creates a Client
    this._address = new Address(zipCodeNumber, number, addrComplement, country);
    this.Name = name;
  }
}

Repositories have at least two advantages:
  • They remove data-access-specific code from the domain classes, which remain concerned only with business logic
  • They allow unit tests, since repositories are referenced as interfaces in the domain classes and thus fake repositories can be created without depending on a database connection (see the sketch below)
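
To make the second advantage concrete, here is a hedged sketch of a fake repository that could be plugged in during unit tests. FakeZipCodeRepository is a hypothetical name, the ZipCode constructor used here is assumed for illustration, and IZipCodeRepository is assumed to expose only the Get method used above.

// A fake repository that never touches the database: it accepts any zipcode
public class FakeZipCodeRepository : IZipCodeRepository
{
  public ZipCode Get(string zipCodeNumber, Country country)
  {
    // Always returns a valid ZipCode so that Address creation succeeds in the test
    return new ZipCode(zipCodeNumber, country);
  }
}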

2009-03-12

Using .NET Nullable Types with NHibernate 1.2

Out of the box, NHibernate 1.2 does not support .NET nullable types such as DateTime?, int?, bool?, etc., but this can be solved by implementing specific NHibernate user types.
User types are not listed below for every .NET nullable type, but the missing ones can easily be written by following the examples, especially for the numeric types.
If you need help, just send me an email.

Nullable user types code listings:

NullableDateTimeType.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NHibernate.UserTypes;
using NHibernate;
using System.Data;
using NHibernate.SqlTypes;

namespace Utilitario.GerenciaDados
{
  public class NullableDateTimeType : IUserType
  {
      #region IUserType Members
      public bool Equals(object x, object y)
      {
          return object.Equals(x, y);
      }
      public int GetHashCode(object x)
      {
          return x.GetHashCode();
      }
      public object NullSafeGet(IDataReader rs, string[] names, object owner)
      {
          //object valor = NHibernateUtil.DateTime.NullSafeGet(rs, names[0]);
          object valor = null;
          if (rs[names[0]] != DBNull.Value)
              valor = Convert.ToDateTime(rs[names[0]]);

          DateTime? dateTime = null;

          if (valor != null)
          {
              dateTime = (DateTime)valor;
          }
          return dateTime;
      }
      public void NullSafeSet(IDbCommand cmd, object value, int index)
      {
          if (value == null)
          {
              NHibernateUtil.String.NullSafeSet(cmd, null, index);
          }
          else
          {
              DateTime? dateTime = (DateTime)value;
              NHibernateUtil.AnsiString.NullSafeSet(cmd, dateTime.Value.ToString("yyyy/MM/dd HH:mm:ss.fff"), index);
          }
      }
      public object DeepCopy(object value)
      {
          return value;
      }
      public object Replace(object original, object target, object owner)
      {
          return original;
      }
      public object Assemble(object cached, object owner)
      {
          return cached;
      }
      public object Disassemble(object value)
      {
          return value;
      }
      public SqlType[] SqlTypes
      {
          get { return new SqlType[] { new StringSqlType() }; }
      }
      public Type ReturnedType
      {
          get { return typeof(string); }
      }
      public bool IsMutable
      {
          get { return false; }
      }
      #endregion
  }
}

NullableBooleanType.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NHibernate.UserTypes;
using NHibernate;
using System.Data;
using NHibernate.SqlTypes;

namespace Utilitario.GerenciaDados
{
  public class NullableBooleanType : IUserType
  {
      #region IUserType Members
      public bool Equals(object x, object y)
      {
          return object.Equals(x, y);
      }
      public int GetHashCode(object x)
      {
          return x.GetHashCode();
      }
      public object NullSafeGet(IDataReader rs, string[] names, object owner)
      {
          object valor = NHibernateUtil.Boolean.NullSafeGet(rs, names[0]);
          bool? caracter = null;
          if (valor != null)
          {
              caracter = (bool)valor;
          }
          return caracter;
      }
      public void NullSafeSet(IDbCommand cmd, object value, int index)
      {
          if (value == null)
          {
              NHibernateUtil.Boolean.NullSafeSet(cmd, null, index);
          }
          else
          {
              bool? caracter = (bool)value;
              NHibernateUtil.Boolean.NullSafeSet(cmd, caracter.Value, index);
          }
      }
      public object DeepCopy(object value)
      {
          return value;
      }
      public object Replace(object original, object target, object owner)
      {
          return original;
      }
      public object Assemble(object cached, object owner)
      {
          return cached;
      }
      public object Disassemble(object value)
      {
          return value;
      }
      public SqlType[] SqlTypes
      {
          get { return new SqlType[] { new StringSqlType() }; }
      }
      public Type ReturnedType
      {
          get { return typeof(string); }
      }
      public bool IsMutable
      {
          get { return false; }
      }
      #endregion
  }
}

NullableCharType.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NHibernate.UserTypes;
using NHibernate;
using System.Data;
using NHibernate.SqlTypes;

namespace Utilitario.GerenciaDados
{
  public class NullableCharType : IUserType
  {
      #region IUserType Members
      public bool Equals(object x, object y)
      {
          return object.Equals(x, y);
      }
      public int GetHashCode(object x)
      {
          return x.GetHashCode();
      }
      public object NullSafeGet(IDataReader rs, string[] names, object owner)
      {
          object valor = NHibernateUtil.Character.NullSafeGet(rs, names[0]);
          Char? caracter = null;
          if (valor != null)
          {
              caracter = (Char)valor;
          }
         return caracter;
      }
      public void NullSafeSet(IDbCommand cmd, object value, int index)
      {  
          if (value == null)
          {
               NHibernateUtil.Character.NullSafeSet(cmd, null, index);
          }
          else
          {
              Char? caracter = (Char)value;
              NHibernateUtil.Character.NullSafeSet(cmd, caracter.Value, index);
          }
      }
      public object DeepCopy(object value)
      {
          return value;
      }
      public object Replace(object original, object target, object owner)
      {
          return original;
      }
      public object Assemble(object cached, object owner)
      {
          return cached;
      }
      public object Disassemble(object value)
      {
          return value;
      }
      public SqlType[] SqlTypes
      {
          get { return new SqlType[] { new StringSqlType() }; }
      }
      public Type ReturnedType
      {
          get { return typeof(string); }
      }
      public bool IsMutable
      {
          get { return false; }
      }
      #endregion
  }
}

NullableDecimalType.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NHibernate.UserTypes;
using System.Data;
using NHibernate.Util;
using NHibernate;
using NHibernate.SqlTypes;

namespace Utilitario.GerenciaDados
{
  public class NullableDecimalType : IUserType
  {
      #region IUserType Members
      public bool Equals(object x, object y)
      {
          return object.Equals(x, y);
      }
      public int GetHashCode(object x)
      {
          return x.GetHashCode();
      }
      public object NullSafeGet(IDataReader rs, string[] names, object owner)
      {
          object valor = NHibernateUtil.Decimal.NullSafeGet(rs, names[0]);
          Decimal? inteiro = null;
          if (valor != null)
          {
              inteiro = (Decimal)valor;
          }
          return inteiro;
      }
      public void NullSafeSet(IDbCommand cmd, object value, int index)
      {
          if (value == null)
          {
              NHibernateUtil.Decimal.NullSafeSet(cmd, null, index);
          }
          else
          {
              Decimal? inteiro = (Decimal)value;
              NHibernateUtil.Decimal.NullSafeSet(cmd, inteiro.Value.ToString().Replace(',','.'), index);
          }
      }
      public object DeepCopy(object value)
      {
          return value;
      }
      public object Replace(object original, object target, object owner)
      {
          return original;
      }
      public object Assemble(object cached, object owner)
      {
          return cached;
      }
      public object Disassemble(object value)
      {
          return value;
      }
      public SqlType[] SqlTypes
      {
          get { return new SqlType[] { new StringSqlType() }; }
      }
      public Type ReturnedType
      {
          get { return typeof(string); }
      }
      public bool IsMutable
      {
          get { return false; }
      }
      #endregion
  }
}

NullableDoubleType.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NHibernate.UserTypes;
using NHibernate;
using NHibernate.SqlTypes;
using System.Data;

namespace Utilitario.GerenciaDados
{
  public class NullableDoubleType : IUserType
  {
      #region IUserType Members
      public bool Equals(object x, object y)
      {
          return object.Equals(x, y);
      }
      public int GetHashCode(object x)
      {
          return x.GetHashCode();
      }
      public object NullSafeGet(IDataReader rs, string[] names, object owner)
      {
          object valor = NHibernateUtil.Double.NullSafeGet(rs, names[0]);
          Double? valorD = null;
          if (valor != null)
          {
              valorD = (double)valor;
          }
          return valorD;
      }
      public void NullSafeSet(IDbCommand cmd, object value, int index)
      {
          if (value == null)
          {
              NHibernateUtil.Double.NullSafeSet(cmd, null, index);
          }
          else
          {
              Double? valor = (Double)value;
              NHibernateUtil.Double.NullSafeSet(cmd, valor.Value.ToString().Replace(',','.'), index);
          }
      }
      public object DeepCopy(object value)
      {
          return value;
      }
      public object Replace(object original, object target, object owner)
      {
          return original;
      }
      public object Assemble(object cached, object owner)
      {
          return cached;
      }
      public object Disassemble(object value)
      {
          return value;
      }
      public SqlType[] SqlTypes
      {
          get { return new SqlType[] { new StringSqlType() }; }
      }
      public Type ReturnedType
      {
          get { return typeof(string); }
      }
      public bool IsMutable
      {
          get { return false; }
      }
      #endregion
  }
}
NullableInt32Type.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NHibernate.UserTypes;
using System.Data;
using NHibernate.Util;
using NHibernate;
using NHibernate.SqlTypes;

namespace Utilitario.GerenciaDados
{
  public class NullableInt32Type : IUserType
  {
      #region IUserType Members
      public bool Equals(object x, object y)
      {
          return object.Equals(x, y);
      }
      public int GetHashCode(object x)
      {
          return x.GetHashCode();
      }
      public object NullSafeGet(IDataReader rs, string[] names, object owner)
      {
          object valor = NHibernateUtil.Int32.NullSafeGet(rs, names[0]);
          Int32? inteiro = null;
          if (valor != null)
          {
              inteiro = (Int32)valor;
          }
          return inteiro;
      }
      public void NullSafeSet(IDbCommand cmd, object value, int index)
      {
          if (value == null)
          {
              NHibernateUtil.Int32.NullSafeSet(cmd, null, index);
          }
          else
          {
              Int32? inteiro = (int)value;
              NHibernateUtil.Int32.NullSafeSet(cmd, inteiro.Value, index);
          }
      }
      public object DeepCopy(object value)
      {
          return value;
      }
      public object Replace(object original, object target, object owner)
      {
          return original;
      }
      public object Assemble(object cached, object owner)
      {
          return cached;
      }
      public object Disassemble(object value)
      {
          return value;
      }
      public SqlType[] SqlTypes
      {
          get { return new SqlType[] { new StringSqlType() }; }
      }
      public Type ReturnedType
      {
          get { return typeof(string); }
      }
      public bool IsMutable
      {
          get { return false; }
      }
      #endregion
  }
}

2008-12-26

Extremely Short Introduction for Ruby on Rails

Ruby on Rails

This post contains brief descriptions of the structure of a Ruby on Rails project.

Important Rails Commands

Here is a list of the most relevant Rails command-line programs, organized by task:

  • Starting a Rails Project: rails

  • Executing a Rails Project: ruby script/server ( in the application directory )

  • Generating a new Model: ruby script/generate model

  • Generating a new Controller: ruby script/generate controller


Directory Contents

app

Holds all the code that's specific to this particular application.

app/controllers

Holds controllers that should be named like weblogs_controller.rb for automated URL mapping. All controllers should descend from ApplicationController which itself descends from ActionController::Base.

app/models

Holds models that should be named like post.rb.

Most models will descend from ActiveRecord::Base.

app/views

Holds the template files for the view that should be named like weblogs/index.erb for the WeblogsController#index action. All views use eRuby syntax.

app/views/layouts

Holds the template files for layouts to be used with views. This models the common header/footer method of wrapping views. In your views, define a layout using the layout :default and create a file named default.erb. Inside default.erb, call <% yield %> to render the view using this layout.

app/helpers

Holds view helpers that should be named like weblogs_helper.rb. These are generated for you automatically when using script/generate for controllers. Helpers can be used to wrap functionality for your views into methods.

config

Configuration files for the Rails environment, the routing map, the database, and other dependencies.

db

Contains the database schema in schema.rb. db/migrate contains all the sequence of Migrations for your schema.

doc

This directory is where your application documentation will be stored when generated using rake doc:app

lib

Application specific libraries. Basically, any kind of custom code that doesn't belong under controllers, models, or helpers. This directory is in the load path.

public

The directory available for the web server. Contains subdirectories for images, stylesheets, and javascripts. Also contains the dispatchers and the default HTML files. This should be set as the DOCUMENT_ROOT of your web server.

script

Helper scripts for automation and generation.

test

Unit and functional tests along with fixtures. When using the script/generate scripts, template test files will be generated for you and placed in this directory.

vendor

External libraries that the application depends on. Also includes the plugins subdirectory. This directory is in the load path.

How do Model, View and Controller relate to each other?

The application directory is structured like below:

app
|-controllers
|-models
|-views

The fastest way to generate a complete CRUD for a model is to generate a controller with the scaffold option:

> script/generate scaffold blog title:string content:text date_created:datetime

Once you understand Ruby on Rails, it is considered better practice to generate models, views and controllers separately:

  • To generate a blog controller, one must type:

> script/generate controller blog

Result: a BlogController class will be generated at app/controllers in blog_controller.rb

  • To generate a blog model, one must type:

> script/generate model blog

Result: a Blog class will be generated at app/models in blog.rb

  • views cannot be generated on their own; you have to go to app/views/blog and create the *.html.erb files yourself.

Views for BlogController are automatically looked up in app/views/blog by naming convention ( since BlogController maps to the blog directory in app/views ).

Views in app/views/blog must have the *.html.erb extension, and an index.html.erb must be created for the initial page. Other auxiliary pages can be created in the same directory with different names.

In order to add/remove/update model fields, one only has to update the corresponding table in the data model. After that, the following command should be executed to update the models in Ruby on Rails:

> rake db:migrate

2008-11-23

Generating Software Documentation from Unit Tests

At the beginning of my career as a software developer I participated in two software projects with the traditional approach of document first, code later. It didn't take too much time for me to realize this was not a good approach. We spent months capturing requirements and writing use cases just to find out later that many use cases were actually different from what the stakeholders needed and that many requirements had changed.

Then I came across agile software development. The idea behind it made perfect sense to me and some work colleagues, and we slowly started to introduce this new paradigm in our department. Projects that used to last one year ( yes! one year ) or more now lasted only a couple of months. Besides that, we were demonstrating the software every week or two to the stakeholders, and thus we got frequent feedback from them.
This made us more productive and the final product (the software!) gained in quality and confidence. But what happened to our business documentation? We didn't drop a line of it.

Many agilists advocate that agile software development is about the absence of documentation. More recently, many agilists say it does not remove documentation but sees it from a different perspective than the document-oriented traditional approach: only the really necessary documentation is produced. Although that sounds perfectly reasonable, we still needed a way to document business rules for IT managers and stakeholders. They couldn't read and understand these rules directly from the source code, since they were not programmers or technicians. So I started to think of a way to automatically generate this documentation.

I was talking to Anselmo from Siemens and he gave me an interesting idea: automatic generation of business documentation from unit tests. I started thinking about how I could implement this in our software architecture, where we have a service class for each use case, and I came up with the idea below:

For a Client Registration use case we could have the following unit tests:
[TestFixture][BusinessRules] // The BusinessRules attribute marks this fixture for documentation generation
public class Client_Registration_Use_Case
{
  [Test]
  public void Normal_Flow__Check_if_name_is_not_null() { ... }

  [Test]
  public void Normal_Flow__Check_if_address_is_valid() { ... }

  [Test]
  public void Normal_Flow__Check_if_there_is_another_client_with_same_name() { ... }

  [Test]
  public void Normal_Flow__Save_New_Client() { ... }

  [Test, ExpectedException(typeof(NullClientNameException))]
  public void Alternative_Flow__If_client_name_is_null_raise_message_to_the_user() { ... }

  [Test, ExpectedException(typeof(InvalidClientAddressException))]
  public void Alternative_Flow__If_client_address_is_invalid_raise_message_to_the_user() { ... }

  [Test, ExpectedException(typeof(ClientNameAlreadyExistsException))]
  public void Alternative_Flow__If_another_client_with_the_same_name_was_found_raise_message_to_the_user() { ... }
}


At the end, a script in the continuous integration process can read this assembly, transform the information very easily and save it to a documentation file such as DocBook, ODT or a Word document (a sketch of such a generator appears after the example output below). It is important to note that the test methods should be placed in the right order so that the correct documentation can be generated. The only work is reading the class and method names and generating the documentation as follows:

-----------------------------------------------------------------
Client Registration Use Case:

Normal Flow:
  1. Check if name is not null
  2. Check if address is valid
  3. Check if there is another client with same name
  4. Save New Client
Alternate Flows:
  1. If client name is null raise message to the user
  2. If client address is invalid raise message to the user
  3. If another client with the same name was found raise message to the user
-----------------------------------------------------------------
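
As a concrete illustration, here is a hedged sketch of such a generator. BusinessRulesAttribute is the custom marker attribute implied by [BusinessRules] above, TestAttribute comes from NUnit, and the ordering caveat from the paragraph above still applies, since reflection does not guarantee declaration order.

using System;
using System.Reflection;
using System.Text;
using NUnit.Framework;

// The custom marker attribute used in the fixture above
public class BusinessRulesAttribute : Attribute { }

public static class BusinessDocGenerator
{
  // Reads [TestFixture] classes marked with [BusinessRules] and turns
  // class and method names into plain-text documentation.
  public static string Generate(Assembly testAssembly)
  {
    StringBuilder doc = new StringBuilder();
    foreach (Type fixture in testAssembly.GetTypes())
    {
      if (!fixture.IsDefined(typeof(BusinessRulesAttribute), false)) continue;

      // "Client_Registration_Use_Case" => "Client Registration Use Case:"
      doc.AppendLine(fixture.Name.Replace("_", " ") + ":");

      foreach (MethodInfo test in fixture.GetMethods())
      {
        if (!test.IsDefined(typeof(TestAttribute), false)) continue;

        // "Normal_Flow__Check_if_name_is_not_null" => "Normal Flow: Check if name is not null"
        doc.AppendLine("  " + test.Name.Replace("__", ": ").Replace("_", " "));
      }
    }
    return doc.ToString();
  }
}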

This idea has the following advantages:
  • Documentation reflects exactly what was implemented in the software code and never gets outdated
  • If a new check or action is needed in a use case, the implementation, NOT the documentation, is what gets changed
  • In order to update the documentation, new unit tests are required, which can enforce discipline among the programmers
  • Every time a release is generated, the entire application documentation is updated, as long as the proper unit tests are written
There are disadvantages as well:
  • Unit tests become more verbose, which can slow down their implementation a little
  • Test method names may not reflect the test actually performed inside
However, I still believe the benefits outweigh the drawbacks, and I wonder whether someone is already applying this idea, since it sounds so simple. If you have any ideas for agile business documentation, please get in touch with me.

2008-08-19

Efficient Software Development Process with Open-Source Tools for .NET

When software is being built, a series of characteristics must be pursued in order to deliver a quality product during the development process:
  • Agility
  • Testability
  • Readability
  • Extensibility
  • Automated Documentation
It is important to say that I presume many readers of this text are already convinced of the advantages brought by object-oriented programming compared to traditional development; the tools presented below support this programming paradigm in addition to the goals listed above.

Object-Relational Mapping Tool: Castle Project Active Record

One of the most time-consuming things in software development is mapping classes to tables.
This process is partially automated by tools such as NHibernate, but the ActiveRecord offered by the Castle Project not only maps classes to tables but is also capable of generating the database schema from the object model, which adds agility to the software process.
http://www.castleproject.org/activerecord/index.html
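
As a quick illustration, here is a hedged sketch of a Castle ActiveRecord entity; the class name and table name are made up for the example, and lazy-loading or key-generation options are omitted.

using Castle.ActiveRecord;

[ActiveRecord("Clients")]
public class Client : ActiveRecordBase<Client>
{
  [PrimaryKey]
  public int Id { get; set; }

  [Property]
  public string Name { get; set; }
}

Once the framework is initialized, something like new Client { Name = "John" }.Save() persists a row, and ActiveRecordStarter.CreateSchema() can generate the database schema mentioned above.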

Source-Code Standards: FXCop

Although it is free of charge, FxCop is not free software, since its code is privately owned by Microsoft.
Anyway, it is a very useful tool whose objective is to verify that the quality metrics and/or naming conventions of a project are being followed appropriately by the team members.
By using well-defined and well-known programming conventions, code readability is enhanced and different projects can be understood by every programmer in a software company.
http://msdn.microsoft.com/en-us/library/bb429476(VS.80).aspx

Unit Tests: NUnit

It is the most widely used automated testing tool for .NET. These tests play an important role in a software project since they give developers more confidence to change the software when needed, because they can point out when a certain piece of the software might be broken by a modification. Obviously NUnit brings testability to the software development environment.
http://www.nunit.org/index.php

Test Coverage: PartCover

How can you know which parts of your code are being covered by automated tests?
Measuring this is called test coverage, and it is the objective of PartCover.
Although NCover is widely mentioned on the net, PartCover is becoming increasingly
important as an open-source test coverage project.
http://sourceforge.net/projects/partcover/

Automated Documentation: NDoc

Documentation can be a time-consuming task, but without it, software extensibility and maintainability can suffer considerably, especially for people outside the project who are unaware of its coding practices. One of the fastest ways to get documentation is to generate it from the source code. NDoc can generate developer-level documentation from the XML tags in the C# source code.
http://ndoc.sourceforge.net

Web Framework: MonoRail

The Castle Project seems to understand what it means to have agile development. With this in mind, MonoRail is a web framework that truly follows the MVC design pattern without slowing developers down. Besides agility, this framework also gives us improved readability compared to traditional ASP.NET programming.
http://www.castleproject.org/monorail/gettingstarted/index.html

Continuous Integration: Cruise Control .NET

In the prior sections some tools for software quality were presented, such as FxCop, NUnit, PartCover and NDoc. Although useful, it is very easy for a team member to forget to run some of these tools manually during the development process, which makes the whole routine error prone.
To overcome these drawbacks and others, continuous integration arose as one of the best-known practices from Extreme Programming (XP). ( See XP at http://www.extremeprogramming.org/ )
Basically, continuous integration is performed by a build tool such as CruiseControl ( http://cruisecontrol.sourceforge.net/ ) that is responsible for executing the tasks required before delivering the software to the users, such as compilation checking, unit test execution, quality metrics verification and finally publication. Other combinations of tasks can be thought of as well, such as email notification and many others.
Generally the continuous integration process is executed manually, or during the night when there is low activity, but it is also possible to trigger it from a version control system such as SVN before accepting a commit requested by a team programmer.

2008-07-24

Scrum as a criteria for Venture Capital Groups

I just read the foreword by Jeff Sutherland ( co-creator of Scrum ) in the book "Scrum and XP from the Trenches", where he comments on how, as an agile coach for a venture capital group, he picks companies that really apply the Scrum framework ( an agile framework, or methodology, for developing software ).

The more time passes, the more I realize how important a role agile software development will play in the near future. It already plays an important role, but it may become a requirement for startup companies in search of funding from venture capital groups.

If a company cannot deliver its products on time, cannot deliver working software, or delivers something that is not what the client was expecting, it should not expect to receive funding. As Mike Cohn noted in his foreword to the same book, agile development is not about beautiful documentation or future-problem-proof code; it is about software that is done and working. And that's exactly what clients need and expect from a software company.
At the same time, Scrum, as well as other kinds of agile software ideas ( be they frameworks, methodologies or practices ), is very easy to put into practice. Any company or organization can start using these ideas anytime they want.

If you want to get an idea of the Scrum process in five minutes or so, take a look at this article: http://www.softhouse.se/Uploades/Scrum_eng_webb.pdf

Additionally, the book "Scrum and XP from the Trenches" mentioned at the beginning of this post can be downloaded here: http://www.infoq.com/minibooks/scrum-xp-from-the-trenches

Personally, as a software developer and a small investor I will take Jeff's tip for future investments in tech companies.

2008-07-18

5 Things You Should Remember about NHibernate

NHibernate is probably the most widely used ORM (object-relational mapping tool) for .NET applications, and it is based on Hibernate, which has been the most widely used ORM in Java for years.
The learning curve to start working with NHibernate can be reduced if you remember to take the following steps:

1) The domain class should have at least one public or protected parameterless constructor

2) The domain classes' public properties and methods must be declared with the virtual keyword

3) For XML mappings, remember to rename your mapping files ending with *.hbm.xml not only *.xml

4) Also for XML mappings, remember to set the build action of each mapping file to Embedded Resource instead of Content

5) Avoid composite-id classes as much as you can, since they don't work very well with cascaded collections and they make development more difficult (a short sketch follows below)
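
For illustration, here is a short hedged sketch of a domain class that follows rules 1 and 2. The Product class is hypothetical; per rules 3 and 4, its mapping would live in a Product.hbm.xml file marked as an embedded resource.

public class Product
{
  // Rule 1: a parameterless constructor that NHibernate can call (protected is enough)
  protected Product() { }

  public Product(string name)
  {
    Name = name;
  }

  // Rule 2: public members declared virtual so NHibernate can build runtime proxies
  public virtual int Id { get; protected set; }
  public virtual string Name { get; set; }
}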

Unit Tests Rule Software Development

Even after a relatively long time using object-oriented systems, we still couldn't deal well with a growing problem: the lack of automated tests.

The absence of unit tests reduces the programmers' confidence in the system and makes it very difficult, if not impossible, to modify the source code. Since each modification can cause a lot of other bugs and undesired side effects in other parts of the system, or in other systems, the system cannot evolve with the changing business rules and the changing technological knowledge.

Well, that was true some months ago; now we are building a lot of unit tests for the systems already in production. Ironically, thanks to some good architectural choices, such as persistence isolation and POCO objects, it was not difficult to start testing with the stub technique. It was all there; we just started creating test cases.

In reality, although it is not the ideal way, a set of use-case tests is being implemented in order to ensure that what was working before keeps executing correctly. So, at this time, the service layer is tested, not the business objects.

Now we are looking for automated ways to create more realistic test cases, in order to produce all the tests we need as fast as we can. Perhaps a testing framework will be necessary.

2008-03-19

Rich Domain Objects

In object-oriented programming, domain objects are the key to software development. However, many programmers tend to write these classes as simple get/set storage, just like the example below in pseudo-code:
class Client
{
  private long id;
  public long Id
  {
    get { return this.id; }
    set { this.id = value; }
  }

  private string name;
  public string Name
  {
    get { return this.name; }
    set { this.name = value; }
  }

  private int registrationYear;
  public int RegistrationYear
  {
    get { return this.registrationYear; }
    set { this.registrationYear = value; }
  }
}

The instances of the class above are the so-called anemic objects. These objects don't verify their internal state or their behaviour, and thus accept any value as input.
As a consequence, the extra work is delegated to the application.

However, these objects are not very practical for complex domain models and they don't take full advantage of object-oriented programming. Classes not only carry data but also behaviour, which is exactly what the methods and properties are for.

To attack complex domains there are rich domain objects, which are objects capable of verifying all these aspects internally, preventing programmers from having to remember them later when building the application.

Rich domain objects also guide developers who are not familiar with the business rules when they create an application for that particular domain.

In order to work efficiently with rich domain objects the following design rules can be adopted:
  • Constructors should take the parameters required to create a new, valid instance of the class from the business point of view

// Rule 1 - Constructor with mandatory parameters
public Client(string name, DateTime registrationDate) { ... }


  • Always use properties instead of accessing fields directly to read or modify data in the class implementation, except, obviously, in the property implementations themselves

// Rule 2 - Use properties instead of fields
{ this.Name = name; this.RegistrationDate = registrationDate; }

  • Property setters should check the input value appropriately and check the object's internal state before altering the object, according to the domain rules

public string Name
{
  get { return this.name; }
  // Rule 3 - The set property checks the input value
  set
  {
    if (value == "") throw new SystemException("Empty name is invalid");
    this.name = value;
  }
}

  • Property getters should check the object's internal state before returning a value

public int RegistrationYear
{
  // Rule 4 - The get property checks object and application state before returning a value
  get
  {
    if (User.CurrentUser().IsManager())
      return this.registrationYear;
    else
      throw new SystemException("No permission for Client's Registration Year.");
  }
  set
  {
    if (value < 1980 || value > DateTime.Now.Year)
      throw new SystemException("Year must be from 1980 until now");
    this.registrationYear = value;
  }
}

  • Classes should be designed in order to follow primarily the business model and then the data model

Following these rules, a Client class could be:
// Rule 5 - The Client class is designed according to the business model
class Client
{
  // Rule 1 - Constructor with mandatory parameters
  public Client(string name, DateTime registrationDate)
  // Rule 2 - Use properties instead of fields
  { this.Name = name; this.RegistrationYear = registrationDate.Year; }

  private string id;
  public string Id
  {
    get { return this.id; }
    set { this.id = value; }
  }

  private string name;
  public string Name
  {
    get { return this.name; }
    // Rule 3 - The set property checks the input value
    set
    {
      if (value == "") throw new SystemException("Empty name is invalid");
      this.name = value;
    }
  }

  private int registrationYear;
  // Rule 4 - The get property checks object and application state before returning a value
  public int RegistrationYear
  {
    get
    {
      if (User.CurrentUser().IsManager())
        return this.registrationYear;
      else
        throw new SystemException("No permission for Client's Registration Year.");
    }
    set
    {
      if (value < 1980 || value > DateTime.Now.Year)
        throw new SystemException("Year must be from 1980 until now");
      this.registrationYear = value;
    }
  }
}

2007-12-20

Best Practice to Handle Exceptions

Modern programming languages come with try/catch/finally blocks, and many times a hierarchy of exception classes is provided as well.
Exceptions are a powerful tool that can be used to handle both system and user errors.
When an exception is thrown in some part of the code, the runtime keeps popping ( unwinding ) the call stack until it finds a catch statement that can handle that exception type.
One common way to capture exceptions is to write several catch blocks, from the most specific to the most generic exception class, so that each exception is handled appropriately according to its type.
Check code below:

try
{
// code block
}
catch ( SpecialSystemException e)
{
// code to handle SpecialSystemException
}
catch ( SystemException e)
{
// code to handle SystemException
}
catch ( Exception e)
{
// code to handle Exception
}
finally
{
// finishes code block
}


However, in many cases it is desirable to handle each exception type the same way across the entire system. In this case the code template shown above has some disadvantages:
  • Programmers must write more code, due to the several "catch" statements for each exception type that is meant to be handled
  • The code that handles each exception class is replicated throughout the software
  • And finally, since it demands more work and is repeated, it is error prone
To solve this problem, a better approach is to create a separate, centralized class that handles all exceptions thrown by the system. See the .NET pseudo-code example below:

public class ExceptionHandler
{
  public static void Handler(Exception exception)
  {
    if (exception.GetType() == typeof(SpecialSystemException))
    {
      // Code that deals with SpecialSystemException
    }
    else if (exception.GetType() == typeof(SystemException))
    {
      // Code that deals with SystemException
    }
    else if (exception.GetType() == typeof(Exception))
    {
      // Code that deals with Exception
    }
  }
}

Replacing the code of the first example:

try
{
// code block
}
catch ( Exception e)
{
// Exceptions are now handled centrally by the handler class
ExceptionHandler.Handler(e);
}

2007-12-03

Choosing the Right Primary Keys

One of the most common data modeling practices is the use of composite keys and natural keys as primary keys to identify rows in database tables.
Composite-key tables use more than one column to identify a row, while natural keys use domain information to identify a row, such as a Social Security Number to identify a Person. In most cases a composite key is actually a composite natural key.

However these tecniques are not consider good practices to choose the table primary keys due to the following factors:
  • Business Rules change over time but primary keys don't
    • Business rules are dynamic but primary keys are static until you update the database model what can cost a lot time and effort if not impossible. For example if you identify a Client with a Social Security number you may not be able to identify a foreign Person if needed or maybe the government can decide to use the same Social Security Number for more than one person. This lack of capacity to change the database model can reduce significantly the agility of an organization to adapt its business model to new kind of reality what reduces its capacity to compete with other organizations.

Figure 1 - Table identified by a natural key

  • Composite keys require more work
    • It is necessary to write longer SQL queries, since you have to use all the primary-key columns to join the table to another one.


select cli.name, pd.description, sa.date, sa.quantity
from Client cli
inner join Sales sa on
( cli.socialSecurityNumber = sa.socialSecurityNumber )
inner join Product pd on
( sa.idProductType = pd.idProductType and
sa.vendorRegistration = pd.vendorRegistration )
where sa.date >= '2007-12-01'

  • They make the database model less readable
    • Composite keys spread to other tables as foreign keys and pile up next to other foreign composite-key columns. In some cases where composite keys are widely used, a table ends up with more composite-key columns than useful information.

Figure 2 - Only date and quantity are Sales columns; the rest are primary keys inherited from other tables


On the other hand, one of the strongest arguments in favor of composite and natural keys is that they provide a safer way to enforce data integrity, preventing certain columns in a table from repeating, which a single primary key alone cannot guarantee.
But many people forget that the same data integrity can be achieved in single-primary-key tables by using alternate keys. With alternate keys you can choose a group of columns in a table and make them unique, just like a composite key would.

In order to present this idea in more detail, there are two examples below: one with the traditional composite-key and natural-key approach, and the other with the proposed idea of using single primary keys (non-natural keys) plus alternate keys.

Figure 3 - Data model example that makes use of natural keys


Figure 4 - Data model example with single primary keys and alternate keys

2007-10-17

Less is More - Dynamically typed languages

When I was at university I learned that statically typed languages should be preferred, since they prevent programmers from using a variable in a way it was not meant to be used.
If I declare "string name;", I know it will accept only strings and will prevent a programmer from assigning a number or a boolean value to it.

But this seems to be a false concern, given the success of languages such as Python and Ruby.
These programming languages are now being used for increasingly complex software, and nobody is complaining about their "dynamic" features.

The goal the new development tools are trying to achieve is to let you build your software faster, in a clean and organized way. Less is More!

2007-08-30

Packages: A Tool to Organize Classes

One of these days I was talking to a work colleague about what the package (a.k.a. namespace) of a certain class should be. Not rarely, a small debate arises about it. Should the class WorkerRiskActivity belong to the Health package or to the Activity package? And so on...

Although it may sound like a useless discussion, it has proven (at least in our organization) to be a healthy kind of worry, especially in the long term.

I can enumerate the following advantages in using good packages for classes:

  • It breaks the complexity of hundreds of classes into blocks composed of a few classes, just like folders do for files.

  • It gives developers a clue about the role a class plays in the system.
The fully qualified name Finance.Services.DebtManagement tells you what the class actually does.

  • It organizes the logical architecture of your software. This is especially useful for layered architectures, where classes are organized according to their role.

Organizing classes can also be confusing. In order to avoid wrong classification, the anti-patterns below should be avoided:

  • Packages with vague names
One of the traps to avoid is creating packages with names that are too vague, such as General, Etc or Miscellaneous. These names end up being used by lazy developers as a dumping ground for classes they don't want to bother classifying into the right package.

  • Too much debate about packages

On the other hand, organizing classes into packages should not give rise to a long debate. If a class is placed in the wrong package, a code refactoring can solve the problem later.
  • Package-by-layer
Although it can sound perfectly reasonable to organize classes by layer, this kind of package can actually be harmful: it is no longer possible to determine what permissions a specific package should have, because the package ends up holding all kinds of classes from different parts of the system that don't relate to each other. The preferred way is to organize packages by feature, as can be seen in all the examples listed in this post. In fact this is not a "personal way" of modeling packages; it is the original purpose of a package.
Thus the package Finance has all the classes that compose a financial application, including its UI layer, services and problem-domain classes, as sketched in the example below.
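A minimal C# sketch of package-by-feature versus package-by-layer; only Finance, Finance.Services.DebtManagement and WorkerRiskActivity come from the discussion above, the remaining names are hypothetical:

// Package-by-feature: everything the Finance feature needs lives under Finance.*
namespace Finance.Services
{
  public class DebtManagement { /* feature service */ }
}
namespace Finance.UI
{
  public class DebtManagementForm { /* feature UI */ }
}
namespace Finance.Domain
{
  public class Debt { /* problem-domain class */ }
}

// Package-by-layer (the anti-pattern): unrelated features end up side by side,
// so the Services package as a whole can no longer be given meaningful permissions
namespace MyCompany.Services
{
  public class DebtManagement { }
  public class WorkerRiskActivity { } // unrelated to Finance
}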

2007-07-03

Why don't developers like to model the Real World?

Years ago the concepts of roles, real-world modeling, reusability and many others were not well known. People with different roles were modeled as different entities (tables or classes), as if they were different things. In many database models, such as the Northwind example, we would find a table for Clients, another table for Vendors, a table for Employees and so on.
This kind of approach leads to data ambiguity, since a Person can be an Employee on some occasions, a Vendor on others, or even a Client. So if a Person's address, name, contact, etc. must be modified, the change must be reflected in several tables. This business rule, or rather this developer rule, must be remembered by anyone responsible for developing a system that could modify the data in one of these tables.

The approach has changed, but recently I was taking a look at a class diagram from a work colleague, and it caught my attention that he had modeled the accounting role as an Accountant class.
It is not uncommon to see the same pattern when modeling people's roles in many other class diagrams, or even in data diagrams (entity-relationship). Many developers usually create a class (or table) for each role, like this:

It is not hard to see that as the system grows, more roles are needed. The additional classes multiply in the class model, making it increasingly complex to understand.

Interestingly, there is an Analysis Pattern that deals with the people's-roles problem by providing a more elegant class model, like this:
Now the roles are better organized, and if more roles are needed they can be added to the class hierarchy. And that is all right.

This could be a perfect solution to the problem, but let me ask you a question.
Do the roles really exist in the Real World?
I mean, imagine there are no computers and no information systems: how do you find out someone's role in your organization? Do you ask: "What's your role?" Of course not.

If you are working for a health care company, for example, in order to check whether "Jack" is a doctor you ask for his Doctor License (or Medical License, or whatever document is used in your country). To work as a doctor, Jack must have this document (or license). He may not carry it with him, and maybe it does not even exist physically, but there is some sort of license or document that grants him permission to work with other people's lives.

The same happens for accountants. Accountants do not work with people's lives, but in many countries they must have a document or license that enables them to do their job.
In order to know whether someone is an accountant, you just have to check if he has an Accountant Registration (or any name you prefer).

Using the document or license metaphor (think of licenses and documents as the same thing) instead of roles gives you a more natural representation of your business rules. If the class diagram is represented more naturally, using Real-World characteristics, it is also more intuitive, so it can be better understood and learned by newcomers and by other developers in a large enterprise system. Check out the class diagram below:

The class diagram above shows a Person with his/her documents, the way it is in the Real World. If another document type is needed, it can be added to the hierarchy, just as in the prior class diagram.
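Since the class diagrams themselves are not reproduced here, a minimal C# sketch of the document/license idea follows; the class and member names are hypothetical, only the Person-with-documents concept comes from the text:

using System;
using System.Collections.Generic;
using System.Linq;

// Each concrete document type grants a capability, just like in the Real World
public abstract class Document
{
  public string Number { get; set; }
  public DateTime IssueDate { get; set; }
}

public class DoctorLicense : Document { }
public class AccountantRegistration : Document { }

public class Person
{
  private readonly IList<Document> documents = new List<Document>();

  public string Name { get; set; }
  public IList<Document> Documents { get { return this.documents; } }

  // Instead of asking for a role, check whether the person holds the document
  public bool Has<TDocument>() where TDocument : Document
  {
    return this.documents.Any(d => d is TDocument);
  }
}

To check whether "Jack" is a doctor you would then call jack.Has<DoctorLicense>() rather than testing a role flag, and new document types can be added to the hierarchy without touching Person.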