Thursday, February 10, 2011

Inversions of Dependencies for Agile Software Development

Anyone who has experienced large-scale software development will be very familiar with the following scenario:

A base software module was developed to offer some services for a specific business case. After some time, a new requirement emerges which calls for the addition of a new feature to a base service. A software developer, who in most cases does not have direct assistance from the original author – e.g. because that developer has already left the company – now has to read the sources, try to understand the original logic, and modify the original code in order to insert the new implementation in the right context.  

Without sufficient knowledge about the base module, the modified sources are inherently error-prone. With each modification and the quick bug-fixes that follow, the software becomes increasingly dirty and more difficult to extend. Moreover, an increasing number of tests fail with each new feature. Finally, once the lifespan of a feature expires, it is also very difficult to remove it from the software module.

Very soon, the management will face the question of whether it is necessary to re-implement the module from scratch – with the associated sunk costs, new investment and service outages.

The problem is the inherent cyclic dependency that results from embedding the new feature into its base module, as depicted in Figure 1.

Figure 1: Cyclic Dependency in Software Evolution

In one direction, we have the direct dependency from the base module to its extension, at least for the feature invocation. In the other direction, we have the implicit dependency of the new feature on the base, as this feature has to operate in the context defined by the base module. Such a cycle results in tight coupling and in overlapping domains of concern and responsibility, and therefore in the error-proneness, inflexibility and complexity of the software evolution process.

One possible solution to this problem is to eliminate or invert the invocation dependency shown in Figure 1. In this way, it is no longer necessary to directly modify the source code of the base module. All test cases for the base module remain valid. The base module and the new feature can be maintained, developed and tested separately. Concerns, focuses and responsibilities of the different developers are clearly separated. Introduction or deletion of features can easily be implemented by adding or deleting the resources (e.g. the corresponding artifact) for the new feature.

The concept of Dependency Inversion (DI) is frequently used in the context of object-oriented frameworks, referring to the use of interfaces or abstract classes to separate clients from service implementations. In our scenario, the new features could be realized as new services or as new implementations of the existing services, e.g. by wrapping the previous service interfaces or modifying the existing implementations. Due to the associated restrictions (e.g. only the external interface can be wrapped) and overheads (e.g. the necessity to modify legacy code), the OO-based dependency inversion approach is insufficient as a general solution for the aforementioned problem.
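To make the contrast concrete, the following is a minimal sketch of this conventional, interface-based inversion. The interface ILoginService, the wrapper CountingLoginService and the delegation logic are illustrative assumptions and not part of the prototype presented later; the Session type is the one used in the later listings.

public interface ILoginService {
 void login(String password, String username, Session session);
}

public class CountingLoginService implements ILoginService {

 // the original, unmodified implementation is only wrapped, not changed
 private final ILoginService delegate;
 private int loginCounter = 0;

 public CountingLoginService(ILoginService delegate) {
  this.delegate = delegate;
 }

 @Override
 public void login(String password, String username, Session session) {
  delegate.login(password, username, session); // base service
  loginCounter++;                              // new feature
 }
}

Such a wrapper works only at the granularity of the external interface, and every client has to be re-wired to the new implementation – exactly the restrictions and overheads mentioned above.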

The following blogs present a simple, easily manageable and maintainable solution for agile software development as discussed in the previous scenario, with emphasis on simplicity in coding and analysis, and on minimal runtime overhead. The framework utilizes the Java annotation mechanism as specified in JSR-175/250/269. The inverted dependencies from the new, extended features to their base classes are specified by identifying the corresponding service extension points and the intended extension types in the annotations attached to the new feature methods.
  
The extension method calls are inserted via a specific annotation processor into the byte code of the extended classes. The insertion happens after the compilation phase and ahead of the deployment phase.

This presentation is based on the result of a prototyping project.

Next Page - Dependency Inversion Based on Java Annotations

Dependency Inversion Based on Java Annotations

The key concept within the annotation-based dependency inversion framework is the service extension point. As shown in the snippet in List 1, a service extension point identifies the base service method that can be extended with new features.

A service extension point can be extended by calls to the new feature methods either before or after the invocation of the base service method. This is done by identifying the corresponding service extension point, i.e. the targeted class, method name and signature, and the intended extension type, as in the following code example:

@IServiceExtension(targetClass = "LoginService", 
   targetMethod = "login", 
   targetSignature = {"String", "String", "Session"},
   extensionType = IServiceExtension.AFTER)
public void countUp() {
  ……
}
List 1: Service Extension

Basically, this annotation says that the method countUp of the current class shall be invoked after every call of the service extension point, i.e. the login method of the service class LoginService.

The IServiceExtension annotation can be defined by the following annotation interface:

@Retention(RetentionPolicy.SOURCE)
@Target({ElementType.METHOD})
public @interface IServiceExtension {
 
 //Shall be invoked before the targetMethod
 public static final int BEFORE = 0;
 //Shall be invoked after the targetMethod
 public static final int AFTER = 1;
 
   String targetClass() default "java.lang.Object";
   String targetMethod() default "";
   String[] targetSignature() default "";
   int extensionType() default IServiceExtension.BEFORE;
 
}
List 2: The IServiceExtension Annotation Interface

As an example, suppose that for a web-based application we have the implementation of a user management module which provides login and logout services. These services are realized by the classes LoginService and LogoutService as in List 3.

public class LoginService {

 public User user;
 private Session session;
 
 @IServiceExtensionPoint 
 public void login(String password, String username, Session session) {
  // Look for a user that matches the credentials in the database
  user = getUserFromDB(password, username);
  // user will be null if no match was found
  session.setUser(user);
  this.session = session;
 } 

 private User getUserFromDB(String password, String username) {
  ……
 }

 public User getUser() {
  return user;
 }

 public void setUser(User user) {
  session.setUser(user); 
 }
}

public class LogoutService {
  
 @IServiceExtensionPoint 
 public void logout(Session session) {
  // Save the session information back to the DB
  storeSession(session);
 }

 private void storeSession(Session session) {
  ……  
 } 
}
List 3: The Base Login and Logout Services



public class LoginExtension {
 
 private static final int LOGIN_LIMIT = 1000;

 private LoginService parent = null;
 
 private static int loginCounter = 0;

 public LoginExtension(LoginService parent) {
  super();
  this.parent = parent;
 }

 @IServiceExtension(targetClass = "LoginService", 
   targetMethod = "login", 
   targetSignature = {"java.lang.String", "java.lang.String", "Session"},
   extensionType = IServiceExtension.AFTER)
 public void countUp() {
  loginCounter++;
  if (loginCounter > LOGIN_LIMIT) parent.setUser(null);
 }
 
 public static void countDown() {
  if (loginCounter > 0) loginCounter--;
 }
}

List 4: Extension to the Login Service

public class CountLogouts {
 
 @IServiceExtension(targetClass = "LogoutService", 
   targetMethod = "logout", 
   targetSignature = {"Session"},
   extensionType = IServiceExtension.BEFORE) 
 public void countDown() {
  LoginExtension.countDown();
 }
}
 
List 5: Extension to the Logout Service

The base login service takes three parameters – the password, the user name and the current session object – as input, and
  • retrieves the user object from the database that matches the password/username pair,
  • sets the user into the session object; in case no user was found, the value null will be stored into the session instance.
The base logout service simply saves the relevant session information back into the user DB.

This implementation offers limited functionality but is sufficient for the basic scenarios and for the initial launch of the application.

Now suppose that some time later, due to live capacity problems, the operation manager decides that the number of users logged into the portal shall be limited. This may be a temporary restriction, because it can be removed once, e.g., the running campaign expires or extra hardware becomes available.

To support this restriction, we have to implement a counter for the live user sessions and block any new logins after the limit is reached. The corresponding code snippet is listed in List 4. Similarly, as listed in List 5, the logout service shall be extended to reduce the count for each logout operation.

Service extensions can access the public fields and methods of the service class to be extended. To use such methods or fields to obtain context from the service execution or to influence the base service logic, the default pattern for service extensions has a dedicated constructor that takes the service instance as a parameter, as in List 4.


A dedicated annotation processor processes each IServiceExtension annotation in the following steps (a sketch of a possible implementation follows the list):

1. The byte code of the service class to be extended is retrieved from the class path.

2. The service method to be extended is located via the targeted method name and signature.

3. The extension method invocation is added to the service extension point in the following three sub-steps:

a. for the default service extension pattern mentioned above, an instance of the extension class is generated using the specified constructor,

b. the extension method is called upon this instance,

c. the code generated for a and b is inserted into the service extension point method either at the beginning or before every final return instruction, depending on the extension type.

4. After processing the annotations, the modified byte code is written back into the Java .class file.
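The blog does not name the byte code library used by the prototype; the following is a minimal sketch of steps 1 to 4, assuming Javassist and a hypothetical helper class ExtensionWeaver (error handling, signature matching and the static-method variant are omitted):

// Minimal sketch of the byte code injection behind steps 1-4, assuming Javassist.
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

public class ExtensionWeaver {

 public void weave(String targetClass, String targetMethod,
                   String extensionClass, String extensionMethod,
                   int extensionType, String classesDir) throws Exception {
  ClassPool pool = ClassPool.getDefault();
  pool.appendClassPath(classesDir);

  // Step 1: retrieve the byte code of the service class from the class path
  CtClass service = pool.get(targetClass);

  // Step 2: locate the service extension point method (signature matching omitted)
  CtMethod method = service.getDeclaredMethod(targetMethod);

  // Step 3: generate the call "new Extension(this).extensionMethod();" and insert it
  // at the beginning or before every return instruction, depending on the extension type
  String call = "new " + extensionClass + "($0)." + extensionMethod + "();";
  if (extensionType == IServiceExtension.BEFORE) {
   method.insertBefore(call);
  } else {
   method.insertAfter(call);
  }

  // Step 4: write the modified byte code back into the .class file
  service.writeFile(classesDir);
 }
}

In the prototype, this kind of injection logic would be driven by the JSR-269 annotation processor that reads the IServiceExtension attributes from the annotated extension methods.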

One possible variant of the default pattern is to omit the constructor which refers to the service instance to be extended. In this case, the default object constructor (without parameters) is called in step 3a to generate the instance of the extension class.

Another, more useful variant is to define the extension method as static. In this case, the annotation processing simply calls the extension method prefixed with its class name (e.g. LoginExtension.countDown()). In this way no extension instance needs to be created, and the extension method can still directly access the public static fields and methods of the base service.
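As an illustration of this static variant (a modification of List 5, not one of the original listings), the logout extension could be written with a static, annotated method, so that no CountLogouts instance has to be created at the extension point:

public class CountLogouts {
 
 @IServiceExtension(targetClass = "LogoutService", 
   targetMethod = "logout", 
   targetSignature = {"Session"},
   extensionType = IServiceExtension.BEFORE) 
 public static void countDown() {
  // would be injected into LogoutService.logout as: CountLogouts.countDown();
  LoginExtension.countDown();
 }
}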
If several features extend one single service extension point, the order of the extensions in the final byte code is not defined.


Next Page - Dependency Inversion in Software Configuration Management

Dependency Inversion in Software Configuration Management

An effective software project management includes a mandatory configuration management system, which manages, among other things, the processes for compiling, packaging, testing and deploying the software. In this context, the dependency inversion framework prototype presented in this and the previous blogs is based on the Maven build system.
As discussed on the previous page, the annotation processor works on the byte code of the classes to inject the commands necessary to adapt the run-time behavior. As a result, the processing of the annotations shall happen after the compilation phase, but before the packaging lifecycle phases.
An appropriate build phase for processing the dependency inversion annotations is the process-test-classes phase. At this stage, all classes, including the test classes, are already compiled to byte code, while the test phase has not yet started. Therefore it is possible to test the annotated code in the unit test phase before packaging the software for integration tests.

List 6 shows a snippet from the pom.xml of the artifact that contains the extension classes for the new features. In this configuration, the corresponding annotation processor is applied by the annotation plugin in the process-test-classes phase. The modified byte code is then used in later phases to package and deploy the software for tests and operations.
……
<build>
<plugins>
    <plugin>
      <groupId>org.bsc.maven</groupId>
      <artifactId>maven-processor-plugin</artifactId>
      <executions>
        <execution>
          <id>process</id>
          <goals>  <goal>process</goal>  </goals>
          <phase>process-test-classes</phase>
          <configuration>
            <processors>
               <!-- list of processors to use -->
               <processor>invframework.AnnotationProcessor</processor>
            </processors>
          </configuration> 
        </execution>
      </executions>
</plugin>
……
</plugins>
</build>
<dependencies>
  <dependency>
   <groupId>invframework-annotation</groupId>
   <artifactId>invframework-annotation-processor</artifactId>
   <version>0.5</version>
   <scope>compile</scope>
  </dependency>
 ……
</dependencies>
……
List 6: Maven Project pom.xml for Annotation Processing
 
As an example, the following Maven command on the corresponding project:
     mvn clean install
will create the needed jar, war and sar files and also install them in the Maven repository for the deployment phase.

In our previous example, we have the base module for the login service and the new feature for counting the active sessions. Suppose the base module is implemented in the artifact usermgr, while the extensions are located in another artifact loginextensions; we then have two different configurations for deployment (e.g. ear) packaging, as depicted in Figure 2.
Figure 2: Deployment Configurations
Depending on the live requirements, an operation manager can decide to include or exclude the new extensions from the servers to be deployed.

E.g. if the capacity problem is solved and the limitation of concurrent sessions is no longer necessary, one can decide to re-deploy the server and switch back to the base deployment A by removing the artifact loginextensions from the deployment dependencies.
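As a sketch of how this looks in the build configuration (the group id and version numbers are assumptions for illustration), switching between deployment B and the base deployment A amounts to adding or removing a single dependency in the pom.xml of the ear packaging project:

<!-- Deployment B: base module plus the login extensions -->
<dependencies>
  <dependency>
    <groupId>com.example.portal</groupId>
    <artifactId>usermgr</artifactId>
    <version>1.0</version>
  </dependency>
  <dependency>
    <!-- remove this dependency to fall back to the base deployment A -->
    <groupId>com.example.portal</groupId>
    <artifactId>loginextensions</artifactId>
    <version>1.0</version>
  </dependency>
</dependencies>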

As can be seen in the example, new features can easily be added to or removed from the running servers using different deployment configurations – without changing the source code of the base modules and without overlapping the potentially conflicting responsibilities and focuses.

Next Page - Summary

Summary

A framework for inverting the dependencies in evolutionary and agile software development was presented. This framework utilizes the Java annotation mechanism to specify, for the new feature methods, the targeted service extension points in the base module. A specific annotation processor is used to inject the corresponding commands into the byte code before packaging and deployment for the run-time environment.

In this way, the framework maps the original dependency in software development to the reversed dependency from the extension towards the base. Concerns and responsibilities of the developer roles in the software evolution are clearly separated. The CCP (Close Open Principle) of (Draeger & Mussawisade, 2011) is implemented by a simple, concise coding framework, which minimizes the run-time overhead and optimally supports the heterogeneous needs for agility and stability in software development.

By integrating the basic annotation framework within a Maven-based software configuration management environment, different feature extensions can be regarded as building blocks that can be combined to build up applications with a flexible architecture. Obsolete features can then easily be removed from the application via the deployment configurations. In this way, this approach to software development also supports the major features of a BPM framework, in that it enables the flexible configuration of business flows by composing different features/activities.

Basically, this framework is comparable to Aspect-Oriented Programming (AOP) frameworks like AspectJ or Spring AOP, or to the event-based framework for dependency inversion as proposed in (Draeger & Mussawisade, 2011).

An AOP framework typically requires a larger number of configuration files and glue classes. E.g. in the case of AspectJ we need one class for the base logic, one for the extensions (aspects), one file for the configuration that maps the extensions to the base classes/methods, and finally another class which loads the configuration at run-time to adapt the services. The large number of configurations that accumulates after integrating numerous follow-up features makes the whole application less maintainable and understandable. 

The event-based framework introduces similar complexity in specifying the event listeners, registrations and processing. The dynamic, event-based run-time logic also makes static analysis of the programs more difficult. Moreover, event management and processing impose extra run-time overhead on the applications.

The key advantages of the new approach presented in this blog are its simplicity, its easy configurability and maintainability, and its minimal overhead for the run-time environment. Based on these features, this framework offers a better solution for extensive and major adaptations of service logic in an agile development process and on a larger scale.



Start Page - Inversions of Dependencies for Agile Software Development

References:  

(Draeger & Mussawisade, 2011) Joachim Draeger, Klaresch Mussawisade (2011), "Der Kreis schließt sich", Java Magazin, 2/2011.