Monday, July 30, 2012

How to change the logging level at runtime

Changing the logging level at runtime is important mainly in production environments, where you might want to have debug logging for a limited amount of time.

Well, changing the root logger is very simple: assuming you have an input parameter with the desired logging level, simply get the root logger and set its level accordingly, such as:

Logger root = Logger.getRootLogger();

//setting the logging level according to the input
if ("FATAL".equalsIgnoreCase(logLevel)) {
    root.setLevel(Level.FATAL);
} else if ("ERROR".equalsIgnoreCase(logLevel)) {
    root.setLevel(Level.ERROR);
}

However, the common case is that we maintain a logger instance per class, for example:

class SomeClass {

    //class level logger
    static Logger logger = Logger.getLogger(SomeClass.class);
}

and setting the root logger is not enough, since the class-level loggers will not be affected.

The trick is to remember to get all the loggers in the system and change their logging level too.
For example:

Logger root = Logger.getRootLogger();
Enumeration allLoggers = root.getLoggerRepository().getCurrentCategories();

//resolve the requested level, then apply it to the root logger
//and to every logger instance in the system
Level newLevel = null;
if ("FATAL".equalsIgnoreCase(logLevel)) {
    newLevel = Level.FATAL;
} else if ("ERROR".equalsIgnoreCase(logLevel)) {
    newLevel = Level.ERROR;
}
if (newLevel != null) {
    root.setLevel(newLevel);
    while (allLoggers.hasMoreElements()) {
        Category tmpLogger = (Category) allLoggers.nextElement();
        tmpLogger.setLevel(newLevel);
    }
}


So just wrap it up in a service class and call it from your controller, with a dynamic logLevel String parameter representing the logging level you wish to set for your system.
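A minimal sketch of such a service, assuming log4j 1.x (the class and method names here are illustrative, not from the original post):

import java.util.Enumeration;

import org.apache.log4j.Category;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

//a hypothetical service class wrapping the snippets above
public class LoggingService {

    //sets the given level on the root logger and on every logger in the repository
    public void setSystemLogLevel(String logLevel) {
        //Level.toLevel returns the supplied default when the string is not a known level
        Level newLevel = Level.toLevel(logLevel, null);
        if (newLevel == null) {
            return; //unknown level - leave the current configuration untouched
        }
        Logger root = Logger.getRootLogger();
        root.setLevel(newLevel);
        Enumeration allLoggers = root.getLoggerRepository().getCurrentCategories();
        while (allLoggers.hasMoreElements()) {
            ((Category) allLoggers.nextElement()).setLevel(newLevel);
        }
    }
}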

If any of you need the complete solution, please let me know.

A reference to the basic approach can be found in this link.

Thursday, July 26, 2012

Build documentation to last - choose the agile way

Lately I have been wondering what the best way to document a project is.

My documentation experience varies across different tools and methodologies.
I would like to share some observations I have made and a conclusion about the best way to document a project.


The documentation could be classified into the following categories:

Documentation place:
  • Inline code/ version control system (via code commit description)
  • In separate server linked to the code
  • In separate server, decoupled from the code (no direct linkage)
Documentation done by:
  • Developer
  • Product team/ Design team / architects
  • Technical writers
Documentation tool:
  • IDE
  • Design documents
  • Modeling tool, Process Flow
  • Wiki
  • Version control (e.g. git, svn,..) commits
  • Interface documentation
Not surprisingly, there is a direct correlation between the tool used, the person who documents the code, the amount of documentation, the "distance" of the documentation from the code, and the accuracy of that documentation.

Given the categories above, it could be organized in the following manner:
  • Developers
    • Inline code/ version control system (via code commit description)
      • IDE/ version control
  • Product team/ Design team / architects
    • In separate server linked to the code
      • Design documents, Interface documentation
  • Technical writers
    • In separate server, decoupled from the code (no direct linkage)
      • Modeling tool, Process Flow, wiki

Developers tend to write short inline documentation using the IDE, well-defined interface semantics, and complementary, well-written code commits.
The greater the distance between the person documenting the functionality and the code, the more decoupled from the code and the more comprehensive the documentation usually becomes.

From my experience, even a good design tends to change a bit, and even if the documentation is good, when it is decoupled from the code the chances are that it won't catch up with code changes.

In real life, when requirements keep coming from the business into development, they sometimes bring with them not only additional code to directly support functionality; often we also see the need for structural or infrastructure changes and refactoring.

Inline code documentation is agile and changes with minimal effort along with changes in functionality. If the developer commits the code grouped by functionality and provides a good explanation of the changes that were made, it becomes the most up-to-date and accurate documentation.

I know that some of you might wonder about heavy-duty design or complex functionality documentation.
I would recommend tackling these issues as much as possible inline in the code. For example, if you read about some pattern or a bug solution on the web, put a link to that solution near the method/class which implements it. Try to model your code on known patterns so it needs less documentation. Try to use conventions to reduce the amount of configuration and make your code flow more predictable and discoverable.

This approach is even more important when managing a project with an agile methodology.
Such a methodology usually favors direct communication with product/business to understand requirements over documented PRDs. This makes it even more important for the code to be self-explanatory and easy to navigate. Moreover, frequent changes in design and in the business would soon make decoupled documentation obsolete (or drag along heavy maintenance).

Although it is easier said than done, and it is not a silver bullet for every project, writing documentation as close as possible to the code itself should be taken as a guideline/philosophy when developing a project.


Acknowledgment - above image was taken from lwiki's Gallery : https://picasaweb.google.com/lh/photo/ScQcKRBjhY7UvnzJ7vNArdMTjNZETYmyPJy0liipFm0

Wednesday, July 25, 2012

Spring Profile pattern, part 4

Phase 3: using the pattern

As you can recall, in previous steps we defined an interface for configuration data.
Now we will use the interface in a class which needs different data per environment.

Please note that this example is the key differentiator from the example given in the Spring blog: here we don't need to create a class for each profile, since we use the same method across profiles and only the data changes.


Step 3.1 - example for using the pattern
@Configuration
@EnableTransactionManagement
//DB connection configuration class
//(don't tell me you're still using xml... ;-)
public class PersistenceConfig {

    @Autowired
    private SystemStrings systemStrings; //Spring will wire by active profile

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactoryNg(){
        LocalContainerEntityManagerFactoryBean factoryBean =
                new LocalContainerEntityManagerFactoryBean();
        factoryBean.setDataSource(dataSource());
        factoryBean.setPersistenceUnitName("my_pu");
        JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter(){
            {
                //JPA properties
                this.setDatabase(Database.MYSQL);
                this.setDatabasePlatform("org.hibernate.dialect.MySQLDialect");
                this.setShowSql(systemStrings.getHibernateShowSQL()); //is set per environment..
            }
        };
        factoryBean.setJpaVendorAdapter(vendorAdapter);
        factoryBean.setJpaProperties(additionalProperties());

        return factoryBean;
    }

    //...

    @Bean
    public ComboPooledDataSource dataSource(){
        ComboPooledDataSource poolDataSource = new ComboPooledDataSource();
        try {
            poolDataSource.setDriverClass(systemStrings.getDriverClassNameMngHibernate());
        } catch (PropertyVetoException e) {
            e.printStackTrace();
        }
        //is set per environment..
        poolDataSource.setJdbcUrl(systemStrings.getJdbcUrl());
        poolDataSource.setUser(systemStrings.getDBUsername());
        poolDataSource.setPassword(systemStrings.getDBPassword());
        //.. more properties...
        return poolDataSource;
    }
}
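The additionalProperties() method referenced above is elided in the original post; a minimal sketch of it (inside PersistenceConfig), in which the property keys and values are assumptions you should adapt to your project, could be:

    //a sketch of the elided additionalProperties() method;
    //the property keys/values here are assumptions
    Properties additionalProperties() {
        Properties properties = new Properties();
        properties.setProperty("hibernate.hbm2ddl.auto", "validate");
        return properties;
    }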

I would appreciate comments and improvements.
Enjoy!

part 1, part 2, part 3, part 4

Spring Profile pattern, part 3

Phase 2: implementing the profile pattern
This phase utilizes the infra we built before and implements the profile pattern.



Step 2.1 - create a properties interface
Create an interface for the configuration data you have.
In our case, the interface will provide access to the four configuration data items, so it would look something like:

public interface SystemStrings {

    String getJdbcUrl();
    String getDBUsername();
    String getDBPassword();
    Boolean getHibernateShowSQL();
    //.....
}
Step 2.2 - create a class for each profile
Example for a development profile:
@Dev //Notice the dev annotation
@Component("systemStrings")
public class SystemStringsDevImpl extends AbstractSystemStrings implements SystemStrings {

    public SystemStringsDevImpl() throws IOException {
        //indication of the relevant properties file
        super("/properties/my_company_dev.properties");
    }
}
Example for a production profile:
@Production //Notice the production annotation
@Component("systemStrings")
public class SystemStringsProductionImpl extends AbstractSystemStrings implements SystemStrings {

    public SystemStringsProductionImpl() throws IOException {
        //indication of the relevant properties file
        super("/properties/my_company_production.properties");
    }
}

The two classes above are where the binding between the properties file and the related environment occurs.

You've probably noticed that the classes extend an abstract class. This technique is useful so we won't need to define every getter for each profile; that would not be manageable in the long run, and there is really no point in doing it.

The sweet honey lies in the next step, where the abstract class is defined.

Step 2.3 - create an abstract class which holds the entire data

import java.io.IOException;
import java.util.Properties;

import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PropertiesLoaderUtils;

public abstract class AbstractSystemStrings implements SystemStrings {

    //variables as in the configuration properties file
    private String jdbcUrl;
    private String dBUsername;
    private String dBPassword;
    private boolean hibernateShowSQL;

    public AbstractSystemStrings(String activePropertiesFile) throws IOException {
        //option to override project configuration from an external file
        loadConfigurationFromExternalFile(); //optional..
        //load the relevant properties
        loadProjectConfigurationPerEnvironment(activePropertiesFile);
    }

    //optional hook - its implementation is omitted here
    protected void loadConfigurationFromExternalFile() {
    }

    private void loadProjectConfigurationPerEnvironment(String activePropertiesFile) throws IOException {
        Resource resource = new ClassPathResource(activePropertiesFile);
        Properties props = PropertiesLoaderUtils.loadProperties(resource);
        jdbcUrl = props.getProperty("jdbc.url");
        dBUsername = props.getProperty("db.username");
        dBPassword = props.getProperty("db.password");
        hibernateShowSQL = Boolean.parseBoolean(props.getProperty("hibernate.show_sql"));
    }

    //here should come the interface getters....
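    //(a sketch of those getters - they simply expose the loaded fields;
    //the original post elides them)
    public String getJdbcUrl() { return jdbcUrl; }
    public String getDBUsername() { return dBUsername; }
    public String getDBPassword() { return dBPassword; }
    public Boolean getHibernateShowSQL() { return hibernateShowSQL; }
}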



part 1, part 2, part 3, part 4, next >>

Spring Profile pattern, part 2

Spring Profile pattern -  phase 1: infra preparation


This phase will establish the initial infra for using Spring Profile and the configuration files.

Step 1.1  - create a properties file which contains all configuration data
Assuming you have a Maven-style project, create a file in src/main/resources/properties for each environment, e.g.:
my_company_dev.properties
my_company_test.properties
my_company_production.properties

example for my_company_dev.properties content:

jdbc.url=jdbc:mysql://localhost:3306/my_project_db
db.username=dev1
db.password=dev1
hibernate.show_sql=true

example for my_company_production.properties content:


jdbc.url=jdbc:mysql://10.26.26.26:3306/my_project_db
db.username=prod1
db.password=fdasjkladsof8aualwnlulw344uwj9l34
hibernate.show_sql=false


Step 1.2  - create an annotation for each profile
In src.main.java.com.mycompany.annotation, create an annotation for each profile, e.g.:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Profile("DEV")
public @interface Dev {
}

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Profile("PRODUCTION")
public @interface Production {
}

Create an enum constant for each profile:
public interface MyEnums {

    public enum Profile {
        DEV,
        TEST,
        PRODUCTION
    }
}


Step 1.3  - make sure the profile is loaded during context loading
  • Define a system variable to indicate which environment the code is running on.
    In Tomcat, go to ${tomcat.dir}/conf/catalina.properties and insert a line:
    profile=DEV  (according to your environment)
  • Define a class to set the active profile
    public class ConfigurableApplicationContextInitializer implements
            ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        public void initialize(ConfigurableApplicationContext applicationContext) {

            String profile = System.getProperty("profile");

            if (profile == null || profile.equalsIgnoreCase(Profile.DEV.name())) {
                applicationContext.getEnvironment().setActiveProfiles(Profile.DEV.name());
            } else if (profile.equalsIgnoreCase(Profile.PRODUCTION.name())) {
                applicationContext.getEnvironment().setActiveProfiles(Profile.PRODUCTION.name());
            } else if (profile.equalsIgnoreCase(Profile.TEST.name())) {
                applicationContext.getEnvironment().setActiveProfiles(Profile.TEST.name());
            }
        }
    }
    
  • Make sure the class is loaded during context loading
    In the project web.xml, insert the following:

    <context-param>
        <param-name>contextInitializerClasses</param-name>
        <param-value>com.matomy.conf.ConfigurableApplicationContextInitializer</param-value>
    </context-param>


part 1, part 2, part 3, part 4, next >>

Spring Profile pattern, part 1

Recently we were introduced to the concept of Spring Profiles.
This concept is an easy configuration differentiator for different deployment environments.
The straightforward use case (which was presented) was to annotate the relevant classes so Spring would load the appropriate class according to the active profile.

However, this approach might not always serve the common use case... often, the configuration keys are the same and only the values change per environment.

In this post, I would like to present a pattern to support loading configuration data per environment, without the need to create/maintain multiple classes for each profile (i.e. for each environment).

Throughout the post I will use the DB connection configuration as a sample, assuming we have different DB definitions (e.g. username or connection URL) for each deployment environment.

The main idea is to use one class for loading the configuration (i.e. one class for the DB connection definition) and inject into it the appropriate instance which holds the correct profile configuration data.

For convenience and clarity, the process was divided into 3 phases:

Phase 1: infra preparation
Step 1.1  - create a properties file which contains all configuration data
Step 1.2  - create an annotation for each profile
Step 1.3  - make sure the profile is loaded during context loading

Phase 2: implementing the profile pattern
Step 2.1 - create a properties interface
Step 2.2 - create a class for each profile
Step 2.3 - create an abstract file which holds the entire data

Phase 3: using the pattern
Step 3.1 - example for using the pattern

next part >>

Tuesday, July 24, 2012

Email filtering using Aspect and Spring Profile

During web application development, the need to send emails often arises.

However, sometimes the database is populated with data from production, and there is a risk of sending emails to real customers during email test execution.

This post will explain how to avoid that without explicitly writing code in the send email function.

We will use two techniques:
  1. Spring Profiles - a mechanism to indicate what the running environment is (i.e. development, production,..)
  2. AOP - in simplified words, it's a mechanism to add logic to methods in a decoupled way.

I will assume you already have Profiles set up in your project and focus on the Aspect side.

In this example, the class which sends emails is EmailSender, with the method send, as specified below:

public class EmailSender {

    //an empty default constructor is a must due to an AOP limitation
    public EmailSender() {}

    //sending email function
    //EmailEntity - object which contains all data required for email sending (from, to, subject,..)
    public void send(EmailEntity emailEntity) {
        //logic to send email
    }
}


Now we will add the logic which prevents sending email to customers when the code is not running in production.
For this we will use aspects, so we won't have to write this logic in the send method itself; that way we maintain the separation of concerns principle.

Create a class that will contain the filtering method:
@Aspect
@Component
public class EmailFilterAspect {

public EmailFilterAspect() {}
}

Then create a pointcut for catching executions of the send method:

@Pointcut("execution(public void com.mycompany.util.EmailSender.send(..))")
public void sendEmail(){}

Since we need to control whether the method should be executed or not, we need to use the @Around annotation.

@Around("sendEmail()")
public void emailFilterAdvice(ProceedingJoinPoint proceedingJoinPoint){
    try {
        proceedingJoinPoint.proceed(); //the send email method execution
    } catch (Throwable e) {
        e.printStackTrace();
    }
}

As a last point, we need to access the send method's input parameter (i.e. get the EmailEntity) and verify that we don't send emails to real customers outside production.

@Around("sendEmail()")
public void emailFilterAdvice(ProceedingJoinPoint proceedingJoinPoint){

    //get current profile
    ProfileEnum profile = ApplicationContextProvider.getActiveProfile();

    Object[] args = proceedingJoinPoint.getArgs(); //get input parameters
    if (profile != ProfileEnum.PRODUCTION){
        //outside production, proceed only for internal addresses -
        //external mail is silently dropped
        for (Object object : args) {
            if (object instanceof EmailEntity){
                String to = ((EmailEntity) object).getTo();
                if (to != null && to.endsWith("@mycompany.com")){
                    try {
                        proceedingJoinPoint.proceed();
                    } catch (Throwable e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    } else {
        //in production don't restrict emails
        try {
            proceedingJoinPoint.proceed();
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}
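The ApplicationContextProvider used above is not shown in this post; a minimal sketch of such a helper, assuming the active profile was set during context startup (as in the Spring Profile pattern series) and that ProfileEnum mirrors the Spring profile names, might be:

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

//a hypothetical helper - its implementation is an assumption, not from the original post
@Component
public class ApplicationContextProvider implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        context = applicationContext;
    }

    //maps the first active Spring profile to the project's ProfileEnum
    public static ProfileEnum getActiveProfile() {
        String[] activeProfiles = context.getEnvironment().getActiveProfiles();
        //assumption: default to DEV when no profile was set
        return activeProfiles.length > 0
                ? ProfileEnum.valueOf(activeProfiles[0].toUpperCase())
                : ProfileEnum.DEV;
    }
}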

That's it.
Regarding configuration, you need to include the AspectJ jars in your project.
In Maven it looks like this:

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>${org.aspectj.version}</version>
</dependency>
<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
    <version>${org.aspectj.version}</version>
    <scope>runtime</scope>
</dependency>

and in your Spring application configuration xml file, you need to have this:

<aop:aspectj-autoproxy/>
Good luck!

Thursday, July 19, 2012

Seamless Static Meta Model generation

I would like to introduce a simple, quick and straightforward way to create Static Meta Model classes.

First, I would like to correct a perception I had in my previous post regarding the place where such files should be created.
Since the Static Meta Model files are generated classes that should automatically change with each @Entity modification, they should be placed in the target folder and not committed to the repository.

Moreover, creating static meta model files via Eclipse works correctly if you use the appropriate generation project version and the right plugin.

First step is to get the class generation jar; put it in pom.xml (note that the version here must match the jar referenced in the Factory Path below):
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-jpamodelgen</artifactId>
            <version>1.2.0.Final</version>
        </dependency>

Then in Eclipse add the annotation generation plugin via Help -> Eclipse Marketplace -> search for "Apt M2E" and install it.

After the installation, right click on your project -> Properties -> Java Compiler -> Annotation Processing -> check "Enable project specific settings" (and, in fact, all the checkboxes on that screen). In the Generated Source Directory field put "target\generated-sources" (this will generate the classes in your target folder).

Inside the Annotation Processing item there is a Factory Path item. Enable this part as well and set the jar we imported via Maven to generate the classes. You can do it by clicking Add Variable -> M2_REPO -> Extend -> and choosing the following path: /org/hibernate/hibernate-jpamodelgen/1.2.0.Final/hibernate-jpamodelgen-1.2.0.Final.jar

Make sure only that path is checked.

As a final step, please make sure the target\generated-sources folder is on your classpath (right click -> Build Path -> Add as Source Folder).

That's it. Every change should trigger automatic static meta model generation.
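For reference, given a hypothetical @Entity named Customer with id and name fields, the class the generator emits into target\generated-sources looks roughly like this:

import javax.persistence.metamodel.SingularAttribute;
import javax.persistence.metamodel.StaticMetamodel;

//roughly what hibernate-jpamodelgen emits for a hypothetical Customer entity
@StaticMetamodel(Customer.class)
public abstract class Customer_ {
    public static volatile SingularAttribute<Customer, Long> id;
    public static volatile SingularAttribute<Customer, String> name;
}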