Saturday, December 8, 2012

PixMix - very cool app recommendation

I would like to recommend a cool mobile app I recently installed called PixMix.

You might have come across it in a Jelastic technology post, in their spotlight corner.
http://blog.jelastic.com/2012/12/07/the-jelastic-spotlight-pixmix/


The PixMix app allows one to instantly create photo albums and share them with friends.
Every photo my friends and I take in a given album is instantly uploaded to an online space that shows all the photos.

What sets it apart from other apps is its animated photo gallery - for me, the killer feature.
Every new photo in an album is immediately shown in the live animated gallery.

There is also the right balance between social sharing and private albums. It lets you share what is right (instead of instantly making the whole album public) and with whom is right.

I tried it at my daughter's second birthday party.
I connected my laptop to my living room TV, opened the album via the PixMix site and displayed the animated live gallery.
I took a few photos via PixMix, just to see that it works, and to my surprise they instantly appeared in the animated gallery.
That really caught my family's attention, and before I realized it, everyone was taking photos and competing to see whose photo would appear in the animated gallery...

The nice thing is that at the end of the party I was left with many photos, capturing many moments I wasn't even part of...

Check it out  :-)

Tuesday, August 14, 2012

DB connection pool - production tips

Every developer knows that DB I/O is the bottleneck of most web applications.

It is also a common best practice to keep open connections to the DB and manage them in a pool, since opening a DB connection is a costly operation.

However, many DB connection pool tutorials mention only the pool sizing parameters (initial, min, max size) and the connection increment.

I would like to list here additional configuration parameters which are important, mainly in production, for efficient pool management.

The following parameters are specific to the C3P0 pool; however, most pools expose equivalent settings (a hedged configuration sketch follows the list):

  1. maxIdleTime - Number of seconds a Connection can remain pooled but unused before being discarded. Zero means idle connections never expire. I would recommend not keeping idle connections around for long.
  2. maxIdleTimeExcessConnections - Number of seconds that Connections in excess of minPoolSize are permitted to remain idle in the pool before being culled. Intended for applications that wish to aggressively minimize the number of open Connections.
  3. maxConnectionAge - A Connection older than maxConnectionAge will be destroyed and purged from the pool. This differs from maxIdleTime in that it refers to absolute age.
  4. unreturnedConnectionTimeout - Destroys an open/active connection if it wasn't returned to the pool within the specified amount of time. This can prevent connection leaks caused by an exception that kept a connection from being closed. Use it together with debugUnreturnedConnectionStackTraces=true so you can debug and find the reason for such behavior.
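To make these concrete, here is a minimal sketch of setting them on a C3P0 ComboPooledDataSource (the values are illustrative placeholders, not recommendations):

import com.mchange.v2.c3p0.ComboPooledDataSource;

ComboPooledDataSource ds = new ComboPooledDataSource();
//pool sizing - the part most tutorials cover
ds.setMinPoolSize(5);
ds.setMaxPoolSize(20);
//discard a connection that stayed unused in the pool for 5 minutes
ds.setMaxIdleTime(300);
//cull connections above minPoolSize after 60 idle seconds
ds.setMaxIdleTimeExcessConnections(60);
//retire every connection after one hour, regardless of activity
ds.setMaxConnectionAge(3600);
//reclaim a connection not returned within 2 minutes, and log who took it
ds.setUnreturnedConnectionTimeout(120);
ds.setDebugUnreturnedConnectionStackTraces(true);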
 
 



Reference - C3P0 configuration documentation

Monday, July 30, 2012

How to change logging level in runtime

Changing the logging level at runtime is important mainly in a production environment, where you might want to enable debug logging for a limited amount of time.

Well, changing the root logger is very simple - assuming you have an input parameter with the desired logging level, simply get the root logger and set its level accordingly:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

Logger root = Logger.getRootLogger();

//setting the logging level according to input
if ("FATAL".equalsIgnoreCase(logLevel)) {
    root.setLevel(Level.FATAL);
}else if ("ERROR".equalsIgnoreCase(logLevel)) {
    root.setLevel(Level.ERROR);
}

However, the common case is that we maintain a logger instance per class, for example:

class SomeClass {

    //class level logger
    static Logger logger = Logger.getLogger(SomeClass.class);
}

and setting the root logger is not enough, since the class level logger will not be affected.

The trick is to get all the loggers in the system and change their logging level too.
For example:

import java.util.Enumeration;
import org.apache.log4j.Category;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

Logger root = Logger.getRootLogger();
Enumeration allLoggers = root.getLoggerRepository().getCurrentCategories();

//set logging level of root and all logger instances in the system
if ("FATAL".equalsIgnoreCase(logLevel)) {
    root.setLevel(Level.FATAL);
    while (allLoggers.hasMoreElements()){
        Category tmpLogger = (Category) allLoggers.nextElement();
        tmpLogger.setLevel(Level.FATAL);
    }
} else if ("ERROR".equalsIgnoreCase(logLevel)) {
    root.setLevel(Level.ERROR);
    while (allLoggers.hasMoreElements()){
        Category tmpLogger = (Category) allLoggers.nextElement();
        tmpLogger.setLevel(Level.ERROR);
    }
}


So just wrap it up in a service class and call it from your controller, with a dynamic logLevel String parameter which represents the logging level you wish to set.
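As a minimal sketch, such a service might look like this (class and method names are hypothetical; note that log4j's Level.toLevel falls back to DEBUG for unrecognized input):

import java.util.Enumeration;
import org.apache.log4j.Category;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class LoggingLevelService {

    //set the given level on the root logger and on every logger in the repository
    public void setSystemLogLevel(String logLevel) {
        Level level = Level.toLevel(logLevel); //unknown strings default to DEBUG
        Logger root = Logger.getRootLogger();
        root.setLevel(level);
        Enumeration allLoggers = root.getLoggerRepository().getCurrentCategories();
        while (allLoggers.hasMoreElements()) {
            Category tmpLogger = (Category) allLoggers.nextElement();
            tmpLogger.setLevel(level);
        }
    }
}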

If any of you need the complete solution, please let me know.

Reference to the basic approach is in this link.

Thursday, July 26, 2012

Build documentation to last - choose the agile way

Lately I have wondered what the best way to document a project is.

My documentation experience varies among different tools and methodologies.
I would like to share some observations and a conclusion about the best way to document a project.


Documentation can be classified along the following categories:

Documentation place:
  • In line code/ version control system (via code commit description)
  • In separate server linked to the code
  • In separate server, decoupled from the code (no direct linkage)
Documentation done by:
  • Developer
  • Product team/ Design team / architects
  • Technical writers
Documentation tool:
  • IDE
  • Design documents
  • Modeling tool, Process Flow
  • Wiki
  • version control (e.g. git, svn,..) commits
  • Interface documentation
Not surprisingly, there is a direct correlation between the tool used, the person who documents the code, the amount of documentation, the "distance" of the documentation from the code and the accuracy of that documentation.

Given the categories above, they can be organized in the following manner:
  • Developers
    • In line code/ version control system (via code commit description) 
      • IDE/ version control
  • Product team/ Design team / architects
    • In separate server linked to the code 
      •  Design documents, Interface documentation
  • Technical writers
    •  In separate server, decoupled from the code (no direct linkage)
      • Modeling tool, Process Flow, wiki

Developers tend to write short inline documentation using the IDE, well-defined interface semantics and complementary, well-written code commits.
The more distant the person documenting the functionality is from the code, the more decoupled from the code and the more comprehensive the documentation usually becomes.

From my experience, even a good design tends to change a bit, and even good documentation, if decoupled from the code, most likely won't catch up with code changes.

In real life, when requirements keep coming from the business into development, they sometimes bring with them not only additional code to directly support functionality, but often also the need for structural or infrastructure changes and refactoring.

Inline code documentation is agile and changes with minimal effort along with changes in functionality. If developers submit code grouped by functionality and provide good explanations of the changes that were made, it becomes the most up-to-date and accurate documentation.

I know that some of you might wonder about heavy-duty design or complex functionality documentation.
I would recommend tackling these issues as much as possible inline in the code; for example, if you based a solution on a pattern or a bug fix you read about on the web, put a link to that source near the method/class which implements it (see the sketch below). Try to model your code on known patterns to reduce the need for documentation. Try to use conventions to reduce the amount of configuration and make your code flow more predictable and discoverable.
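As a small, hypothetical illustration of linking a source right where it is implemented:

/**
 * Retries the remote call with exponential backoff.
 * Approach adapted from: http://example.com/backoff-pattern-writeup (illustrative link)
 */
public void sendWithRetry() {
    //...
}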

This approach is even more important when managing a project with an agile methodology.
Such a methodology usually prefers direct communication with product/business to understand requirements over documented PRDs. This makes it even more important to have the code self-explanatory and easy to navigate. Moreover, frequent changes in design and business would soon render decoupled documentation obsolete (or drag hard maintenance with it).

Although it is easier said than done and not a silver bullet for every project, writing documentation as close as possible to the code itself should be taken as a guideline / philosophy when developing a project.


Acknowledgment - above image was taken from lwiki's Gallery : https://picasaweb.google.com/lh/photo/ScQcKRBjhY7UvnzJ7vNArdMTjNZETYmyPJy0liipFm0

Wednesday, July 25, 2012

Spring Profile pattern , part 4

Phase 3: using the pattern

As you can recall, in previous steps we defined an interface for configuration data.
Now we will use the interface in a class which needs different data per environment.

Please note that this example is the key differentiator from the one given in the Spring blog: we don't need to create a class for each profile, since the same method is used across profiles and only the data changes.


Step 3.1 - example for using the pattern
@Configuration
@EnableTransactionManagement
//DB connection configuration class
//(don't tell me you're still using xml... ;-)
public class PersistenceConfig {

    @Autowired
    private SystemStrings systemStrings; //Spring will wire by active profile

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactoryNg(){
        LocalContainerEntityManagerFactoryBean factoryBean
            = new LocalContainerEntityManagerFactoryBean();
        factoryBean.setDataSource( dataSource() );
        factoryBean.setPersistenceUnitName("my_pu");
        JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter(){
            {
                // JPA properties
                this.setDatabase( Database.MYSQL );
                this.setDatabasePlatform("org.hibernate.dialect.MySQLDialect");
                this.setShowSql(systemStrings.getShowSqlMngHibernate()); //is set per environment..
            }
        };
        factoryBean.setJpaVendorAdapter( vendorAdapter );
        factoryBean.setJpaProperties( additionalProperties() );

        return factoryBean;
    }
    //...
    @Bean
    public ComboPooledDataSource dataSource(){
        ComboPooledDataSource poolDataSource = new ComboPooledDataSource();
        try {
            poolDataSource.setDriverClass( systemStrings.getDriverClassNameMngHibernate() );
        } catch (PropertyVetoException e) {
            e.printStackTrace();
        }
        //is set per environment..
        poolDataSource.setJdbcUrl(systemStrings.getJdbcUrl());
        poolDataSource.setUser( systemStrings.getDBUsername() );
        poolDataSource.setPassword( systemStrings.getDBPassword() );
        //.. more properties...
        return poolDataSource;
    }
}

I would appreciate comments and improvements.
Enjoy!

part 1, part 2, part 3, part 4

Spring Profile pattern , part 3

Phase 2: implementing the profile pattern
This phase utilizes the infra we built before and implements the profile pattern.



Step 2.1 - create a properties interface
Create an interface for the configuration data you have.
In our case, the interface will provide access to the four configuration data items.
So it would look something like:

public interface SystemStrings {

    String getJdbcUrl();
    String getDBUsername();
    String getDBPassword();
    Boolean getHibernateShowSQL();
    //.....
}
Step 2.2 - create a class for each profile
Example for a development profile:
@Dev //Notice the dev annotation
@Component("systemStrings")
public class SystemStringsDevImpl extends AbstractSystemStrings implements SystemStrings {

    public SystemStringsDevImpl() throws IOException {
        //indication of the relevant properties file
        super("/properties/my_company_dev.properties");
    }
}
Example for a production profile:
@Production //Notice the production annotation
@Component("systemStrings")
public class SystemStringsProductionImpl extends AbstractSystemStrings implements SystemStrings {

    public SystemStringsProductionImpl() throws IOException {
        //indication of the relevant properties file
        super("/properties/my_company_production.properties");
    }
}

The two classes above are where the binding between the properties file and the related environment occurs.

You've probably noticed that the classes extend an abstract class. This technique is useful so we won't need to define every getter for each profile; that would not be manageable in the long run, and really, there is no point in doing it.

The sweetness lies in the next step, where the abstract class is defined.

Step 2.3 - create an abstract class which holds the entire data

public abstract class AbstractSystemStrings implements SystemStrings {

    //Variables as in the configuration properties file
    private String jdbcUrl;
    private String dBUsername;
    private String dBPassword;
    private boolean hibernateShowSQL;

    public AbstractSystemStrings(String activePropertiesFile) throws IOException {
        //option to override project configuration from an external file
        loadConfigurationFromExternalFile(); //optional..
        //load relevant properties
        loadProjectConfigurationPerEnvironment(activePropertiesFile);
    }

    private void loadProjectConfigurationPerEnvironment(String activePropertiesFile) throws IOException {
        Resource[] resources = new ClassPathResource[]{ new ClassPathResource(activePropertiesFile) };
        Properties props = PropertiesLoaderUtils.loadProperties(resources[0]);
        jdbcUrl = props.getProperty("jdbc.url");
        dBUsername = props.getProperty("db.username");
        dBPassword = props.getProperty("db.password");
        hibernateShowSQL = Boolean.parseBoolean(props.getProperty("hibernate.show_sql"));
    }

    //here should come the interface getters....
}
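For completeness, the getters at the bottom of the abstract class simply expose the loaded values, along these lines:

public String getJdbcUrl() {
    return jdbcUrl;
}

public String getDBUsername() {
    return dBUsername;
}

public String getDBPassword() {
    return dBPassword;
}

public Boolean getHibernateShowSQL() {
    return hibernateShowSQL;
}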



part 1, part 2, part 3, part 4, next >>

Spring Profile pattern ,part 2

Spring Profile pattern -  phase 1: infra preparation


This phase will establish the initial infra for using Spring Profile and the configuration files.

Step 1.1  - create a properties file which contains all configuration data
Assuming you have a Maven-style project, create a file in src/main/resources/properties for each environment, e.g.:
my_company_dev.properties
my_company_test.properties
my_company_production.properties

example for my_company_dev.properties content:

jdbc.url=jdbc:mysql://localhost:3306/my_project_db
db.username=dev1
db.password=dev1
hibernate.show_sql=true

example for my_company_production.properties content:


jdbc.url=jdbc:mysql://10.26.26.26:3306/my_project_db
db.username=prod1
db.password=fdasjkladsof8aualwnlulw344uwj9l34
hibernate.show_sql=false


Step 1.2  - create an annotation for each profile
In src/main/java/com/mycompany/annotation create an annotation for each profile, e.g.:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Profile("DEV")
public @interface Dev {
}

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Profile("PRODUCTION")
public @interface Production {
}

Create an enum listing the profiles:
public interface MyEnums {

    public enum Profile {
        DEV,
        TEST,
        PRODUCTION
    }
}


Step 1.3  - make sure the profile is loaded during context loading
  • Define a system variable to indicate which environment the code is running on.
    In Tomcat, go to ${tomcat.dir}/conf/catalina.properties and insert a line:
    profile=DEV  (according to your environment)
  • Define a class to set the active profile
    public class ConfigurableApplicationContextInitializer implements
            ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        public void initialize(ConfigurableApplicationContext applicationContext) {

            String profile = System.getProperty("profile");

            if (profile == null || profile.equalsIgnoreCase(Profile.DEV.name())) {
                applicationContext.getEnvironment().setActiveProfiles(Profile.DEV.name());
            } else if (profile.equalsIgnoreCase(Profile.PRODUCTION.name())) {
                applicationContext.getEnvironment().setActiveProfiles(Profile.PRODUCTION.name());
            } else if (profile.equalsIgnoreCase(Profile.TEST.name())) {
                applicationContext.getEnvironment().setActiveProfiles(Profile.TEST.name());
            }
        }
    }
    
  • Make sure the class is loaded during context loading
    In the project web.xml, insert the following:

    <context-param>
        <param-name>contextInitializerClasses</param-name>
        <param-value>com.matomy.conf.ConfigurableApplicationContextInitializer</param-value>
    </context-param>


part 1, part 2, part 3, part 4, next >>

Spring Profile pattern , part 1

Recently we were introduced to the concept of Spring Profiles.
This concept is an easy configuration differentiator for different deployment environments.
The straightforward use case (which was presented) was to annotate the relevant classes so Spring would load the appropriate class according to the active profile.

However, this approach might not always serve the common use case... often, the configuration keys are the same and only the values change per environment.

In this post, I would like to present a pattern to support loading configuration data per environment, without the need to create/maintain multiple classes for each profile (i.e. for each environment).

Throughout the post I will use DB connection configuration as the example, assuming we have different DB definitions (e.g. username or connection URL) for each deployment environment.

The main idea is to use one class for loading the configuration (i.e. one class for the DB connection definition) and inject into it the appropriate instance which holds the correct profile's configuration data.

For convenience and clarity, the process was divided into 3 phases:

Phase 1: infra preparation
Step 1.1  - create a properties file which contains all configuration data
Step 1.2  - create an annotation for each profile
Step 1.3 - make sure the profile is loaded during context loading

Phase 2: implementing the profile pattern
Step 2.1 - create a properties interface
Step 2.2 - create a class for each profile
Step 2.3 - create an abstract class which holds the entire data

Phase 3: using the pattern
Step 3.1 - example for using the pattern

next part >>

Tuesday, July 24, 2012

Email filtering using Aspect and Spring Profile

During web application development, the need for sending emails often arises.

However, sometimes the database is populated with data from production, and there is a risk of sending emails to real customers during email test execution.

This post will explain how to avoid that without explicitly writing code in the send email function.

We will use two techniques:
  1. Spring Profiles - a mechanism to indicate what the running environment is (i.e. development, production,..)
  2. AOP - in simplified words, a mechanism to attach additional logic to methods in a decoupled way.

I will assume you already have profiles set up in your project and focus on the Aspect side.

In this example, the class which sends emails is EmailSender, with the method send, as specified below:

public class EmailSender {
    //empty default constructor is a must due to AOP limitation
    public EmailSender() {}

    //Sending email function
    //EmailEntity - object which contains all data required for email sending (from, to, subject,..)
    public void send(EmailEntity emailEntity) {
        //logic to send email
    }
}


Now, we will add the logic which prevents sending emails to customers when the code is not running in production.
For this we will use Aspects, so we won't have to write it in the send method, thereby maintaining the separation of concerns principle.

Create a class that will contain the filtering method:
@Aspect
@Component
public class EmailFilterAspect {

    public EmailFilterAspect() {}
}

Then create a PointCut for catching the send method execution:

@Pointcut("execution(public void com.mycompany.util.EmailSender.send(..))")
public void sendEmail(){}

Since we need to control whether the method should be executed or not, we need to use the @Around annotation.

@Around("sendEmail()")
public void emailFilterAdvice(ProceedingJoinPoint proceedingJoinPoint){
    try {
        proceedingJoinPoint.proceed(); //The send email method execution
    } catch (Throwable e) {
        e.printStackTrace();
    }
}

As a last point, we need to access the send method's input parameter (i.e. get the EmailEntity) and verify we don't send emails to customers in development.

@Around("sendEmail()")
public void emailFilterAdvice(ProceedingJoinPoint proceedingJoinPoint){

    //Get current profile
    ProfileEnum profile = ApplicationContextProvider.getActiveProfile();

    Object[] args = proceedingJoinPoint.getArgs(); //get input parameters
    if (profile != ProfileEnum.PRODUCTION){
        //outside production, proceed only for internal mail addresses
        for (Object object : args) {
            if (object instanceof EmailEntity){
                String to = ((EmailEntity)object).getTo();
                if (to != null && to.endsWith("@mycompany.com")){
                    try {
                        proceedingJoinPoint.proceed();
                    } catch (Throwable e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }else{
        //In production don't restrict emails
        try {
            proceedingJoinPoint.proceed();
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}

That's it.
Regarding configuration, you need to include the AspectJ jars in your project.
In Maven it looks like this:

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>${org.aspectj.version}</version>
</dependency>

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
    <version>${org.aspectj.version}</version>
    <scope>runtime</scope>
</dependency>

and in your Spring application configuration XML file, you need to enable AspectJ auto-proxying:

<aop:aspectj-autoproxy/>
Good luck!

Thursday, July 19, 2012

Seamlessly Static Meta Model generation

I would like to introduce a simple, quick and straight forward way to create Static Meta Model classes.

First, I would like to correct a perception I had in my previous post regarding the place such files are created.
Since the static meta model files are generated classes that should automatically change with each @Entity modification, they should be placed in the target folder and not committed to the repository.

Moreover, creation of static meta model files via Eclipse works correctly if you use the appropriate generator version and the right plugin.

The first step is to get the class generation jar. Put this in pom.xml:
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-jpamodelgen</artifactId>
            <version>1.1.1.Final</version>
        </dependency>

Then, in Eclipse, add the annotation processing plugin via Help -> Eclipse Marketplace -> search for "Apt M2E" and install it.

After the installation, right click on your project -> Properties -> Java Compiler -> Annotation Processing -> mark "Enable project specific settings" (and actually all the checkboxes on that screen). In the Generated Source Directory field, put "target\generated-sources" (this will generate the classes in your target folder).

Inside the Annotation Processing item there is a Factory Path item; enable this part as well and point it to the jar we imported via Maven to generate the classes. You can do this by clicking Add Variable -> M2_REPO -> Extend -> and choosing the following path: /org/hibernate/hibernate-jpamodelgen/1.2.0.Final/hibernate-jpamodelgen-1.2.0.Final.jar

Make sure only that path is checked.

As a final step, please make sure the target\generated-sources folder is on your classpath (right click -> Build Path -> add as source folder).

That's it. Every change should trigger automatic static meta model generation.

Friday, June 1, 2012

Spring Data JPA - Limit query size

This one is very informative and short:

If you are using Spring Data JPA, you are probably familiar with the technique of query creation out of method names.

Following that mechanism, there is no explicit way to express the SQL LIMIT keyword in the method name.
However, there is a simple, implicit way to limit the query result size: utilize the pagination mechanism.

Simply provide an extra paging object specifying the number of objects you want to limit the result to.


Below is a simplified example of how the repository interface could look (the entity ID type here is assumed to be Long):
public interface EntityRepository extends CrudRepository<Entity, Long>, JpaSpecificationExecutor<Entity> {

    List<Entity> findByEntityMemberNameLike(String query, Pageable pageable); //Pageable will limit the query result

}


Below is a simplified example of how such a query could be used:
@Autowired
EntityRepository entityRepository;

...

int queryLimit = 10;
List<Entity> queryResults = entityRepository.findByEntityMemberNameLike(queryString, new PageRequest(0, queryLimit));



LESS is more?

Recently I've been exposed to LESS, a CSS pre-processing framework.
In a nutshell, this framework extends standard CSS with a programming flavor, such as inheritance, variables, functions and a build step.
(By the way, this is not the only framework with such a concept.)


Using this framework has some pros and cons which I would like to highlight.
Pros:
  • Enables CSS reuse via inheritance, variables and more.
  • Builds *one* CSS file for the final HTML to use.
Cons:
  • Introduces another development language, which requires strong CSS knowledge (if you use it well; otherwise, there is no point in using it)
  • End-to-end developers (UI to server) get another domain to master
  • Couples your project with another framework; disconnecting from LESS is not simple
  • The produced CSS is not ideal for the development phase and debugging
  • CSS changes can be done only via this framework/language


At first review this LESS technique sounds like CSS simplification magic; however, in the end we stopped using it since its cost/benefit ratio was low.

Let me elaborate on my claims.
The main disadvantage of this framework, IMHO, is that it couples your project to it too tightly.
While the generated CSS is ideal for production, in a development environment it's very hard to work with.
In a typical project where several UI frameworks (e.g. jQuery) are involved - imagine how long the one output CSS file will be.
Moreover, not only is it very hard to work on the CSS output directly, it would also be very hard in the future to separate it into a file per framework, since LESS does not support that.

Another disadvantage is that in order to use this framework properly, the developer needs to learn another language and gain expertise in a domain which is not necessarily the developer's strong side.

This tool is mainly used by UI-side people, so knowing it is like a Java developer knowing the Spring framework; however, this does not work the other way around - for Java developers such a tool adds complexity. It is one thing to know JS or slightly modify CSS, and another to have experience in CSS inheritance and advanced techniques.

I think it applies to LESS developers as well. How many UI developers do you know who master Spring or Hibernate...

Anyway, I hope my experience helps others decide whether to use such a framework or not.

Eclipse metamodel generation (JPA 2.0) issues

Update:
Please see the updated post on this matter, which simplifies the process.

Static metamodel classes are used for type-safe queries in the JPA 2.0 standard.

There are a few ways to create these classes:
1. Define the IDE (e.g. Eclipse) to generate the files automatically (like the IDE's automatic build, which generates the class files on each Java file change).
2. Define Maven to generate the files. The generation will be done when running the Maven script.

I would recommend avoiding generation via the Eclipse IDE, since I've noticed that the generated static metamodel classes can be inaccurate (e.g. some members were missing).

I'm not sure what the reason is, since the missing member has no special attribute that differentiates it from the other class members that are generated; however, the fact is that it is consistently not generated.

Generating the static metamodel files via Maven overcomes this issue.


Good luck !

Thursday, May 10, 2012

Too L@zy for Debug

If you use JPA in your application and have lazy fetching on your entity properties, please note that debugging might be impossible.

for example:
For example:
@Entity
public class Order {
    @OneToMany(fetch = FetchType.LAZY, mappedBy = "order")
    Set<Item> items;
    //...
}
I encountered the issue with Spring IDE (STS 2.9), Spring 3.1 and JUnit 4.
When running a JUnit test against the service layer which needed to read lazy properties, the same line of code (the invocation of a lazy property) failed in Debug mode (with an exception) but succeeded in Run mode.

The problematic line is something like this:
Order order = getOrderById(orderId);
order.getItems(); // Exception on debug

Black magic indeed.

Hope this saves someone time..



Friday, April 20, 2012

When developers are responsible for UX

While browsing a mobile eCommerce site, I found something I hadn't seen before which I think is quite amusing.
I guess that's how web sites would look if developers were responsible for the UI.. :-)
 


Pay attention to the number the counting starts with. :-)

Thursday, April 19, 2012

Know your audience

Recently I looked into this blog's statistics and saw the usage by browser type.
I was surprised to see that IE has less than a 9% share.

Yes! Less than 9 percent.

The second surprise hit me when I looked at the usage by operating system.
Here MS has 53%.

A few years ago, when Firefox had just arrived, the trend of MS losing its power was only beginning; MS was still the dominant player.

That made me wonder whether we are already in a time where the MS empire has lost its power, or is it just a mirage..?

After a quick search I noticed there are significant differences between sites.
For instance, on w3schools.com, another developer-oriented site, IE was at ~18%. I guess we would find the opposite statistics on mainstream news sites.

It appears that each site has its unique characteristics based on its audience.

So I guess now is the right time to quote the famous saying "There are lies, damned lies, and statistics": whenever there is a claim about current browser market share, always try to find the origin of the data and see whether there is a reference to the audience the research is based on. It can make all the difference.

I know it sounds obvious; however, as developers, when we think about a new feature to develop, we tend to think in the terms, concepts and examples we borrow from our daily activities, such as browser type, standards to use, etc., and assume our clients will close that gap sometime..

So my advice is that next time, just stop a bit and ask yourself who your audience is.

Try to have some statistics on them, so you'll have facts rather than speculation, and your opinion will be less biased.
(Do it at least until IE is indeed at a 9% share in the mainstream as well.. :-) )




Tuesday, April 17, 2012

Spring 3.1 Release Train is complete

On March 14, 2012 SpringSource and VMware announced (and here) that the Spring 3.1 Release Train is complete.

The following projects now fully support Spring 3.1 (with notable changes in sub-bullets):
  • Spring Integration 
  • Spring Security 
    • Introduced a session "validation" strategy instead of having the invalid session check directly in SessionManagementFilter
    • Remove internal evaluation of JSP EL
    • Support HttpOnly Flag for Cookies in Servlet 3.0 Environments
    • Allow HttpSessionSecurityContextRepository to have different session key for different instances
  • Spring Batch 
  • Spring Data 
    • Support for locking
    • Support for @IdClass in entities
    • Support for LessThanEqual and GreaterThanEquals, True/False keywords in query methods
    • Added CDI integration for repositories
    • Improved parameter binding for derived queries for null values
  • Spring Mobile 
    • iPad is now recognized as a mobile device
    • Split out Spring Mobile's WURFL integration into a separate project
    • Added DeviceUtils and SitePreferenceUtils for convenient lookup of the 'currentDevice' and 'currentSitePreference', respectively.
    • Simplified packaging by collapsing the device.mvc and device.lite sub-packages into the device root package
  • Spring for Android  
    • Android Auth - support for Spring Social & Spring Security
    • Updated RestTemplate to be compatible with Spring Framework 3.1
    • Added support for Basic Authentication
    • Defaulting to standard J2SE facilities (HttpURLConnection) in Gingerbread and newer, as recommended by Google

Wishing you an easy upgrade!

Saturday, April 14, 2012

Mobile domination war


A recent article outlined the patent war occurring in the mobile arena.

It's clear that every company in this market understands its importance.
IMHO, the company that dominates this field will be the next Microsoft (in terms of influence, leadership and standards setting).




Source: Adam Thierer, "Regulatory, Anti-Trust and Disruptive Risks Threaten Apple's Empire", Forbes, 4/08/2012
http://www.forbes.com/sites/adamthierer/2012/04/08/regulatory-anti-trust-and-disruptive-risks-threaten-apples-empire/

Judgement day weapon for circular autowiring dependency error

In a recent post I demonstrated a way to resolve the circular dependency error.
However, sometimes even that solution is not enough, and redesign is not possible.

I would like to suggest a dependency injection solution which stays as loyal as possible to the Spring IoC way and helps overcome the circular dependency error.

This means that we will have a single class that will have a reference to all the classes which need to be injected and will be responsible for the injection.

I'll try to walk you through the steps.

Assuming we have the following service classes which cause the circular error:

@Service
public class ServiceA {

    @Autowired
    ServiceB serviceB;
    @Autowired
    ServiceC serviceC;
}

@Service
public class ServiceB {

    @Autowired
    ServiceA serviceA;
    @Autowired
    ServiceC serviceC;
}

First step is to remove the @Autowired annotations, so we move the wiring responsibility out of Spring's hands.

Second step is to create a class which holds a reference to all the classes to inject, such as:
@Component
public class BeansManager {

    @Autowired
    private ServiceA serviceA;
    @Autowired
    private ServiceB serviceB;
    @Autowired
    private ServiceC serviceC;

    //getters and setters...
}


Third step is to create an interface named Injectable with an inject method:
public interface Injectable {
    public void inject(BeansManager beansManager);
}


Fourth step is to have each service class implement the Injectable interface.

e.g. :

@Service
public class ServiceA implements Injectable {

    ServiceB serviceB;
    ServiceC serviceC;

    //method to inject all the beans which were previously injected by Spring
    public void inject(BeansManager beansManager) {
        this.serviceB = beansManager.getServiceB();
        this.serviceC = beansManager.getServiceC();
    }
}


Fifth and final step is to make the BeansManager ready for the injection.
But take a moment to think -
it's obvious that the BeansManager needs a reference to all the classes which require injection; however, how can we make sure that the following sequence is maintained:
1. All service classes are initialized
2. The BeansManager is initialized, with all the services injected by Spring
3. After the BeansManager is initialized and in its steady state, each service class which needs injection is called and the relevant services are injected

Step 3 can be achieved by a method which executes after the constructor finishes (via the @PostConstruct annotation); however, the big question is how to make sure the BeansManager is initialized last (after all the relevant services are initialized).
The trick is to have @Autowired on the Set of Injectables. This way Spring makes sure that:
a. All the injectable classes are subscribed to the BeansManager for the injection
b. The BeansManager is initialized last (after all injectable services are initialized)

@Component
public class BeansManager {

    //This line guarantees the BeansManager class will be initialized last
    @Autowired
    private Set<Injectable> injectables = new HashSet<Injectable>();

    //This method makes sure all the injectable classes get the BeansManager in its steady state,
    //where its class members are ready to be read
    @PostConstruct
    private void inject() {
        for (Injectable injectableItem : injectables) {
            injectableItem.inject(this);
        }
    }
}

Make sure you understand the magic that happens in the fifth step - it's really cool.

Note:
If you can think of an even more generic mechanism that achieves the same, please drop me a note.

Good luck!

acknowledgement:
http://stackoverflow.com/questions/7066683/is-it-possible-to-guarantee-the-order-in-which-postconstruct-methods-are-invoke

Wednesday, March 28, 2012

Avoid i18n runtime exception

Recently a colleague came up with a nice tip on how to avoid some runtime exceptions caused by an empty i18n key.

Very short and informative - enjoy:

Wednesday, March 14, 2012

Resolve circular dependency in Spring Autowiring

I would consider this post as best practice for using Spring in enterprise application development.

When writing an enterprise web application using Spring, the number of services in the service layer will probably grow.
Each service in the service layer will probably consume other services, injected via @Autowired.
The problem: when the number of services grows, a circular dependency might occur. It does not necessarily indicate a design problem... it's enough that a central service, which is autowired into many services, consumes one of the other services, and the circular dependency will likely occur.

The circular dependency will cause the Spring application context to fail, and the symptom is an error which indicates the problem clearly:

Bean with name ‘*********’ has been injected into other beans [******, **********, **********, **********] in its raw version as part of a circular reference,

but has eventually been wrapped (for example as part of auto-proxy creation). This means that said other beans do not use the final version of the bean. This is often the result of over-eager type matching – consider using ‘getBeanNamesOfType’ with the ‘allowEagerInit’ flag turned off, for example.


The problem in a modern Spring application is that beans are defined via annotations (and not via XML), and the allowEagerInit flag option simply does not exist.
The alternative solution of annotating the classes with @Lazy simply did not work for me.

The working solution was to add default-lazy-init="true" to the application config xml file:

<?xml version="1.0" encoding="UTF-8"?>
<beans default-lazy-init="true" xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context  http://www.springframework.org/schema/context/spring-context-3.0.xsd">
    <context:component-scan base-package="com.package">
    </context:component-scan>
    <context:annotation-config/>
   ...
</beans>

Hope this helps. I'm not sure why it is not the default configuration.
If you have a suggestion as to why this configuration might not be OK, kindly share it with us all.

Update:
Following a redesign I did, the mentioned solution simply did not do the trick,
so I designed a more aggressive solution that resolves the problem in 5 steps.

Good luck!

Wednesday, February 1, 2012

JPA 2.0 Metamodel generation using Eclipse Maven build

Update:
Please see this updated post on the matter, which simplifies the process.


In order to write JPA 2.0 queries using the Criteria API in a type-safe manner, one can use the metamodel generated from the corresponding @Entity classes.

This post is for developers who are familiar with this concept and feel their current metamodel file generation process needs improvement.

Over the web there are some guidelines regarding the best way to generate these static metamodel classes.
However, maybe it's just me, but I could not get them to run properly.
Some methods generated the metamodel classes in the target directory, and some simply did not work.
When running the metamodel generation tool, I had to manually set the proc parameter in the pom file to "only" and revert it when I wanted to generate the other Java files.
(Very tedious when rapid model changes are needed.)

I would like to suggest a way to create the static metamodel files in the source directory (right beside the @Entity classes they refer to) AND run the build without modifying the pom/Maven file.

The process consists of two steps:
1) Configure the pom.xml
2) Configure Eclipse to run the build

So, first things first - let's configure the pom file:




<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-jpamodelgen</artifactId>
    <version>1.1.1.Final</version>
</dependency>

<build>
    <plugins>
        <plugin>
            <artifactId>maven-clean-plugin</artifactId>
            <version>2.4.1</version>
            <configuration>
                <filesets>
                    <fileset>
                        <directory>../your_project_name/src/main/java/com/your_path/model/</directory>
                        <includes>
                            <include>**/*_.java</include>
                        </includes>
                        <followSymlinks>false</followSymlinks>
                    </fileset>
                </filesets>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>2.3.2</version>
            <configuration>
                <source>1.7</source>
                <target>1.7</target>
                <annotationProcessors>
                    <annotationProcessor>org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor</annotationProcessor>
                </annotationProcessors>
                <generatedSourcesDirectory>../your_project_name/src/main/java/</generatedSourcesDirectory>
            </configuration>
        </plugin>
    </plugins>
</build>

The hibernate-jpamodelgen artifact is needed for analyzing the @Entity classes and generating the static metamodel.
The maven-clean section cleans the previous metamodel files (pattern: *_.java) before generation (useful since the metamodel generation raises an error if any previously generated file already exists).
The maven-compiler section refers to the metamodel generation processor class and sets the generated-sources directory to the source directory (not the target!).

The next step is to define the Eclipse Maven build.
You can follow the screenshots below (a sample build for a project called "management"):


Configure the metamodel build:
Configure the project build:

  • For every "regular" build simply run the project build.
  • For any change in the @Entity files, run the compile metamodel build.

That's it..

Important note:
If you know how to run the two builds automatically one after the other, kindly leave a comment.

Tuesday, January 24, 2012

RESTful standard resolved! - part I

Lately I've been trying to build a web application which will be exposed in a RESTful manner.
There are some general guidelines and hints about how to define it, but no explicit standard or accepted schema structure to use.

After reading some info on the web, I think I managed to crack the pattern :-)
I would like to share the rules and structure I formed, and hopefully get some feedback to improve it, so please don't hesitate to leave notes and point out any pain points this structure has.

The high level pattern is:
http(s)://server.com/app-name/{version}/{domain}/{rest-convention}

Where {version} is the API version this interface works with, and {domain} is an area you wish to define for any technical reason (e.g. security - allow certain users to access that domain) or business reason (e.g. gathering functionality under the same prefix).

The {rest-convention} denotes the set of REST APIs available under that domain.
It follows this convention:

  • singular-resourceX/
    • URL example: order/  (order is the singular resource X)
      • GET - will return a new order
      • POST - will create a new order. Values are taken from the post content body.
  • singular-resourceX/{id} 
    • URL example: order/1 (order is the singular resource X)
      • GET - will return an order with the id 1
      • DELETE - will delete an order with the id 1
      • PUT - will update an order with the id 1.  Order values to update are taken from the post content body.

  • plural-resourceX/
    • URL example: orders/
      • GET - will return all orders
  • plural-resourceX/search
    • URL example: orders/search?name=123
      • GET - will return all orders that answer the search criteria (QBE, no join) - order name equal to 123
  • plural-resourceX/searchByXXX
    • URL example: orders/searchByItems?name=ipad
      • GET - will return all orders that answer the customized query - get all orders that are associated to items with the name ipad
  • singular-resourceX/{id}/pluralY
    • URL example: order/1/items/ (order is the singular resource X, items is the plural resource Y)
      • GET - will return all items that are associated to order #1
  •  singular-resourceX/{id}/singular-resourceY/
    • URL example: order/1/item/
      • GET - return a new item (transient) that is associated to order #1
      • POST - create a new item and associate it to order #1. Item values are taken from the post content body.
  •  singular-resourceX/{id}/singular-resourceY/{id}/singular-resourceZ/
    • URL example: order/1/item/2/package/
      • GET - return a new package (transient) that is associated to item #2 (i.e. how to pack the item) and to order #1
      • POST - create a new package and associate it to item #2 & order #1. Package values are taken from the post content body.

One can basically have further nesting, as long as the above convention is maintained and no plural resource is defined after another plural resource.
There are further guidelines/notes to make things clear (a hedged Spring MVC sketch follows the list):
- When using a plural resource, the returned instances will be those of the last plural resource used.
- When using a singular resource, the returned instance will be of the last singular resource used.
- On search, the returned instances will be those of the last plural entity used.
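To make the convention concrete ahead of that post, here is a minimal, hypothetical Spring MVC sketch of two of the mappings above (controller, service and entity names are illustrative assumptions, not a prescribed implementation):

@Controller
@RequestMapping("/myapp/v1/shop")
public class OrderController {

    @Autowired
    private OrderService orderService; //assumed service collaborator

    //GET /myapp/v1/shop/order/{id} - return the order with the given id
    @RequestMapping(value = "/order/{id}", method = RequestMethod.GET)
    @ResponseBody
    public Order getOrder(@PathVariable("id") long id) {
        return orderService.findById(id);
    }

    //GET /myapp/v1/shop/order/{id}/items - return all items associated to order {id}
    @RequestMapping(value = "/order/{id}/items", method = RequestMethod.GET)
    @ResponseBody
    public List<Item> getOrderItems(@PathVariable("id") long id) {
        return orderService.findItemsByOrderId(id);
    }
}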

Hopefully your insight will help me improve this structure and overcome issues which you might have come across.

In the next post, after this suggested structure is improved, I will try to give technical examples of how to implement it using Spring MVC 3.1.


Friday, January 20, 2012

Advanced QBE pattern (common using generics)

In the previous post I introduced the QBE pattern.

The pattern saves us a lot of tedious binding code.
However, one still needs to implement the same method for each entity.

To save us the redundant work of defining the same QBE method for each entity, where only the entity changes, we can utilize generics.

Using generics, we implement only one method in a base class, and each concrete DAO (for a given @Entity) inherits from that base class.


Let's see it in action.

First, we define the interface for the base class:

public interface BaseDAO<T> {

    /**
     * Perform a query based on the values in the given object.
     * Each (String) attribute will be searched based on the Like comparison.
     * Complies with the QueryByExample (QBE) pattern.
     * @param modelEntity The entity with values to search
     * @param propertiesToExclude Any property which the search will ignore
     * @return Instances that answer the given parameter values (empty if none)
     */
    public List<T> getByElementsLike(T modelEntity, List<String> propertiesToExclude);

    /**
     * Perform a query based on the values in the given object.
     * Each attribute will be searched based on the Equality comparison.
     * Complies with the QueryByExample (QBE) pattern.
     * @param modelEntity The entity with values to search
     * @param propertiesToExclude Any property which the search will ignore
     * @return Instances that answer the given parameter values (empty if none)
     */
    public List<T> getByElementsEqual(T modelEntity, List<String> propertiesToExclude);

}



The second step is to write the base class implementation:

public class BaseDaoImpl<T> implements BaseDAO<T> {

    //Get DB session access using the Spring 3.1 EntityManager
    @PersistenceContext
    protected EntityManager em;

    protected Class<T> entityClass;

    //Initialize the entity class
    @SuppressWarnings("unchecked")
    public BaseDaoImpl() {
        ParameterizedType genericSuperclass = (ParameterizedType) getClass().getGenericSuperclass();
        this.entityClass = (Class<T>) genericSuperclass.getActualTypeArguments()[0];
    }

    protected Class<T> getEntityClass() {
        return entityClass;
    }

    public List<T> getByElementsLike(T modelEntity, List<String> propertiesToExclude) {

        // get the native hibernate session
        Session session = (Session) em.getDelegate();

        // create an example from our model-entity, exclude all zero valued numeric properties
        Example example = Example.create(modelEntity).excludeZeroes().enableLike(MatchMode.ANYWHERE);

        for (String property : propertiesToExclude) {
            example.excludeProperty(property);
        }

        // create criteria based on the example
        Criteria criteria = session.createCriteria(getEntityClass()).add(example);

        // perform the query
        return criteria.list();
    }

    public List<T> getByElementsEqual(T modelEntity, List<String> propertiesToExclude) {

        // get the native hibernate session
        Session session = (Session) em.getDelegate();

        // create an example from our model-entity, exclude all zero valued numeric properties
        Example example = Example.create(modelEntity).excludeZeroes();

        for (String property : propertiesToExclude) {
            example.excludeProperty(property);
        }

        // create criteria based on the example
        Criteria criteria = session.createCriteria(getEntityClass()).add(example);

        // perform the query
        return criteria.list();
    }

}


The third step is to extend the base class:
@Entity
public class Order {
    //..
}

@Entity
public class Item {
    //..
}

public class OrderDaoImpl extends BaseDaoImpl<Order> implements BaseDAO<Order> {

    //No need to implement the QBE methods again here

}

public class ItemDaoImpl extends BaseDaoImpl<Item> implements BaseDAO<Item> {

    //No need to implement the QBE methods again here

}


Kindly let me know of any further improvements.

QBE pattern

When writing an enterprise system, there is a common pattern of a user who fills in a search form and gets corresponding results based on the given search parameters.

Handling this request manually in the backend is a bit tedious, since for every search parameter the developer needs to check whether the user inserted a value and, if so, concatenate the corresponding condition into the WHERE clause of the SQL.

The Query By Example (QBE) pattern helps deal with this task.

It is implemented by ORM frameworks (such as Hibernate) and is also possible to implement using the JPA 2.0 spec (as seen in the OpenJPA project).

The QBE implementation expects the model instance (the object annotated with @Entity).
That instance should hold the form's search values.
Then the implementing framework does all the SQL wiring magic and saves us the tedious work.
At the end we just return the query result.

Let's discuss a case where a user wants to query the Order object.

The Order class needs to be annotated with @Entity:


@Entity
public class Order {
    //..
}

Using a Hibernate implementation:
public List<Order> findByOrderElements(Order order) {

    Session session = getSession();

    //Hibernate assembles the query with the given entity to query
    Example example = Example.create(order) //create an Example object given the instance to query
        .enableLike(MatchMode.ANYWHERE) //optional parameters
        .excludeZeroes()
        .ignoreCase();

    //Get the query result
    List<Order> result = session.createCriteria(Order.class)
        .add(example)
        .list();

    return result;
}


That's it...

Important notes:
  • QBE will only work on one object (it won't help when the form input has properties of other related entities, i.e. requires a join)
  • Version properties, identifiers and associations are ignored (if QBE considered the id, it would match only one instance...)

For more advanced stuff regarding the QBE pattern, please see my next post.