  1. Session Management with Spring MVC

    Hypertext Transfer Protocol (HTTP) is stateless: a client computer running a web browser must establish a new Transmission Control Protocol (TCP) network connection to the web server with each new HTTP GET or POST request. The web server, therefore, cannot rely on an established TCP network connection for longer than a single HTTP GET or POST operation. Session management is the technique used by the web developer to make the stateless HTTP protocol support session state. For example, once a user has been authenticated to the web server, the user's next HTTP request (GET or POST) should not cause the web server  to ask for the user's account and password again.

    Most, if not all, web applications want to know who is visiting their site. They want to expose some portions of the site to a specific set of users and other portions to all visitors. A web application needs to identify users by their credentials. Here we will see how this can be done in the stateless HTTP environment using Spring MVC.

     Server Side.

     We will start by writing a login method in the controller.

     @RequestMapping(value = { "/login" }, method = RequestMethod.POST)
        @ResponseBody
        public String login(HttpSession session,String username,String password) throws Exception {
            Member member=userService.authenticateUser(username, password);
            if(member!=null)
            {
                session.setAttribute("MEMBER", member);
            }else
            {
                throw new Exception("Invalid username or password");
            }
            return Utils.toJson("SUCCESS");
        }

     The user passes a username and password, while Spring automatically injects the HttpSession. We authenticate the username and password against the database; for this we use a service method, which in turn calls a repository method to fetch the Member object and return it here to the controller.
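
     That service layer might look something like this minimal sketch (the interface shape is illustrative; the original code only shows the call userService.authenticateUser(username, password)):

     public interface UserService {
         // returns the Member matching the credentials, or null if none match
         Member authenticateUser(String username, String password);
     }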

     Now this Member object will stay in the session until the session is destroyed for whatever reason. What we want next is for every call to a handler method to be verified as coming from an authenticated user. Some calls will be exempt from this, like this login call: of course, the user who is about to log in can't be pre-authenticated.
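
     Logging out, conversely, is just a matter of discarding the session. A minimal sketch (the /logout mapping is illustrative; it is not part of the original code):

     @RequestMapping(value = { "/logout" }, method = RequestMethod.POST)
     @ResponseBody
     public String logout(HttpSession session) {
         // dropping the session discards the MEMBER attribute set at login
         session.invalidate();
         return Utils.toJson("SUCCESS");
     }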

    To intercept every call to a handler method we will write a HandlerInterceptor.

    What is a HandlerInterceptor?

    Sometimes we want to intercept the HTTP request and do some processing before handing it over to the controller's handler methods. That's where Spring MVC interceptors come in handy. Just as we have Struts2 interceptors, we can create our own interceptors in Spring by either implementing the org.springframework.web.servlet.HandlerInterceptor interface or by extending the abstract class org.springframework.web.servlet.handler.HandlerInterceptorAdapter, which provides a base implementation of this interface.

    HandlerInterceptor declares three methods, corresponding to the points where we can intercept the HTTP request: preHandle, postHandle, and afterCompletion. We will be using preHandle: boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler). This method intercepts the request before it is handed over to the handler method. It should return 'true' to let Spring know to process the request through the next interceptor or, if there are no further interceptors, to send it to the handler method. If it returns 'false', Spring assumes the request has been handled by the interceptor itself and no further processing is needed; in that case we should use the response object ourselves to send a response to the client. The handler argument is the handler object chosen to handle the request. The method can also throw an exception, in which case Spring MVC exception handling can be used to send an error page as the response.

     package com.faisalbhagat.web.interceptor;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    import org.springframework.web.method.HandlerMethod;
    import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;



    public class SessionValidator extends HandlerInterceptorAdapter {

        @Override
        public boolean preHandle(HttpServletRequest request,
                HttpServletResponse response, Object handler) throws Exception {
            HttpSession session = request.getSession();
            if (!(((HandlerMethod) handler).getBean() instanceof CommonController)) {
                if (session == null || session.getAttribute("MEMBER") == null) {
                    throw new Exception("Invalid session please login");
                }
            }
            return true;
        }

    }

    What preHandle is doing is fairly obvious from the code: it gets the HttpSession object from the request, then verifies whether the "MEMBER" attribute is set (remember, we set this attribute during login). If the attribute is not null, the user has passed the login method, so he is a verified user; if it is null, he is not verified, so we throw an exception telling the client to please log in. We don't want this check for methods like login, and other methods that are available to non-authenticated users as well, so we keep all those handlers in CommonController. The outer if says: if this handler is part of CommonController, don't go through verification, just return true.
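
    When the interceptor throws, something still has to turn that exception into an HTTP response. One common option, sketched here under the assumption that you are on Spring 3.2+ (the class name and JSON body are illustrative, not from the original post), is an @ControllerAdvice with an @ExceptionHandler:

    package com.faisalbhagat.web.interceptor;

    import javax.servlet.http.HttpServletResponse;

    import org.springframework.web.bind.annotation.ControllerAdvice;
    import org.springframework.web.bind.annotation.ExceptionHandler;
    import org.springframework.web.bind.annotation.ResponseBody;

    @ControllerAdvice
    public class GlobalExceptionHandler {

        // Turns exceptions thrown by the interceptor (or by handler methods)
        // into a small JSON error body instead of the default error page.
        @ExceptionHandler(Exception.class)
        @ResponseBody
        public String handleException(Exception e, HttpServletResponse response) {
            response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
            return "{\"error\":\"" + e.getMessage() + "\"}";
        }
    }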

    Now we have to declare this SessionValidator (our HandlerInterceptor) in ourproject-servlet.xml as follows:

    <mvc:interceptors>
            <bean class="com.faisalbhagat.web.interceptor.SessionValidator" />
    </mvc:interceptors>


    Client Side


    Here is the client side code for testing
      function login()
      {
          console.log("clicked login");
          $.ajax({
              dataType : 'json',
              url : "${pageContext.request.contextPath}/login",
              data : {'username' : 'faisal', 'password' : 'bhagat'},
              type : "POST",
              success : function(result) {
                  alert('success ' + JSON.stringify(result));
                  getFeedbackList();
              },
              error : function(result) {
                  alert('error ' + JSON.stringify(result));
              }
          });
      }

      function getFeedbackList()
      {
          console.log("clicked getFeedbackList");
          $.ajax({
              dataType : 'json',
              url : "${pageContext.request.contextPath}/getFeedbackList",
              data : {'memberId' : '1'},
              type : "GET",
              success : function(result) {
                  alert('success ' + JSON.stringify(result));
              },
              error : function(result) {
                  alert('error ' + JSON.stringify(result));
              }
          });
      }


  2. SpringMVC FileUpload with Ajax & jQuery


    If you want to upload a file in a SpringMVC application with Ajax, you need to do the following things.

    Server Side

    On the server side there are many ways, but MultipartHttpServletRequest is the easiest. You need some logic to handle the uploaded file; for this we will be using the Apache Commons utilities commons-fileupload and commons-io. Include these two jars in your project's classpath: commons-fileupload-1.3.1.jar and commons-io-2.4.jar (for me that is C:\TestProject\WebContent\WEB-INF\lib), plus a Jackson jar (com.fasterxml.jackson.core) for JSON conversion if required.
    Now that we have the necessary jars to handle the uploaded file, we will write the controller with the methods that actually handle the upload.

    package com.faisalbhagat.web.controller;

    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.util.Iterator;

    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;
    import org.springframework.web.bind.annotation.ResponseBody;
    import org.springframework.web.multipart.MultipartFile;
    import org.springframework.web.multipart.MultipartHttpServletRequest;

    import com.fasterxml.jackson.databind.ObjectMapper;

    @Controller
    @RequestMapping(value = { "" })
    public class UploadController {

        @RequestMapping(value = "/uploadMyFile", method = RequestMethod.POST)
        @ResponseBody
        public String handleFileUpload(MultipartHttpServletRequest request)
                throws Exception {
            Iterator<String> iterator = request.getFileNames();
            MultipartFile multiFile = request.getFile(iterator.next());
            try {
                // just to show that we have actually received the file
                System.out.println("File Length:" + multiFile.getBytes().length);
                System.out.println("File Type:" + multiFile.getContentType());
                String fileName = multiFile.getOriginalFilename();
                System.out.println("File Name:" + fileName);
                String path = request.getServletContext().getRealPath("/");

                // making directories for our required path
                byte[] bytes = multiFile.getBytes();
                File directory = new File(path + "/uploads");
                directory.mkdirs();
                // saving the file
                File file = new File(directory.getAbsolutePath()
                        + System.getProperty("file.separator") + fileName);
                BufferedOutputStream stream = new BufferedOutputStream(
                        new FileOutputStream(file));
                stream.write(bytes);
                stream.close();
            } catch (Exception e) {
                e.printStackTrace();
                throw new Exception("Error while uploading the file");
            }
            return toJson("File Uploaded successfully.");
        }

        public String toJson(Object data)
        {
            ObjectMapper mapper=new ObjectMapper();
            StringBuilder builder=new StringBuilder();
            try {
                builder.append(mapper.writeValueAsString(data));
            } catch (Exception e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
            return builder.toString();
        }
    }


    We register "CommonsMultipartResolver" in our TestProject-servlet.xml to tell Spring to use the commons-fileupload library to handle multipart requests; the rest is just a normal bean declaration.
    It looks like this:

    <bean id="multipartResolver" class ="org.springframework.web.multipart.commons.CommonsMultipartResolver"/>

    Client Side

    We implement the client side with HTML, jQuery and JavaScript.
    This is our index.html:

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
    <title>Spring MVC - Upload File</title>
      <script src="http://code.jquery.com/jquery-1.9.1.js"></script>
      <script>
    $(document).ready(function(){
      $("#subbutton").click(function(){
              processFileUpload();
      });

      $("#loader1").on('change',prepareLoad);
      var files;
      function prepareLoad(event)
      {
          console.log(' event fired'+event.target.files[0].name);
          files=event.target.files;
      }
      function processFileUpload()
      {
          console.log("fileupload clicked");
          var oMyForm = new FormData();
          oMyForm.append("file", files[0]);
         $
            .ajax({dataType : 'json',
                url : "${pageContext.request.contextPath}/uploadMyFile",
                data : oMyForm,
                type : "POST",
                enctype: 'multipart/form-data',
                processData: false,
                contentType:false,
                success : function(result) {
                alert('success'+JSON.stringify(result));
                },
                error : function(result){
                    alert('error'+JSON.stringify(result));
                }
            });
      }
    });
    </script>
    </head>
    <body>
    <input type="file" name="loader1" id="loader1" />
    <input type="button" id="subbutton" value="Upload"/>
    </body>
    </html> 

    First of all, we include jQuery in the head section:

    <script src="http://code.jquery.com/jquery-1.9.1.js"></script>
      
    Then in the body section we declare an input with type="file" to select the file, and a button to actually upload it.

      <input type="file" name="loader1" id="loader1" />
    <input type="button" id="subbutton" value="Upload"/>


    After that we will be adding a 'change' event handler for the input "loader1" as follows

      $("#loader1").on('change',prepareLoad);
      var files;
      function prepareLoad(event)
      {
          console.log(' event fired'+event.target.files[0].name);
          files=event.target.files;
      }


     prepareLoad stores the selected file in the global variable named files. On the button's click event, processFileUpload() is called, which actually sends the file to the server, packed in a FormData object.

                                                 

  3. InnoDB vs MyISAM: An Accumulation

    Introduction

    InnoDB
    InnoDB is a storage engine for MySQL; MySQL 5.5 and later use it by default. It is included as standard in most binaries distributed by MySQL AB, the exception being some OEM versions. InnoDB became a product of Oracle Corporation after its acquisition of Innobase Oy in October 2005. In September 2000 Innobase Oy started collaborating with MySQL AB, which resulted in the release of a version of MySQL incorporating InnoDB in March 2001. InnoDB was originally closed source, but was released as open source after Innobase failed to find a buyer for it and started the collaboration with MySQL. MySQL tried to close a deal with Innobase in the following years, but eventually Oracle acquired Innobase in October 2005. Oracle eventually also acquired Sun Microsystems, owner of MySQL AB, in January 2010. The software is dual licensed: it is distributed under the GNU General Public License, but can also be licensed to parties wishing to combine InnoDB with proprietary software. MariaDB and Percona Server use a fork of InnoDB called XtraDB by default. XtraDB is maintained by Percona; Oracle's InnoDB changes are regularly imported into XtraDB, and some bug fixes and extra features are added.
                                               
    MyISAM
    MyISAM was the default storage engine for the MySQL relational database management system in versions prior to 5.5. It is based on the older ISAM code but has many useful extensions. MariaDB has a storage engine called Aria, which is described as a "crash-safe alternative to MyISAM"; however, the MariaDB developers still work on the MyISAM code. The major improvement is the segmented key cache.[4] If it is enabled, MyISAM's index cache is divided into segments, which improves concurrency because threads rarely need to lock the entire cache.

    InnoDB Features

    1.  Provides full transaction capability with full ACID (Atomicity, Consistency, Isolation, and Durability) compliance.

    2.  It has row-level locking. By supporting row-level locking, you can add data to an InnoDB table without the engine locking the whole table for each insert, which speeds up both the recovery and the storage of information in the database.

    3.  The key to the InnoDB system is its database, caching, and indexing structure, where both indexes and data are cached in memory as well as being stored on disk. This enables very fast recovery, and works even on very large data sets.

    4.  InnoDB supports foreign key constraints.
    5.  InnoDB supports automatic crash recovery.
    6.  InnoDB supports table compression (read/write).
    7.  InnoDB supports spatial data types (no spatial indexes).

    8.  InnoDB supports non-locking ANALYZE TABLE, which is only required when the server has been running for a long time, since it dives into the index statistics and gets the index information when the table opens.

    9.  InnoDB does not have separate index files, so they do not have to be opened.

    10.  InnoDB builds its indexes one row at a time in primary key order (after an ALTER), which means index trees aren't built in optimal order and are fragmented. There is currently no way to defragment InnoDB indexes, as InnoDB can't build indexes by sorting in MySQL 5.0. Even dropping and recreating InnoDB indexes may result in fragmented indexes, depending on the data.

    11.  A table can contain a maximum of 1000 columns.

    12.  The InnoDB internal maximum key length is 3500 bytes, but MySQL itself restricts this to 3072 bytes. (1024 bytes for non-64-bit builds before MySQL 5.0.17, and for all builds before 5.0.15.)
    13.  The default database page size in InnoDB is 16KB. By recompiling the code, you can set it to values ranging from 8KB to 64KB. You must update the values of UNIV_PAGE_SIZE and UNIV_PAGE_SIZE_SHIFT in the univ.i source file.
    14.  InnoDB tables do not support FULLTEXT indexes.

     MYISAM Features

     1.  No Transaction support
     2.  Table level locking
     3.  Provides Full Text search
     4.  No limit to data in table.
     5.  fast COUNT(*)s (when WHERE, GROUP BY, or JOIN is not used)
     6.  full text indexing
     7.  smaller disk footprint
     8.  very high table compression (read only)
     9.  spatial data types and indexes (R-tree)
    10. By using DATA DIRECTORY='/path/to/data/directory' or INDEX DIRECTORY='/path/to/index/directory' you can specify where the MyISAM storage engine should
     put a table's data file and index file. The directory must be the full path name to the directory, not a relative path.


    Analysis

    When we say MyISAM does not support ACID, the implication for the stability of the system is bigger than it may sound. Let's see what happens when atomic updates are not supported, using the following experiment:

    1. Issue an UPDATE statement that takes about 6 seconds on a MyISAM table.
    2. While the statement is in progress, say after 3 seconds, hit Ctrl-C to interrupt it.
    3. Observe the effects on the table: how many of the rows are updated and how many are not? Is the table even readable, or was it corrupted when you hit Ctrl-C?
    4. Try the same experiment on an InnoDB table, interrupting the query in the middle.
    5. Zero rows are updated: InnoDB makes sure you get atomic updates, all or none. Since the query was interrupted in the middle, none of the rows are actually updated; when it was interrupted, all the changes were rolled back. This is true even if you use killall -9 mysqld to simulate a crash. Performance is desirable of course, but not losing data should be the priority.
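
    The same all-or-nothing behaviour can be demonstrated from JDBC. Here is a minimal sketch (the connection URL, credentials, and accounts table are hypothetical; it assumes the table was created with ENGINE=InnoDB):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AtomicUpdateDemo {
        public static void main(String[] args) throws Exception {
            // hypothetical connection details; adjust for your own setup
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password");
            con.setAutoCommit(false); // start an explicit transaction
            Statement st = con.createStatement();
            try {
                st.executeUpdate("UPDATE accounts SET balance = balance - 100");
                throw new RuntimeException("simulated failure mid-transaction");
            } catch (Exception e) {
                // on InnoDB every row touched so far is undone;
                // on MyISAM the rows already updated would stay updated
                con.rollback();
            } finally {
                st.close();
                con.close();
            }
        }
    }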



    In general, using InnoDB will result in a much LESS complex application, and probably a more bug-free one. Because you can put all referential integrity (foreign-key constraints) into the data model, you don't need anywhere near as much application code as you will need with MyISAM. With MyISAM, every time you insert, delete or replace a record, you will HAVE to check and maintain the relationships yourself; e.g., if you delete a parent, all children should be deleted too. For instance, even in a simple blogging system, if you delete a blog-post record, you will have to delete the comment records, the likes, etc.
    In InnoDB this is done automatically by the database engine (if you specified the constraints in the model) and requires no application code. In MyISAM this has to be coded into the application, which is very difficult to get right on web servers: web servers are by nature very concurrent/parallel, and because these actions should be atomic and MyISAM supports no real transactions, using MyISAM for web servers is risky and error-prone.

    Also, in most general cases InnoDB will perform much better, for a multitude of reasons, one of them being record-level locking as opposed to table-level locking. This holds not only in situations where writes are more frequent than reads, but also in situations with complex joins on large datasets.

    We noticed a 3-fold performance increase just by using InnoDB tables over MyISAM tables for very large joins (taking several minutes). I would say that in general InnoDB (using a 3NF data model complete with referential integrity) should be the default choice when using MySQL. MyISAM should only be used in very specific cases; it will most likely perform worse and result in a bigger, more buggy application. Having said this, data modelling is an art seldom found among web designers and programmers. No offence, but it does explain MyISAM being used so much.

     I found that the table-level locking in MyISAM caused serious performance problems under heavy workloads. Unfortunately I also found that performance under InnoDB was worse than I'd hoped. In the end I resolved the contention issue by fragmenting the data so that inserts went into a "hot" table and selects never queried the hot table. This also allowed deletes (the data was time-sensitive and we only retained X days' worth) to occur on "stale" tables that again weren't touched by select queries. InnoDB seems to have poor performance on bulk deletes, so if you're planning on purging data you might want to structure it in such a way that the old data sits in a stale table which can simply be dropped, instead of running deletes against it.

      Based on traffic estimates, we expected close to 200 writes per second. With MyISAM, only one of these could be in progress at any time, so you have to make sure your hardware can keep up to avoid being overrun; i.e., a single query can take no more than 5 ms. That suggests you need a storage engine which supports row-level locking, i.e., InnoDB.




  4. Can static methods be overridden?


    It is not possible to override static methods. If a subclass defines a static method with the same signature as a static method defined in the parent class, the method in the child class hides the method in the parent class. This phenomenon is called method hiding. The compiler will not give an error if you "override" a static method, so Java doesn't stop you from doing it, but you certainly don't get the same effect as you get for non-static methods.

     Overriding in Java simply means that the method to call is chosen based on the run-time type of the object and not on the compile-time type of the reference (which is exactly what does happen with "overridden" static methods). Static methods are class methods, not instance methods, so they have nothing to do with which reference points to which object or instance. By its nature a static method belongs to a specific class; you can redeclare it in a subclass, but that subclass doesn't know anything about the parent class' static methods, because a static method is specific to the class in which it is declared. Accessing static methods through object references is just an extra liberty given by the designers of Java, and it is a practice best avoided even before they restrict it.

    package com.faisalbhagat.test;

    public class TestClassSuper {

        public static void show()
        {
            System.out.println("I AM SUPER SHOW");
        }

        public void overridden()
        {
            System.out.println("I AM SUPER OVERRIDEN");
        }

    }


    public class TestClassChild extends TestClassSuper {

        public static void show()
        {
            System.out.println("I AM CHILD SHOW");
        }

        public void overridden()
        {
            System.out.println("I AM CHILD OVERRIDEN");
        }
    }

    public class TestClassTest {
        public static void main(String[] args)
        {
            TestClassSuper c = new TestClassChild();
            c.show();
            c.overridden();
        }

    }


    Output:-

    I AM SUPER SHOW
    I AM CHILD OVERRIDEN

    ----------------------------


    I am storing a child class object in a super class reference. Through this reference I call both the static and the non-static method. You saw the output: the non-static method called is the child class's, because the object is of the child class; that is decided at run time due to late binding. The static method called, however, is the super class's, because the declared reference is of the super class type: it was STATICALLY bound to that class's static method at compile time, regardless of which object the reference would hold at run time.









  5. OutOfMemoryError: PermGen



    Here I look into what it means when a Java program runs into an OutOfMemoryError: PermGen space error. I first explain what the permanent generation heap space is, then explain the usual cause of the PermGen space error, and finally give some pointers on how to avoid it.

     Introduction

    To understand the error, we have to look at how JVM memory is structured. There are two memory regions in the JVM: the heap and the stack. Local variables reside on the stack, everything else on the heap. The Java heap is itself structured into regions called generations: the longer an object lives, the higher the chance it gets promoted to an older generation. Young generations (such as Eden on the Sun JVM) are garbage collected more frequently than older generations (survivor and tenured on the Sun JVM). However, there is also a separate space called the permanent generation. Since it is a separate region, it is not considered part of the Java heap space. Objects in this space are relatively permanent: class definitions are stored here, as are statics and interned strings.

    From experience, PermGen space issues tend to happen frequently in dev environments, since Tomcat has to load new classes every time it deploys a WAR or recompiles a JSP (when you edit a .jsp file). Personally, I tend to deploy and redeploy WARs a lot when I'm testing in dev, so I know I'm bound to run out sooner or later.

    OutOfMemoryError: PermGen Space

    The OutOfMemoryError: PermGen space error occurs when the permanent generation heap is full. Although this error can occur in normal circumstances, it is usually caused by a memory leak; in short, such a leak means that a classloader and its classes cannot be garbage collected after they have been undeployed or discarded.

    To give an example of how this can happen, let's say we have a Shape class, which is part of a jar in a web application deployed on some web server. In the lib folder of the web server there is some logging framework, which has a Log class with a method register(Class clazz) through which classes can be registered for logging. Let's say the Shape class gets registered through this method, so the Log class starts keeping a reference to the clazz object. When the web application is undeployed, the Shape class is still registered with the Log class: the Log class still has a reference to it, and hence it will never be garbage collected. Moreover, since the Shape class in turn has a reference to its ClassLoader, the ClassLoader itself will never be garbage collected either, and neither will any of the classes it loaded.
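
    A minimal sketch of that leaky registry pattern (the class and method names follow the example above; they are illustrative, not from any real logging framework):

    import java.util.ArrayList;
    import java.util.List;

    public class Log {

        // A static list in a class loaded by the server's classloader.
        // Every registered Class keeps its defining ClassLoader reachable,
        // and with it every class that loader ever loaded.
        private static final List<Class<?>> registered = new ArrayList<Class<?>>();

        public static void register(Class<?> clazz) {
            registered.add(clazz);
        }

        // A fix would be an unregister(Class) call invoked on undeploy,
        // or holding the classes through WeakReferences instead.
    }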

    An even more typical example involves proxy objects. Spring and Hibernate often make proxies of certain classes. Such proxy classes are loaded by a classloader as well, and the generated class definitions, which are loaded like ordinary classes and stored in the permanent generation heap space, are often never discarded, which causes the permanent generation heap space to fill up.


     Avoiding the error

     This should theoretically be less of an issue in production environments, since you (hopefully) don't change the codebase every ten minutes. If the error still occurs there, it means that your codebase (and its library dependencies) is simply too large for the default memory allocation, and you'll need to adjust the permanent generation allocation. The standard remedies are:

    1. Increasing the PermGen Memory Size

    The first thing one can do is make the permanent generation heap space bigger. This cannot be done with the usual -Xms (set initial heap size) and -Xmx (set maximum heap size) JVM arguments since, as mentioned, the permanent generation heap space is entirely separate from the regular Java heap space, and those arguments size the regular heap. However, there is a similar argument which can be used (at least with the Sun/OpenJDK JVMs) to make the permanent generation heap bigger:

     -XX:MaxPermSize=128m

     The default is 64m.
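
    To see how close you are to the limit at runtime, you can inspect the permanent generation through the standard memory-pool MXBeans. A small sketch (matching on the name "Perm Gen" is an assumption; pool names vary by JVM and collector):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class PermGenUsage {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                // on HotSpot the pool is called "Perm Gen" (or "CMS Perm Gen")
                if (pool.getName().contains("Perm Gen")) {
                    System.out.println(pool.getName() + ": " + pool.getUsage());
                }
            }
        }
    }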

    2. Enable Sweeping
     Another way to take care of the error for good is to allow classes to be unloaded, so your PermGen never runs out:

    -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled

    Flags like these worked magic for me in the past. One caveat: there is a performance trade-off, since class unloading and permgen sweeping add extra work to every garbage collection cycle. You'll need to balance the convenience against that cost.



    1. Two remarks

      1) The default permgen size is platform dependent. For example, on my Mac OS X machine with Java 7 it is set to 82M:

      my-machine:~ me$ java -XX:+PrintFlagsFinal -version | grep MaxPermSize
      uintx MaxPermSize = 85983232 {pd product}

      2) Specifying class unloading (which is switched on by default in Java 8) only helps if the classes are no longer referenced and can be garbage collected. So it will not save you from, let's say, permgen leaks.

      You might be interested in the following analysis of java.lang.OutOfMemoryError: PermGen space - https://plumbr.eu/outofmemoryerror/permgen-space

      ivo:
        1. You can set your permgen size to 82M on Windows as well. I wrote that the default size is 64M; you can increase it up to 512M irrespective of the operating system. http://www.oracle.com/technetwork/java/javase/7u5-relnotes-1653274.html

        2. You are right that it will not help if classes still have references, but it will still help, since of course there will be classes which don't have active references.


  6. Base64 Encoding and Decoding


    Base64 is a group of similar binary-to-text encoding schemes that represent binary data in an ASCII string format by translating it into a radix-64 representation.
    The term Base64 originates from a specific MIME content transfer encoding.

    Base64 encoding schemes are commonly used when there is a need to encode binary data that needs to be stored and transferred over media that are designed to deal
    with textual data. This is to ensure that the data remains intact without modification during transport. Base64 is commonly used in a number of applications
    including email via MIME, and storing complex data in XML.

    Design
    The particular choice of character set selected for the 64 characters required for the base varies between implementations. The general rule is to choose a set of 64 characters that is both part of a subset common to most encodings, and also printable. This combination leaves the data unlikely to be modified in transit through information systems, such as email, that were traditionally not 8-bit clean. For example, MIME's Base64 implementation uses A–Z, a–z, and 0–9 for the first 62 values. Other variations, usually derived from Base64, share this property but differ in the symbols chosen for the last two values; an example is UTF-7.

    Base64 Encoding and decoding using Apache Commons


    import org.apache.commons.codec.binary.Base64;

    String str = "Hello World";

    // encode data using Base64
    byte[] bytesEncoded = Base64.encodeBase64(str.getBytes());
    System.out.println("encoded value is " + new String(bytesEncoded));

    // decode on the other side, by processing the encoded data
    byte[] valueDecoded = Base64.decodeBase64(bytesEncoded);
    System.out.println("Decoded value is " + new String(valueDecoded));


    Base64 Encoding and decoding using Javascript

    /**
    *
    *  Base64 encode / decode
    *
    *
    **/
    var Base64 = {

    // private property
    _keyStr : "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=",

    // public method for encoding
    encode : function (input) {
        var output = "";
        var chr1, chr2, chr3, enc1, enc2, enc3, enc4;
        var i = 0;

        input = Base64._utf8_encode(input);

        while (i < input.length) {

            chr1 = input.charCodeAt(i++);
            chr2 = input.charCodeAt(i++);
            chr3 = input.charCodeAt(i++);

            enc1 = chr1 >> 2;
            enc2 = ((chr1 & 3) << 4) | (chr2 >> 4);
            enc3 = ((chr2 & 15) << 2) | (chr3 >> 6);
            enc4 = chr3 & 63;

            if (isNaN(chr2)) {
                enc3 = enc4 = 64;
            } else if (isNaN(chr3)) {
                enc4 = 64;
            }

            output = output +
            this._keyStr.charAt(enc1) + this._keyStr.charAt(enc2) +
            this._keyStr.charAt(enc3) + this._keyStr.charAt(enc4);

        }

        return output;
    },

    // public method for decoding
    decode : function (input) {
        var output = "";
        var chr1, chr2, chr3;
        var enc1, enc2, enc3, enc4;
        var i = 0;

        input = input.replace(/[^A-Za-z0-9\+\/\=]/g, "");

        while (i < input.length) {

            enc1 = this._keyStr.indexOf(input.charAt(i++));
            enc2 = this._keyStr.indexOf(input.charAt(i++));
            enc3 = this._keyStr.indexOf(input.charAt(i++));
            enc4 = this._keyStr.indexOf(input.charAt(i++));

            chr1 = (enc1 << 2) | (enc2 >> 4);
            chr2 = ((enc2 & 15) << 4) | (enc3 >> 2);
            chr3 = ((enc3 & 3) << 6) | enc4;

            output = output + String.fromCharCode(chr1);

            if (enc3 != 64) {
                output = output + String.fromCharCode(chr2);
            }
            if (enc4 != 64) {
                output = output + String.fromCharCode(chr3);
            }

        }

        output = Base64._utf8_decode(output);

        return output;

    },

    // private method for UTF-8 encoding
    _utf8_encode : function (string) {
        string = string.replace(/\r\n/g,"\n");
        var utftext = "";

        for (var n = 0; n < string.length; n++) {

            var c = string.charCodeAt(n);

            if (c < 128) {
                utftext += String.fromCharCode(c);
            }
            else if((c > 127) && (c < 2048)) {
                utftext += String.fromCharCode((c >> 6) | 192);
                utftext += String.fromCharCode((c & 63) | 128);
            }
            else {
                utftext += String.fromCharCode((c >> 12) | 224);
                utftext += String.fromCharCode(((c >> 6) & 63) | 128);
                utftext += String.fromCharCode((c & 63) | 128);
            }

        }

        return utftext;
    },

    // private method for UTF-8 decoding
    _utf8_decode : function (utftext) {
        var string = "";
        var i = 0;
        var c = c1 = c2 = 0;

        while ( i < utftext.length ) {

            c = utftext.charCodeAt(i);

            if (c < 128) {
                string += String.fromCharCode(c);
                i++;
            }
            else if((c > 191) && (c < 224)) {
                c2 = utftext.charCodeAt(i+1);
                string += String.fromCharCode(((c & 31) << 6) | (c2 & 63));
                i += 2;
            }
            else {
                c2 = utftext.charCodeAt(i+1);
                c3 = utftext.charCodeAt(i+2);
                string += String.fromCharCode(((c & 15) << 12) | ((c2 & 63) << 6) | (c3 & 63));
                i += 3;
            }

        }

        return string;
    }

    }









  7. I wrote a simple plugin for Openfire. It is a very simple one: it just catches Openfire's traffic and prints it to info.log. It has been very helpful for developing and debugging applications against Openfire.




    package com.faisal.bhagat.plugin;

    import java.io.File;

    import org.apache.log4j.Logger;
    import org.jivesoftware.openfire.container.Plugin;
    import org.jivesoftware.openfire.container.PluginManager;
    import org.jivesoftware.openfire.interceptor.InterceptorManager;
    import org.jivesoftware.openfire.interceptor.PacketInterceptor;
    import org.jivesoftware.openfire.interceptor.PacketRejectedException;
    import org.jivesoftware.openfire.session.Session;
    import org.xmpp.packet.Packet;

    public class XMLLoggerPlugin implements Plugin {

        private XMLPacketInterceptor interceptor = new XMLPacketInterceptor();

        public XMLLoggerPlugin() {
        }

        public void initializePlugin(PluginManager manager, File pluginDirectory) {
            InterceptorManager.getInstance().addInterceptor(interceptor);
        }

        public void destroyPlugin() {
            // unregister the interceptor so the plugin can be unloaded cleanly
            InterceptorManager.getInstance().removeInterceptor(interceptor);
        }

        private class XMLPacketInterceptor implements PacketInterceptor {
            public void interceptPacket(Packet packet, Session session,
                    boolean incoming, boolean processed)
                    throws PacketRejectedException {
                // log each packet once, before it has been processed
                if (!processed) {
                    Logger.getLogger("XML LOGGER").info("+++: " + packet.toXML());
                }
            }
        }
    }


  8. I found good material on using Maven with Scala; I am re-sharing it here for future use.

    Introduction to maven


    Maven is a build tool like make or ant, written in Java. It's a command line tool; IDEs (Eclipse, NetBeans, IDEA) have plugins to handle and integrate projects powered by Maven. It can be used to create libraries (jars), webapps (wars/ears), and any other type of "artifact". It prefers convention over configuration, and configuration over instruction. What does that mean exactly?
    • every action has a default configuration (= the convention).
    • every action is a goal defined in a plugin (aka mojo), and for common cases you will try to use an existing plugin instead of calling (more) low-level instructions (like copying files, ...)
    Before creating your first scala project, You need to know some info:

    a command line tool
    "mvn" is the name of the command line tool to call maven 2.x. To display help, run mvn help
    the project descriptor : the file [prj]/pom.xml
    It's the file where all project information is stored (name, version, dependencies, license, mailing lists, ...)
    the build lifecycle :
    The build lifecycle is defined by a sequence of phases, the main ones being:
    • compile - compile the source code of the project
    • test - test the compiled source code using a suitable unit testing framework. These tests should not require the code be packaged or deployed
    • package - take the compiled code and package it in its distributable format, such as a JAR.
    • integration-test - process and deploy the package if necessary into an environment where integration tests can be run
    • install - install the package into the local repository, for use as a dependency in other projects locally
    • deploy - done in an integration or release environment, copies the final package to the remote repository for sharing with other developers and projects.
    Each phase depends on the previous ones, so when you request the test phase, the compile phase is done before it, and so on.
    Directory layout
    see below for a scala project
    repository
    Maven uses repositories (local and remote) to store and retrieve artifacts and their descriptors (poms). Artifacts are jars, wars, etc., and they can be used as dependencies, Maven plugins, and so on.
    By default, Maven searches for artifacts in the central repository. A "dedicated" repository for Scala stuff is available at http://scala-tools.org/repo-releases/.

    Your first scala project with maven


    In the following, we will run Maven for the first time. Maven downloads what it needs from remote repositories and caches the downloaded artifacts in its local repository (by default $HOME/.m2/repository). It only downloads what it needs for the requested phases/goals (lazy downloading), so the first runs can be very long.

    Step 0: installation


    • install jdk 1.5+ (eg : on my box $HOME/bin/soft-linux/jdk-1.5.0_03)
    • install maven 2.0.8+ (eg : on my box $HOME/bin/soft-java/apache-maven-2.0.8)
      • download it
      • unarchive it
      • add the apache-maven-2.0.8/bin directory to your PATH
      • check that maven is in the path:
        • go into any directory outside the maven installation
        • run mvn help, you should see
          usage: mvn [options] [] []
          
          Options:
          -q,--quiet                    Quiet output - only show errors
          ...

    Step 1: create a project


    You could create a project skeleton with your favourite file system tools (following the directory layout below), or you could use archetypes. Maven archetypes are project skeletons that can be used to create new projects.
    mvn org.apache.maven.plugins:maven-archetype-plugin:1.0-alpha-7:create \
    -DarchetypeGroupId=org.scala-tools.archetypes \
    -DarchetypeArtifactId=scala-archetype-simple \
    -DarchetypeVersion=1.1 \
    -DremoteRepositories=http://scala-tools.org/repo-releases \
    -DgroupId=your.proj.gid -DartifactId=your-proj-id

    At the end of the process you should see something like
    ...
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 1 second
    [INFO] Finished at: Sat Jan 05 17:39:47 CET 2008
    [INFO] Final Memory: 6M/63M
    [INFO] ------------------------------------------------------------------------

    Success! You now have an empty project under the your-proj-id directory with the following directory layout:
    your-proj-id/
    |-- pom.xml
    `-- src
    |-- main
    |   `-- scala
    |       `-- your
    |           `-- proj
    |               `-- gid
    |                   `-- App.scala
    `-- test
        `-- scala
            `-- your
                `-- proj
                    `-- gid
                        `-- AppTest.scala

    In fact, the project is not empty: it contains a hello-world application (App.scala) and a JUnit test (AppTest.scala).
    In the next steps, you will request phases (or goals). The results are put under the your-proj-id/target directory, the working directory where every plugin puts the results of its computation. If you want to clean up, request the goal "clean":
    mvn clean

    Step 2: compile the project


    # only compile
    mvn compile

    If it's the first time you use Maven with Scala, the build should fail with a message like
    ...
    [ERROR] FATAL ERROR
    [INFO]
    ------------------------------------------------------------------------
    [INFO] The PluginDescriptor for the plugin Plugin [org.scala-tools:maven-scala-plugin] was not found.
    [INFO]
    ...


    Cause :

    the pom.xml (autogenerated) doesn't specify which version of maven-scala-plugin to use, so Maven tries to use the latest version available locally, and none was previously downloaded.

    Solutions :
    • edit the pom.xml and define a version for the plugin

    • request to download the latest available on remote repositories

    I prefer the second solution (in this case):
    # only compile
    mvn -U compile

    now you should see
    ...
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------
    ...

    Step 3: compile and running test


    The skeleton creates a JUnit test, AppTest.scala, as a sample; try to compile and run it:
    # compile + compile test + run test
    mvn test

    you should get :
    ...
    -------------------------------------------------------
    T E S T S
    -------------------------------------------------------
    Running your.proj.gid.AppTest
    Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.054 sec <<< FAILURE!
    
    Results :
    
    Failed tests:
    testKO(your.proj.gid.AppTest)
    
    Tests run: 2, Failures: 1, Errors: 0, Skipped: 0
    
    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] There are test failures.
    
    Please refer to /home/dwayne/tmp/your-proj-id/target/surefire-reports for the individual test results.

    BUILD FAILURE: that's not good! So read the log on the console:
    • there are 2 tests and one of them failed
    • the failed test is the method testKO from the class your.proj.gid.AppTest
    • see the content of the directory .../your-proj-id/target/surefire-reports for details

    So you could read the problem in .../your-proj-id/target/surefire-reports/your.proj.gid.AppTest.txt
    ...
    testKO(your.proj.gid.AppTest)  Time elapsed: 0.01 sec  <<< FAILURE!
    junit.framework.AssertionFailedError
        at junit.framework.Assert.fail(Assert.java:47)
        at junit.framework.Assert.assertTrue(Assert.java:20)
        at junit.framework.Assert.assertTrue(Assert.java:27)
        at your.proj.gid.AppTest.testKO(AppTest.scala:26)
        at your.proj.gid.AppTest.testKO(AppTest.scala:26)
    ...

    So edit the test and fix it (it's easy), and rerun the tests until they pass.
    Why is the empty project created with a failing test? To check that tests are running and are actually used.

    Step 4: generate the jar


    # compile + run test + generate the jar
    mvn package

    If you fixed the test in Step 3, then a jar should be generated under the target directory. The jar doesn't contain the test classes, only the classes from src/main/scala/...

    Step 5: start coding


    • add Scala files under src/main/scala/... or src/test/scala/...
    • run the phases or goals you wish, ...
    • if you need more libs (dependencies), edit the pom.xml and add a <dependency> node. By default you can declare dependencies available on the central repo (I suggest using mvnrepository as a search engine for the central repo) or on http://scala-tools.org/repo-releases/ (browse the directory; no search engine is available).




  9. The SCP protocol is a network protocol, based on the BSD RCP protocol,[1] which supports file transfers between hosts on a network. SCP uses Secure Shell (SSH) for data transfer and uses the same mechanisms for authentication, thereby ensuring the authenticity and confidentiality of the data in transit. A client can send (upload) files to a server, optionally including their basic attributes (permissions, timestamps). Clients can also request files or directories from a server (download). SCP runs over TCP port 22 by default. Like RCP, there is no RFC that defines the specifics of the protocol.

    How it works
    Normally, a client initiates an SSH connection to the remote host, and requests an SCP process to be started on the remote server. The remote SCP process can operate in one of two modes: source mode, which reads files (usually from disk) and sends them back to the client, or sink mode, which accepts the files sent by the client and writes them (usually to disk) on the remote host. For most SCP clients, source mode is generally triggered with the -f flag (from), while sink mode is triggered with -t (to).[2] These flags are used internally and are not documented outside the SCP source code.

    Here is sample code I wrote to upload a file to a remote server via SCP; I did it using the JSch library.


    package com.scp;


    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.Properties;

    import com.jcraft.jsch.Channel;
    import com.jcraft.jsch.ChannelExec;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    public class SCPTest {
        public static void main(String[] arg) {
            System.out.println("++++++SCP START");

            FileInputStream fis = null;
            try {
                String lfile = "/home/faisal/workspace/mavinmanualy.txt";
                String user = "username";
                String host = "54.245.239.104";
                String rfile = "TheRemoteTwo.txt";

                JSch jsch = new JSch();
                Session session = jsch.getSession(user, host, 22);
                session.setPassword("password");

                Properties config = new Properties();
                config.put("StrictHostKeyChecking", "no");
                session.setConfig(config);
                session.connect();

                boolean ptimestamp = true;

                // exec 'scp -t rfile' remotely
                String command = "scp " + (ptimestamp ? "-p" : "") + " -t " + rfile;
                Channel channel = session.openChannel("exec");
                ((ChannelExec) channel).setCommand(command);

                // get I/O streams for remote scp
                OutputStream out = channel.getOutputStream();
                InputStream in = channel.getInputStream();

                channel.connect();

                if (checkAck(in) != 0) {
                    System.exit(0);
                }

                File _lfile = new File(lfile);

                if (ptimestamp) {
                    command = "T " + (_lfile.lastModified() / 1000) + " 0";
                    // The access time should be sent here,
                    // but it is not accessible with JavaAPI ;-<
                    command += (" " + (_lfile.lastModified() / 1000) + " 0\n");
                    out.write(command.getBytes());
                    out.flush();
                    if (checkAck(in) != 0) {
                        System.exit(0);
                    }
                }

                // send "C0644 filesize filename", where filename should not include '/'
                long filesize = _lfile.length();
                command = "C0644 " + filesize + " ";
                if (lfile.lastIndexOf('/') > 0) {
                    command += lfile.substring(lfile.lastIndexOf('/') + 1);
                } else {
                    command += lfile;
                }
                command += "\n";
                out.write(command.getBytes());
                out.flush();
                if (checkAck(in) != 0) {
                    System.exit(0);
                }

                // send the content of lfile
                fis = new FileInputStream(lfile);
                byte[] buf = new byte[1024];
                while (true) {
                    int len = fis.read(buf, 0, buf.length);
                    if (len <= 0) break;
                    out.write(buf, 0, len); //out.flush();
                }
                fis.close();
                fis = null;
                // send '\0'
                buf[0] = 0;
                out.write(buf, 0, 1);
                out.flush();
                if (checkAck(in) != 0) {
                    System.exit(0);
                }
                out.close();

                channel.disconnect();
                session.disconnect();
                System.out.println("++++++SCP END");
                System.exit(0);
            } catch (Exception e) {
                System.out.println(e);
                try { if (fis != null) fis.close(); } catch (Exception ee) { }
            }
        }

        static int checkAck(InputStream in) throws IOException {
            int b = in.read();
            // b may be 0 for success,
            //          1 for error,
            //          2 for fatal error,
            //          -1
            if (b == 0) return b;
            if (b == -1) return b;

            if (b == 1 || b == 2) {
                StringBuffer sb = new StringBuffer();
                int c;
                do {
                    c = in.read();
                    sb.append((char) c);
                } while (c != '\n');
                if (b == 1) { // error
                    System.out.print(sb.toString());
                }
                if (b == 2) { // fatal error
                    System.out.print(sb.toString());
                }
            }
            return b;
        }

    }




  10. The other day I was configuring an Ubuntu system as my work machine. First of all I tried to install Eclipse, an IDE being the most basic requirement on a developer's machine. In Ubuntu 12.04 Eclipse Indigo is available by default, but I wanted Eclipse Kepler. I downloaded Eclipse from their site, extracted it in some folder, and tried to run it by double-clicking the eclipse binary in its folder: nothing happened. It was my first encounter with Ubuntu, so I thought maybe I had extracted it into the wrong folder; I moved it into /opt and tried to run it again, but the result was the same.

    Then I started googling and found many tutorials on how to install Eclipse Kepler on Ubuntu 12.04. I repeated the process according to those tutorials, but nothing positive happened. I confirmed that my Java version, 1.6.x, was available by running java -version in a terminal; it was the version required by this release of Eclipse. So what to do? Then one thought came to mind: maybe the error was due to the fact that I had a 64-bit JDK while my Eclipse release was 32-bit. But to my knowledge Eclipse is developed in Java, and a 64-bit JVM should run a class file compiled on a 32-bit JVM.

    So there had to be some other reason. Then I tried to execute Eclipse from the terminal. The result was the same, but at least it printed some useful information. This was the output:

    JVM terminated. Exit code=13
    /usr/bin/java
    -Xms40m
    -Xmx384m
    -Dorg.eclipse.equinox.p2.reconciler.dropins.directory=/usr/share/eclipse/dropins
    -XX:MaxPermSize=256m
    -jar /usr/lib/eclipse//plugins/org.eclipse.equinox.launcher_1.2.0.dist.jar
    -os linux
    -ws gtk
    -arch x86_64
    -showsplash
    -launcher /usr/lib/eclipse/eclipse
    -name Eclipse
    --launcher.library /usr/lib/eclipse//plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.100.dist/eclipse_1407.so
    -startup /usr/lib/eclipse//plugins/org.eclipse.equinox.launcher_1.2.0.dist.jar
    --launcher.overrideVmargs
    -exitdata a8004
    -vm /usr/bin/java
    -vmargs
    -Xms40m
    -Xmx384m
    -Dorg.eclipse.equinox.p2.reconciler.dropins.directory=/usr/share/eclipse/dropins
    -XX:MaxPermSize=256m




    One thing in this output was meaningful: JVM exit code 13. I searched for JVM exit code 13 and couldn't find a lot of detail, but I did find one thing: it is thrown when you try to run a 32-bit Java application that involves native code on a 64-bit JVM. I downloaded the 64-bit Eclipse release and ran it: success :). So it means Eclipse is not pure Java; it uses some native code as well (its SWT user interface, for instance).



