Today let’s learn an introduction to the Java Virtual Machine and its architecture.
Introduction to the Java Virtual Machine and its architecture
When we write a Java program, we use the Java compiler to compile it and get a .class (dot class) file, i.e. bytecode, as output.
The JVM’s responsibility is to convert that bytecode into machine instructions that can be executed wherever the application is running.
Also read – java features
Now our application consists of .class (dot class) files, the intermediate output generated by the Java compiler.
These are loaded into the first major component of the JVM, which is the class loader subsystem.
Also read – class and objects in java
The class loader subsystem is responsible for loading bytecode and treating the bytecode as instructions.
A class must be loaded before the JVM can perform operations involving the other classes associated with it.
So, for example, collection classes, system classes and all the various other classes provided with the JVM are taken care of by the class loader subsystem.
Also read – java overview
By using the class loader subsystem the class files are loaded. But there are certain internal processes in the class loader subsystem, which are loading, linking and initialization.
Loading is the phase where class files are loaded. Loading is basically done by three loaders: the bootstrap loader, the extension class loader and the application class loader. Let’s understand them one by one.
The bootstrap loader is responsible for loading the internal Java class files packaged in a file called rt.jar. You will often encounter the rt.jar file in the Java directory; it consists of all the important classes and packages required by Java.
All primary packages and classes are available in the rt.jar file.
Extension class loader:
The extension class loader is responsible for loading the extension files needed by the JVM, that is, classes the JVM requires for further processing.
These live in the lib/ext directory. So lib/ext contains all the extension classes, which are loaded after the bootstrap loader. After the extension classes are loaded successfully, the application classpath is loaded.
The application classpath is specified with the -cp (minus cp) parameter, which can be passed explicitly.
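The delegation chain of these three loaders can be inspected from code. A minimal sketch, where the class name LoaderDemo is made up for illustration:

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // Core classes such as String are loaded by the bootstrap loader,
        // which is implemented natively, so getClassLoader() returns null.
        System.out.println(String.class.getClassLoader());   // null

        // Our own class is loaded by the application (system) class loader.
        ClassLoader app = LoaderDemo.class.getClassLoader();
        System.out.println(app);

        // Walking getParent() shows the delegation chain:
        // application -> extension (platform on Java 9+) -> bootstrap (null).
        System.out.println(app.getParent());
        System.out.println(app.getParent().getParent());     // null = bootstrap
    }
}
```

On Java 9 and later the extension loader was replaced by the platform loader, but the parent chain still ends with null, representing the bootstrap loader.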
Linking is the phase where most of the work is done. Linking involves three sub-processes: verifying, preparing and resolving.
The verify phase is where the Java bytecode is taken care of. It basically checks whether the bytecode complies with the JVM specification or not.
If there is a problem while verifying the Java bytecode, the JVM throws a java.lang.VerifyError during this phase.
In the prepare phase, all static (class-level) variables are initialized to the default value of their type.
Also read – variables in java
public static boolean bool = true;
In the prepare phase the variable “bool”, which is of boolean type, will be initialized to the default value of the boolean type, which is false, not the declared value true.
Also read – static keyword in java
That is because the prepare phase initializes variables with their default values, not their original (declared) values.
Now the resolve phase’s job is to load the other associated classes referenced from the main class. For example, symbolic references to other classes are resolved during the resolve phase.
Say you have a class called “Bike”. Now the class “Bike” can refer to another class, “Owner”. During the resolve phase the JVM will go and check whether there is a definition for “Owner” or not.
If there is a definition then there is no problem; it will continue resolving. But if it fails to find the other class “Owner”, it will throw an error such as NoClassDefFoundError (or a ClassNotFoundException when a class loader is asked for the class directly).
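The Bike/Owner scenario can be sketched in code. Note that Bike, Owner and com.example.MissingClass are illustrative names, not real library classes; the Class.forName lookup at the end is the most direct way to see a ClassNotFoundException for a class that cannot be found:

```java
// Illustrative classes from the text, not real library types.
class Owner { String name = "Sam"; }

class Bike {
    Owner owner = new Owner();   // reference to Owner is resolved during linking
}

public class ResolveDemo {
    public static void main(String[] args) {
        // Owner was found and linked, so this just works.
        System.out.println(new Bike().owner.name);

        // Asking a class loader for a class that does not exist
        // raises ClassNotFoundException at runtime.
        try {
            Class.forName("com.example.MissingClass");
        } catch (ClassNotFoundException e) {
            System.out.println("not found: " + e.getMessage());
        }
    }
}
```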
After the resolve phase, the third phase comes into play: initialization. Initialization is the phase where the static initializers run, and all the variables that received default values in the prepare phase get their actual values.
For example, the variable that the prepare phase set to false, the default value, is now initialized to its actual value.
As we discussed, after the prepare phase “public static boolean bool” holds false. Now in the initialization phase it will be set as
public static boolean bool = true;
which is the actual value. So that’s how the class loader subsystem works, in a much simplified view.
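The prepare-versus-initialization behaviour can be observed in a small sketch (InitDemo is a made-up name):

```java
public class InitDemo {
    // During the prepare phase, bool gets the default value false;
    // the assignment to true only runs during the initialization phase.
    public static boolean bool = true;

    // Static initializer blocks also run during initialization,
    // in the order they appear in the source file.
    static {
        System.out.println("static initializer runs, bool = " + bool);
    }

    public static void main(String[] args) {
        System.out.println(bool);   // true: initialization has completed
    }
}
```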
There is another component called the “runtime data area”, which comprises all the memory regions that are going to be utilized by the JVM itself.
The method area is a runtime data area whose responsibility is to hold class data, that is, the metadata corresponding to each loaded class.
All data that belongs to a class gets stored in the method area. Up to Java 7 this region was implemented as PermGen, with a default maximum size of 64 megabytes.
But this can be tuned up using the -XX:PermSize and -XX:MaxPermSize flags if the server loads thousands upon thousands of classes, or even millions.
Also read – method in java
Now there is a possibility that you will get a java.lang.OutOfMemoryError: PermGen space. This is due to memory utilization exceeding that size.
So you can tune it up using the -XX:PermSize and -XX:MaxPermSize flags. When we talk about Java 8, PermGen has been replaced by Metaspace.
What the Java 8 developers did is remove the need for any external developer input to tune the method area.
They introduced Metaspace, which is responsible for automatically allocating native memory and for expanding as well as shrinking on its own (an upper bound can still be set with -XX:MaxMetaspaceSize).
The heap is the most heavily used memory area in the JVM. The heap basically stores object data. For example,
Bike obj = new Bike();
Now the object created will live in heap memory only. All properties, characteristics and attributes of an object get stored in the heap. You can store arrays there too, since arrays are also objects.
Also read – string literal in java
So objects get stored in the heap. By default, the maximum heap size is one fourth of physical memory.
But this can also be tuned up using flags: -Xms for the initial (minimum) size and -Xmx for the maximum size. So this can be adjusted by passing command-line parameters.
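A quick way to see the heap limits described above is to query the Runtime API. A minimal sketch (HeapDemo is an illustrative name):

```java
public class HeapDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx (or the default, roughly 1/4 of physical RAM).
        System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");

        // Objects and arrays are allocated on the heap.
        int[] numbers = new int[1_000_000];   // roughly 4 MB of heap for the array data
        System.out.println("allocated " + numbers.length + " ints on the heap");
    }
}
```

Run it twice, once plainly and once as `java -Xmx512m HeapDemo`, to see the reported maximum change.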
The Java stack contains stack frames, one per method invocation. The task of the Java stack is to push and pop method frames in last-in, first-out (LIFO) order.
Now we often call methods one from another. So for example, thread one calls method one, and method one calls method two.
Here method one’s frame is pushed first. Then method two’s frame comes into the picture, and then method three’s.
Frames are popped as methods return, so on a normal return we don’t need to worry, because each frame is removed automatically.
Sometimes the Java stack gets stacked up again and again. What actually happens is that the program contains a recursive algorithm whose termination the developer has not handled.
So a stack frame gets added again and again, and it ends up in a java.lang.StackOverflowError.
Apart from that case you basically don’t need to worry, because frames are cleared automatically by the JVM itself.
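Unbounded recursion of the kind described above can be demonstrated directly (StackDemo is a made-up name):

```java
public class StackDemo {
    static int depth = 0;

    // Recursion with no base case: each call pushes a new stack frame
    // until the thread's stack is exhausted.
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("stack overflowed after " + depth + " frames");
        }
    }
}
```

The exact frame count varies with the stack size, which can itself be tuned per thread with the -Xss flag.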
PC registers are basically program counter registers, which point to the next instruction to be executed.
There is one PC register per thread. Suppose there are three threads, one, two and three: thread one’s register tracks the next instruction for thread one, thread two’s register tracks the next instruction for thread two, and so on. Each is basically a pointer to that thread’s next instruction.
Native Method Stack:
The native method stack works in parallel with the Java stack. In Java, native method stacks are operating-system dependent.
All native, operating-system-dependent libraries are used through this native method stack. For example, on Windows there are libraries in the “lib” folder called DLLs.
There are a lot of DLLs available if you are using Windows. A DLL holds code corresponding to an operating-system dependency.
If you are using Windows, these are .dll (dot dll) files. If you are using Linux or UNIX, you will instead find .so (shared object) files. These all belong with the native method stack.
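You don’t need to write your own DLL to see native methods at work: several core JDK methods are themselves declared with the `native` keyword and execute on the native method stack. A small sketch (NativeDemo is an illustrative name):

```java
public class NativeDemo {
    public static void main(String[] args) {
        // System.currentTimeMillis() is declared `native` in java.lang.System;
        // its body lives in the JVM's platform-specific C code.
        long now = System.currentTimeMillis();
        System.out.println("millis from a native method: " + now);

        // System.arraycopy() is another native method: a bulk memory copy
        // implemented inside the JVM rather than in bytecode.
        int[] src = {1, 2, 3};
        int[] dst = new int[3];
        System.arraycopy(src, 0, dst, 0, 3);
        System.out.println(dst[0] + "," + dst[1] + "," + dst[2]);   // 1,2,3
    }
}
```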
The execution engine is responsible for executing the bytecode instructions. It basically comprises several subsystems: the interpreter, the garbage collector, the just-in-time (JIT) compiler and the HotSpot profiler.
The interpreter interprets the bytecode instructions one by one. It checks whether each instruction is valid for the execution engine, calls out through the Java Native Interface where native code is needed, and carries out the execution.
Now here comes the picture of the just-in-time compiler. Whenever the execution engine encounters the same kind of instructions being executed again and again, it compiles that piece of code.
It compiles the part of the code that is repeated over and over, so that it performs better later. For example, if it encounters XYZ XYZ XYZ multiple times, what the JIT does is compile XYZ to native machine code. In the next instruction, if XYZ is encountered, the JVM skips re-interpreting it, thus leading to a performance improvement.
The HotSpot profiler keeps an eye on the running bytecode and helps the JIT compiler by collecting statistics and providing the statistical results it uses to decide what to compile.
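A sketch of a “hot” method that the JIT would typically compile. JitDemo and square are made-up names; the 10,000-invocation threshold mentioned in the comment is HotSpot’s default for the server compiler, an assumption that may differ on other JVMs:

```java
public class JitDemo {
    // A small "hot" method: after enough invocations (by default around
    // 10,000 for HotSpot's server compiler), the JIT compiles it to
    // machine code instead of interpreting it each time.
    static long square(long x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);   // repeated call -> candidate for JIT compilation
        }
        System.out.println(sum);
    }
}
```

Run it with `java -XX:+PrintCompilation JitDemo` to watch HotSpot log which methods it compiles as they become hot.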