Please follow the instructions for your operating system to launch the profiler UI:
Windows: launch YourKit Java Profiler from the Start menu.
macOS: install the profiler application from the DMG image and run it. Alternatively, you can run the profiler from the terminal with the command
<Profiler Installation Directory>/Contents/Resources/bin/profiler.sh
Other platforms: unpack the profiler archive and run
<Profiler Installation Directory>/bin/profiler.sh
The profiler UI requires Java 8 to run. Please ensure that the appropriate Java version is installed.
The profiled application and the profiler can run on the same machine or on different machines.
These modes are called local profiling and remote profiling respectively.
To profile a Java application, be it local or remote, the profiler agent must be loaded into the JVM.
This approach is recommended as it provides the full set of profiling capabilities. To learn how to apply it, see the appropriate subtopic:
Attaching the profiler agent to a running JVM instance simplifies profiling, as it avoids a special step to enable profiling: any running Java application can be profiled on demand.
However, attaching to a running JVM is not always possible, and some profiling features are not available.
Read more about attach.
Windows, 32-bit Java (x86) and 64-bit Java (x86-64):
Linux, 32-bit Java and 64-bit Java:
Local profiling is the case when the profiled application and the profiler UI run on the same machine, usually the developer workstation.
See also Remote profiling.
To perform local profiling, you need to do two things:
To profile a Java application, the profiler agent should be loaded into the JVM.
There are different approaches depending on the application type:
Use one of the following:
Use one of the following:
Java EE server integration wizard: use if the application server runs standalone, i.e. you start it with a script or it runs as a service.
The wizard automatically enables profiling in a number of popular application servers, generating appropriate startup scripts. Even if your server is not in the list of known servers, the wizard offers the "Generic server" option which will instruct you on how to manually perform the necessary changes.
The wizard can be started from the profiler Welcome screen, or by using Tools | Profile Local Java EE Server or Application....
Use one of the following:
Start the Java Web Start application with the profiler agent by setting the environment variable JAVAWS_VM_ARGS:
JAVAWS_VM_ARGS=-agentpath:<agent library path>
If necessary, specify other VM options too:
JAVAWS_VM_ARGS=-agentpath:<agent library path> <other VM options>
Please learn how to specify -agentpath:<agent library path> in Enabling profiling manually.
Hint: on Windows, you can set the JAVAWS_VM_ARGS variable globally in "My Computer" settings.
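For illustration, the setting can be sketched in a shell. The agent path below is a hypothetical example; substitute the library for your platform (see Enabling profiling manually):

```shell
# Hypothetical agent library path -- substitute your platform's library.
AGENT="/opt/yourkit/bin/linux-x86-64/libyjpagent.so"

# Make Java Web Start pass the option to the launched JVM:
export JAVAWS_VM_ARGS="-agentpath:${AGENT}"

# Other VM options may follow, separated by spaces:
export JAVAWS_VM_ARGS="-agentpath:${AGENT} -Xmx512m"

echo "$JAVAWS_VM_ARGS"
```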
Use one of the following:
To profile an applet started with the appletviewer command, pass -J-agentpath:<agent library path> (see Enabling profiling manually) as a command line parameter.
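A minimal sketch of such an invocation; the agent path and applet page are hypothetical:

```shell
# Hypothetical paths -- substitute your agent library and applet page.
AGENT="/opt/yourkit/bin/linux-x86-64/libyjpagent.so"

# The -J prefix forwards the option to the JVM that runs the applet viewer.
CMD="appletviewer -J-agentpath:${AGENT} MyApplet.html"
echo "$CMD"
```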
When the local application is running, connect to it from the profiler UI to perform profiling.
Remote profiling is the case when the profiled application and the profiler UI run on different machines, usually a server and your developer machine respectively.
See also Local profiling.
To profile a remote application, you have to do two things:
There are three options to load the profiler agent into the JVM:
Use remote profiling from the user interface if you have SSH access to the remote machine. The profiler will transfer all necessary files to the remote machine and attach the agent to a running JVM.
Start the JVM with the profiler agent by applying the console Java EE server integration wizard on the remote machine, or enable profiling manually. We recommend starting the JVM with the agent, because attaching the agent to a running JVM has limited profiling functionality and is not always possible.
Run the console attach wizard on the remote machine.
When the remote application is running, connect to it from the profiler UI to perform profiling.
If connection fails, please refer to the troubleshooting instructions.
To profile Java applications running on a remote machine, use the Profile remote Java EE server or application... action on the Welcome screen or in the "Tools" menu.
Specify the host name or IP address of the remote machine in the "Profile Remote Application" dialog that opens.
There are two methods for detecting applications running on the remote machine:
Simple method. The profiler discovers applications running with the profiler agent. Ports in the given range are scanned. By default, the profiler agent port is allocated in the range 10001-10010. If the profiler port was changed with the startup option 'port', change the Profiler agent port(s) field accordingly.
Advanced method. The profiler creates an SSH connection to the remote machine and detects all running Java applications, including those running without the profiler agent; you will be able to attach to them. Specify the SSH user and SSH port in the corresponding fields. Authentication with a password or a private key is supported.
If the remote host is not directly reachable, you can build an SSH tunnel. To do so, navigate to the SSH Tunnel tab and enable it with the Use SSH tunnel checkbox.
The created connection will appear in the "Monitor Applications" list on the Welcome screen under the given name. You can then connect to the application to perform profiling.
If you do not see your application in the list, please read Troubleshoot connection problems.
For a usage example see Profiling in Amazon EC2 instance.
Please note that for SSH connections known hosts are not checked: the StrictHostKeyChecking SSH parameter is set to no.
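For illustration, a manually built tunnel equivalent to what the profiler sets up could look like the sketch below; the host names are assumptions, and 10001 is the default agent port:

```shell
# Forward local port 10001 to the agent port on the target host via a
# reachable gateway (host names are hypothetical).
CMD="ssh -o StrictHostKeyChecking=no -L 10001:target-host:10001 user@gateway-host"
echo "$CMD"
```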
This article describes how to use the console wizard to integrate the profiler with a Java EE server. The console wizard is useful if you want to profile a remote Java EE server or if the machine has no graphics environment.
See also Java EE high-level profiling.
The wizard enables profiling in a number of popular application servers, generating appropriate startup scripts. Even if your server is not in the list of known servers, the wizard offers the "Other Java application" option which will instruct you on how to manually perform the necessary changes.
Install the profiler on the machine where your Java EE server is running. A license key is not required to use the wizard.
Run the command below:
<Profiler Installation Directory>\bin\integrate.bat
<Profiler Installation Directory>/Contents/Resources/bin/integrate.sh
<Directory with Unpacked Content>/bin/integrate.sh
Follow the instructions that will appear:
The integration wizard will generate output files (copies of configuration files, additional startup scripts) in the directories where original files are located. Please ensure that you run the command with sufficient access rights.
When the server is running with the profiler agent, connect to it from the profiler UI.
This article describes how to use the console wizard to attach the profiler agent to a running JVM. The console wizard is useful if you want to profile a remote Java EE server or if the machine has no graphics environment.
Note that attaching the profiler agent to a running JVM has limitations, which can be avoided by starting the application with the profiler agent.
See also Attaching profiler agent to a running JVM.
Install the profiler on the machine where your Java application is running. A license key is not required to use the wizard.
Run the command below:
<Profiler Installation Directory>\bin\attach.bat
or
<Profiler Installation Directory>\bin\attach.bat <PID>
or
<Profiler Installation Directory>\bin\attach.bat <PID> <startup_options>
<Profiler Installation Directory>/Contents/Resources/bin/attach.sh
or
<Profiler Installation Directory>/Contents/Resources/bin/attach.sh <PID>
or
<Profiler Installation Directory>/Contents/Resources/bin/attach.sh <PID> <startup_options>
<directory with unpacked content>/bin/attach.sh
or
<directory with unpacked content>/bin/attach.sh <PID>
or
<directory with unpacked content>/bin/attach.sh <PID> <startup_options>
<PID> is an optional parameter. Specify it to immediately attach the agent to a particular application. If <PID> is not specified, the wizard will show the list of running JVMs and offer to choose which JVM to connect to.
<startup_options> is an optional parameter to specify additional startup options. If the <PID> parameter is not specified, the wizard will offer to specify startup options.
Follow the instructions that will appear:
After the profiler agent is attached, its port will be printed out. Use this port number to connect to the application and start profiling.
The startup options allow you to customize some aspects of profiling. They are specified when you start the profiled application; in most cases you do not have to specify any of them, because the default behavior suits fine. If you specify more than one option, separate the options with commas.
Main options
These options can be switched on startup only, i.e. the corresponding behavior cannot be altered during runtime. |
|
port=<port> or port=<min port>-<max port> |
Specify the port that the profiler agent listens on for communication with the profiler. By default, the port is chosen automatically: if port 10001 is free, it is used; otherwise, if port 10002 is free, it is used, and so on; if no port in the range 10001..10010 is free, an arbitrary free port is used. If a port range is specified, the profiler will choose the first free port in the range. |
listen=<option> |
Specify the profiler agent connectivity option. |
delay=<milliseconds> |
Postpone the start of telemetry collection. This option is mostly intended to prevent startup issues of some Java EE servers. By default, lightweight telemetry is collected right from the start of the profiled application. The telemetry is collected via so-called platform MBeans ("managed beans"), the components for monitoring and managing the JVM. Some Java EE servers install their own implementations of the standard MBeans. In the earliest stages of server startup these MBeans may not be functional because they depend on other components of the server (such as custom logging) which have not yet initialized. Accessing such MBeans in these early stages can cause server startup failure (e.g. with ClassNotFoundException).
The Java EE integration wizard uses delay=10000 by default. If the 10 second delay is not enough in your particular case, try a bigger value. |
telemetrylimit=<hours> |
The telemetry information is remembered in a circular buffer in the profiler agent memory. This allows you to connect to a profiled application on demand and discover how the application behaved in the past. By default, the telemetry buffer is limited to store approximately 1 hour of recent telemetry data.
With the help of this option you can extend the buffer to store a longer telemetry history. Do not use unnecessarily long buffers: extending the telemetry buffer allocates additional memory in the profiled application's address space. That is especially important for 32-bit processes because their address space is limited. Also, the longer the buffer, the more time it takes to retrieve the telemetry information from the profiler agent when you connect to a long running profiled application. |
telemetryperiod=<milliseconds> |
Specify how often telemetry information is obtained. By default, the period is 1 second (1000 milliseconds). Note that setting a smaller period can add overhead. |
probetablelengthlimit=<rows> |
Probes: limit the number of rows to be stored by the profiler agent per table. Read more... |
deadthreadlimit=<threads> |
Specify the number of recently finished threads for which CPU sampling, tracing and monitor profiling results are kept (the default value is 50). Profiling results for finished threads beyond this limit are merged. |
onexit=memory |
Always capture a memory snapshot on profiled application exit. If this option is not specified, the memory snapshot will be captured on exit if object allocation recording is running at that moment. |
onexit=snapshot |
Always capture a performance snapshot on profiled application exit. If this option is not specified, the performance snapshot will be captured on exit if CPU sampling or tracing or monitor profiling is running at that moment. This option is automatically added when the profiled application is started from the IDE. |
dir=<directory for snapshots> |
Specify a custom snapshot directory for the particular profiled application. |
logdir=<directory> |
By default, the profiler agent log file is created in the user home directory. Use this option to create logs in a different directory.
For example, this can be useful when profiling applications running as a Windows service: such applications usually run under a special user, so the logs are located in that special user's home directory. Instead, have the logs created in an arbitrary, easily accessible directory. |
united_log |
Store logs from several runs of the same application as a series of log files named after the session name and a running number. This mode may simplify log maintenance and cleanup when profiling applications such as servers.
The session name is the presentable name of the application; see the startup option sessionname.
The running number starts with 1.
Note: the oldest log files are not automatically removed. If you need to clean them up, do it manually or write a script. |
sessionname=<name> |
Specify an alternate presentable name of the profiled application used in:
If this option is not specified, the session name is automatically chosen for the particular application based on its main jar file name, its main class name, the custom executable name, or the run configuration name when profiling from within an IDE. |
snapshot_name_format=<format> |
Specify an alternate rule to compose snapshot file names using macros. Characters not allowed in file names, if specified, will be replaced with '-'. |
sampling_settings_path=<file path> |
Specify a custom location of the CPU sampling settings configuration file. If this option is not specified, the settings are read from the default location. |
tracing_settings_path=<file path> |
Specify a custom location of the CPU tracing settings configuration file. If this option is not specified, the settings are read from the default location. |
Control which profiling modes are turned on right from the start
Note that you do not have to perform measuring right from the start. Instead, in many cases it's better to start or stop measuring at a later moment - from the UI or by using the Profiler API. |
|
sampling |
Immediately start CPU profiling in the CPU sampling mode. Note that you do not have to profile CPU right from the start; instead, in many cases it's better to start or stop measuring at a later moment - from the UI or by using the Profiler API. |
tracing |
Immediately start CPU profiling in the CPU tracing mode. Note that you do not have to profile CPU right from the start; instead, in many cases it's better to start or stop measuring at a later moment - from the UI or by using the Profiler API. This option cannot be used in combination with other CPU profiling start options. |
call_counting |
Immediately start CPU profiling in the call counting mode. Note that you do not have to profile CPU right from the start; instead, in many cases it's better to start or stop measuring at a later moment - from the UI or by using the Profiler API. This option cannot be used in combination with other CPU profiling start options. |
alloceach=<N> |
Immediately start object allocation recording, recording each N-th allocation. This option can be used in combination with allocsizelimit.
(Since 2016.02) It is also possible to record only those objects whose size exceeds the threshold set with allocsizelimit. Note that you do not have to record allocations right from the start; instead, you can start or stop recording later from the profiler UI or using the Profiler API. |
allocsizelimit=<size in bytes> |
Immediately start object allocation recording, recording the allocation of all objects with size bigger than or equal to the specified value.
This option can be used in combination with alloceach. Note that you do not have to record allocations right from the start; instead, you can start or stop recording later from the profiler UI or using the Profiler API. |
allocsampled |
Use sampled object allocation recording. This option influences only object allocation recording started on startup. |
alloc_object_counting |
Immediately start object allocation recording in the object counting mode.
This option is mutually exclusive with the alloceach and allocsizelimit options. Note that you do not have to record allocations right from the start; instead, you can start or stop recording later from the profiler UI or using the Profiler API. |
monitors |
Immediately start monitor profiling. Note that you do not have to profile monitor usage right from the start; instead, you can start or stop recording later from the profiler UI or using Profiler API. |
usedmem=<percent> |
Automatically capture a memory snapshot when used heap memory reaches the threshold. Note: this option just adds the corresponding trigger on startup. Use triggers directly for more sophisticated functionality. |
usedmemhprof=<percent> |
Automatically capture an HPROF snapshot when used heap memory reaches the threshold. Note: this option just adds the corresponding trigger on startup. Use triggers directly for more sophisticated functionality. |
periodicperf=<period in seconds> |
Periodically capture performance snapshots. Note: this option just adds the corresponding trigger on startup. Use triggers directly for more sophisticated functionality. |
periodicmem=<period in seconds> |
Periodically capture memory snapshots in the profiler's format (*.snapshot). Note: this option just adds the corresponding trigger on startup. Use triggers directly for more sophisticated functionality. |
periodichprof=<period in seconds> |
Periodically capture HPROF snapshots. Note: this option just adds the corresponding trigger on startup. Use triggers directly for more sophisticated functionality. |
disablestacktelemetry |
Do not collect thread stack and status information shown in Thread view as well as in other telemetry views. This information can be very useful because it allows you to connect to the profiled application on demand and discover how the application behaved in the past. In most cases, there is no significant overhead of collecting this information. However, it makes sense to disable it in production Java EE servers in order to ensure minimum profiling overhead. See also: Profiling overhead: how to reduce or avoid. |
exceptions=on |
Enable exception events in the JVM and immediately start recording the exception telemetry. This is the default mode on Sun/Oracle Java, OpenJDK, JRockit (i.e. non-IBM JVMs). See also: Profiling overhead: how to reduce or avoid. |
exceptions=off |
Enable exception events in the JVM but do not immediately start recording the exception telemetry; it can instead be recorded later at runtime. See also: Profiling overhead: how to reduce or avoid. |
exceptions=disable |
Fully disable exception events in the JVM to totally eliminate corresponding overhead. The exception telemetry will not be available. This is the default mode on IBM JVMs because the overhead is significant. See also: Profiling overhead: how to reduce or avoid. |
disableoomedumper |
Disable snapshots on OutOfMemoryError. Note that enabling on-OutOfMemoryError snapshots adds absolutely no overhead. If an OutOfMemoryError happens, a memory snapshot is written to disk for further analysis. You may want to disable the snapshots in some very special situations, e.g. if you profile an application with a huge heap, for which capturing the snapshot may take significant resources (time and/or disk space), but do not plan to perform memory analysis. |
|
Specify which probes should be registered on startup. Read more... |
probeclasspath=<classpath> |
Specify where to find probe class(es) which are registered by class name. Read more... |
probebootclasspath=<classpath> |
Specify where to find probe class(es) which are registered by class name. Read more... |
triggers=<file path> |
Specify the file with the description of the triggers to be applied on startup.
If this option is not specified, the trigger description is read from the default file. By default, that file does not exist, thus no triggers are applied. |
Optimization and troubleshooting options
Reduce profiling overhead or troubleshoot stability issues by disabling some profiling capabilities. These options can be switched on startup only, i.e. the corresponding behavior cannot be altered during runtime. See also: Profiling overhead: how to reduce or avoid. |
|
disablealloc |
Do not instrument bytecode with instructions needed for object allocation recording. See also: Profiling overhead: how to reduce or avoid. |
disabletracing |
Do not instrument bytecode with instructions needed for CPU tracing. Only CPU sampling will be available. See also: Profiling overhead: how to reduce or avoid. |
disablenatives |
Do not wrap native methods for bytecode instrumentation. When this option is specified, native methods will not be shown in CPU tracing results and in events recorded with probes if they depend on native method invocations. Try this option as a workaround if you experience crashes of an IBM JVM. |
disableall |
Disable several capabilities at once:
|
Miscellaneous |
|
help |
Print brief description of startup options. |
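To illustrate how several of the startup options above combine on one command line, here is a sketch; the agent path and application are hypothetical:

```shell
# Hypothetical agent library path -- substitute your platform's library.
AGENT="/opt/yourkit/bin/linux-x86-64/libyjpagent.so"

# Comma-separated startup options: pick the first free port in a range,
# postpone telemetry collection by 10 s, record each 10th allocation.
OPTS="port=10001-10010,delay=10000,alloceach=10"

CMD="java -agentpath:${AGENT}=${OPTS} -jar server.jar"
echo "$CMD"
```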
The attach technique allows loading the profiler agent into a running JVM. An attached agent has several functional limitations compared with an agent loaded on JVM startup. Attaching simplifies configuration by avoiding special steps to enable profiling, and makes profiling truly on-demand.
All detected running Java processes are shown in the "Monitor Applications" list on the Welcome screen. The colored circle indicates the profiler agent status: a green circle means the agent is already loaded; an orange circle shows that the agent is not yet loaded but can be attached.
Attach to the application you want to profile by clicking its name, or attach from the context menu to provide custom agent options.
Attaching is possible to both local and remote applications.
Local applications are shown under the local machine node. Applications running on remote machines are shown under their own nodes.
To add a new remote machine, click the corresponding button. See remote profiling from user interface to learn more.
If you do not see your application in the list, please read Troubleshoot connection problems. You can use a web browser to check the profiler agent status.
Run the console attach wizard and then connect from the Welcome screen.
Unfortunately, the attach mode is not an ultimate solution: existing JVMs provide only a limited set of profiling capabilities for attached agents. To get all profiling capabilities, you still have to start the profiled application with the profiler agent instead.
The attach mode is only supported for Sun/Oracle Java, OpenJDK and JRockit.
Existing IBM VMs do not provide the necessary capabilities to let YourKit Java Profiler function in the attach mode. To profile on IBM Java, please start the application with the profiler agent. If the necessary capabilities are added in future versions of IBM Java, the profiler will support the attach mode for IBM Java too.
Attach may fail due to insufficient access rights. For example, it may not be possible to attach to a Java EE server running as a Windows service. If an attempt to attach fails, start the application with the profiler agent instead.
Client JVM can crash in attach mode due to a JVM bug
Due to JVM bug 6776659, the HotSpot client JVM can crash in attach mode. The server JVM is not affected: the -server JVM option solves the problem.
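A minimal sketch of the workaround, assuming a hypothetical application jar:

```shell
# Explicitly select the server VM to avoid the client-VM crash in attach
# mode (app.jar is a hypothetical application).
CMD="java -server -jar app.jar"
echo "$CMD"
```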
A long pause is possible on the first attempt to start CPU tracing or object allocation recording, because classes loaded before the profiler agent attached need to be instrumented. Depending on the application, this can take from several seconds to several tens of seconds, or even a few minutes in the worst cases.
The good news is that there will be no pause on subsequent starts of CPU tracing or object allocation recording for the same running JVM instance.
No profiling results are available for some methods; their calls will be missing in:
The profiler may add some overhead to the performance of applications you profile. This overhead may vary from virtually zero to significant, depending on the conditions described below.
To enable such features as object allocation recording and CPU tracing, the profiler inserts supporting code into the bytecode of the profiled application by means of bytecode instrumentation. When object allocation recording and CPU tracing are not being performed, this inserted code is in an inactive state but still adds a small overhead to the performance of instrumented methods (1-5%, depending on the application). The process of bytecode instrumentation itself also requires some fixed time that depends on the number of loaded classes and their methods.
In most cases, such overhead is more than acceptable.
For cases when maximum performance is needed, e.g. if profiling in production, this overhead can be totally eliminated by avoiding bytecode instrumentation. The price you pay is that some features are disabled. But even when they are disabled, you can still capture memory snapshots and perform CPU sampling, which is enough in many cases (see Solving performance problems).
You can disable bytecode instrumentation by specifying "disabletracing", "disablealloc" and "probe_disable=*" startup options.
Since the greatest share of the overhead described above is caused by instrumentation needed for tracing, as a compromise you can disable this feature alone, keeping the ability to record object allocations on demand.
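For example, a minimal-overhead command line combining these options might look like the sketch below; the agent path and application are hypothetical:

```shell
# Hypothetical agent library path -- substitute your platform's library.
AGENT="/opt/yourkit/bin/linux-x86-64/libyjpagent.so"

# Avoid bytecode instrumentation entirely: no CPU tracing, no allocation
# recording, no probes. CPU sampling and memory snapshots remain available.
OPTS="disabletracing,disablealloc,probe_disable=*"

CMD="java -agentpath:${AGENT}=${OPTS} -jar server.jar"
echo "$CMD"
```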
There is another, almost negligible, issue: if JVM loads an agent that is capable of profiling heap memory, class data sharing is disabled. This may slightly increase startup time, i.e. the time the JVM needs to load its core classes.
When CPU profiling and/or object allocation recording are performed, the profiler adds extra overhead. After measuring is done and turned off, overhead should decrease to the level described above in "Overhead of running an application with the profiler".
During the capture, the profiled application is paused. The time it takes to capture a memory snapshot depends on the heap size. Capturing memory snapshots of huge heaps takes more time because of the intensive use of the system swap file (if little free physical memory is available).
Thread stack and status information is shown in Thread view as well as in other telemetry views. This information can be very useful because it allows you to connect to the profiled application on demand and discover how the application behaved in the past. In most cases, there is no significant overhead of collecting this information.
However, it makes sense to disable it in production Java EE servers in order to ensure minimum profiling overhead. This can be done with the help of "disablestacktelemetry" startup option.
Exception telemetry helps discover performance issues and logic errors. In most cases, there is no significant overhead in collecting this information.
However, it makes sense to disable it in production Java EE servers in order to ensure minimum profiling overhead. This can be done with the "exceptions=disable" startup option.
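Both telemetry kinds can be switched off together for production use. The sketch below assumes a hypothetical agent path and application:

```shell
# Hypothetical agent library path -- substitute your platform's library.
AGENT="/opt/yourkit/bin/linux-x86-64/libyjpagent.so"

# Disable thread stack/status telemetry and exception events to minimize
# profiling overhead on a production server.
OPTS="disablestacktelemetry,exceptions=disable"

CMD="java -agentpath:${AGENT}=${OPTS} -jar server.jar"
echo "$CMD"
```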
Most likely, you will not need to configure profiling manually. Please first consider the automated ways to enable profiling in your local or remote application.
Add the -agentpath:<full agent library path> VM option to the command line of the Java application to be profiled.
The agent library path depends on your OS:
Platform | VM option | |
Windows | 32-bit Java | -agentpath:<profiler directory>\bin\win32\yjpagent.dll |
64-bit Java | -agentpath:<profiler directory>\bin\win64\yjpagent.dll |
|
macOS | -agentpath:<profiler directory>/bin/mac/libyjpagent.jnilib |
|
Linux | x86, 32-bit Java | -agentpath:<profiler directory>/bin/linux-x86-32/libyjpagent.so |
x86, 64-bit Java | -agentpath:<profiler directory>/bin/linux-x86-64/libyjpagent.so |
|
ARM 32-bit: ARMv7 and higher, hard-float | -agentpath:<profiler directory>/bin/linux-armv7-hf/libyjpagent.so |
|
ARM 32-bit: ARMv5 and higher, soft-float | -agentpath:<profiler directory>/bin/linux-armv5-sf/libyjpagent.so |
|
ARM 64-bit (AArch64) | -agentpath:<profiler directory>/bin/linux-aarch64/libyjpagent.so |
|
ppc, 32-bit Java | -agentpath:<profiler directory>/bin/linux-ppc-32/libyjpagent.so |
|
ppc64, 64-bit Java, big-endian | -agentpath:<profiler directory>/bin/linux-ppc-64/libyjpagent.so |
|
ppc64le, 64-bit Java, little-endian | -agentpath:<profiler directory>/bin/linux-ppc-64le/libyjpagent.so |
|
Solaris | SPARC, 32-bit Java | -agentpath:<profiler directory>/bin/solaris-sparc-32/libyjpagent.so |
SPARC, 64-bit Java | -agentpath:<profiler directory>/bin/solaris-sparc-64/libyjpagent.so |
|
x86, 32-bit Java | -agentpath:<profiler directory>/bin/solaris-x86-32/libyjpagent.so |
|
x86, 64-bit Java | -agentpath:<profiler directory>/bin/solaris-x86-64/libyjpagent.so |
|
HP-UX | IA64, 32-bit Java | -agentpath:<profiler directory>/bin/hpux-ia64-32/libyjpagent.so |
IA64, 64-bit Java | -agentpath:<profiler directory>/bin/hpux-ia64-64/libyjpagent.so |
|
AIX | ppc, 32-bit Java | -agentpath:<profiler directory>/bin/aix-ppc-32/libyjpagent.so |
ppc64, 64-bit Java | -agentpath:<profiler directory>/bin/aix-ppc-64/libyjpagent.so |
|
FreeBSD | x86, 32-bit Java | -agentpath:<profiler directory>/bin/freebsd-x86-32/libyjpagent.so |
x86, 64-bit Java | -agentpath:<profiler directory>/bin/freebsd-x86-64/libyjpagent.so |
If you have copied the profiler agent library file from the profiler installation directory to another location, please change the path accordingly.
You can find examples of startup scripts for your platform in <profiler directory>/samples.
To check that Java can load the profiler agent, invoke the following command that prints a description of agent parameters:
java -agentpath:<full agent library path>=help
If the JVM reports an error, refer to the knowledge base article for troubleshooting.
You can specify additional startup options. In most cases there's no need to use them.
To profile a Java EE server (especially JBoss!), specify the startup option delay=10000.
The options are comma-separated: -agentpath:<full agent library path>[=<option>, ...]
java -agentpath:c:\yourkit\yjpagent.dll FooClass
java -agentpath:c:\yourkit\yjpagent.dll=alloceach=10,allocsizelimit=4096 FooClass
This is an advanced topic. It provides details that you do not normally have to know to profile your applications. Read the following if you want to learn more about profiling capabilities provided by different versions of Java.
Any profiler, in order to provide profiling results, communicates with the JVM by means of a special API. This API provides different services and determines the range of a profiler's capabilities.
Starting with Java 5, a new standardized API, JVMTI, was introduced. It replaced the JVMPI API used in previous Java versions.
JVMTI utilizes so-called bytecode instrumentation: to collect profiling information, profilers must modify the bytecode of the profiled application, inserting supporting bytecode instructions at certain points. This may cause some performance overhead.
Once the profiled application is running, connect to it to obtain and analyze profiling results.
All detected JVMs are shown in the "Monitor Applications" list on the Welcome screen. Connect to the application you want to profile by clicking its name.
The colored circle indicates the profiler agent status in the Java process: a green circle means the agent has been loaded and connection will happen immediately; an orange circle means the agent has not been loaded yet, and the profiler will automatically attach the agent before connecting.
If you launch the profiled application from an IDE, the profiler UI will automatically start and connect to the application, unless you turned this option off in the IDE plugin.
Local applications are shown under the local machine node.
Applications running on remote machines are shown under their own nodes.
To add a new remote machine, use the corresponding button. See remote profiling from user interface to learn more.
The profiler hides development tools, such as IDEs, from the list by default. This can be configured by clicking the filter icon.
If you do not see your application in the list, please read Troubleshoot connection problems. You can use a web browser to check the profiler agent status.
After connecting to the profiled application, you can control profiling and review application telemetry.
Toolbar buttons:
- Capture performance snapshot: save the profiling results to a file for comprehensive analysis
- Control CPU profiling
- Control thread telemetry
- Capture memory snapshot
- Control object allocation recording
- Advance object generation
- Force garbage collection in the profiled application
- Triggers: configure actions automatically performed on events
- Control monitor profiling
- Control exception telemetry
- Clear all telemetry charts
- "Pause": stop/start receiving data from the profiled application; "Refresh": immediately receive profiling data from the profiled application and update the views
You can close the profiling session tab by using File | Close Profiling Session.
You can connect to and disconnect from the profiled application as many times as you wish during its run time.
If the profiled Java application is missing in "Monitor Applications" list:
Ensure the application you want to profile is running. Restart the application if it has been shut down. If the application fails to start or terminates abnormally, check the application's output, logs, etc. for possible errors.
If you run your application with a custom Java launcher, always start it with the profiler agent. The profiler automatically detects java, java.exe and javaw.exe, but processes with different names might be missing.
If you run your application and the profiler UI on the same machine but under different users, either run both under the same user or start your application with the profiler agent. Java applications running under a different user might be missing from the list.
If you use remote profiling from the user interface with an SSH connection, log in with the same SSH user under which your application runs, or start your application with the profiler agent.
Start your application with profiler agent and connect to it when it is running.
Ensure that the startup option 'listen' is properly configured.
By default, the profiler binds the agent socket to localhost only.
This disables a direct remote connection to the agent via host:port, but connecting to the profiler agent is still possible via port forwarding, e.g. an SSH tunnel.
If you want to allow a direct connection to the remote application via host and port, specify the startup option listen=all.
If the application is up and running, ensure that the network connection is not blocked by a firewall, an antivirus etc.
Check both the remote machine side and the local machine side. The profiler agent port must be allowed.
Note: the profiler agent port is not one of the ports you may use to communicate with the profiled application or server, like HTTP 8080. Instead, it's a special dedicated port used solely by the profiler.
By default, the profiler agent port is allocated in the range 10001-10010, or a random port is used if all ports in this range are busy.
The port can also be specified explicitly with the startup option 'port'.
If the profiler agent port is not in the default range 10001-10010, explicitly specify the port in the connect dialog as host:port.
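The default-range lookup can be sketched in a few lines of plain Java: probe each port in 10001-10010 and report the first one that accepts a connection. This is an illustrative helper (class name invented), not part of the profiler:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch: find a listening profiler agent port by probing the default
// range 10001-10010 on a given host. Returns -1 if nothing accepts.
public class AgentPortProbe {
    public static int findAgentPort(String host, int from, int to) {
        for (int port = from; port <= to; port++) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 200);
                return port; // something is listening here
            } catch (IOException ignored) {
                // connection refused or timed out - try the next port
            }
        }
        return -1;
    }
}
```

Note that a successful TCP connection only proves that some process listens on the port; checking the agent status page in a browser, as described below in this topic, confirms it is actually the profiler agent.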
If you are unsure which port is used, look at the profiler agent log on the remote machine.
The profiler agent log file is <user home>/.yjp/log/<session name>-<PID>.log, where <user home> corresponds to the account under which the profiled application is started.
If you are uncertain about the log file path, you can find it in the profiler agent's output to stderr:
[YourKit Java Profiler <version>] Loaded. Log file: <full path to the log file>
Version 8.0.x and older agents didn't create a log file, hence the port was printed directly to stderr:
[YourKit Java Profiler <version>] Profiler agent is listening on port <port number>...
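The log file location described above can be computed for the current process with a small helper. This is an illustrative sketch (class name invented); the PID extraction via RuntimeMXBean.getName() is a HotSpot convention ("pid@hostname"), not a guaranteed part of the Java specification:

```java
import java.lang.management.ManagementFactory;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: compute the expected profiler agent log file location
// <user home>/.yjp/log/<session name>-<PID>.log for the current process.
public class AgentLogPath {
    public static Path logFile(String sessionName) {
        // On HotSpot, RuntimeMXBean.getName() returns "<pid>@<hostname>".
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
        return Paths.get(System.getProperty("user.home"), ".yjp", "log",
                sessionName + "-" + pid + ".log");
    }
}
```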
The profiler agent status can be checked by opening the URL http://localhost:<agent port> for a local profiled application, or http://<remote host>:<agent port> for a remote profiled application, in a web browser.
The remote application status can be checked this way only if a direct connection to the agent port has been allowed with the agent startup option listen.
The page shown resembles the content of the Summary tab in the profiler UI, and additionally provides details on the currently active profiling modes.
Memory-related issues can affect an application's execution speed and stability: for example, an OutOfMemoryError may be thrown, which can cause an application crash or further unstable operation.
Read more about memory-related problems:
Let us assume there is a task in your application that you want to profile. Please do the following:
Capture memory snapshot.
To identify the moment when to capture the snapshot, use Telemetry to see when and how the used memory grows.
Also, a snapshot can be captured automatically on high memory usage and/or on out of memory.
A memory leak is the existence of objects that are no longer needed according to the application logic, but which still retain memory and cannot be collected because they are referenced from other live objects, due to a bug in the application itself.
Obviously, each leaked object is accessible from at least one GC root or represents a GC root.
In other words, for each leaked object there is always a path that starts from GC root(s) and contains (ends with) the leaked object.
If you do not know yet what objects are leaked, YourKit Java Profiler can help you find out.
You can suspect the presence of memory leaks with the help of the memory telemetry, by watching how used memory grows. Use Force Garbage Collection to immediately see whether some of the objects that consume the memory can be collected, thus decreasing used memory. If after 2-3 explicit garbage collections the used memory remains at the same level or decreases insignificantly, you possibly have a leak ("possibly", because it can turn out to be not a leak, but simply high memory consumption; learn how to deal with that case).
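The underlying mechanics can be illustrated in plain Java: an object that is still strongly referenced survives garbage collection, while an unreferenced one is eventually collected. This toy sketch (class name invented) uses a WeakReference to observe collection; note that System.gc() is only a request, hence the retry loop:

```java
import java.lang.ref.WeakReference;

// Illustration: a strongly referenced ("leaked") object survives garbage
// collection, while an otherwise identical unreferenced object is collected.
public class GcDemo {
    static Object stronglyHeld = new byte[1024]; // simulates a leaked object

    public static boolean isCollectedAfterGc(WeakReference<?> ref)
            throws InterruptedException {
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc(); // a request, not a guarantee - hence the retry loop
            Thread.sleep(10);
        }
        return ref.get() == null;
    }
}
```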
Also consider the capture snapshot on high memory usage feature. Snapshots captured automatically by this feature may have been taken when memory usage reached the specified threshold because of memory leaks; in that case, they provide information sufficient to discover and fix the leaks.
To detect a memory leak, use the following scenario:
Using the generations feature helps a lot in finding memory leaks, but you can also effectively find memory leaks by analyzing a snapshot which does not contain object generation information, e.g. if you have not advanced the generation when appropriate, or if you have an HPROF snapshot. In this case use the objects view.
Often, potential leaks in a particular profiled application are well known in advance, i.e. you know a class of objects that has a "tendency" to leak while the application is being developed, changed or refactored. You can easily check for the presence of such objects with the help of Memory | Instances by Class... (Ctrl+N). In more complex cases, you can use the set description language to declare sets of potentially leaked objects.
Note that even a single class can have both legally retained objects and illegally retained (leaked) objects in memory.
To distinguish between them, use the objects explorer to see outgoing and/or incoming references. For example, an object may have a String field whose value identifies that object among other objects of the same type.
Select the leaked object(s) in a memory view, and use Memory | Paths from GC Roots... (Ctrl+P). See Working with paths for details.
Let us look at a simple example of how to use paths.
Assume that we are profiling an Editor application that can open, edit and close text files. There is a singleton object that acts as a manager of opened files, and the data of each opened file is represented with an instance of class Document.
During the profiling session we open several text files, edit them, close them and take a memory snapshot.
If everything is correct, there should be no instances of Document that cannot be collected. So, first of all, we use Memory | Instances by Class... (Ctrl+N), to see if there are leaked Documents.
Assume we have found such objects, so we do have a leak. There should be paths from GC roots to these objects (or, perhaps, some of them may be GC roots themselves; this will be indicated in the view).
Thus we search for the paths from GC roots to one of the Documents (Ctrl+P), or to all Documents defining the set.
If all the paths go through the manager singleton, the code responsible for closing files in our editor must have a bug.
If none of the paths contains the manager singleton, then the closing operation works correctly, but there are objects in the paths that erroneously hold references to Documents and cause the memory leak.
While browsing paths, use the navigation feature of the IDE integration.
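The editor scenario above can be sketched as a few lines of hypothetical Java. All names here (DocumentManager, Document, open/close) are invented for illustration; the point is that the singleton keeps a path from a GC root to every "closed" Document:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the editor scenario: a manager singleton that
// registers every opened Document and, due to a bug, never releases closed
// ones - so every closed Document stays reachable from a GC root.
public class DocumentManager {
    static class Document {
        final String name;
        Document(String name) { this.name = name; }
    }

    private static final DocumentManager INSTANCE = new DocumentManager();
    private final List<Document> open = new ArrayList<>();

    public static DocumentManager getInstance() { return INSTANCE; }

    public Document open(String name) {
        Document d = new Document(name);
        open.add(d);
        return d;
    }

    public void close(Document d) {
        // BUG: forgot open.remove(d) - the closed Document remains reachable
        // through the singleton; paths from GC roots will expose this.
    }

    public int retainedCount() { return open.size(); }
}
```

In the profiler, every path from GC roots to a closed Document would run through the static INSTANCE field, pointing straight at the buggy close() implementation.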
The so-called GC (Garbage Collector) roots are objects special for garbage collector. Garbage collector collects those objects that are not GC roots and are not accessible by references from GC roots.
There are several kinds of GC roots; one object can belong to more than one kind. For example, an instance of java.lang.Class may also happen to be a root of other kinds.
If an object is a root, it is specially marked in all views showing individual objects. For example, the following picture shows a fragment of paths view:
Purpose: Reduce time that garbage collector spends on collecting temporary objects.
If garbage collection takes a significant amount of time, it is advised to profile memory allocation to pin-point and optimize the problematic code.
Let us assume there is a task in your application that you want to profile.
Optionally, the profiled application can be launched with object allocation recording started with the help of corresponding startup options. Memory snapshot with recorded allocation information can be captured automatically on profiled application exit and/or on high memory usage. Read more in the Startup options section.
A memory snapshot is captured automatically on first OutOfMemoryError, if the profiled application runs on Sun/Oracle/OpenJDK Java 6 or newer, or on JRockit R28.0.0 or newer.
On OutOfMemoryError, snapshots are captured via the JVM's built-in dumper, which is disabled by default. The JVM option -XX:+HeapDumpOnOutOfMemoryError enables it.
However, you do not need to specify this option when profiling applications because the profiler agent programmatically enables the dumper upon profiled application startup or when the agent is attached to a running application.
Enabling the dumper adds absolutely no overhead. Technically, enabling is simply setting the state of a boolean flag. When the first OutOfMemoryError occurs, the JVM dumps the heap to file if the flag is "true".
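This "boolean flag" is a manageable HotSpot VM option, so it can be toggled at runtime via the standard diagnostic MXBean. The sketch below (class name invented) assumes a HotSpot-based JVM, where com.sun.management.HotSpotDiagnosticMXBean is available:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Sketch (HotSpot-specific): the OOME dumper flag is just a writable VM
// option, which is how an agent can enable it programmatically at runtime.
public class OomeDumperFlag {
    public static String set(boolean enabled) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        bean.setVMOption("HeapDumpOnOutOfMemoryError", Boolean.toString(enabled));
        return bean.getVMOption("HeapDumpOnOutOfMemoryError").getValue();
    }
}
```

Flipping this flag really is just a state change, which is why enabling the dumper adds no overhead until an OutOfMemoryError actually occurs.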
If for some reason you do not want the dumper enabled, specify the startup option disableoomedumper.
To check the status, connect to the profiled application and hover over the corresponding button, as shown in the picture below:
The profiler shows the following notification when a snapshot is captured on OutOfMemoryError.
Please note that this approach has several benefits over the capturing snapshot on high memory usage feature, because it uses the JVM's internal lightweight dumping algorithm. This algorithm is specially designed to work in low memory conditions, where the JVM general purpose profiling interface JVMTI used by profilers may fail due to low resources.
See also Support of HPROF format snapshots.
Get CPU usage overview with the help of the CPU telemetry.
For comprehensive analysis, record CPU information with the help of sampling or tracing.
CPU tab shows live CPU consumption statistics under CPU usage telemetry.
It is available when you are connected to the profiled application, as well as in snapshots.
The telemetry information is remembered in a circular buffer in the profiler agent memory. This allows you to connect to a profiled application on demand and discover how the application behaved in the past.
The buffer capacity is 1 hour by default, and can be changed with the startup option telemetrylimit.
When the profiler is connected to the profiled application, the toolbar contains the following CPU profiling controls:
To begin obtaining profiling results, start CPU measuring when your application requires it.
When sampling is used, the profiler periodically queries stacks of running threads to estimate the slowest parts of the code. No method invocation counts are available, only CPU time.
Sampling is typically the best option when your goal is to locate and discover performance bottlenecks. With sampling, the profiler adds virtually no overhead to the profiled application.
However, the probes for the high-level statistics, if enabled, may impose additional overhead.
You can configure some CPU sampling aspects with CPU sampling settings.
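Conceptually, the sampling approach can be illustrated in a few lines of plain Java: periodically capture all thread stacks and count which methods appear on top. This toy sketch (class name invented) only illustrates the idea; the real agent samples natively and far more efficiently:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of CPU sampling: periodically grab all thread stacks and
// count which top-of-stack methods appear most often. Methods that show up
// frequently are likely where the time is being spent.
public class MiniSampler {
    public static Map<String, Integer> sample(int samples, long periodMs)
            throws InterruptedException {
        Map<String, Integer> hits = new HashMap<>();
        for (int i = 0; i < samples; i++) {
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                if (stack.length > 0) {
                    String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                    hits.merge(top, 1, Integer::sum);
                }
            }
            Thread.sleep(periodMs);
        }
        return hits;
    }
}
```

Note how the approach yields only statistical time estimates and no invocation counts, exactly as described above.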
When tracing is used, the profiler instruments the bytecode of the profiled application for recording thread CPU time spent inside each profiled method. Both times and invocation counts are available.
Although tracing provides more information, it has its drawbacks. First, it may noticeably slow down the profiled application, because the profiler executes special code on each enter to and exit from the methods being profiled. The greater the number of method invocations in the profiled application, the lower its speed when tracing is turned on.
The second drawback is that, since this mode affects the execution speed of the profiled application, the CPU times recorded in this mode may be less adequate than times recorded with sampling. Please use this mode only if you really need method invocation counts.
To control profiling overhead and accuracy of the results use CPU tracing settings.
Also, the probes for the high-level statistics, if enabled, may impose additional overhead.
Call counting is the most lightweight CPU profiling mode.
It's a simple tool for identifying potential performance problems caused by suboptimal algorithms. The approach is based on the assumption that methods with a large number of invocations may indicate a performance problem.
Call counting is specially designed to have minimal possible, almost zero overhead:
Use call counting to initially detect possible problems: thanks to its low overhead you may do this even in production.
Further investigation may involve using CPU tracing or sampling to get comprehensive profiling results including times and stack traces (call tree).
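The call counting idea reduces to a single counter increment injected at each method entry; no timers and no stack capture, which is why the overhead is so low. A minimal sketch of the injected bookkeeping (class and method names invented):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of the call counting idea: instrumentation inserts a single counter
// increment per method entry - no timers, no stack capture, hence minimal
// overhead.
public class CallCounters {
    private static final ConcurrentHashMap<String, LongAdder> COUNTS =
            new ConcurrentHashMap<>();

    // Equivalent of the code a profiler would inject at each method entry.
    public static void hit(String method) {
        COUNTS.computeIfAbsent(method, k -> new LongAdder()).increment();
    }

    public static long count(String method) {
        LongAdder a = COUNTS.get(method);
        return a == null ? 0 : a.sum();
    }
}
```

LongAdder is used instead of a plain AtomicLong because it scales better under contention from many threads, which matches the "almost zero overhead" goal.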
Please note that CPU tracing and call counting are based on bytecode instrumentation. If the startup option disabletracing or disableall is specified, both will be disabled, making CPU sampling the only available mode.
When CPU profiling is started, the results are immediately available in "Call tree" (with threads merged) and "Method list" tabs.
In case of CPU tracing, both method times and invocation counts are shown. In case of CPU sampling, only times are shown.
The live view provides only basic information. To perform comprehensive analysis, capture performance snapshot, open it and use the full featured CPU view.
When the task you intended to profile has finished (or has performed for a sufficient amount of time), capture a performance snapshot with all the recorded information.
When this is done from the profiler UI, you can open the results for immediate analysis.
Further topics in this section describe the profiler's UI for analyzing CPU profiling results.
CPU sampling settings allow you to customize some aspects of CPU sampling.
CPU tracing settings are specified separately, as described here.
The settings are applied each time you start CPU sampling. This means you can change the settings without restarting the profiled application.
To configure CPU sampling settings use Settings | CPU Sampling... in the main menu.
The settings are also accessible via a link in the CPU profiling toolbar popup:
The following dialog appears:
Configurable properties:
Specify whether CPU or wall time will be measured for profiled methods.
There can be any number of lines in the following format:
walltime=<fully qualified class name> : <method name> ( <comma-separated parameter types> )
or
walltime=<fully qualified class name>
Matching methods will be measured with wall time, all other methods with CPU time.
If there are no lines with walltime specified, all methods will be measured with CPU time.
Wildcards ('*') are accepted. E.g. the following specifies all methods of class com.Foo.Bar whose names start with 'print':
walltime=com.Foo.Bar : print*(*)
The default configuration for CPU sampling is to measure wall time for I/O methods and CPU time for all other methods.
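To illustrate how such wildcard patterns could be matched, here is a hypothetical sketch: '*' matches any sequence of characters and everything else is literal. The class name is invented, and the profiler's actual matching rules may differ in details:

```java
import java.util.regex.Pattern;

// Hypothetical sketch of wildcard matching for walltime=... lines:
// '*' matches any character sequence, all other characters are literal.
public class WalltimeFilter {
    public static boolean matches(String pattern, String method) {
        StringBuilder regex = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '*') regex.append(".*");
            else regex.append(Pattern.quote(String.valueOf(c)));
        }
        return Pattern.matches(regex.toString(), method);
    }
}
```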
"Use Preconfigured Settings..." allows you to choose one of recommended presets.
Specify how often samples are taken with sampling_period_ms=<time in milliseconds>.
By default, samples are taken every 20 milliseconds (sampling_period_ms=20).
The settings are stored in the file <user home>/.yjp/sampling.txt, where <user home> corresponds to the account under which the profiled application is launched.
This file is automatically updated in your user home directory when you apply changes in the UI (see above).
You can edit this file manually, but note that it may be fully overwritten by "Use Preconfigured Settings..." in the UI.
The settings file is read and applied when CPU sampling is started with:
- Controller.startCPUSampling(null)
- StartCPUSampling
You can specify a custom settings file for a particular application by using the startup option sampling_settings_path.
CPU tracing settings allow you to customize some aspects of CPU tracing.
CPU sampling settings are specified separately, as described here.
The settings are applied each time you start CPU tracing. This means you can change the settings without restarting the profiled application.
To configure CPU tracing settings use Settings | CPU Tracing... in the main menu.
The settings are also accessible via a link in the CPU profiling toolbar popup:
The following dialog appears:
Configurable properties:
Specify whether CPU or wall time will be measured for profiled methods.
There can be any number of lines in the following format:
walltime=<fully qualified class name> : <method name> ( <comma-separated parameter types> )
or
walltime=<fully qualified class name>
Matching methods will be measured with wall time, all other methods with CPU time.
If there are no lines with walltime specified, all methods will be measured with CPU time.
Wildcards ('*') are accepted. E.g. the following specifies all methods of class com.Foo.Bar whose names start with 'print':
walltime=com.Foo.Bar : print*(*)
The default configuration for CPU tracing is to measure wall time for all methods, because the wall time timers are faster and more accurate than CPU time timers. This reduces CPU tracing overhead and increases accuracy of the results.
"Use Preconfigured Settings..." allows you to choose one of recommended presets.
Adaptive tracing mode automatically reduces profiling overhead by skipping short-running methods with a large number of invocations. Such methods usually cause the most significant profiling overhead, and their exclusion results in both faster tracing and more realistic results. The decision to exclude a particular method is made based on statistics collected during the current tracing session.
Excluding a method means that there will be no exact results (time, invocation count) for it, only for its callers. Due to the reduced overhead, its caller's method time will be more accurate.
The benefit of adaptive tracing is that it does not require human interaction and works with any application, eliminating the need to explicitly specify filters for a particular application if the default filters do not fit well. However, please note that adaptive tracing needs some time to warm up, i.e. to collect statistics on which methods deserve to be excluded.
To enable adaptive tracing, specify adaptive=true (the default); to disable it, specify adaptive=false.
Methods excluded by adaptive tracing are specially marked in the UI:
The settings are stored in the file <user home>/.yjp/tracing.txt, where <user home> corresponds to the account under which the profiled application is launched.
This file is automatically updated in your user home directory when you apply changes in the UI (see above). You can also edit it manually.
The settings file is read and applied when CPU tracing is started with:
- Controller.startCPUTracing(null)
- StartCPUTracing
You can specify a custom settings file for a particular application by using the startup option tracing_settings_path.
CPU view (View | CPU) shows CPU consumption details.
The view consists of the following sections:
Shows a top-down call tree for each thread ("by thread")
or with calls from all threads merged ("all threads together").
The tree is shown based on current filters.
Shows methods that consumed the most time, based on current filters (e.g. Thread.run()).
Shows the method list, based on current filters (e.g. Thread.run()).
For each method, the list shows its time, its own time and, with CPU tracing, its invocation count.
You can narrow down the list by typing a method name inside the text field.
Method invocation counts are available with CPU tracing. Invocation counts are not cumulative.
You can apply the following actions to the selected method (available from the popup menu as well):
"Callees list" view shows which methods were called from certain methods. It is available as a slave view in call tree, hot spots and method list:
Callees list for "Call tree" shows methods invoked inside a selected subtree. When you view "Call tree - By thread", callees list will show methods invoked in the subtree in particular thread only. To see all methods invoked in a thread, select the thread node.
Callees list for "Hot spots" and "Method list" shows methods invoked inside a selected method.
This view shows merged callees for a particular method, i.e. all call traces started from this method. This gives a summary of method execution and its "overall" behavior.
If method invocation counts were recorded, they are shown in call trees as well. Invocation counts are not cumulative.
You can apply the following actions to the selected method (available from the popup menu as well):
This view shows where a particular method was called.
If method invocation counts were recorded, they are shown in call trees as well. Invocation counts are not cumulative.
You can apply the following actions to the selected method (also available from the popup menu):
You can profile applications in high-level terms like SQL statements and URLs.
High-level statistics depend on corresponding built-in probes, whose activity mode is Auto by default, meaning they are active only while CPU profiling is running in either CPU sampling or CPU tracing mode. If neither CPU tracing nor CPU sampling was running, high-level profiling results are not available.
These statistics are not available in live views. To access them, capture and open a snapshot, then switch to the "Java EE statistics" view:
The view consists of the following sections:
Shows database requests and their invocation method back traces.
You can see the requests as a plain list or group them by type.
Supported databases:
Shows a list of URLs that correspond to JSP and servlet calls, and merged callees for all methods invoked with these URLs.
Shows a list of URLs that correspond to JNDI calls, and back traces for all methods invoked with these URLs.
For each Java EE call, CPU time and invocation count are reported.
Java EE profiling requires bytecode instrumentation and adds some additional overhead to the profiled application. For detailed information, see Profiling overhead: how to reduce or avoid
Lines can be copied to clipboard by using File | Copy to Clipboard (Ctrl + C):
"What-if" feature helps you to analyze CPU profiling results by ignoring particular methods or focusing on particular methods only.
This feature requires capturing a snapshot. It is not available in live views.
In views with CPU profiling results (call tree, method list, hot spots), use menu actions (available as popup menu and in main menu):
You can compare two arbitrary snapshots that contain recorded CPU information, obtained with CPU sampling or CPU tracing.
To compare snapshots, do the following:
A new tab with the comparison opens. It contains "Call tree" and "Method list" views displaying the differences in method execution times and invocation counts.
The invocation count columns are shown only if both compared snapshots contain CPU tracing results.
If at least one of the compared snapshots contains CPU sampling results, only time differences will be shown.
You can estimate CPU usage in a given time range, based on the available stack trace telemetry.
This feature is similar to CPU sampling, as it also uses a sampling approach, but there are significant differences:
Comparison criteria | CPU usage estimation | "Real" CPU sampling | Comments |
Results availability | Always, as long as stack telemetry is enabled | Must be explicitly started | CPU usage estimation is ideal for analyzing anomalies such as CPU spikes, especially ones which have already happened, so you do not need to turn CPU sampling or tracing on and try to reproduce the spike. |
Accuracy | Lower | Higher | CPU usage estimation is based on thread telemetry, whose frequency is normally as low as 1 sample per second (can be changed using the startup option telemetryperiod). It can adequately measure events no shorter than the thread telemetry period, so it suits events or method calls that last at least several seconds. If the measured event or method call is long enough, the estimation will do its job: locate the problematic code responsible for the CPU spike. For measuring shorter events or methods, use normal CPU profiling, or decrease the telemetry period using the startup option telemetryperiod. |
Granularity | Results are available for each particular event, as well as for the entire time range | Results are aggregated for the entire period of CPU profiling | CPU usage estimation enables analysis of particular events or time ranges within a single snapshot. CPU profiling results are aggregated since CPU profiling was started; it is not possible to "extract" CPU profiling results for a smaller time range within one snapshot. However, you can choose which method calls to analyze. |
When you are connected to the profiled application, use the "Threads" tab to track the live threads.
The telemetry information is remembered in a circular buffer in the profiler agent memory. This allows you to connect to a profiled application on demand and discover how the application behaved in the past.
The buffer capacity is 1 hour by default, and can be changed with the startup option telemetrylimit.
Please also consider the automatic deadlock detector
The thread stack and state telemetry information can be very useful because it allows you to connect to the profiled application on demand and discover how the application behaved in the past. In most cases, there is no significant overhead of collecting this information, and thus you do not need to disable it.
However, it makes sense to disable it in production Java EE servers in order to ensure minimum profiling overhead.
The telemetry is enabled by default, unless "disablestacktelemetry" startup option was specified.
When you are connected to the profiled application, use corresponding toolbar button to enable/disable the telemetry:
For HPROF snapshots, the "Threads" tab shows thread stacks at the moment of the snapshot capture (if available).
If a Java-level deadlock happens in the profiled application, it will be automatically detected.
When you are connected to the profiled application, switch to the "Deadlocks" tab.
If a deadlock is found, a notification is shown. Find the deadlock details in the "Deadlocks" tab.
The profiler can detect deadlocks which are not reported by the standard Java mechanism, which detects only Java-level deadlocks but misses deadlocks of Java threads caused by JVM internal locks (e.g. the class loader) or by native methods which explicitly use low-level synchronization primitives.
A heuristic detects threads whose stack has not changed for some period of time, which is a sign of a potential deadlock or hung thread.
Well-known cases when a thread can legally stay in the same state for a long time are excluded, e.g. threads waiting for an incoming connection in ServerSocket.accept(), some JVM internal threads, etc.
Important: the potential deadlock detection depends on thread stack telemetry. If thread state telemetry is disabled, the detection is not possible.
Note: potential deadlocks are not necessarily actual deadlocks. It is possible that the reported threads are performing the same operation for a long time, and will eventually finish. Use "Refresh" button to check if detected threads are still considered frozen.
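The heuristic can be illustrated with a toy Java sketch (class name invented): observe a thread's stack trace twice with an interval between observations, and flag the thread as potentially frozen if the stack is non-empty and identical both times. A real detector would use longer intervals and the exclusion rules described above:

```java
import java.util.Arrays;

// Toy sketch of the potential-deadlock heuristic: a thread whose stack trace
// is identical across two observations spaced in time may be frozen.
public class FrozenThreadCheck {
    public static boolean looksFrozen(Thread t, long intervalMs)
            throws InterruptedException {
        StackTraceElement[] first = t.getStackTrace();
        Thread.sleep(intervalMs);
        StackTraceElement[] second = t.getStackTrace();
        return first.length > 0 && Arrays.equals(first, second);
    }
}
```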
After you are connected to the profiled application, find the Memory tab on the session panel.
Memory tab's section Memory & GC Telemetry shows the following graphs:
The telemetry information is remembered in a circular buffer in the profiler agent memory. This allows you to connect to a profiled application on demand and discover how the application behaved in the past.
The buffer capacity is 1 hour by default, and can be changed with the startup option telemetrylimit.
Classes view shows object counts by class. It is located in the Memory & GC Telemetry section of the Memory tab.
This view is available when the profiler is connected to a running application, allowing you to instantly see object counts without capturing and opening a memory snapshot.
This view is also available in performance snapshots, but not in memory snapshots, where it is superseded by the objects view.
The presented information can be useful as an overview of memory consumed by the profiled application and also as a clue to detecting memory leaks. For details, see How to find out why application eats that much memory? and How to find memory leaks?
Unlike other telemetry views, this view does not update automatically and periodically, for performance reasons: gathering the statistics may take significant time for huge heaps with many objects, and thus should run on demand.
Instead, it updates when:
You can profile object allocation without capturing a memory snapshot.
The Memory tab's Allocations section shows counts and sizes of objects whose allocations have been recorded, including objects which are still alive as well as objects that have already been collected.
This live view provides only basic information; you still need to capture a memory snapshot to perform comprehensive analysis: to separate live objects from dead ones, to see where live objects are retained, etc.
A memory snapshot represents the memory state of the profiled application at the moment it was captured. It contains information about all loaded classes, about all existing objects, and about references between objects.
Snapshots can contain values of fields and arrays of primitive types (int, long, char etc.).
Read more.
Optionally, a snapshot can contain information about object allocations.
You can capture a snapshot in YourKit Java Profiler format or via the JVM built-in dumper:
Read more about HPROF snapshots.
YourKit Java Profiler can optionally record object allocations, that is, track method calls where objects are created.
To start allocation recording, connect to the profiled application and use the corresponding toolbar button:
Recording allocations adds performance overhead, which is why allocations should not be recorded permanently. Instead, it is recommended to record allocations only when you really need them.
Also, you may choose between two available recording modes to balance result accuracy and completeness against profiling overhead.
Record thread and stack where objects are allocated (traditional recording)
This mode provides the most detail: the full stack and thread where each recorded object is created are obtained and remembered. Allocation information for particular objects is available in a memory snapshot. Also, this mode enables comprehensive analysis of excessive garbage allocation.
Memory snapshots captured when allocations are being recorded, or after object allocation recording has been stopped, contain allocation information.
If an object is created while allocations are not being recorded, or recording is restarted after the object has been created, or allocation results are explicitly cleared after the object has been created, the snapshot will contain no allocation information for that object.
To keep the overhead moderate, it is reasonable to skip allocation events for some percentage of objects. This approach is useful for finding excessive garbage allocation.
Also, you can record allocations for each object bigger than a certain size threshold. It is valuable to know where the biggest objects are allocated, and since there are normally not so many such big objects, recording their allocations should not add any significant overhead.
In some rare cases you can record each created object, e.g. when allocation information for some particular object must be obtained. To achieve this, set "Record each" to 1.
Count allocated objects (quick recording)
This is the most lightweight allocation recording mode.
Use this mode to quickly get insight on how many objects are created and of which classes. In particular, this identifies excessive garbage allocation problems (lots of temporary objects).
Object counting is specially designed to have minimal possible, almost zero overhead:
Use object counting to initially detect possible problems: thanks to its low overhead you may do this even in production.
Further investigation may involve using traditional allocation recording to get comprehensive profiling results with stack traces (call tree).
You can start and stop recording of allocations during execution of your application as many times as you wish. When allocations are not recorded, memory profiling adds no performance overhead to the application being profiled.
You can control recording of allocations from the profiler UI as described above, or via Profiler API. You can also record allocations from the start of application execution (see Running applications with the profiler).
Recorded allocations are shown in the Allocations view, Garbage collection view and Quick info.
Allocation recording is based on bytecode instrumentation, which is on by default and imposes almost no overhead while allocation recording is not running.
However, if you apply the startup options disablealloc or disableall to totally eliminate the overhead, allocation recording will not be possible.
The greatest contribution to object allocation recording overhead in the traditional mode is made by obtaining exact stack trace for each recorded new object.
The idea is to estimate stack traces instead, in order to reduce the overhead. It is similar to how CPU sampling works: the sampling thread periodically obtains stacks of running threads, and when a thread creates a new object, the stack most recently remembered for that thread is used as an estimation of the object's allocation stack.
And just like with CPU sampling, sampled object allocation recording results are relevant only for methods that run longer than the sampling period.
Use this mode to get allocation hot spots, to find top methods responsible for most allocations. Don't use it if you need precise results for all recorded objects.
Exact stack traces are gathered by default when you start object allocation recording. To use sampled stacks instead:
in the profiler UI: choose "Estimated (sampled) stacks instead of exact stacks" in the "Start Object Allocation Recording" toolbar popup window
in the Profiler API: call Controller.startAllocationRecording(...) with the parameter boolean sampledAllocationRecording set to true. The old version of the method without that parameter records exact stacks.
YourKit Java Profiler is capable of measuring shallow and retained sizes of objects.
Shallow size of an object is the amount of memory allocated to store the object itself, not taking into account the referenced objects. Shallow size of a regular (non-array) object depends on the number and types of its fields. Shallow size of an array depends on the array length and the type of its elements (objects, primitive types). Shallow size of a set of objects represents the sum of shallow sizes of all objects in the set.
Retained size of an object is its shallow size plus the shallow sizes of the objects that are accessible, directly or indirectly, only from this object. In other words, the retained size represents the amount of memory that will be freed by the garbage collector when this object is collected.
To better understand the notion of the retained size, let us look at the following examples:
In order to measure the retained sizes, all objects in memory are treated as nodes of a graph where its edges represent references from objects to objects. There are also special nodes - GC root objects, which will not be collected by Garbage Collector at the time of measuring (read more about GC roots).
The pictures below show the same set of objects, but with varying internal references.
Figure 1 and Figure 2 (illustrations).
Let us consider obj1.
As you can see, in both pictures we have highlighted all of the objects that are directly or indirectly accessed only by obj1.
If you look at Figure 1, you will see that obj3 is not highlighted, because it is also referenced by a GC root object. In Figure 2, however, it is included in the retained set, unlike obj5, which is still referenced by a GC root.
Thus, the retained size of obj1 will represent the following respective values:
Looking at obj2, however, we see that its retained size in the above cases will be:
In general, retained size is an integral measure, which helps to understand the structure (clustering) of memory and the dependencies between object subgraphs, as well as find potential roots of those subgraphs.
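The definition above can be sketched in code: on a toy reference graph, the retained set of an object is everything reachable from the GC roots minus everything still reachable once that object is removed. The class and method names below are illustrative, not part of the profiler or its API.

```java
import java.util.*;

// Toy model of the definition above: objects are int ids, edges are references.
// Illustrative only - not the profiler's implementation.
public class RetainedSet {

    // All nodes reachable from the roots, never traversing through 'skip'.
    static Set<Integer> reachable(Map<Integer, List<Integer>> refs,
                                  Collection<Integer> roots, int skip) {
        Set<Integer> seen = new HashSet<>();
        Deque<Integer> queue = new ArrayDeque<>(roots);
        while (!queue.isEmpty()) {
            int cur = queue.poll();
            if (cur == skip || !seen.add(cur)) continue;
            queue.addAll(refs.getOrDefault(cur, List.of()));
        }
        return seen;
    }

    // Retained set of obj = reachable from roots, minus reachable when obj is removed.
    // Uses -1 as a "skip nothing" sentinel, so node ids must be non-negative.
    public static Set<Integer> retainedSet(Map<Integer, List<Integer>> refs,
                                           Collection<Integer> roots, int obj) {
        Set<Integer> all = reachable(refs, roots, -1);
        all.removeAll(reachable(refs, roots, obj));
        return all; // includes obj itself, matching the definition in the text
    }

    public static void main(String[] args) {
        // Like Figure 1: root 0 references obj1 and obj3; obj1 -> obj2 -> obj3.
        Map<Integer, List<Integer>> refs = Map.of(
                0, List.of(1, 3),
                1, List.of(2),
                2, List.of(3));
        // obj3 is NOT in obj1's retained set, because the root also references it.
        System.out.println(retainedSet(refs, List.of(0), 1));
    }
}
```

The sum of shallow sizes over such a retained set gives the retained size shown by the profiler.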
The objects view allows you to comprehensively examine objects in a memory snapshot.
When a memory snapshot opens, the Memory tab is shown automatically and represents all objects.
You can also open it for a subset of objects:
There are different views:
Objects by category:
Class list - examine how memory is distributed among instances of different classes
Class and package - similar to the Class list, but classes are grouped by package
Class loaders - distribute objects by class loader
Web applications - distribute objects by web application
Generations - distribute objects by time of their creation
Reachability - shows objects distributed according to how/whether they are reachable from GC roots
Shallow size - shows objects distributed according to their shallow size range
Individual objects:
Object explorer - browse individual objects
Biggest objects - find individual objects that retain most of memory
Allocation recording results:
Allocations - explore methods where objects were created
Object ages - distribute objects by how long they exist
Other views (available as slave views only):
Merged paths - examine how objects are retained
Class hierarchy - shows super and derived classes
This view is a powerful tool for examining how memory is distributed among instances of different classes.
Classes are shown as a plain list. To see them grouped by package, use Class and package.
Information for a class is shown in four columns: "Class", "Objects" (number of objects), "Shallow Size" and "Retained Size" (for details please see Shallow and retained sizes).
Classes whose objects retain most memory are shown at the top, as the list is sorted by retained size.
On opening the view, estimated retained sizes are shown instead of exact sizes, which cannot be calculated immediately. The exact sizes can be obtained by using the "Calculate exact retained sizes" balloon above the "Retained Size" column. However, for most classes the estimation is very close to the exact value, so there is almost no need to run the exact size calculation.
You can narrow down the list by typing a class name in the text field.
To open all instances of a class by its name use Memory | Instances by Class... (Ctrl+N)
This view is similar to class list, but shows classes grouped by package instead of a plain list.
"Class loaders" view shows objects grouped by class loader.
For each loader, the number of loaded classes is shown, as well as the number of classes without instances; this information can help in finding leaked loaders.
Paths from GC roots to the loader object are available in the slave view "Paths to Loader". This allows you to learn why a particular loader is retained in memory.
"Web applications" view shows objects grouped by web application.
Web applications are detected for a number of popular servers:
Objects that do not belong to a particular web application are indicated as such; this will be all objects if the snapshot is not of a supported server, or not of a server at all.
Generations distribute objects by time of their creation, and are thus very helpful in finding memory leaks and performing other analyses of how heap content evolves over time.
When an object is created, it is associated with the current generation number. The generations are sequentially numbered starting from 1. The current generation number is automatically advanced on capturing memory snapshot. It can also be explicitly advanced with the help of "Advance Object Generation Number" toolbar button, as well as via API.
The generation represents an object's age: the smaller the generation number, the older the object.
All tabs representing live objects have a "Generations" view. In this view, you can see objects separated by time of their creation:
To see the generation of a single object, use Quick Info:
See also the typical memory leak detection scenario.
Reachability scopes distribute objects according to how/whether they are reachable from GC roots.
This information is helpful when analyzing the profiled application's memory consumption and searching for memory leaks. It also helps in examining excessive garbage allocation, especially if the snapshot doesn't contain recorded object allocation information.
Objects which override Object.finalize() will be placed in the finalizer queue before actual deletion.
In addition to the "Reachability scopes" view, the objects view header shows a brief summary of the number of strongly reachable objects and, if there are any, provides an easy way to open them in a new tab by clicking the corresponding link (useful when analyzing memory leaks):
Action Memory | Instances by Class... (Ctrl+N) allows you to choose the reachability scope of objects to open:
The reachability scope for individual objects is shown in reference explorers and paths views:
Shallow size view distributes objects by their shallow size range. It allows you to evaluate how many small, medium and large objects are in the heap, and how much memory they use.
This information can be useful when tuning GC options.
Shows all objects of the set and allows you to browse their outgoing references. Outgoing references of an object are fields or array elements of that object that point to other objects.
Shows all objects of the set and allows you to browse their incoming references. Incoming references of an object are references to that object.
DO NOT use "Incoming references" to find out why an object is retained in memory. Use paths instead.
This view shows individual objects that retain most of memory. The objects are shown as a dominator tree. That is, if object A retains object B, then object B will be nested in object A's node. Read more about retention.
This information can be useful for exploring and reducing memory usage. In particular, it helps find memory leaks caused by individual objects. Sometimes you can learn a lot by looking at memory distribution in terms of individual objects.
Also consider "Class list" which shows similar information for objects grouped by classes.
This section is available only for snapshots that contain any recorded object allocations.
Allocations let you discover methods where objects were created.
Shows a top-down call tree with the methods in which objects from the set were created, either for each particular thread ("by thread")
or with calls from all threads merged ("all threads together").
The tree is shown according to current filters.
Shows methods where the greatest number of objects from the set ("Hot spots by object count")
or objects with the greatest total shallow size ("Hot spots by object size") were created.
Methods are shown according to current filters (filtered calls are attributed to the nearest non-filtered caller, e.g. Thread.run()).
For each method, the list shows the number and shallow size of objects it created.
You can narrow down the list by typing a method's name in the text field.
Shows objects distributed by their ages.
This section is available only for snapshots that contain any recorded object allocations.
Recorded objects are shown distributed by how long they exist. This information is helpful in tuning garbage collector parameters.
Object ages are also shown as:
"Object Ages" view in Garbage Collection view for collected objects
Quick info view for particular object
Merged paths view is a tool for examining how objects are retained.
It is especially useful for analyzing objects of classes with a great number of instances, such as int[], java.lang.String etc.
Merged paths is similar to Paths from GC roots; however, it does not show paths through individual objects, but paths from multiple objects grouped by class.
For example, see the picture below.
It shows that the memory held by int[] instances is mostly retained by IntegerInterleavedRaster instances, which are in turn retained by BufferedImage and OffScreenImage instances.
Another difference between Merged paths and Paths from GC roots is that the merged paths are built on the dominator tree, while the paths use the full object reference graph as is. This means that some intermediate nodes seen in Paths from GC roots may be missing in Merged paths for objects which are retained indirectly.
'Class hierarchy' view shows the super classes and derived classes of a given class.
'Class hierarchy' is available as a slave view. It is shown for the class selected in the master table (e.g. in Class list) or for the class of the object selected in the master table (e.g. in Objects explorer).
You may want to open the class in your IDE to use more powerful hierarchy analysis capabilities it provides.
This view is available only for snapshots that contain any recorded object allocations.
This view shows merged callees for a particular method that allocated objects. In other words, it shows all call traces started from that method. This gives a summary of method execution, and its "overall" behavior.
To open this view for the selected method, use Memory | Method Merged Callees (Ctrl+M).
See also:
This view is available only for snapshots that contain any recorded object allocations.
This view shows where a particular method that allocated objects was called.
To open this view for the selected method, use Memory | Method Back Traces (Ctrl+Shift+M).
See also:
"Quick Info" view shows useful information about selected object(s) and is available:
The view shows retained and shallow size and object count for the current selection:
If a single object is selected, its generation is shown. If allocation has been recorded for the object, the allocation thread, stack trace and object ages are shown as well.
For a byte array, "Quick info" shows its text representation in a specified encoding (the snapshot must contain primitive values):
If an object is a GC root of type "Stack Local" or "JNI Local", the corresponding stack trace is shown, as well as the local variable name if available:
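The text preview for byte arrays boils down to decoding the raw bytes in the chosen encoding, which can be sketched as follows (BytePreview is a hypothetical name, not part of the profiler):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Minimal sketch of what the byte-array text preview does: interpret the raw
// bytes in a chosen encoding. BytePreview is an illustrative name.
public class BytePreview {

    public static String preview(byte[] bytes, Charset encoding) {
        return new String(bytes, encoding);
    }

    public static void main(String[] args) {
        byte[] raw = {72, 101, 108, 108, 111};
        System.out.println(preview(raw, StandardCharsets.US_ASCII)); // prints "Hello"
    }
}
```

Choosing the wrong encoding simply yields a different (possibly garbled) text representation of the same bytes.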
The so-called GC (Garbage Collector) roots are objects that are special for the garbage collector. The objects it collects are those that are not GC roots and are not accessible by references from GC roots.
"GC Roots" view shows garbage collector roots sorted by types.
"GC Roots" view is NOT the best place to start memory leak detection - see Working with paths for
a better approach.
Instead, "GC Roots" view acts as an overview of all objects that could not be collected at the moment
the snapshot was created.
The view is provided for information purposes only.
For memory leak analysis please use the Find paths feature.
The profiler provides a very powerful way to detect memory leaks - calculation of paths between objects in memory.
A path is a very simple and intuitive concept. A path between Object 1 and Object n
is a sequence of objects where:
When you have found a leaked object and want to fix the memory leak, use paths from GC roots to find out why that object is retained in memory.
To see the paths, select an object and switch to Paths from GC Roots:
You can limit the number of paths to show. It is guaranteed that the shortest paths are shown first, i.e. there are no paths shorter than the ones displayed.
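The shortest-path guarantee follows naturally if paths are searched breadth-first from the GC roots, as in this toy sketch (names are illustrative; this is not the profiler's actual implementation):

```java
import java.util.*;

// Toy sketch: breadth-first search from the GC roots visits objects in order of
// increasing path length, so the first path found to the target is a shortest one.
// Illustrative only - not the profiler's implementation.
public class ShortestPath {

    public static List<Integer> pathFromRoots(Map<Integer, List<Integer>> refs,
                                              Collection<Integer> roots, int target) {
        Map<Integer, Integer> parent = new HashMap<>();
        Deque<Integer> queue = new ArrayDeque<>();
        for (int r : roots) {
            parent.putIfAbsent(r, r); // a root is its own parent
            queue.add(r);
        }
        while (!queue.isEmpty()) {
            int cur = queue.poll();
            if (cur == target) {
                LinkedList<Integer> path = new LinkedList<>();
                for (int n = cur; ; n = parent.get(n)) {
                    path.addFirst(n);
                    if (parent.get(n) == n) return path; // reached a root
                }
            }
            for (int next : refs.getOrDefault(cur, List.of()))
                if (parent.putIfAbsent(next, cur) == null) queue.add(next);
        }
        return List.of(); // no path: the object is unreachable, i.e. collectable
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> refs = Map.of(
                0, List.of(1, 2), 1, List.of(3), 2, List.of(3));
        System.out.println(pathFromRoots(refs, List.of(0), 3)); // prints [0, 1, 3]
    }
}
```

An empty result corresponds to an object with no path from any GC root, i.e. one that is eligible for collection.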
To open the paths in a new tab use Memory | Paths from GC Roots... (Ctrl+P)
You can ignore particular references in paths. This feature is a great aid when investigating memory leaks, because it immediately shows whether nulling particular reference eliminates the leak, and if not, which remaining references you should also consider.
Memory | Paths between Predefined Sets... is the most common way to find out how an object of the source set references objects of the target set.
See also Event inspections
Typical memory-related problems can be recognized with the help of the "Inspections" feature. Inspections enable automatic high-level analysis of application memory. Each inspection automatically detects a specific memory issue. Performing this type of analysis by hand would be a very complicated (if at all possible) task.
With the help of inspections you can easily find the causes and possible solutions of usual memory-related problems.
The "Inspections" view is added to any objects view, such as "Memory" or a tab representing a subset of objects. Inspections for all live objects (i.e. for the entire snapshot) are also available via the top-level tab "Inspections".
(1) To run all inspections as a batch use "Run All Inspections" button.
(2) To run a single inspection, select it in the tree and use "Run This Inspection Only" button
(this is especially useful if you want to apply the changes made to an inspection's options).
All inspections are grouped by category:
Find all java.lang.String instances with identical text values.
Problem: Duplicate strings waste memory.
Possible solution: Share string instances via pooling or by using intern().
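The intern()-based fix can be illustrated as follows (InternDemo is a hypothetical class name):

```java
// Sketch of the suggested fix: interned strings with equal text share one
// canonical instance, eliminating the duplicates this inspection reports.
public class InternDemo {

    public static boolean sameInstanceAfterIntern(String a, String b) {
        return a.intern() == b.intern(); // same canonical instance from the pool
    }

    public static void main(String[] args) {
        String a = new String("config"); // two distinct heap objects with equal text
        String b = new String("config");
        System.out.println(a == b);                        // prints false
        System.out.println(sameInstanceAfterIntern(a, b)); // prints true
    }
}
```

For large numbers of strings, an application-level pool (e.g. a map of canonical instances) may be preferable to intern(), which uses the JVM-wide string table.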
Find objects of the same class equal field by field, and arrays equal element by element.
Problem: duplicate instances waste memory.
Possible solution: reduce the number of instances by sharing them, creating them lazily, or not storing them permanently.
Find multiple instances of zero-length arrays of particular type.
Problem: Memory waste and additional load for garbage collector.
Possible solution: Use an empty array per-class singleton, e.g. via a static field in the class.
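The suggested per-class singleton looks like this in code (EmptyArrays is an illustrative name):

```java
// The per-class empty-array singleton suggested above: every caller shares one
// zero-length instance instead of allocating a fresh one. Illustrative names.
public class EmptyArrays {

    private static final int[] EMPTY_INT_ARRAY = new int[0];

    public static int[] noValues() {
        return EMPTY_INT_ARRAY; // no allocation, no extra garbage collector load
    }

    public static void main(String[] args) {
        System.out.println(noValues() == noValues()); // prints true: one shared instance
    }
}
```

Sharing is safe because a zero-length array is effectively immutable - it has no elements to modify.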
Find instance fields with high percentage of 'null' values.
Problem: Possible memory waste.
Possible solutions: If some of the fields are not used, get rid of them; rarely assigned fields can be moved to subclasses in the class hierarchy.
Find arrays with a big number of 'null' elements.
Problem: Possible memory waste.
Possible solution: Use alternate data structures e.g. maps or rework algorithms.
Find arrays of primitive types with a big number of 0 elements.
Problem: Possible memory waste.
Possible solution: Use alternate data structures e.g. maps or rework algorithms.
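One possible rework along the suggested lines, using a hypothetical SparseVector class: store only the non-zero entries in a map.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical rework along the suggested lines: a mostly-zero primitive array
// replaced by a map that stores only the non-zero entries.
public class SparseVector {

    private final Map<Integer, Double> nonZero = new HashMap<>();

    public void set(int index, double value) {
        if (value == 0.0) nonZero.remove(index); // zeros are implicit, not stored
        else nonZero.put(index, value);
    }

    public double get(int index) {
        return nonZero.getOrDefault(index, 0.0);
    }

    public int storedEntries() {
        return nonZero.size(); // memory tracks non-zero elements, not array length
    }

    public static void main(String[] args) {
        SparseVector v = new SparseVector();
        v.set(1_000_000, 2.5); // a huge "index" costs just one map entry
        System.out.println(v.get(1_000_000) + " " + v.get(7)); // prints 2.5 0.0
    }
}
```

Note the trade-off: boxed keys and values make each stored entry more expensive than an array slot, so this only pays off when the array is genuinely sparse.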
Find arrays such that all their elements are the same.
Problem: possible memory waste.
Possible solution: use alternate data structures e.g. maps or rework algorithms.
Find objects retained via synthetic back references of their inner classes.
Problem: Such objects are potential memory leaks.
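The back reference in question is the hidden field (named this$0 by javac) that a non-static inner class keeps to its enclosing instance; it can be observed via reflection. OuterDemo is an illustrative class, and the this$0 naming is a javac convention rather than a language guarantee:

```java
import java.lang.reflect.Field;

// Shows the synthetic back reference this inspection looks for: for a non-static
// inner class, javac generates a hidden this$0 field pointing at the enclosing
// instance, which can keep the outer object alive unexpectedly.
public class OuterDemo {

    class Inner { }         // carries an implicit reference to OuterDemo.this
    static class Nested { } // no back reference: the usual fix

    public static boolean hasSyntheticOuterRef(Class<?> c) {
        for (Field f : c.getDeclaredFields())
            if (f.isSynthetic() && f.getName().startsWith("this$"))
                return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasSyntheticOuterRef(Inner.class));  // javac: true
        System.out.println(hasSyntheticOuterRef(Nested.class)); // false
    }
}
```

Declaring the nested class static, as in Nested above, removes the back reference and the potential leak.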
Find SWT control instances not accessible from shown UI.
Technically, it finds instances of org.eclipse.swt.widgets.Control which are not accessible from org.eclipse.swt.widgets.Display's field 'controlTable'.
Problem: Possible memory leaks.
Possible solutions: Examine paths to the lost objects to see if they really leaked.
Find HashMaps with non-uniformly distributed hash codes.
To achieve good HashMap performance, hash codes of objects used as keys should be uniformly distributed. Otherwise, map access performance degrades due to hash collisions. The inspection finds HashMaps with entries most unevenly distributed among chunks.
Possible solution: consider better hashCode() implementation for objects used as keys, or use wrappers with properly implemented hashCode().
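The problem can be reproduced with a key class whose hashCode() is constant (BadKey is a hypothetical example): the map stays correct, but every entry lands in the same chunk.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical BadKey reproducing the problem this inspection detects: a constant
// hashCode() sends every entry into the same chunk, so the map stays correct but
// lookups degrade from a hash probe to a scan of one long collision chain.
public class HashDistribution {

    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; } // all keys collide
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
    }

    public static int fill(int n) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < n; i++) map.put(new BadKey(i), i);
        return map.get(new BadKey(n / 2)); // correct result, slow lookup
    }

    public static void main(String[] args) {
        System.out.println(fill(1000)); // prints 500
    }
}
```

Basing hashCode() on the same fields as equals() - here, the id - restores a uniform distribution.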
Find objects referenced by a large number of other objects.
Possible problems: Incorrect relations between objects in memory, logical errors and/or non-optimal data structures.
Find objects with fields referencing 'this'.
Problem: Possibly incorrect logic and/or memory waste.
Possible solution: Remove redundant fields.
If a class implements the interface java.io.Serializable and one of its serialized fields refers to a non-serializable object (directly or through intermediate objects), java.io.NotSerializableException will be thrown at runtime on an attempt to serialize an instance of this class.
This inspection automatically detects such situations.
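A minimal reproduction of the situation the inspection detects, with hypothetical Session and Handle classes:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

// Minimal reproduction of the situation described above: Session is Serializable,
// but its field refers to a non-serializable Handle. Both names are hypothetical.
public class SerializationCheck {

    static class Handle { } // does NOT implement Serializable

    static class Session implements Serializable {
        Handle handle = new Handle(); // the problematic reference
    }

    public static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false; // exactly the failure the inspection predicts
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(serializes("plain string")); // prints true
        System.out.println(serializes(new Session()));  // prints false
    }
}
```

The inspection finds such classes statically in the snapshot, without actually running serialization.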
You can inspect all objects implementing Serializable by selecting Inspections in the Memory tab, or only particular serializable objects.
For example, test whether HTTPSessions would have serialization problems (assuming a memory snapshot is open):
If a class declares serialPersistentFields, only those fields are considered; otherwise, all fields without the transient modifier are serializable.
If a serializable class overrides the writeObject(ObjectOutputStream) and readObject(ObjectInputStream) methods to change the default serialization behavior, it is impossible to automatically find out which fields will actually be serialized. Thus, the inspection can provide incorrect results for such classes.
However, this should not be a big problem, because in most cases this only leads to "false alarms": the inspection would report a referenced non-serializable object which is not actually serialized by writeObject(ObjectOutputStream).
Please learn more about serialization in this article: http://java.sun.com/developer/technicalArticles/ALT/serialization/
This inspection is only available for the profiler's own format snapshots, and is not available for HPROF-format snapshots.
The problem with HPROF snapshots is that they do not contain essential information needed for this inspection, such as the transient field modifiers. The profiler cannot obtain the missing data, as HPROF snapshots are produced by a JVM internal dumper which stores only fixed kinds of information.
Find objects with the longest paths to a GC root.
Intention: helps in finding the longest chains of objects, such as linked lists, queues, trees etc.
This feature lets you compare any two memory snapshots.
To locate memory leaks, consider using the object generations feature.
To compare snapshots:
When comparing memory snapshots, there is an option to choose which reachability scopes' objects are included in the comparison results:
The Comparison tab with the memory views Classes and Classes and packages will open. The views have two columns: Objects (+/-) and Size (+/-), which display the differences in object counts and sizes. Positive values mean that Snapshot 2 (the later memory state) has more objects and/or its objects have a bigger total size.
100% of size corresponds to the total size of all objects in the old snapshot. Likewise, 100% of count corresponds to the object count in the old snapshot.
Please note that memory snapshot comparison provides only the object count and size difference; it cannot tell which particular objects have gone or have been created between the snapshots being compared. This is because an object in the JVM heap does not have a persistent ID. An object identifier is valid in a particular snapshot only; it is in fact the object's physical address, which may change when the garbage collector compacts the heap or moves objects between heap generations. To address this limitation and enable object identification, YourKit profiler offers the object generations feature, whose implementation involves object tagging with the help of the JVMTI API.
Java has a built-in feature for dumping heap snapshots to files in HPROF binary format. You can analyze these snapshots using all of the powerful features that YourKit Java Profiler provides for its own memory snapshots.
HPROF snapshots can be created in the following ways:
Java's jmap utility can connect to a running Java process and dump its Java heap:
jmap -dump:format=b,file=file_name.hprof <PID>
Hint: to learn the PID (process identifier) of a running JVM, you can use the jps or jconsole JDK utilities.
The benefit is that memory can be analyzed on demand with no additional configuration of JVM or Java application. You can dump memory of any running instance of JVM that supports this feature.
The jconsole Java utility allows you to connect to a running Java process for monitoring and management. Using jconsole, you can dump the Java heap via the HotSpotDiagnostic MBean.
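For completeness, the same HotSpotDiagnostic MBean can also be invoked programmatically from within the JVM; this sketch uses the standard com.sun.management API and works on HotSpot JVMs:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

// Programmatic alternative to the jconsole route, using the standard
// com.sun.management API; works on HotSpot JVMs.
public class HeapDumper {

    public static File dump(String path, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, liveOnly); // liveOnly=true dumps only live objects
        return new File(path);
    }

    public static void main(String[] args) throws Exception {
        // The target file must not exist yet; dumpHeap refuses to overwrite it.
        File file = dump("heap-" + System.nanoTime() + ".hprof", true);
        System.out.println(file.length() > 0); // true if a dump was written
        file.delete();
    }
}
```

The resulting .hprof file can then be opened in the profiler like any other HPROF snapshot.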
This approach has a lot of drawbacks and is not useful nowadays, but is mentioned here to show the complete picture.
HPROF snapshots (*.hprof) can be opened the same way as YourKit Java Profiler format snapshots (*.snapshot).
Some HPROF snapshots do not contain values of primitive types.
When such snapshots are opened in the profiler, values of java.lang.String instances will not be available.
Values of primitive types (int, boolean, double etc.) are available in a memory snapshot if it is:
YourKit format snapshot (*.snapshot) of profiled application running on Java 6 or newer.
HPROF-format snapshot of profiled application running on Java 6 or newer, or captured on OutOfMemoryError.
Values of primitive fields and arrays of primitive types are shown in object explorers:
Also, a text representation in specified encoding can be seen for byte[] in Quick Info.
Values of strings help in locating (identifying) a particular object among other objects of the same class. Use Memory | Strings by Pattern... (Ctrl+F) to search for strings, char arrays and byte arrays matching a given substring or regular expression.
Use popup menu and main menu to see what actions are enabled in particular context.
Memory | Quick Info (Ctrl+Q) - Quick info view shows detail on selected object(s).
Memory | Instances by Class... (Ctrl+N) -
shows instances of a class after you specify its name.
Hint: You can also use this action to get quick info on the number of instances of a particular class: just type in the class name, and the instance count will appear next to the class name in the lookup list.
Then hit ESC to close the lookup window, or press Enter to open the new tab.
Memory | Selected Objects (F4) - shows selected live objects in a new tab. Works in any memory view if the selection represents live objects.
Memory | Paths from GC Roots... (Ctrl+P) - finds paths from GC roots to the objects represented within the current selection. Works in any memory view if the selection represents live objects.
Memory | Strings by Pattern... (Ctrl+F) - shows instances of java.lang.String, char[] and byte[] (in specified encoding) matching the given text pattern.
This can be useful to locate particular objects if their fields refer to a known string or a char sequence.
Tools | Open in IDE (F7) - opens the currently selected class, field or method in the IDE editor. See IDE integration.
Memory | Method Merged Callees (Ctrl+M) - shows merged callees of the selected method.
Memory | Method Back Traces (Ctrl+Shift+M) - shows back traces of the selected method.
This XML-based language provides the advanced ability to specify sets of objects in a declarative way. It can be used, for example, to examine memory distribution in automated memory tests.
To see some samples and define your custom sets, please use Settings | Sets of Objects... in the main menu. The sets can be opened via Memory | Predefined Set....
Tag objects specifies objects of a particular class, or multiple classes.
Mandatory attribute class specifies the fully qualified class name. Wildcards (*) are also accepted.
Optional attribute subclasses specifies whether subclasses should be accepted (subclasses="true"), or the class should match exactly (subclasses="false"). Default value is true.
<objects class="com.company.MyClass"/> - all instances of the class com.company.MyClass and its subclasses
<objects class="com.company.MyClass" subclasses="false"/> - all instances of com.company.MyClass, excluding instances of classes derived from com.company.MyClass.
<objects class="*[]"/> - all arrays
Tag roots specifies all objects that are GC roots.
Example
<roots/>
Tag and intersects the specified sets and returns all objects that are present in every set. There should be at least 2 nested sets.
The specification of all objects of classes that implement both the com.company.MyInterfaceA and com.company.MyInterfaceB interfaces:
<and>
<objects class="com.company.MyInterfaceA"/>
<objects class="com.company.MyInterfaceB"/>
</and>
Tag or joins the specified sets and returns all objects that are present at least in one of the sets. There should be at least 2 nested sets.
The specification of all objects of the class com.company.A and its subclasses, and all objects of the class com.company.B and its subclasses:
<or>
<objects class="com.company.A"/>
<objects class="com.company.B"/>
</or>
Tag not specifies all objects that are not present in the specified set. There should be one and only one set specified.
The specification of all objects except for the objects of the class com.company.A and its subclasses:
<not>
<objects class="com.company.A"/>
</not>
Tag reachable-objects specifies objects accessible by references from the set specified via mandatory subtag from.
The result will not include objects other than the ones specified via mandatory subtag object-filter. In terms of graphs, object-filter specifies allowed nodes.
Mandatory subtag field-filter allows you to search for objects reachable only from particular fields of classes. Restrictions for any number of classes can be specified as class subtags of field-filter:
<field-filter>
<class name="com.company.ClassA">
<allowed field="field1"/>
...
<forbidden field="field2"/>
...
</class>
<class name="com.company.ClassB">
...
</class>
...
</field-filter>
For each class, you can specify any number of names (including none) of the allowed and forbidden fields. If at least one allowed tag is specified, only the allowed fields will be allowed for the class. If no allowed fields are specified, any fields except for the forbidden ones will be allowed for the class.
Any fields of classes not specified in field-filter are acceptable.
The following example is a predefined set "Lost UI" (see Settings | Sets of Objects...). It lets you find all AWT/Swing UI controls that are not in the window hierarchy.
UI controls are instances of java.awt.Component and its subclasses. Class java.awt.Container is a subclass of java.awt.Component and represents controls that contain other controls. To be shown, a control must be contained in windows, represented with objects of the class java.awt.Window with subclasses; java.awt.Window extends java.awt.Container.
Our goal is to find any UI control (i.e. instance of java.awt.Component) not accessible from the window hierarchy. So we start from the windows (see the from section).
We know that java.awt.Container stores its children in the field named component, which is an array of components. Accordingly, we form the object-filter and field-filter sections. Note that we have to include java.awt.Component[] in the object filter, so that the result of the entire reachable-objects tag includes arrays of components as well as components.
To complete the task, we use a combination of and and not tags, to retrieve components that are not accessible from the windows in the specified way.
<and>
<objects class="java.awt.Component"/>
<not>
<reachable-objects>
<from>
<objects class="java.awt.Window"/>
</from>
<object-filter>
<or>
<objects class="java.awt.Component"/>
<objects class="java.awt.Component[]"/>
</or>
</object-filter>
<field-filter>
<class name="java.awt.Container">
<allowed field="component"/>
</class>
</field-filter>
</reachable-objects>
</not>
</and>
Tag retained-objects retrieves all objects that would be garbage-collected if all objects of the given set were garbage-collected. The given set itself is included. In other words, the retained set of a set A is A itself plus all the objects accessible from A and only from A. The size of the retained set is called the retained size (see Shallow and retained sizes for more details).
The following specifies all objects that will be garbage collected if all objects of the class com.company.A and its subclasses are garbage collected:
<retained-objects>
<objects class="com.company.A"/>
</retained-objects>
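The retained-set definition above can be illustrated with a small reachability computation in plain Java (a toy sketch with made-up node names; nothing here uses the profiler API): the retained set of A is everything that the roots can reach only by passing through A, plus A itself.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class RetainedSet {
    // Toy object graph: each node lists the nodes it references.
    // B is reachable only through A; C is also reachable through X.
    static final Map<String, List<String>> refs = Map.of(
            "root", List.of("A", "X"),
            "A", List.of("B", "C"),
            "X", List.of("C"),
            "B", List.of(),
            "C", List.of());

    // All nodes reachable from 'start', never stepping on nodes in 'skip'
    static Set<String> reachable(String start, Set<String> skip) {
        Set<String> seen = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>(List.of(start));
        while (!todo.isEmpty()) {
            String n = todo.pop();
            if (skip.contains(n) || !seen.add(n)) continue;
            todo.addAll(refs.getOrDefault(n, List.of()));
        }
        return seen;
    }

    public static void main(String[] args) {
        Set<String> retained = new TreeSet<>(reachable("root", Set.of()));
        retained.removeAll(reachable("root", Set.of("A")));
        // A itself plus B (held exclusively by A); C survives via X
        System.out.println(retained); // [A, B]
    }
}
```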
Tag generation returns objects from specified generations.
Objects from generation 3: <generation name="#3:*"/>
Objects from all generation(s) with description "foo": <generation name="*: foo"/>
All strings from generation #5:
<and>
<objects class="java.lang.String"/>
<generation name="#5:*"/>
</and>
Tags strong-reachable, weak-soft-reachable, pending-finalization, unreachable return all objects of particular reachability scope.
All weak or soft reachable strings:
<and>
<objects class="java.lang.String"/>
<weak-soft-reachable/>
</and>
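The reachability scopes can be reproduced in plain Java with reference objects (a minimal sketch, independent of the profiler API):

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReachabilityDemo {
    public static void main(String[] args) {
        String value = new String("profiled");          // strong reachable
        WeakReference<String> weak = new WeakReference<>(value);
        SoftReference<String> soft = new SoftReference<>(value);

        // While 'value' is strongly referenced, the object would NOT appear
        // in the weak-soft-reachable set: strong reachability wins.
        System.out.println(weak.get() != null);         // true

        value = null;  // now the string is only weakly/softly reachable
        // After the next collection the weak reference may be cleared;
        // softly reachable objects are kept until memory runs low.
        System.gc();
    }
}
```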
Garbage collection telemetry graphs shown in the Memory tab will help you estimate garbage collector load. If garbage collection takes a significant amount of time, it is advised to run object allocation recording to pinpoint and optimize the problematic code.
The "Garbage Collections" and "Time Spent in GC" graphs are always available.
You can explicitly run garbage collection using "Force Garbage Collection" toolbar button:
If the memory snapshot contains recorded allocations, the "Garbage Collection" view, in addition to the garbage collection telemetry described above, will also show the methods that were the sources of excessive garbage allocation.
See Solving performance problems for details on why one should avoid excessive garbage allocation.
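As an illustration of code that generates excessive garbage, compare naive string concatenation in a loop with a reused StringBuilder (a generic sketch, not taken from any profiled application):

```java
public class GarbageDemo {
    // Allocation-heavy: every += creates a fresh String plus a temporary
    // builder, producing a stream of short-lived garbage objects.
    static String concat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    // Allocation-friendly: a single builder is reused for the whole loop.
    static String build(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append(i);
        return sb.toString();
    }

    public static void main(String[] args) {
        // Same result, vastly different amounts of temporary garbage
        System.out.println(concat(1000).equals(build(1000))); // true
    }
}
```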
The shown counts and shallow sizes correspond to the objects that were created and recycled after object allocation recording was started and prior to the moment of the snapshot capture.
Shows a top-down call tree with methods in which collected objects were created, for each particular thread ("by thread")
or with calls from all threads merged ("all threads together").
The tree is shown according to current filters.
Shows methods that made the biggest contribution to creating objects that were collected, either by object count or shallow size:
Methods are shown according to current filters: calls to filtered methods are attributed to their nearest unfiltered callers (e.g. Thread.run()).
For each method, the list shows the number and shallow size of collected objects it had created.
You can narrow down the list by typing a method's name in the text field.
Recorded objects are shown distributed by how long they existed. This information is helpful in tuning garbage collector parameters.
Monitor profiling helps you analyze synchronization issues, including:
The times are measured as wall time.
To start monitor profiling use "Start Monitor Profiling" toolbar button, after the profiler is connected to the profiled application.
Monitor profiling results are shown in the "Monitor Usage" view. Results can be grouped by waiting thread, by blocker thread or by monitor class name.
- waiting thread (thread which called wait())
- blocked thread (thread failed to immediately enter the synchronized method/block)
- blocker thread (thread that held the monitor preventing the blocked thread from entering the synchronized method/block)
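The three roles can be reproduced with a small sketch (thread names are illustrative): the waiter calls wait() inside the monitor, the blocker holds the monitor while "working", and the blocked thread stalls on entering the synchronized block until the blocker releases it.

```java
public class MonitorDemo {
    static final Object lock = new Object();
    static volatile boolean blockedEntered;

    public static void main(String[] args) {
        // Waiting thread: enters the monitor, then releases it inside wait()
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try { lock.wait(200); } catch (InterruptedException ignored) {}
            }
        }, "waiter");

        // Blocker thread: holds the monitor while "working"
        Thread blocker = new Thread(() -> {
            synchronized (lock) {
                try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            }
        }, "blocker");

        // Blocked thread: cannot enter the synchronized block immediately
        Thread blocked = new Thread(() -> {
            synchronized (lock) {
                blockedEntered = true; // runs once the blocker releases the monitor
            }
        }, "blocked");

        try {
            waiter.start();
            blocker.start();
            Thread.sleep(50);   // let the blocker acquire the monitor first
            blocked.start();
            waiter.join();
            blocker.join();
            blocked.join();
        } catch (InterruptedException ignored) {}
    }
}
```

With monitor profiling running during such a phase, "waiter" would show up as a waiting thread and "blocked" as a blocked thread with "blocker" as its blocker.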
(1) comboboxes to select results grouping
(2) checkbox to show blocked threads only (i.e. to filter out waiting threads)
(3) method back traces for the selection in the upper table
Percentages in the tree are computed using the duration of monitor profiling (i.e. the time passed since the last start or clear) as 100%. This lets you estimate the average percentage of time a thread waits or is blocked.
In some cases, it may also be useful to launch the application with monitor profiling started (see Startup options).
"Exceptions" telemetry shows exceptions which were thrown in the profiled application.
Exceptions may be grouped by their exception class or by thread in which they occurred. Selecting an exception in the upper table allows viewing the exception stack traces.
The checkbox "Show exceptions thrown and caught inside filtered methods" controls whether exceptions thrown and caught in methods of library classes are shown. By default, the checkbox is unselected, as such exceptions are usually of no interest when profiling your application.
You can clear recorded exceptions with the help of corresponding toolbar button:
You can compare exception statistics of two snapshots with the help of File | Compare Snapshot with....
Exception telemetry helps discover performance issues and logic errors. In most cases, collecting this information imposes no significant overhead.
However, it makes sense to disable it in production Java EE servers in order to ensure minimum profiling overhead.
Whether the exception telemetry is enabled by default depends on the JVM and can be adjusted with the following startup options:
exceptions=on
enables exception events in the JVM and immediately starts recording the exception telemetry.
This is the default mode on Sun/Oracle Java, OpenJDK, JRockit (i.e. non-IBM JVMs).
exceptions=off
enables exception events in the JVM but does not immediately start recording the exception telemetry, which can instead be started later at runtime.
exceptions=disable
fully disables exception events in the JVM to totally eliminate corresponding overhead.
This is the default mode on IBM JVMs because the overhead is significant.
When you are connected to the profiled application, use corresponding toolbar button to enable/disable the telemetry.
Get almost unlimited analysis capabilities by recording events specific to your particular application, and automatically recognize problems typical of a wide range of applications.
Gather information about your application according to your own rules by setting up your custom probes.
Recognize typical problems with the help of built-in probes which are ready to use out of the box. Also, as the source code of the built-in probes is available, use it as a good example should you decide to create custom probes.
The bytecode instrumentation engine injects calls to your probes into the methods you specify. The probes are written in Java. You can access method parameters, the method return value, the object on which the method is called, as well as intercept uncaught exceptions thrown in the method. This provides virtually unlimited capabilities to monitor applications. Read more...
Class loading probe monitors class load and unload events
Please also consider triggers. They provide a wider range of events to monitor than probes do. However, triggered actions should be chosen from a predefined set, while probes can implement any necessary functionality.
Data storage allows you to uniformly record the following information for each event:
Although data storage is the intended way to gather information, you can also store it your own way if you wish, e.g. write it to your application's log, to a file, or simply to the console.
The probes UI provides a rich set of tools to analyze the gathered information, or to export it for external processing.
The probes and their results are shown in tab "Events".
The tab is available when the profiler is connected to the profiled application, as well as when browsing a saved snapshot.
Some functionality is available for a saved snapshot only.
The view consists of two parts:
Shows all available tables as a tree: dependent tables are shown as nested nodes of their master tables.
The selected table is shown in Events by Table (see below).
Also, via its checkboxes, the table selector controls which tables' events are shown in the Event Timeline and Event Call Tree views (see below).
To open selected event(s) in another view, use corresponding popup menu items. Read more...
This view shows events in a particular table. If the table has dependent tables, they are presented as slave views:
The number of events in the table is shown in the title.
Events can be presented as a plain list, as well as grouped by arbitrary column(s).
For each group you can see sum, minimum and maximum values for numeric columns.
In addition to the table's own columns, you can also get statistics on associated rows of dependent tables: the number of rows, as well as the sum, minimum and maximum values of the dependent table's metrics.
Also, you can hide columns to give more space for others.
All this can be configured via the "Configure columns and grouping" action:
To open selected event(s) in another view, use corresponding popup menu items. Read more...
Profiler events such as switching profiling modes, starting or stopping profiling, capturing snapshots etc. are logged to the log file, as well as to the built-in table "Messages" (categories "Profiler" and "Profiler UI").
You can add your own messages in your probes via utility class com.yourkit.probes.builtin.Messages.
You can see "Messages" table in the probes UI:
This view shows a sequence of events from different tables, allowing you to analyze nested and sequential events of different kinds.
Choose which tables to include via the table selector.
To open selected event(s) in another view, use corresponding popup menu items. Read more...
This view shows events distributed by stack trace.
For each table it shows where and how many events have happened.
Choose which tables to include via the table selector.
To open selected event(s) in another view, use corresponding popup menu items. Read more...
If an event stack trace is available, the top method is shown as a hyperlink. To see the stack trace, click the link or use the popup menu:
The stack is recorded when the event starts and when it ends. You can also estimate CPU usage inside the event.
You can apply filters or see full stack trace:
To open event(s) selected in Events by Table, Event Timeline or Event Call Tree in another view, use corresponding popup menu items:
In "Events by Table":
In "Event Timeline":
In "Event Call Tree":
To open event(s) selected in Events by Table, Event Timeline or Event Call Tree in a telemetry graph or in Threads, use corresponding popup menu items:
Use telemetry graph popup menu to open in Event Timeline the event nearest to the selected time point:
Instead of examining all recorded events, you can focus on events intersecting with given event or group of events.
For example, select the event corresponding to a particular servlet request to see nested events such as database operations, file I/O, socket connections etc.
To invoke the action, use popup menu:
As a result, a new "Events" tab will open.
See also Memory inspections
Typical behavioral problems can be recognized with the help of the "Inspections" feature. Inspections enable automatic high-level analysis of built-in probe results. Each inspection automatically detects a specific issue. Performing this type of analysis by hand would be a very complicated (if at all possible) task.
With the help of inspections you can easily find the causes and possible solutions of common problems.
The feature is presented with "Inspections" view.
(1) To run all inspections as a batch use "Run All Inspections" button.
(2) To run a single inspection, select it in the tree and use "Run This Inspection Only" button
(this is especially useful if you want to apply the changes made to an inspection's options).
We have prepared a set of probes to help investigate typical problems. The probes are ready to use out of the box. Also, you can use them as good examples to start with should you decide to create custom probes.
The following built-in probes are available at the moment:
com.yourkit.probes.builtin.Servlets - JSP/Servlet requests
com.yourkit.probes.builtin.Databases - JDBC/SQL database connections and requests
com.yourkit.probes.builtin.MongoDB - MongoDB requests
com.yourkit.probes.builtin.Cassandra - Cassandra database requests
com.yourkit.probes.builtin.HBase - HBase database requests
com.yourkit.probes.builtin.JPA_Hibernate, com.yourkit.probes.builtin.JPA_EclipseLink, com.yourkit.probes.builtin.JPA_OpenJPA, com.yourkit.probes.builtin.JPA_DataNucleus - JPA calls
com.yourkit.probes.builtin.Sockets - socket I/O operations via streams and NIO channels
com.yourkit.probes.builtin.AsyncChannels - socket I/O operations via asynchronous channels
com.yourkit.probes.builtin.Files - file I/O operations via streams, random access files, NIO channels
com.yourkit.probes.builtin.Threads - thread creation, start, name change. Read more...
com.yourkit.probes.builtin.Processes - external processes launched via Runtime.exec() and ProcessBuilder.start()
Class loading - class load and unload events. Read more...
com.yourkit.probes.builtin.JUnitTests - execution of JUnit tests
com.yourkit.probes.builtin.TestNG - execution of TestNG tests
com.yourkit.probes.builtin.AwtEvents - long AWT/Swing events, which cause UI irresponsiveness. Read more...
com.yourkit.probes.builtin.JNDI - JNDI calls
Also, the package com.yourkit.probes.builtin contains the utility class com.yourkit.probes.builtin.Messages, providing means to log arbitrary text messages from within probes.
Built-in probes are enabled by default. You can change their activity mode or disable them. Read more....
Source code of the built-in probes can be found in <Profiler Installation Directory>/probes/src
Probes to monitor some aspects of thread life cycle:
- thread creation (an instance of java.lang.Thread or its subclass is created)
- thread start (Thread.start() invoked)
- thread name change (Thread.setName() invoked)
The table rows represent lasting events. The event starts when the thread object is created, and ends when the thread is started.
Thus, non-closed events in the table correspond to threads which have not been started.
Such threads indicate a potential design flaw.
However, note that threads actually started in native code can be shown as not started, because their start() method is not called.
The Thread constructor allows the thread name to be omitted. Such threads get an automatically generated name of the form Thread-<number>.
To improve the maintainability and clarity of your application, avoid creating anonymous threads; specify thread names explicitly.
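For example (the thread names here are illustrative):

```java
public class NamedThreads {
    public static void main(String[] args) {
        Runnable task = () -> { /* some background work */ };

        // Anonymous: gets a generated name like "Thread-0",
        // hard to identify in the profiler or in thread dumps
        Thread anonymous = new Thread(task);

        // Named explicitly: easy to find and unambiguous
        Thread cleaner = new Thread(task, "cache-cleaner");

        System.out.println(anonymous.getName()); // e.g. Thread-0
        System.out.println(cleaner.getName());   // cache-cleaner
    }
}
```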
To check if there are anonymous threads in the profiled application, sort the table by the name column and look for threads named Thread-<number>.
Group by the name column and sort by row count to check if there are threads with the same name.
To improve the maintainability and clarity of your application, ensure thread names are unique.
<Profiler Installation Directory>/probes/src
Probe records class load and unload events.
Results are available in the probes UI in tab "Class loading".
You can quickly access them from "Classes" telemetry clicking link "Detail...":
There is no source code available for this probe, as it is based on low-level JVM events, available via native API only (JVMTI).
This probe has no associated probe class (see above), thus the pseudo-name com.yourkit.probes.builtin.ClassLoading should be used in the corresponding startup options, e.g. probe_off=com.yourkit.probes.builtin.ClassLoading disables the class loading probe.
This built-in probe records AWT/Swing events longer than 0.3 seconds, which can cause UI unresponsiveness.
If a long operation is performed directly in the event queue thread, the UI may become unresponsive: it does not redraw, does not accept user input etc.
Good practice is to perform long operations in a separate thread, and use the event queue to present prepared results only.
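A common way to follow this practice in Swing is SwingWorker: the slow part runs on a background thread and only the short done() callback touches the event dispatch thread (a minimal sketch; the 200 ms sleep stands in for a real long operation):

```java
import javax.swing.SwingWorker;

public class LongTaskOffEdt {
    // The slow work goes into doInBackground(), off the event dispatch thread
    static SwingWorker<String, Void> createWorker() {
        return new SwingWorker<String, Void>() {
            @Override
            protected String doInBackground() throws Exception {
                Thread.sleep(200);          // simulated long operation
                return "result";
            }
            @Override
            protected void done() {
                // Runs on the EDT: keep it short, e.g. label.setText(...)
            }
        };
    }

    public static void main(String[] args) throws Exception {
        SwingWorker<String, Void> worker = createWorker();
        worker.execute();                   // the UI thread stays responsive
        System.out.println(worker.get());   // blocking get() is for the demo only
    }
}
```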
To analyze the results provided by this probe, select an AWT event in the probe table and apply CPU usage estimation.
<Profiler Installation Directory>/probes/src
This class is not a probe class itself, nor does it contain probe classes.
Instead, it provides a utility method to store arbitrary text messages.
For this purpose, the class defines a table named "Messages" with columns of string type, and introduces the method
public static void message(String category, String message, String detail)
which creates a new row in the table and sets the corresponding column values.
The messages can be seen in UI.
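For example, a probe could log a message as follows (a sketch that compiles against yourkit.jar; the helper class, category and texts are made up, only Messages.message() comes from the profiler API):

```java
import com.yourkit.probes.builtin.Messages;

public class CacheProbeHelper {
    // Call from a probe callback to add a row to the "Messages" table
    static void logCacheRebuild(int entries) {
        Messages.message(
                "Cache",                  // category
                "cache rebuilt",          // message
                "entries: " + entries);   // detail
    }
}
```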
<Profiler Installation Directory>/probes/src
A probe class is a Java class intended to monitor the execution of particular methods.
The bytecode instrumentation engine injects calls to your probes into the methods you specify.
You can access method parameters, the method return value, the object on which the method is called, as well as intercept uncaught exceptions thrown in the method.
This provides virtually unlimited capabilities to monitor applications. Read more...
After you have written your probe classes, compile them with the Java compiler (javac) as regular Java classes. Please add yourkit.jar to the compiler class path to make the classes in package com.yourkit.probe accessible.
Unlike at compile time, you do not need to add yourkit.jar to the class path at runtime. All necessary classes in com.yourkit.probe will be automatically loaded by the profiler agent.
Optionally, you can put the compiled classes into a JAR file. The probe classes are self-explanatory, so there is no need for any additional description in the JAR file manifest.
To apply a probe, it should be registered.
Built-in probes are pre-compiled and are ready for registration out of the box. If you have written a custom probe, compile it before registering.
There are two ways to register probes:
When a probe is registered, its target classes are permanently instrumented with the probe callbacks, which behave depending on the probe activity mode:
On: the callbacks are active and do their job.
Off: the callbacks are empty and do nothing. The effect is the same as if the probe was not registered at all. The probe overhead in this mode is almost negligible.
Auto: the probe is active only while CPU profiling is running: the probe automatically turns On when CPU profiling starts and then automatically turns Off when CPU profiling stops. This mode is intended for the probes whose results are naturally associated with CPU profiling session, and/or would impose undesirable overhead when no profiling is being performed. When CPU profiling results are cleared, either by starting CPU profiling or explicitly, the tables of the Auto probes are automatically cleared too.
Initial activity mode is controlled via corresponding startup options (see below).
In runtime, the mode can be changed with the methods getProbeActivityModes() and setProbeActivityModes() of com.yourkit.api.Controller.
Probes | Default mode |
JSP/Servlets, databases, files, sockets | Auto |
Class loading, threads, processes, AWT events | On |
JUnit and TestNG tests | On (*) |
user-defined probes | n/a (On or Auto explicitly specified) |
(*) Only if the agent loads on start. In attach mode the probe is disabled by default.
Register probes on startup to apply them without altering the profiled application source code. This is especially important for applications such as Java EE servers, and/or production versions of the software.
Also, registration on startup is the simplest way to apply probes.
The following startup options control the initial mode. They work for both built-in and user-defined probes.
Startup option | Description |
|
The options set the initial mode of matching probes to On, Off or Auto.
If the specified full qualified name is a pattern, i.e. contains the '*' wildcard, the option changes the mode of the matching probes. If a full qualified name without the wildcard is specified, the probe will be added to the list of probes to be registered, and its initial mode will be as the option specifies. You can later change the mode of these probes in runtime by using the UI or the API.
Note: the order of the probe_* options matters. |
probe_disable=<full qualified probe class name or pattern>
|
This option totally eliminates the matching probes: no classes will be instrumented with their callbacks.
You won't be able to change the probe mode in runtime by using the UI.
Although you will be able to use the API to register the probe in runtime, it is recommended to use the option probe_off instead.
Note: the order of the probe_* options matters. |
To specify where a custom probe class should be loaded from, use the following options:
Startup option | Description |
probeclasspath=<classpath>
|
Add the list of jar files and/or directories to the system class path to search probe classes in. If several path elements are specified, they should be separated with the system-specific path separator, which is ';' on Windows and ':' on other platforms. There is no need to specify the path for built-in probes.
Note: this option alone does not automatically register any probe classes, it just defines where they are located. You must explicitly specify which probe classes to register by using the options probe_on, probe_off, probe_auto, or the API. |
probebootclasspath=<classpath>
|
Add the list of jar files and/or directories to the boot class path to search probe classes in. If several path elements are specified, they should be separated with the system-specific path separator, which is ';' on Windows and ':' on other platforms. There is no need to specify the path for built-in probes.
Note: this option alone does not automatically register any probe classes, it just defines where they are located. You must explicitly specify which probe classes to register by using the options probe_on, probe_off, probe_auto, or the API. |
1. probe_on=*
All (built-in) probes are On. This was the default behavior in the previous versions.
2. probe_off=com.yourkit.probes.builtin.Databases
A short form is available for built-in probes: use a dot instead of the package name, e.g. probe_off=.Databases
The Databases probe will be Off. Other probes will be in their default mode.
3. probe_disable=*,probe_on=com.foo.Bar
Disable all built-in probes and register a user-defined probe class com.foo.Bar, whose initial mode is On.
4. probe_auto=com.foo.Bar
Register a user-defined probe class com.foo.Bar, whose initial mode is Auto.
The built-in probes will be in their default mode.
Use the API to register probes programmatically in runtime.
To register the probe, invoke the static method registerProbes() of class com.yourkit.probes.Probes:
// register probe classes
public static void registerProbes(Class... probeClasses);
// register probe classes by class name
public static void registerProbes(String... probeClassNames);
When probes are registered by name using registerProbes(String...), the probe classes will be searched for in the paths specified with the help of the startup options probeclasspath=<classpath> and probebootclasspath=<classpath> (see above).
There is no need to specify the paths for built-in probes.
With registerProbes(Class...) you supply already loaded probe classes, thus no search in paths is performed.
Example:
import com.yourkit.probes.*;
// ...
Probes.registerProbes(MyProbe.class);
Probes.registerProbes("com.foo.Probe1", "com.foo.Probe2");
A registered probe can be unregistered in runtime. This means that all methods which have been instrumented with the probe's callbacks will be returned to their original state.
We strongly recommend changing the probe's activity mode to Off instead.
Probe unregistration is only possible using the profiler API: invoke the static method unregisterProbes() of class com.yourkit.probes.Probes:
public static void unregisterProbes(final String... probeClassNames);
public static void unregisterProbes(final Class... probeClasses);
import com.yourkit.probes.*;
// ...
Probes.unregisterProbes(MyProbe.class);
Probes.unregisterProbes("com.foo.Probe1", "com.foo.Probe2");
A probe class for monitoring method invocation events is a Java class which meets the following requirements:
It must be annotated with the @MethodPattern annotation in order to specify which methods the probe will be applied to.
It must define at least one of the following callback methods, each of which must be public static:
- onEnter() (called on method entry)
- onReturn() (called when the method normally exits, i.e. via return or when a void method body execution completes)
- onUncaughtException() or onUncaughtExceptionExt() (called when the method terminates because of an uncaught exception)
The bytecode instrumentation engine injects calls to your probes into the methods you specified on probe registration.
You can access method parameters, the method return value, the object on which the method is called, as well as intercept uncaught exceptions thrown in the method. This provides virtually unlimited capabilities to monitor applications.
Write the probe classes.
How to write the probe class, and how many probe classes you need, depends on your task.
You may intend one probe class to handle one method call only, or you can write a probe with a pattern which matches several methods you want to handle uniformly. Also, you may need several probes to solve one complex task.
Take a look at the built-in probes to get some ideas on how to write your own ones.
Register the probe classes to make them work.
Upon successful registration of the probe, methods matching the probe pattern will be instrumented according to the following rules.
A probe class must be annotated with the @MethodPattern annotation in order to specify which methods the callbacks will be applied to.
One or more method patterns can be specified in the following format ('*' wildcard is allowed):
class_name : method_name ( parameter_types )
You can optionally specify that class or method must have particular annotations.
Put @annotation
before the class and/or the method pattern accordingly:
[@class_annotation] class_name : [@method_annotation] method_name ( parameter_types )
Part | Description | Examples |
class_name |
Full qualified class name, dot-separated. Inner class part(s) are separated with "$" sign. |
Particular class:
Class with name ending with 'Helper':
Any class in a package:
Any class: |
class_annotation (optional) |
Specify that matching classes must have particular annotation. If not specified, class with any annotations or with no annotations at all will match. |
All |
method_name |
Method name.
To specify a constructor, use <init> .
|
Method with particular name:
Any getter:
Any method, including constructors: |
method_annotation (optional) |
Specify that matching methods must have particular annotation. If not specified, methods with any annotations or with no annotations at all will match. |
All JUnit tests: |
parameter_types |
Comma-separated list of parameter types. Space characters between type names and commas are allowed (but not required).
Class names should be full qualified, dot-separated, except for classes of the java.lang package, which may also be specified by their short names (e.g. String).
Primitive types should be named as in Java source code: int, long, boolean etc.
Arrays are specified as in Java source code, e.g. int[][].
Use an empty string to specify a method with no parameters.
|
No parameters: empty
Any number of parameters, including zero:
3 parameters of particular types:
First parameter is an array of class com.Foo:
|
The annotation is declared as...
package com.yourkit.probes;
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;
@Target(ElementType.TYPE)
public @interface MethodPattern {
String[] value();
}
...and specifies one or more patterns for a probe. For example:
import com.yourkit.probes.*;
// Instrument calls of method 'findPerson' in class 'com.foo.Person', which has 2 parameters of particular types
@MethodPattern("com.foo.Person:findPerson(String, int)")
public class MyProbe1 {
//...
}
// Instrument all methods in classes 'com.Foo' and 'com.Bar'
@MethodPattern(
{
// should match at least one of them
"com.Foo:*(*)",
"com.Bar:*(*)"
}
)
public class MyProbe2 {
//...
}
// Instrument methods toString() in all classes in package com.foo
@MethodPattern("com.foo.*:toString()")
public class MyProbe3 {
//...
}
// Instrument all methods in classes whose names end with 'Helper' and have int[][] as a first parameter
@MethodPattern("*Helper:*(int[][], *)")
public class MyProbe4 {
//...
}
To execute code on method entry, define the callback method onEnter() in the probe class.
Only one method named onEnter() can exist in a probe class.
You can omit the onEnter() method if it is not needed for your probe.
The callback can have any number of parameters, including zero.
Each of the parameters must be annotated with one of the following annotations:
onEnter() can be void or return a value.
If onEnter() is not void, the returned value will be accessible in the onReturn(), onUncaughtException() and onUncaughtExceptionExt() callbacks of the same probe class with the help of a parameter annotated with @OnEnterResult.
This enables an effective way of transferring data between the method enter and exit callbacks (the value is passed via the stack).
Example 1:
import com.yourkit.probes.*;
// Instrument calls of method 'findPerson' in class 'com.foo.Person', which has 2 parameters
@MethodPattern("com.foo.Person:findPerson(String, int)")
public class MyProbe {
public static void onEnter(
@Param(1) Object param1,
@Param(2) int param2
) {
//...
}
}
Example 2:
import com.yourkit.probes.*;
// Instrument calls of method(s) 'findPerson' in class 'com.foo.Person', with any signature,
// and print method execution times
@MethodPattern("com.foo.Person:findPerson(*)")
public class MyProbe {
public static long onEnter() {
return System.currentTimeMillis();
}
public static void onReturn(
@OnEnterResult long enterTime
) {
long exitTime = System.currentTimeMillis();
System.out.println("method execution time: " + (exitTime - enterTime));
}
}
Using a callback parameter annotated with @This in a probe applied to a constructor affects the point where the call to onEnter() is injected. Please find details here.
To execute code when a method normally exits (i.e. via return or when a void method body execution completes), define the callback method onReturn() in the probe class.
Only one method named onReturn() can exist in a probe class.
You can omit the onReturn() method if it is not needed for your probe.
The callback can have any number of parameters, including zero.
Each of the parameters must be annotated with one of the following annotations:
onReturn() can be void or return a value.
If onReturn() is void, the original return value will be returned.
The following pseudo-code demonstrates how the callback works:
Before instrumentation:
ReturnType foo() {
bar(); // do something
return result;
}
After instrumentation with void onReturn()
:
ReturnType foo() {
bar(); // do something
SomeProbeClass.onReturn(...);
return result;
}
Declare onReturn() as non-void to change the value returned from the instrumented method.
If onReturn() is non-void, its return type must be the same as the return type of the method to be instrumented. If the types mismatch, the probe will not be applied to the method.
If needed, access the original return value with the help of a @ReturnValue annotated parameter, and return either the original value or a different one.
The capability to change return value has more to do with debugging than with monitoring or profiling, as it affects application logic. Be very careful: inappropriate use can break the application.
After instrumentation with ReturnType onReturn():
ReturnType foo() {
    bar(); // do something
    return SomeProbeClass.onReturn(...); // return value must be ReturnType
}
To execute code when a method terminates because of an uncaught exception (a throwable), define the callback method onUncaughtException() or onUncaughtExceptionExt() in the probe class.
Only one method named onUncaughtException() or onUncaughtExceptionExt() can exist in a probe class.
You can omit these callback methods if uncaught exception handling is not needed for your probe.
onUncaughtException() is intended to monitor possible uncaught exceptions without changing the logic of the instrumented code. After the callback returns, the exception is automatically re-thrown.
onUncaughtExceptionExt() does not automatically re-throw the original exception. This provides an opportunity to alter the behavior of the instrumented code by suppressing the exception and, for a non-void method, returning a value instead.
The capabilities provided by onUncaughtExceptionExt() have more to do with debugging than with monitoring or profiling, as they affect application logic. Be very careful: inappropriate use can break the application.
The callback can have any number of parameters, including zero. Each parameter must be annotated with one of the parameter annotations described below.
onUncaughtException() must be void.
Technically, onUncaughtException() is called inside a try/catch block surrounding the instrumented method body: if onUncaughtException() returns normally, the original exception is automatically re-thrown; if an exception is thrown in the onUncaughtException() body, it is thrown instead of the original exception.
The following pseudo-code demonstrates how the callback works:
Before instrumentation:
ReturnType foo() {
    bar(); // do something
    return result;
}
After instrumentation with onUncaughtException():
ReturnType foo() {
    try {
        bar(); // do something
        return result;
    }
    catch (Throwable e) {
        SomeProbeClass.onUncaughtException(...);
        throw e; // automatically re-throw the original exception
    }
}
The return type of onUncaughtExceptionExt() must be the same as the return type of the method to be instrumented. If the types mismatch, the probe will not be applied to the method.
The following pseudo-code demonstrates how the callback works:
Before instrumentation:
ReturnType foo() {
    bar(); // do something
    return result;
}
After instrumentation with onUncaughtExceptionExt():
ReturnType foo() {
    try {
        bar(); // do something
        return result;
    }
    catch (Throwable e) {
        // may throw an exception or return a value
        return SomeProbeClass.onUncaughtExceptionExt(...); // return value must be ReturnType
    }
}
A callback parameter annotated with @Param(<number>) provides the value of the instrumented method parameter with the given number. Parameters are numbered starting with 1.
A parameter annotated with @Param(<number>) can be used in any callback.
If the callback parameter is used in onReturn(), onUncaughtException() or onUncaughtExceptionExt(), the original value of the instrumented method parameter variable is provided, even if the parameter variable was modified inside the instrumented method body.
However, note that a parameter of a reference type can refer to an object whose state is modified inside the instrumented method body. In this case, if you need the original state, access the parameter in onEnter().
The following code snippet illustrates the rules:
// Class being instrumented
class Foo {
    void bar(StringBuffer buffer, Object obj, int number) {
        buffer.append("some string"); // change object state
        obj = null; // modify parameter variable
        ++number; // modify parameter variable
    }
}
//...
// Probe class which instruments Foo.bar()
@MethodPattern("Foo:bar(java.lang.StringBuffer, Object, int)")
public class MyProbe {
    public static void onEnter(
        @Param(1) StringBuffer buffer,
        @Param(2) Object obj,
        @Param(3) int number
    ) {
        System.out.println(buffer.toString()); // will print original 'buffer' content
        System.out.println(obj); // will print original 'obj'
        System.out.println(number); // will print original 'number'
    }
    public static void onReturn(
        @Param(1) StringBuffer buffer,
        @Param(2) Object obj,
        @Param(3) int number
    ) {
        System.out.println(buffer.toString()); // will print modified 'buffer' content
        System.out.println(obj); // will print original 'obj' (not null)
        System.out.println(number); // will print original 'number'
    }
}
If the parameter number in @Param(<number>) is bigger than the actual number of method parameters, the callback parameter will be assigned null if declared as a reference type, or 0 if declared as a primitive type.
A parameter annotated with @Param(<number>) can be declared as a reference type or as a primitive type.
If the callback parameter is declared as java.lang.Object, it will be assigned the object reference as is if the actual parameter type is a reference type, or a boxed value if the actual parameter type is primitive.
If the callback parameter is declared as some reference type other than java.lang.Object, it will be assigned the actual method parameter value only if it is declared as strictly the same type; otherwise the callback parameter value will be null.
If the callback parameter is declared as a primitive type, it will be assigned the actual method parameter value only if it is declared as strictly the same type; otherwise it will be assigned 0. No type conversions are performed: for example, int will not be cast to long.
Type of callback parameter annotated with @Param   | Actual instrumented method parameter type | Resulting value of the callback parameter
java.lang.Object                                   | any reference type                        | the value as is
java.lang.Object                                   | any primitive type                        | a boxed value
some reference type T1 other than java.lang.Object | same reference type T1                    | the value as is
some reference type T1 other than java.lang.Object | any reference or primitive type T2 != T1  | null
some primitive type T1                             | same primitive type T1                    | the value as is
some primitive type T1                             | any reference or primitive type T2 != T1  | 0
Note: the type matching rules for @Param are similar to the ones for @ReturnValue and differ from the rules for @This.
A callback parameter annotated with @Params provides the values of all parameters passed to the instrumented method.
A parameter annotated with @Params can be used in any callback and must be declared as java.lang.Object[].
It will be assigned a newly created array whose length equals the number of method parameters: the first parameter is stored at index 0, the second at index 1, etc. If the instrumented method has no parameters, the parameter annotated with @Params will be assigned an empty array.
Parameters of reference types are stored as is; parameters of primitive types are stored boxed.
Note that each time a callback with a parameter annotated with @Params is invoked, a new array is created. For performance reasons, use @Param(<number>) instead whenever possible.
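To make the ordering and boxing rules concrete, the following standalone snippet builds the kind of array a callback parameter annotated with @Params would receive for a call like findPerson("Smith", 42). It is plain Java and does not use the profiler API; the ParamsBoxingDemo class and its paramsFor() helper are purely illustrative.

```java
public class ParamsBoxingDemo {
    // Simulates the Object[] delivered via @Params for a method
    // findPerson(String name, int age) called as findPerson("Smith", 42):
    // reference-type parameters are stored as is, primitives are boxed.
    static Object[] paramsFor(String name, int age) {
        return new Object[] { name, age }; // 'age' is autoboxed to Integer
    }

    public static void main(String[] args) {
        Object[] params = paramsFor("Smith", 42);
        System.out.println(params.length);                // 2
        System.out.println(params[0]);                    // Smith
        System.out.println(params[1] instanceof Integer); // true
    }
}
```

Note that a fresh array is built on every call, which is exactly why the documentation recommends @Param(<number>) where possible.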
A callback parameter annotated with @This will be assigned a reference to the object whose method is being executed.
A callback parameter annotated with @This should be declared as a reference type capable of storing a reference to the objects whose method(s) the probe is applied to.
To make the probe applicable to any method (even static ones!), declare the callback parameter as java.lang.Object. If needed, cast the value to the appropriate class in the callback body.
If the callback parameter is declared as some reference type T other than java.lang.Object, the probe callbacks will only be called for methods of class T or of a class which extends T or implements T. In particular, this means that a callback with a parameter annotated with @This declared as T != java.lang.Object will never be called for a static method.
A parameter annotated with @This can be used in any callback. However, if more than one probe callback has parameters annotated with @This, all of them must be declared as the same type.
@MethodPattern("*:findPerson(*)")
public class GoodProbe1 {
    public static void onEnter() {/*...*/} // called only for methods of Person (because of onReturn()'s parameter)
    public static void onReturn(@This Person person) {/*...*/} // called only for methods of Person
}
@MethodPattern("*:foo(*)")
public class GoodProbe2 {
    public static void onEnter(@This Object _this) {/*...*/} // called for any method matching the pattern
    public static void onReturn(@This Object _this) {/*...*/} // called for any method matching the pattern
}
@MethodPattern("*:bar(*)")
public class BadProbe2 { // the probe is invalid - @This type mismatch
    public static void onEnter(@This Object _this) {/*...*/}
    public static void onUncaughtException(@This String _this) {/*...*/}
}
If the callback is applied to a static method, the parameter annotated with @This will be null.
If a probe is applied to a constructor, it may be important to account for all activity in super constructors, e.g. to properly measure the constructor execution time, or to handle nested events if the constructor execution is considered a lasting event.
However, the bytecode verifier forbids access to the object reference before the constructor of its superclass (super(...);) or another constructor of the same class (this(...);) is invoked.
Hence, using a callback parameter annotated with @This in a probe applied to a constructor affects the point where the call to onEnter() is injected, as well as the scope of exceptions monitored via onUncaughtException() or onUncaughtExceptionExt(). See the table below for details.
 | If @This is used in onEnter(), onUncaughtException() or onUncaughtExceptionExt(), or if @This declared as non-java.lang.Object is used in onReturn() | Otherwise
onEnter() injection point | onEnter() will be called right after the call to super(...) or this(...) | onEnter() will be injected at the very start of the constructor, i.e. before the call to super(...) or this(...)
onUncaughtException() or onUncaughtExceptionExt() monitored scope | The scope of monitored exceptions will start right after the call to super(...) or this(...) | The scope of monitored exceptions will be the whole method, i.e. will include the call to super(...) or this(...)
In particular, this means that using @This may prevent monitoring of what happens inside super(...) or this(...).
If you need both onEnter() and onReturn() and want to avoid this limitation, do not use @This in onEnter(); use it in onReturn() only, and declare it as java.lang.Object.
A callback parameter annotated with @ClassRef will be assigned a reference to the class where the method being executed is defined.
A callback parameter annotated with @ClassRef should be declared as java.lang.Class and can be used in any callback.
Use @ClassRef when applying a probe to static methods, because @This is null in that case.
When applying a probe to non-static methods, you can either use @ClassRef or call the getClass() method of the object passed via @This. However, please note the difference: the class returned by getClass() will be the actual class of the object whose method is running, while @ClassRef will give the class where the method is defined. The results differ if the method is defined in a base class and is not overridden in derived classes.
Example:
package com.foo;
class Base {
    void f() {...}
    void g() {...}
}
class Derived extends Base {
    // f() is overridden, g() is not overridden
    void f() {...}
}
...
@MethodPattern("com.foo.*:*(*)")
public class MyProbe {
    public static void onEnter(@This Base _this, @ClassRef Class _class) {
        System.out.println(_this.getClass());
        System.out.println(_class);
    }
}
...
Derived obj = new Derived();
// Applying MyProbe to this call will print:
// class com.foo.Derived
// class com.foo.Derived
obj.f();
// Applying MyProbe to this call will print:
// class com.foo.Derived
// class com.foo.Base
obj.g();
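The same distinction can be reproduced with plain reflection, independent of the profiler: getClass() yields the runtime class of the object, while Method.getDeclaringClass() yields the class where the method is defined, which corresponds to what @ClassRef reports. The ClassRefDemo class below is an illustrative stand-in, not part of the probe API.

```java
import java.lang.reflect.Method;

public class ClassRefDemo {
    public static class Base {
        public void f() {}
        public void g() {}
    }
    public static class Derived extends Base {
        @Override public void f() {} // g() is not overridden
    }

    public static void main(String[] args) throws Exception {
        Base obj = new Derived();
        // getClass() always yields the runtime class of the object:
        System.out.println(obj.getClass().getSimpleName());        // Derived
        // getDeclaringClass() yields the class where the method is defined:
        Method f = obj.getClass().getMethod("f");
        Method g = obj.getClass().getMethod("g");
        System.out.println(f.getDeclaringClass().getSimpleName()); // Derived
        System.out.println(g.getDeclaringClass().getSimpleName()); // Base
    }
}
```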
A callback parameter annotated with @MethodName will be assigned the name of the method being executed. For constructors, the name will be <init>.
A callback parameter annotated with @MethodName should be declared as java.lang.String and can be used in any callback.
A callback parameter annotated with @MethodParameterTypes will be assigned the list of parameter types of the method being executed.
A callback parameter annotated with @MethodParameterTypes should be declared as java.lang.String and can be used in any callback.
A callback parameter annotated with @MethodSignature will be assigned the signature of the method being executed.
A callback parameter annotated with @MethodSignature should be declared as java.lang.String and can be used in any callback.
A callback parameter annotated with @OnEnterResult will be assigned the value returned from onEnter() defined in the same probe class. This enables an efficient way to transfer data between the method enter and exit callbacks (the value is stored on the stack).
A parameter annotated with @OnEnterResult can be used in onReturn(), onUncaughtException() and onUncaughtExceptionExt(), and only if the probe class defines a non-void onEnter().
A parameter annotated with @OnEnterResult must be declared with the same type as the return type of onEnter(). If onEnter() is declared void, @OnEnterResult cannot be used.
A callback parameter annotated with @ReturnValue provides the instrumented method's return value.
A parameter annotated with @ReturnValue can be used in onReturn() only.
A parameter annotated with @ReturnValue can be declared as a reference type or as a primitive type.
If the callback parameter is declared as java.lang.Object, it will be assigned the object reference as is if the actual return type is a reference type, or a boxed value if the actual return type is primitive.
If the callback parameter is declared as some reference type other than java.lang.Object, it will be assigned the actual return value only if it is declared as strictly the same type; otherwise the callback parameter value will be null.
If the callback parameter is declared as a primitive type, it will be assigned the actual return value only if it is declared as strictly the same type; otherwise it will be assigned 0. No type conversions are performed: for example, int will not be cast to long.
Type of callback parameter annotated with @ReturnValue | Actual instrumented method return value type | Resulting value of the callback parameter
java.lang.Object                                       | any reference type                           | the value as is
java.lang.Object                                       | any primitive type                           | a boxed value
java.lang.Object                                       | void                                         | null
some reference type T1 other than java.lang.Object     | same reference type T1                       | the value as is
some reference type T1 other than java.lang.Object     | any reference or primitive type T2 != T1     | null
some reference type T1 other than java.lang.Object     | void                                         | null
some primitive type T1                                 | same primitive type T1                       | the value as is
some primitive type T1                                 | any reference or primitive type T2 != T1     | 0
some primitive type T1                                 | void                                         | 0
Note: the type matching rules for @ReturnValue are similar to the ones for @Param and differ from the rules for @This.
A callback parameter annotated with @ThrownException provides the uncaught exception instance thrown in the instrumented method.
A parameter annotated with @ThrownException can only be used in onUncaughtException() or onUncaughtExceptionExt(), and must be declared as java.lang.Throwable.
The following rules define whether a probe class will be applied to a particular method.
On probe registration, the probe's annotation, callbacks and their parameters are validated. If any violations of the specification are found, the probe is not registered and thus will not be applied to any methods.
The probe will be applied to methods of classes which have already been loaded at the moment of probe registration, as well as of classes loaded afterwards.
The probe's callback(s) are applied to methods that match the probe's method pattern.
If there is a mismatch in the number and/or type of parameters of an instrumented method and callback parameters annotated with @Param, the callback will be called anyway, passing null or 0 for the missing parameters. Please read about @Param for details.
If there is a mismatch in the return type of an instrumented method and a callback parameter annotated with @ReturnValue, the callback will be called anyway, passing null or 0 as the callback parameter value. Please read about @ReturnValue for details.
If callbacks have a parameter annotated with @This not declared as java.lang.Object, the callbacks will be called for instances of the corresponding classes only. Please read about @This for details.
Also, there are special requirements for non-void onReturn() and onUncaughtExceptionExt() callbacks. Please read the callback descriptions for details.
If a probe is unregistered at runtime, all methods which have been instrumented with its callbacks will be returned to their original state.
Data storage allows you to uniformly record information for each event.
This information will be available as telemetry when you connect to the monitored application, as well as saved in captured snapshots.
The UI provides a rich set of tools to analyze the gathered information, or to export it for external processing.
Although the storage is intended for gathering information, you can also store data your own way if you wish, e.g. write it to your application's log, to a file, or simply to the console.
Conceptually, the storage is based on the relational database model.
The information is stored in tables. Each table has a fixed number of columns and an arbitrary number of rows.
Each column stores values of a particular type, which is specified on table creation. The supported types are int, long and String.
A column can also refer to a particular row in another table; such columns are called foreign keys.
Each column has a name, which is free-form text describing the column content when table data is presented to the user.
The number of columns, their types and names are specified on table creation and cannot be changed afterwards.
Each row stores information for all columns according to their data types.
When a table is created, it contains no rows. New rows can be added and existing rows updated; row removal is not supported.
Rows are numbered starting with 1. This number is called the row index.
The number of rows is limited via the startup option probetablelengthlimit.
Each table row contains information on some event.
There are two kinds of events: lasting events and point events.
Whether a particular event is a lasting event or a point event depends on your tasks.
For example, you may think of a Foo.bar() method execution as a lasting event, which starts when the method enters and ends when the method exits, if you want to know how long the method execution takes, or to discover other events which happen while Foo.bar() is running, e.g. what kind of I/O it performs, whether it accesses a database or other resources, etc.
If you are only interested in the mere fact of Foo.bar() invocations, you may think of them as point events.
On table creation, you decide which kind of events the table is intended for, by specifying appropriate parameters of the table object constructor. Each table stores events of one kind or the other.
A lasting event starts when a table row is created with Table.createRow() and ends when the row is closed with Table.closeRow().
Table.closeRow() must not be used for tables with point events.
Thread, stack trace and/or times are recorded for all events on row creation, and additionally on row closing for lasting events.
For lasting events recorded as table rows, it is possible to find rows in other tables corresponding to events which happened during the lasting event. For example, if processing of JSP pages is recorded as lasting events in one table, and database access events are recorded in another table, it is possible to learn which SQL activities happen during the processing of a particular JSP page.
To create a table, create an instance of class com.yourkit.probes.Table. You will need that instance to create rows, as well as to close table rows for lasting events.
Table columns are described with instances of the classes com.yourkit.probes.IntColumn, com.yourkit.probes.LongColumn, com.yourkit.probes.StringColumn and com.yourkit.probes.ForeignKeyColumn.
The column instances are passed as parameters to the com.yourkit.probes.Table constructor. They are also used to set or modify values at a specified row.
Please see the Javadoc for details.
Please also read about compiling probe classes.
Examples of the API usage can be found in the built-in probes source code.
To avoid infinite growth of collected data, it is possible to limit the number of events recorded in the profiler agent.
When writing a probe: the table API method Table.setMinimumRecordedLastingEventTime() suppresses recording of uninteresting short events. See the method's Javadoc for details. The built-in probe AwtEvents is an example of using this method.
In the profiler agent: the startup option probetablelengthlimit=<number of rows> limits the number of rows stored by the profiler agent per table. If a table reaches the limit, it will no longer be populated until cleared. The default value is 20000.
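For illustration, assuming the agent is loaded with -agentpath and accepts its startup options after the '=' sign (the agent library path placeholder and the application name below are hypothetical; see the startup options documentation for the exact path on your platform), the limit could be lowered like this:

```
java -agentpath:<profiler agent library path>=probetablelengthlimit=10000 MyApplication
```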
The "Performance charts" tab shows all telemetry graphs in the same place, one above the other, with their time axes synchronized.
You can easily correlate higher-level events with the basic telemetry graphs, as well as simultaneously see basic telemetry graphs from different tabs, e.g. CPU and Memory.
In addition to the basic telemetry graphs, high-level statistics for EE and SE applications are presented:
- database activity (MongoDB, Cassandra, HBase)
- sockets: how many connections were opened with accept() or connect() and how many were closed, per second
- file I/O (FileInputStream, FileOutputStream, RandomAccessFile)
Typical problems can be recognized with the help of the "Inspections" feature. Inspections enable automatic high-level analysis of profiled application snapshots. Each inspection automatically detects a specific issue. Performing this type of analysis by hand would be a very complicated (if at all possible) task.
The "Inspections" tab offers the following kinds of inspections:
The triggers allow you to configure actions to be automatically performed on the following events:
The possible actions include:
When you are connected to the profiled application, click the corresponding toolbar button to view or change the triggers:
To specify triggers to be applied from the profiled application start, use the startup option triggers=<path>, which points to a file with the trigger description (see below).
If the option is not specified, the trigger description is read from <user home>/.yjp/triggers.txt, where the user home corresponds to the account under which the profiled application is launched.
By default, that file does not exist, thus no triggers are applied.
To get or set the triggers programmatically, use the following profiler API methods:
The trigger description file is a text file in UTF-8. It contains a list of events and their corresponding actions. The lines describing actions start with space characters:
event_name [parameters...]
  action_name [parameters...]
  action_name [parameters...]
  ...
event_name [parameters...]
  action_name [parameters...]
  action_name [parameters...]
  ...
...
Instead of forming the description manually, please use the "Edit Triggers" dialog (see above): configure the necessary triggers and actions, then use the export actions in the popup menu.
You can instruct the profiler to automatically capture a memory snapshot when used memory reaches a specified threshold.
Constantly increasing memory usage often indicates a memory leak. This feature greatly simplifies the detection of such situations in long-running applications such as servers. One of the benefits is that, after being triggered, the feature requires no further human interaction.
Please also consider the ability to dump memory on OutOfMemoryError. The JVM's internal lightweight dumping algorithm is used. This algorithm is specially designed to work in low memory conditions, when JVMTI, the JVM's general purpose profiling interface used by profilers, may fail due to low resources.
To toggle this feature, connect to the profiled application and press the button shown on the picture below to edit the triggers:
Use the corresponding template menu to add the trigger and actions, then edit the parameters if necessary:
Then, when the threshold is reached, a memory snapshot will be captured, a notification will be shown in the UI, and the feature will deactivate. You can enable it again afterwards.
Use the startup option usedmem or usedmemhprof.
Export the template using the corresponding popup menu item, and pass it to the API method com.yourkit.api.Controller.setTriggers().
Please also consider the ability to capture memory snapshots on high memory usage or on OutOfMemoryError.
You can instruct the profiler to capture a snapshot after a specified period of time.
Constantly increasing memory usage often indicates a memory leak. This feature greatly simplifies the detection of such situations in long-running applications such as servers. One of the benefits is that, after being triggered, the feature requires no further human interaction.
To toggle this feature, connect to the profiled application and press the button shown on the picture below to edit the triggers:
Use the corresponding template menu to add the trigger and actions, then edit the parameters if necessary:
Then, every time the specified period elapses, a memory snapshot will be created and the following notification will be shown in the UI:
Use the startup option periodicperf, periodicmem or periodichprof.
Export the template using the corresponding popup menu item, and pass it to the API method com.yourkit.api.Controller.setTriggers().
The "Summary" tab provides an overview of the JVM properties and parameters of the profiled application, as well as a summary of the application telemetry information.
The tab is available when the profiler is connected to the profiled application, or when you open a saved snapshot.
If the profiled application is obfuscated, YourKit Java Profiler can automatically restore the original names of classes, fields and methods if you specify the path to the obfuscation log file (1). The deobfuscator can be configured for a specific snapshot, as well as when you are connected to the profiled application (in this case the deobfuscator will apply to live results and will be chosen by default for captured snapshots).
A snapshot annotation is a free-form text description stored directly in the snapshot file.
The annotation can be viewed and edited on the "Summary" tab of a snapshot (2), on the Welcome screen:
... and from the "Open Snapshot" dialog:
Snapshot annotations are supported for profiler-format snapshots only, i.e. .snapshot files.
An HPROF snapshot cannot be annotated because the HPROF file format does not allow records with arbitrary content.
IDE integration provides:
Please find details for each particular supported IDE:
To enable the integration, you should install the profiler plugin.
To install the plugin, run the profiler.
Use the Profile from within IDE... action on the Welcome screen or in the "Tools" menu, select "Eclipse" and follow the instructions.
The wizard will automatically open these instructions in your browser. Please follow them to complete the plugin installation.
After the plugin is installed, Profile actions appear in the toolbar ...
... in the main menu ...
... and in context menus.
Additional launch parameters can be configured with Run | Profile..., on the YourKit Java Profiler tab.
The Profile action starts the profiled application and connects to it in the profiler UI (unless the opposite behavior is configured). The output of the profiled application appears in the console, the same as for the Run action.
This topic is not applicable to macOS, where the profiler agent is a universal binary.
On a 64-bit machine, the Profile action must know whether the JVM launching the profiled application is 32-bit or 64-bit, in order to supply the appropriate profiler agent version.
By default, the plugin attempts to detect the JVM kind automatically from the information available in Eclipse.
For particular run configurations, automatic detection may not be possible. In this case, Profile will fail with an error like
Error occurred during initialization of VM. Could not find agent library
printed in the Eclipse console.
If that happens, use the "32-bit or 64-bit JRE Selection" section to specify the Java bitness explicitly.
When profiling applications, you usually need to browse the related source code to understand the performance problems at hand. After the problem is located, you edit the source code to fix it.
Instead of forcing you to tell the profiler where the source code of your application is located and showing the code in a feature-restricted custom-made "editor surrogate", YourKit provides an alternative approach. When you have a method, class or field selected in the profiler UI, just use Tools | Open in IDE (F7) to automatically open the underlying source code in the editor of your IDE - the best place to browse and edit code.
The navigation action works on the current selection and is available in both CPU and memory views. Note the extremely useful ability to locate the code of anonymous classes and their methods, which is very difficult to do manually.
Please follow instructions for your IDE:
If you have already installed the plugin from the YourKit Java Profiler version you use (i.e. with the same update site URL - see below), just make sure it is up to date: use Help | Check for Updates in Eclipse's main menu.
Use Help | Install New Software... in Eclipse's main menu:
Press "Add..." button:
Copy appropriate URL:
Profiler version | URL |
YourKit Java Profiler 2019.1 | https://www.yourkit.com/download/yjp2019_1_for_eclipse/ |
YourKit Java Profiler 2018.04 | https://www.yourkit.com/download/yjp2018_04_for_eclipse/ |
YourKit Java Profiler 2017.02 | https://www.yourkit.com/download/yjp2017_02_for_eclipse/ |
YourKit Java Profiler 2016.02 | https://www.yourkit.com/download/yjp2016_for_eclipse/ |
YourKit Java Profiler 2015 | https://www.yourkit.com/download/yjp2015_for_eclipse/ |
YourKit Java Profiler 2014 | https://www.yourkit.com/download/yjp2014_for_eclipse/ |
... and paste it to the "Location" field:
Important: the Eclipse update manager needs Internet access. If your computer sits behind a proxy server, you will need to configure Eclipse accordingly: use Window | Preferences in Eclipse's main menu, select General -> Network Connections and enter the host name or IP and the port of your proxy server.
If there is no Internet access, use the bundled update site archive:
<Profiler Installation Directory>/lib/eclipse-plugin/yjp<version>_for_eclipse.zip
Select YourKit Java Profiler plugin and press "Next":
After restarting Eclipse, you should see Profile action in the toolbar:
Read about using the plugin...
If you have already installed the plugin from the YourKit Java Profiler version you use (i.e. with the same update site URL - see below), just make sure it is up to date: use Help | Check for Updates... in MyEclipse's main menu.
Use Help | Install from Site... in MyEclipse's main menu:
Press "Add..." button:
Copy appropriate URL:
Profiler version | URL |
YourKit Java Profiler 2019.1 | https://www.yourkit.com/download/yjp2019_1_for_eclipse/ |
YourKit Java Profiler 2018.04 | https://www.yourkit.com/download/yjp2018_04_for_eclipse/ |
YourKit Java Profiler 2017.02 | https://www.yourkit.com/download/yjp2017_02_for_eclipse/ |
YourKit Java Profiler 2016.02 | https://www.yourkit.com/download/yjp2016_for_eclipse/ |
YourKit Java Profiler 2015 | https://www.yourkit.com/download/yjp2015_for_eclipse/ |
YourKit Java Profiler 2014 | https://www.yourkit.com/download/yjp2014_for_eclipse/ |
... and paste it into the "Location" field:
Important: the MyEclipse update manager needs Internet access. If your computer sits behind a proxy server, you will need to configure MyEclipse accordingly: use Window | Preferences in MyEclipse's main menu, select General -> Network Connections and enter the host name or IP address and port of your proxy server.
If there is no Internet access, use the bundled update site archive:
<Profiler Installation Directory>/lib/eclipse-plugin/yjp<version>_for_eclipse.zip
Select the YourKit Java Profiler plugin and press "Next":
After restarting MyEclipse, you should see the Profile action in the toolbar:
Read about using the plugin...
If you have already installed the plugin from the YourKit Java Profiler version you are using (i.e. one with the same update site URL, see below), just make sure it is up to date: use Help | MyEclipse Configuration Center | Software... in MyEclipse's main menu.
Use Help | MyEclipse Configuration Center in MyEclipse's main menu:
Switch to Software and press the "Add Site" button:
Copy the appropriate URL:
Profiler version | URL |
YourKit Java Profiler 2019.1 | https://www.yourkit.com/download/yjp2019_1_for_eclipse/ |
YourKit Java Profiler 2018.04 | https://www.yourkit.com/download/yjp2018_04_for_eclipse/ |
YourKit Java Profiler 2017.02 | https://www.yourkit.com/download/yjp2017_02_for_eclipse/ |
YourKit Java Profiler 2016.02 | https://www.yourkit.com/download/yjp2016_for_eclipse/ |
YourKit Java Profiler 2015 | https://www.yourkit.com/download/yjp2015_for_eclipse/ |
YourKit Java Profiler 2014 | https://www.yourkit.com/download/yjp2014_for_eclipse/ |
... and paste it into the "URL" field.
Please also specify an arbitrary name for the update site, e.g. "YourKit":
Important: the MyEclipse update manager needs Internet access. If your computer sits behind a proxy server, you will need to configure MyEclipse accordingly: use Window | Preferences in MyEclipse's main menu, select General -> Network Connections and enter the host name or IP address and port of your proxy server.
If there is no Internet access, use the bundled update site archive:
<Profiler Installation Directory>/lib/eclipse-plugin/yjp<version>_for_eclipse.zip
In the pop-up, press "Add to Profile...":
Press "Apply 1 change...":
Accept the license and press "Next":
After restarting MyEclipse, you should see the Profile action in the toolbar:
Read about using the plugin...
To enable integration, you should install the profiler plugin.
To install the plugin, run the profiler.
Use the Profile from within IDE... action on the Welcome screen or in the "Tools" menu, select "IntelliJ IDEA" and follow the instructions.
After the plugin is installed, the Profile actions are added to the main toolbar ...
... to the main menu ...
... and to context popup menus:
To configure profiler parameters open the Run/Debug Configurations dialog, select the configuration and select the Startup/Connection tab.
The Profile action starts the profiled application and connects to it in the profiler UI (unless the opposite behavior is configured). The output of the profiled application appears in the console, the same as for the Run action.
This topic is not applicable to macOS, where the profiler agent is a universal binary.
On a 64-bit machine, the Profile action must know whether the JVM for launching the profiled application is 32-bit or 64-bit, in order to supply appropriate profiler agent version.
By default, the plugin attempts to automatically detect the JVM kind by obtaining available information from IDEA.
For particular run configurations it may happen that automatic detection is not possible.
In this case Profile will fail with an error like
Error occurred during initialization of VM. Could not find agent library
printed in the IDEA console.
In this case, use the "32-bit or 64-bit JRE Selection" section to specify the Java bitness explicitly.
When profiling applications, you usually need to browse the related source code to understand the performance problems at hand. After the problem is located, you edit the source code to fix it.
Instead of forcing you to tell the profiler where the source code of your application is located and showing the code in a feature-restricted, custom-made "editor surrogate", YourKit provides an alternative approach. When you have a method, class or field selected in the profiler UI, just use Tools | Open in IDE (F7) to automatically open the underlying source code in the editor of your IDE - the best place to browse and edit code.
The navigation action works on the current selection and is available in both CPU and memory views. Take note of the extremely useful ability to locate the code of anonymous classes and their methods, which is a very difficult thing to do manually.
To enable integration, you should install the profiler plugin.
To install the plugin, run the profiler.
Use the Profile from within IDE... action on the Welcome screen or in the "Tools" menu, select "NetBeans" and follow the instructions.
After the plugin is installed, Profile actions are added to the main toolbar ...
... and to context menu of the editor:
To configure profiling parameters in NetBeans use Tools | Options, section YourKit Profiler:
The Profile action starts the profiled application and connects to it in the profiler UI (unless the opposite behavior is configured). The output of the profiled application appears in the console, the same as for the Run action.
This topic is not applicable to macOS, where the profiler agent is a universal binary.
On a 64-bit machine, the Profile action must know whether the JVM for launching the profiled application is 32-bit or 64-bit, in order to supply appropriate profiler agent version.
By default, the plugin attempts to automatically detect the JVM kind by obtaining available information from NetBeans.
For particular run configurations it may happen that automatic detection is not possible.
In this case Profile will fail with an error like
Error occurred during initialization of VM. Could not find agent library
printed in the NetBeans console.
In this case, use the "32-bit or 64-bit JRE Selection" section to specify the Java bitness explicitly.
When profiling applications, you usually need to browse the related source code to understand the performance problems at hand. After the problem is located, you edit the source code to fix it.
Instead of forcing you to tell the profiler where the source code of your application is located and showing the code in a feature-restricted, custom-made "editor surrogate", YourKit provides an alternative approach. When you have a method, class or field selected in the profiler UI, just use Tools | Open in IDE (F7) to automatically open the underlying source code in the editor of your IDE - the best place to browse and edit code.
The navigation action works on the current selection and is available in both CPU and memory views. Take note of the extremely useful ability to locate the code of anonymous classes and their methods, which is a very difficult thing to do manually.
To enable integration, you should install the profiler plugin.
To install the plugin, run the profiler.
Use the Profile from within IDE... action on the Welcome screen or in the "Tools" menu, select "JDeveloper" and follow the instructions.
After the plugin is installed, Profile actions are added to the main menu ...
... and to context menus:
...and in the main toolbar:
You can configure profiling parameters in the Project Properties dialog's node YourKit Java Profiler.
The Profile action starts the profiled application and connects to it in the profiler UI (unless the opposite behavior is configured). The output of the profiled application appears in the console, the same as for the Run action.
This topic is not applicable to macOS, where the profiler agent is a universal binary.
On a 64-bit machine, the Profile action must know whether the JVM for launching the profiled application is 32-bit or 64-bit, in order to supply appropriate profiler agent version.
By default, the plugin attempts to automatically detect the JVM kind by obtaining available information from JDeveloper.
For particular run configurations it may happen that automatic detection is not possible.
In this case Profile will fail with an error like
Error occurred during initialization of VM. Could not find agent library
printed in the JDeveloper console.
In this case, use the "32-bit or 64-bit JRE Selection" section to specify the Java bitness explicitly.
When profiling applications, you usually need to browse the related source code to understand the performance problems at hand. After the problem is located, you edit the source code to fix it.
Instead of forcing you to tell the profiler where the source code of your application is located and showing the code in a feature-restricted, custom-made "editor surrogate", YourKit provides an alternative approach. When you have a method, class or field selected in the profiler UI, just use Tools | Open in IDE (F7) to automatically open the underlying source code in the editor of your IDE - the best place to browse and edit code.
The navigation action works on the current selection and is available in both CPU and memory views.
Note: navigation to inner classes is not possible because JDeveloper does not provide an appropriate API.
There are two ways to measure time:
You can customize CPU vs wall time measurement in the CPU sampling settings and CPU tracing settings. By default, sampling measures wall time for I/O methods only, while tracing measures it for all methods.
Monitor profiling measures wall time for all waits and blocks.
Filters help you to ignore methods and instances of classes which you are not interested in, such as standard Java classes, libraries, framework internals, application server core classes etc., so you can more easily focus on the profiled application's own classes.
While reviewing profiling results in a snapshot or in live views, you can use different filters or use none at all. In other words, you do not need to start a new profiling session to start or stop using filters. Views are automatically updated when filter settings are changed.
Filters reduce the depth of call trees and length of stack traces, by skipping successive calls of methods from filtered classes, so you can more easily see the methods of the profiled application.
Filters are applied to views where method call stacks are shown, as well as to hot spot and method list views.
Non-filtered methods are marked with a filled arrow. Filtered methods have an outlined arrow:
Some automatic inspections use the filter settings to focus on potential problems in the profiled application's own code.
A quick way to turn applying the configured filters on or off is to use Settings | Collapse Filtered Calls.
Use Settings | Filters... in the main menu to configure filters.
Each filter is specified as a list of class or method patterns to be filtered, one pattern per line.
To filter all methods in given class(es), use this format:
<fully qualified class name>
To filter particular methods in given class(es), use this format:
<fully qualified class name> : <method name> ( <comma-separated parameter types> )
Wildcards ('*') are accepted.
foo.bar.MyClass
- filter all methods of given class
foo.bar.MyClass:*(*)
- same as above
bar.*
- filter methods in all matching classes
bar.* : print*(*)
- filter all methods from bar.* with name starting with 'print' and any number of parameters
* : toString()
- filter toString() in all classes
com.foo.* : <init>(int, String)
- filter constructors of classes in com.foo.* with given signature
To specify classes or methods which must not be filtered, prepend the pattern with '+'.
Example: filter classes in packages 'foo' and 'bar' (with subpackages), but not in package 'bar.myclasses' (with subpackages):
foo.*
bar.*
+bar.myclasses.*
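To make the wildcard semantics above concrete, here is a small illustrative sketch. The `matches` helper is hypothetical (it is not part of the profiler); it simply treats '*' as "any sequence of characters", the way the class patterns above do, by converting a pattern to a regular expression:

```shell
# Hypothetical helper illustrating the class-pattern wildcard:
# '*' matches any character sequence, '.' is taken literally.
matches() {  # usage: matches <pattern> <fully qualified class name>
  regex="^$(printf '%s' "$1" | sed -e 's/[.]/\\./g' -e 's/[*]/.*/g')$"
  printf '%s' "$2" | grep -Eq "$regex"
}

matches 'bar.*' 'bar.impl.Printer'   && echo "bar.impl.Printer is filtered"
matches 'foo.bar.MyClass' 'foo.bar.MyClass' && echo "exact match is filtered"
matches 'bar.*' 'foo.Other'          || echo "foo.Other is not filtered"
```

The same idea extends to the method patterns, where the wildcard may also appear in the method name and parameter list.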
The snapshot directory is the directory where snapshot files are created.
By default, the snapshot directory is <user home>/Snapshots
You may want to change the snapshot directory location. For example, the user home can be located on a disk with insufficient free space, or on a network drive with slow access. To change the snapshot directory, use Settings | Snapshot Directory... in the main menu.
This is an advanced topic. The following text contains technical details that you do not normally have to know to profile your applications.
The action Settings | Snapshot Directory... stores the specified directory in the file <user home>/.yjp/snapshotdir.txt. This is a text file in UTF-8 encoding. The file may not exist until you customize the directory for the first time.
When the profiler UI and the profiled application run under the same user, the situation is very simple: snapshots are stored in the directory specified in <user home>/.yjp/snapshotdir.txt, or, if the file does not exist or defines an invalid directory, in the default location <user home>/Snapshots.
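The same-user lookup rule just described can be sketched in a few lines of shell. This is only an illustration of the fallback logic, run against a throwaway directory standing in for the real user home (the path names are invented for the example):

```shell
# Simulate a user home with a configured snapshot directory.
FAKE_HOME="$(mktemp -d)"
mkdir -p "$FAKE_HOME/.yjp" "$FAKE_HOME/fast-disk"
printf '%s' "$FAKE_HOME/fast-disk" > "$FAKE_HOME/.yjp/snapshotdir.txt"

# Resolution: prefer <home>/.yjp/snapshotdir.txt when it names a
# usable directory, otherwise fall back to <home>/Snapshots.
CONF="$FAKE_HOME/.yjp/snapshotdir.txt"
DIR="$FAKE_HOME/Snapshots"                 # the default location
if [ -f "$CONF" ]; then
  CANDIDATE="$(cat "$CONF")"
  # use the configured directory only if it exists and is writable
  [ -d "$CANDIDATE" ] && [ -w "$CANDIDATE" ] && DIR="$CANDIDATE"
fi
echo "Snapshots would be stored in: $DIR"
```

Deleting or emptying snapshotdir.txt makes the resolution fall back to the default <home>/Snapshots, which matches the behavior described above.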
When the profiled application runs under a different user (e.g. it is started as a service), the profiler UI and the profiler agent loaded as a part of the profiled application will have different home directories and different access rights to the file system. As a result, the directory which you have configured in the profiler UI can be inaccessible for the profiled application, and your setting will be ignored.
Please find below details on the order in which the snapshot directory is chosen in different cases.
We will refer to the user which runs the profiler UI as the UI user, and to the user which runs the profiled application as the agent user. These can be the same user or different users, as mentioned above.
The basic idea is: if possible, use the directory configured in the profiler UI; otherwise, use the settings in the profiled application's user home.
Performance snapshots of local or remote applications captured from within the profiler UI, transferred memory snapshots of remote applications, or snapshots unpacked when opening ZIP/GZIP archives: the snapshot is created by the UI user in
1. the directory specified in <UI user home>/.yjp/snapshotdir.txt, if the file contains a directory to which the UI user has read and write access;
2. otherwise, <UI user home>/Snapshots.
The snapshot is created by the agent user in
1. the directory given with the 'dir' startup option of the profiled application, if specified;
2. otherwise, the directory specified in <UI user home>/.yjp/snapshotdir.txt, if the file exists and contains a directory to which the agent user has read and write access;
3. otherwise, <UI user home>/Snapshots, if the agent user has read and write access to it;
4. otherwise, the directory specified in <agent user home>/.yjp/snapshotdir.txt, if the file contains a directory to which the agent user has read and write access;
5. otherwise, <agent user home>/Snapshots.
The snapshot is created by the agent user in
1. the directory given with the 'dir' startup option of the profiled application, if specified;
2. otherwise, the directory specified in <agent user home>/.yjp/snapshotdir.txt, if the file contains a directory to which the agent user has read and write access;
3. otherwise, <agent user home>/Snapshots.
You can export all reports and data to:
Use File | Export to... (Ctrl+S) to export current view:
To export telemetry data, right-click a graph to invoke a popup menu:
The File | Copy To Clipboard... (Ctrl+C or another platform specific shortcut) action copies the text of the selected row in all trees and lists.
You can automatically export some basic views using the following command:
java -jar <Profiler Installation Directory>/lib/yourkit.jar -export <snapshot file> <target directory>
Note: the target directory where the resulting files will be created must exist when you issue the command.
The following views can be exported:
By default, all the views are exported, if the corresponding data is present in the snapshot.
To export only particular views, specify one or several of the following system properties:
export.class.list
export.class.loaders
export.method.list.cpu
export.call.tree.cpu
export.method.list.alloc
export.method.list.gc
export.summary
export.probes
export.charts
If at least one of the properties is specified, only specified views will be exported. If none of the properties is specified, all available views will be exported.
By default, filters are not applied when exporting call trees and method lists, in order to provide all available profiling information for further processing. Thus the results may look different than in the UI, where the filters are applied by default. To apply the filters to the exported results, specify the system property
export.apply.filters
Example:
java -Dexport.method.list.cpu -Dexport.class.list -Dexport.apply.filters -jar /usr/yjp/lib/yourkit.jar -export foo.snapshot outputDir
- export CPU method list and class list only, and apply the filters
java -jar /usr/yjp/lib/yourkit.jar -export foo.snapshot outputDir
- export all available views, the filters will not be applied
By default, views are exported in each of applicable formats (HTML, CSV, XML, plain text).
To export in only particular formats, specify one or several of the following system properties:
export.txt
export.html
export.csv
export.xml
If at least one of the properties is specified, export will be performed in only specified format(s). If none of the properties is specified, export will be performed in all formats available for each of the views.
Example:
java -Dexport.method.list.cpu -Dexport.class.list -Dexport.txt -jar /usr/yjp/lib/yourkit.jar -export foo.snapshot outputDir
- export CPU method list and class list only, and only as text
java -Dexport.csv -jar /usr/yjp/lib/yourkit.jar -export foo.snapshot outputDir
- export all available views, and only as CSV
java -jar /usr/yjp/lib/yourkit.jar -export foo.snapshot outputDir
- export all available views in all available formats
The profiler API allows you to control profiling programmatically. Also, in your automatic memory tests you can open saved memory snapshots and examine them via the set description language.
To control profiling programmatically, also consider the command line tool, which in some cases may be a simpler approach.
Please find API JavaDoc here
The class com.yourkit.api.Controller allows you to profile (i.e. turn profiling modes on and off and capture snapshots) the application itself or another Java application.
To use this part of the API, please include <Profiler Installation Directory>/lib/yjp-controller-api-redist.jar in the classpath.
Classes in com.yourkit.probes.* provide the API for probes.
The classes com.yourkit.api.MemorySnapshot and com.yourkit.api.Annotations support the analysis of captured memory snapshots and snapshot annotation.
To use this part of the API, please include <Profiler Installation Directory>/lib/yourkit.jar in the classpath.
Important: do not remove yourkit.jar from the installation directory. The API will not work with yourkit.jar moved to an arbitrary directory, because it needs the other files from the installation.
The command line tool is another way to control profiling, in addition to the profiler UI and the API.
It has much in common with the API and may also be used for automated profiling. You may prefer the command line tool to the API in some cases as an easier solution that does not require writing any Java code. However, the command line tool provides less functionality than the API.
Also, the command line tool may be useful in remote profiling when you only have console access to the remote machine and no UI is available.
Run the tool with the following command:
java -jar <Profiler Installation Directory>/lib/yjp-controller-api-redist.jar <options>
To get the list of available options, run:
java -jar <Profiler Installation Directory>/lib/yjp-controller-api-redist.jar
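As an illustration only: the exact option names differ between profiler versions, so run the tool without options, as shown above, to get the authoritative list for your version. Assuming an agent listening on localhost, port 10001, a typical session might look roughly like this (host, port and option names are assumptions to be checked against your version's option list):

```shell
# Hypothetical session; adjust host, port and option names to match
# your profiler version and agent settings.
YJP=/usr/yjp/lib/yjp-controller-api-redist.jar
java -jar "$YJP" localhost 10001 start-cpu-sampling           # begin CPU sampling
# ... exercise the profiled application ...
java -jar "$YJP" localhost 10001 capture-performance-snapshot # save results
java -jar "$YJP" localhost 10001 stop-cpu-profiling           # stop measuring
```

Because each invocation is a plain command, such sessions are easy to embed in build scripts or cron jobs for automated profiling.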
Also consider connecting to a remote application from a locally running profiler UI, which may be a better approach.
Q: How do I profile a Java server or application running in Docker container?
A: Please follow these instructions.
Q: How do I profile a Java application in Amazon EC2 instance?
A: Please follow these instructions.
Q: How do I profile from within Eclipse or its derivative such as MyEclipse etc.?
A: Use IDE integration wizard to install the profiler plugin.
Q: How do I profile from within IntelliJ IDEA?
A: Use IDE integration wizard to install the profiler plugin.
Q: How do I profile from within JDeveloper?
A: Use IDE integration wizard to install the profiler plugin.
Q: How do I profile from within NetBeans?
A: Use IDE integration wizard to install the profiler plugin.
Q: How do I profile an arbitrary Java application which I start from a command line?
A: To start a Java application with the profiler agent, add the Java command line argument -agentpath with appropriate parameters, as described here.
Alternatively, use "Profile local Java EE server or application..." from the profiler's Welcome screen or
Tools | Profile Local Java EE Server or Application... from the main menu,
then choose the option "Other Java application".
The wizard will help you point -agentpath to the profiler agent library appropriate for your platform, as well as specify agent startup options, if necessary.
Please note that instead of starting an application with the profiler agent, you may also attach the profiler agent to an already running Java process on demand. This approach, named attach mode, is easy to use, but it has limitations and is not always available.
Q: How do I profile a Java server or application running on a remote machine?
A: There are two approaches.
Approach 1: attach mode (easy to use, but may lack profiling capabilities): you can directly attach the profiler agent to a remote Java process started without the profiler agent by using "Profile remote Java EE server or application..." from the profiler's Welcome screen or Tools | Profile Remote Java EE Server or Application... from the main menu. To discover Java processes running without the agent on the remote host, you should choose the advanced application discovery method.
This approach, named attach mode, is easy to use, but it has limitations and is not always available.
Approach 2: load the profiler agent on start (full profiling capabilities, requires steps to enable): Use the console Java EE server integration wizard to enable profiling in the remote server or application. When the remote server or application is running, connect to it from the profiler UI to perform profiling.
Q: How do I profile a Java EE server running on my local machine?
A: This depends on how you start the server.
If you start the server from your integrated development environment (IDE), use the IDE integration wizard to install the profiler plugin. Eclipse (with derivatives such as MyEclipse), IntelliJ IDEA, NetBeans and JDeveloper are supported.
If you start the server with a startup script or as a service, use "Profile local Java EE server or application..." from the profiler's Welcome screen or Tools | Profile Local Java EE Server or Application... from the main menu.
Please note that instead of starting a Java server with the profiler agent, you may also attach the profiler agent to an already running server process on demand. This approach, named attach mode, is easy to use, but it has limitations and is not always available.
Q: How do I profile a Java EE server not supported by integration wizard?
A: Please follow generic instructions for local or remote profiling, whichever applies to your case. If you use the Java EE server integration wizard, either local or remote, you should choose the "Other Java application" option.
To profile a Java application or a server running in a Docker container, you should run it with the profiler agent, and then use remote profiling in the profiler UI, as described below.
(1) Add a few lines to your image's Dockerfile.
(1.1) Install YourKit Java Profiler agents:
RUN wget https://www.yourkit.com/download/docker/YourKit-JavaProfiler-2019.1-docker.zip -P /tmp/ && \
unzip /tmp/YourKit-JavaProfiler-2019.1-docker.zip -d /usr/local && \
rm /tmp/YourKit-JavaProfiler-2019.1-docker.zip
(1.2) If you use Alpine Linux, install libc6-compat.
RUN apk add --no-cache libc6-compat
(1.3) Expose the profiler agent port.
For example, if you specify the port with the agent startup option port=10001 (see below), add the following line.
EXPOSE 10001
Note: we use the same example port 10001 throughout these instructions. If you decide to change it, please ensure you have changed it everywhere.
(1.4) Load the agent into the JVM by adding the Java command line option -agentpath.
For example, if you start your application with
java -jar my-app.jar
...do it like this:
java -agentpath:/usr/local/YourKit-JavaProfiler-2019.1/bin/linux-x86-64/libyjpagent.so=port=10001,listen=all -jar my-app.jar
Please find a detailed description of how to specify -agentpath and choose the agent startup options here.
(2) After modifying the Dockerfile, don't forget to build the container.
(3) While running your Docker container, make the agent port visible with the option -p:
docker run -p 10001:10001 your-docker
When the application is running in the container, connect to it from the profiler UI to perform profiling.
Note: if you're running Docker locally on your developer machine, connect to localhost.
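Putting steps (1.1) to (1.4) together, a complete minimal Dockerfile might look like the sketch below. The base image, application jar name and paths are illustrative assumptions (a Debian-based image with wget and unzip available is assumed); the profiler-related lines are exactly those shown in the steps above:

```dockerfile
# Illustrative base image; any JRE image with wget and unzip works.
FROM openjdk:8-jre

# (1.1) Install the YourKit Java Profiler agents.
RUN wget https://www.yourkit.com/download/docker/YourKit-JavaProfiler-2019.1-docker.zip -P /tmp/ && \
    unzip /tmp/YourKit-JavaProfiler-2019.1-docker.zip -d /usr/local && \
    rm /tmp/YourKit-JavaProfiler-2019.1-docker.zip

# Hypothetical application jar.
COPY my-app.jar /app/my-app.jar

# (1.3) Expose the profiler agent port.
EXPOSE 10001

# (1.4) Load the agent into the JVM.
CMD ["java", \
     "-agentpath:/usr/local/YourKit-JavaProfiler-2019.1/bin/linux-x86-64/libyjpagent.so=port=10001,listen=all", \
     "-jar", "/app/my-app.jar"]
```

Build the image and run it with `-p 10001:10001` as described in steps (2) and (3) above.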
Start your Java application with the profiler agent following these instructions.
For communication with the profiler UI, the profiler agent opens a TCP port. To provide better security, the profiler agent will not accept remote connections by default. Although this can be changed, exposing the profiler agent port is a bad idea. Secure network communication with the SSH protocol can be a useful alternative.
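The profiler UI establishes such an SSH channel for you, as described below. For reference, the same effect can be achieved with a manual SSH tunnel; the key file and user here match the EC2 example below, 10001 is an assumed agent port, and <public-ip> is a placeholder for your instance address:

```shell
# Forward local port 10001 to the agent port 10001 on the remote host;
# the profiler UI then connects to localhost:10001 while the agent
# itself keeps accepting only local connections.
ssh -i aws-key-pair.pem -N -L 10001:localhost:10001 ec2-user@<public-ip>
```

This keeps the agent port closed to the outside world while still allowing the profiler UI to reach it.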
In this example we will use an SSH connection to the Amazon EC2 instance to profile a Java application. First you need to know the public IP address. Select your instance in the EC2 Management Console and find the address at "IPv4 Public IP" in the "Description" tab.
To get the authentication key, see "Key pair name" in the "Description" tab and locate the corresponding key on your computer. We will use the key pair aws-key-pair and the aws-key-pair.pem file, which was generated by the Management Console.
The Amazon Linux image used in this example has a default user named ec2-user. If you have a different image, please see its documentation or consult your system administrator to obtain your authentication settings.
Use the Profile remote Java EE server or application... action on the Welcome screen or in the "Tools" menu.
In the opened dialog, enter the public IP address noted above in the "Host or IP address" field.
Choose the "Advanced" application discovery method and provide the SSH user and SSH port in the corresponding fields.
Click on Authentication settings... and provide the private key.
The created connection will appear in the "Monitor Applications" list on the Welcome screen.
During the first communication, the profiler will upload all necessary files, and you will see a "Connecting..." message. This might take up to a few minutes depending on your network speed.
You will see applications running with the profiler agent marked with a green circle. Click on the application name to connect and profile it.
Applications without the profiler agent are marked with an orange circle. You can attach to and profile an application by clicking on its name. Please note that you will not see applications running under a different user, unless you start them with the profiler agent.
If you do not see your application in the list, please read Troubleshoot connection problems.