
12.23.2011

HashMap infinite loop problem – Case Study

This article will provide you with the complete root cause analysis and resolution of a java.util.HashMap infinite loop problem affecting an Oracle OSB 11g environment running on an IBM JRE 1.6 JVM.
This case study will also demonstrate how you can combine the AIX ps –mp command with Thread Dump analysis to pinpoint the top CPU contributor Threads within your Java VM(s). It will also demonstrate how dangerous a non-Thread-safe HashMap data structure can be within a multi-threaded environment / Java EE container.

Environment specifications

-        Java EE server: Oracle Service Bus 11g
-        Middleware OS: AIX 6.1
-        Java VM: IBM JRE 1.6 SR9 – 64-bit
-        Platform type: Service Bus

Monitoring and troubleshooting tools

-        AIX nmon & topas (CPU monitoring)
-        AIX ps –mp (CPU and Thread breakdown OS command)
-        IBM JVM Java core / Thread Dump (thread analysis and ps –mp data correlation)

Problem overview

-        Problem type: Very High CPU observed from our production environment

A high CPU problem was observed via AIX nmon monitoring on a system hosting a Weblogic Oracle Service Bus 11g middleware environment.

Gathering and validation of facts

As usual, a Java EE problem investigation requires gathering of technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

·        What is the client impact? HIGH
·        Recent change of the affected platform? Yes, platform was recently migrated from Oracle ALSB 2.6 (Solaris & HotSpot 1.5) to Oracle OSB 11g (AIX OS & IBM JRE 1.6)
·        Any recent traffic increase to the affected platform? No
·        How does this high CPU manifest itself? A sudden CPU increase was observed that does not go down, even after the load drops to a near-zero level.
·        Did an Oracle OSB recycle resolve the problem? Yes, but the problem returned after a few hours or a few days (unpredictable pattern)

-        Conclusion #1: The high CPU problem appears to be intermittent rather than purely correlated with load
-        Conclusion #2: Since the high CPU remains after the load goes down, this indicates that either a JVM threshold is triggered past a point of no return and/or some hung or infinitely looping Threads are present

AIX CPU analysis

AIX nmon & topas OS commands were used to monitor the CPU utilization of the system and the Java process. The CPU utilization was confirmed to climb as high as 100% (saturation level).

This CPU level remained very high until the JVM was recycled.

AIX CPU Java Thread breakdown analysis

One of the best troubleshooting approaches for this type of issue is to generate an AIX ps –mp snapshot combined with a Thread Dump. This was achieved by executing the command below:

ps -mp <Java PID> -o THREAD

Then immediately execute:

kill -3 <Java PID>

** This will generate an IBM JRE Thread Dump / Java core file (javacorexyz..) **

The AIX ps –mp command output was generated as per below:

USER      PID     PPID       TID ST  CP PRI SC    WCHAN        F     TT BND COMMAND
user 12910772  9896052         - A    97  60 98        *   342001      -   - /usr/java6_64/bin/java -Dweblogic.Nam
-        -        -   6684735 S    0  60  1 f1000f0a10006640  8410400      -   - -
-        -        -   6815801 Z    0  77  1        -   c00001      -   - -
-        -        -   6881341 Z    0 110  1        -   c00001      -   - -
-        -        -   6946899 S    0  82  1 f1000f0a10006a40  8410400      -   - -
-        -        -   8585337 S    0  82  1 f1000f0a10008340  8410400      -   - -
-        -        -   9502781 S    2  82  1 f1000f0a10009140  8410400      -   - -
-        -        -  10485775 S    0  82  1 f1000f0a1000a040  8410400      -   - -
-        -        -  10813677 S    0  82  1 f1000f0a1000a540  8410400      -   - -
-        -        -  21299315 S    95  62  1 f1000a01001d0598   410400      -   - -
-        -        -  25493513 S    0  82  1 f1000f0a10018540  8410400      -   - -
-        -        -  25690227 S    0  86  1 f1000f0a10018840  8410400      -   - -
-        -        -  25755895 S    0  82  1 f1000f0a10018940  8410400      -   - -
-        -        -  26673327 S    2  82  1 f1000f0a10019740  8410400      -   - -



As you can see in the above snapshot, one primary culprit Thread Id (21299315) was found, consuming ~95% of the entire CPU.

Thread Dump analysis and ps –mp data correlation

Once the primary culprit Thread was identified, the next step was to correlate this data with the Thread Dump data and identify the source / culprit at the code level.

But first, we had to convert the Thread Id from decimal format to HEXA format since IBM JRE Thread Dump native Thread Ids are printed in HEXA format.

Culprit Thread Id 21299315 >> 0x1450073 (HEXA format)
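
If you prefer to script this conversion instead of using a calculator, the JDK itself can do it for you; a trivial sketch (the class name is illustrative only):

public class ThreadIdConverter {
    public static void main(String[] args) {
        long tid = 21299315L; // culprit TID from the ps -mp output above
        // IBM JRE Thread Dumps print native Thread Ids in HEXA format
        System.out.println("0x" + Long.toHexString(tid)); // prints 0x1450073
    }
}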

A quick search within the generated Thread Dump file revealed the culprit Thread, as shown below.

Weblogic ExecuteThread #97 Stack Trace can be found below:

3XMTHREADINFO      "[STUCK] ExecuteThread: '97' for queue: 'weblogic.kernel.Default (self-tuning)'" J9VMThread:0x00000001333FFF00, j9thread_t:0x0000000117C00020, java/lang/Thread:0x0700000043184480, state:CW, prio=1
3XMTHREADINFO1            (native thread ID:0x1450073, native priority:0x1, native policy:UNKNOWN)
3XMTHREADINFO3           Java callstack:
4XESTACKTRACE                at java/util/HashMap.findNonNullKeyEntry(HashMap.java:528(Compiled Code))
4XESTACKTRACE                at java/util/HashMap.putImpl(HashMap.java:624(Compiled Code))
4XESTACKTRACE                at java/util/HashMap.put(HashMap.java:607(Compiled Code))
4XESTACKTRACE                at weblogic/socket/utils/RegexpPool.add(RegexpPool.java:20(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.resetProperties(HttpClient.java:129(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.openServer(HttpClient.java:374(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.New(HttpClient.java:252(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpURLConnection.connect(HttpURLConnection.java:189(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/HttpOutboundMessageContext.send(HttpOutboundMessageContext.java(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/wls/HttpTransportProvider.sendMessageAsync(HttpTransportProvider.java(Compiled Code))
4XESTACKTRACE                at sun/reflect/GeneratedMethodAccessor2587.invoke(Bytecode PC:58(Compiled Code))
4XESTACKTRACE                at sun/reflect/DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37(Compiled Code))
4XESTACKTRACE                at java/lang/reflect/Method.invoke(Method.java:589(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/Util$1.invoke(Util.java(Compiled Code))
4XESTACKTRACE                at $Proxy115.sendMessageAsync(Bytecode PC:26(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/LoadBalanceFailoverListener.sendMessageAsync(LoadBalanceFailoverListener.java:141(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/LoadBalanceFailoverListener.onError(LoadBalanceFailoverListener.java(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/wls/HttpOutboundMessageContextWls$RetrieveHttpResponseWork.handleResponse(HttpOutboundMessageContextWls.java(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/AsyncResponseHandler$MuxableSocketHTTPAsyncResponse$RunnableCallback.run(AsyncResponseHandler.java:531(Compiled Code))
4XESTACKTRACE                at weblogic/work/ContextWrap.run(ContextWrap.java:41(Compiled Code))
4XESTACKTRACE                at weblogic/work/SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528(Compiled Code))
4XESTACKTRACE                at weblogic/work/ExecuteThread.execute(ExecuteThread.java:203(Compiled Code))
4XESTACKTRACE                at weblogic/work/ExecuteThread.run(ExecuteThread.java:171(Compiled Code))

Thread Dump analysis – HashMap infinite loop condition!

As you can see from the above Thread Stack Trace of Thread #97, the Thread is currently stuck in an infinite loop / Thread race condition over a java.util.HashMap object (IBM JRE implementation).

This finding was quite interesting given that this HashMap is actually created / owned by the Weblogic 11g kernel code itself >> weblogic/socket/utils/RegexpPool
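
To illustrate the dangerous pattern itself, find below a minimal, hypothetical sketch: multiple Threads calling put() against a shared, unsynchronized HashMap. Keep in mind that the internal corruption and infinite looping are nondeterministic, so this demonstrates the unsafe usage pattern rather than a guaranteed reproduction:

import java.util.HashMap;
import java.util.Map;

public class UnsafeHashMapDemo {

    // Shared, non-Thread-safe map; same usage pattern as the internal
    // HashMap of weblogic/socket/utils/RegexpPool
    private static final Map<String, String> pool = new HashMap<String, String>();

    public static void main(String[] args) {
        // Concurrent writers can race during an internal resize; on some
        // HashMap implementations this corrupts the bucket linked lists
        // and leaves put() / get() spinning forever at 100% CPU
        for (int t = 0; t < 4; t++) {
            final int id = t;
            new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < 1000000; i++) {
                        pool.put("key-" + id + "-" + i, "value");
                    }
                }
            }).start();
        }
    }
}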

Root cause: non-Thread-safe HashMap in Weblogic 11g (10.3.5.0) code!

Following this finding and data gathering exercise, our team created an SR with Oracle support, which confirmed this defect within the Weblogic 11g code base.

As you may already know, usage of a non-Thread-safe / non-synchronized HashMap under concurrent Thread conditions is very dangerous and can easily lead to internal HashMap index corruption and/or infinite looping. This is also a golden rule for any middleware software such as Oracle Weblogic, IBM WAS and Red Hat JBoss, which rely heavily on HashMap data structures for various Java EE and caching services.

The most common solution is to use the ConcurrentHashMap data structure, which is designed for this type of concurrent Thread execution context.
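
Find below a minimal sketch of that approach (an illustrative pool class, not the actual Weblogic fix):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeRegexpPool {

    // ConcurrentHashMap supports fully concurrent reads and concurrent
    // writes without any external synchronization
    private final Map<String, String> pool = new ConcurrentHashMap<String, String>();

    public void add(String pattern, String value) {
        pool.put(pattern, value); // safe under concurrent Thread access
    }

    public String lookup(String pattern) {
        return pool.get(pattern);
    }
}

If changing the data structure type is not an option, wrapping the existing HashMap via Collections.synchronizedMap() is another possibility, at the cost of coarser locking and reduced concurrency.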

Solution

Since this problem was also affecting other Oracle Weblogic 11g customers, Oracle support was quite fast in providing us with a patch for our target WLS 11g version. Please find the patch description and detail below:

Content:
========
This patch contains Smart Update patch AHNT for WebLogic Server 10.3.5.0

Description:
============
HIGH CPU USAGE AT HASHMAP.PUT() IN REGEXPPOOL.ADD()

Patch Installation Instructions:
================================
- copy content of this zip file with the exception of README file to your SmartUpdate cache directory (MW_HOME/utils/bsu/cache_dir by default)
- apply patch using Smart Update utility

Conclusion

I hope this case study has helped you understand how to pinpoint the culprit of high CPU Threads at the code level when using AIX & the IBM JRE, and the importance of proper Thread-safe data structures for highly concurrent Thread / processing applications.

Please don’t hesitate to post any comment or question.

12.12.2011

PRSTAT Solaris – Pinpoint high CPU Java VM Threads fast

The Solaris OS prstat command is widely used, but are you really using it to its full potential?

Are you struggling with a high CPU utilization of your middleware Java VM processes but unable to understand what is going on?

How about extracting the proof and facts about the root cause for your clients and / or application development team?

If you are a Java EE production middleware or application support individual, then this article is mainly for you; it will demonstrate why you should immediately add this command to your list of OS command “friends” and learn this analysis approach.

This article will provide you with a step by step tutorial along with an example to achieve those goals.

For an AIX OS prstat command equivalent and strategy, please see the article “PRSTAT AIX – How to pinpoint high CPU Java VM Threads” further down in this archive.

Step #1 – Generate a PRSTAT snapshot of your affected Java process

Execute the prstat command against the Java process after extracting the Java PID.
You can also run it for several iterations in order to identify a pattern.

prstat -L -p <PID> 1 1

Example: prstat -L -p 9116 1 1

PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/LWPID
  9116 bea      3109M 2592M sleep   59    0  21:52:59 8.6% java/76
  9116 bea      3109M 2592M sleep   57    0   4:28:05 0.3% java/40
  9116 bea      3109M 2592M sleep   59    0   6:52:02 0.3% java/10774
  9116 bea      3109M 2592M sleep   59    0   6:50:00 0.3% java/84
  9116 bea      3109M 2592M sleep   58    0   4:27:20 0.2% java/43
  9116 bea      3109M 2592M sleep   59    0   7:39:42 0.2% java/41287
  9116 bea      3109M 2592M sleep   59    0   3:41:44 0.2% java/30503
  9116 bea      3109M 2592M sleep   59    0   5:48:32 0.2% java/36116
……………………………………………………………………………………

-         PID corresponds to your Solaris Java process ID
-         CPU corresponds to the CPU utilization % of a particular Java Thread
-         PROCESS/LWPID corresponds to the Light Weight Process ID, i.e. your native Java Thread ID

In our example, prstat identified Thread #76 as our top CPU contributor, with 8.6% utilization.

** Please note that if you use the regular thread libraries (i.e. you do not have /usr/lib/lwp in the LD_LIBRARY_PATH) on Solaris, an LWP is not mapped directly to an OS thread. In that situation, you will also have to take a pstack snapshot (in addition to prstat) of the process.

Step #2 – Generate a Thread Dump snapshot of your affected Java process

Great, now you have one or a list of high CPU Thread contributors from your Java process, but what should you do next?

PRSTAT data of your Java VM process is nothing without a Thread Dump data snapshot.

It is very important that you generate a Thread Dump via kill -3 <Java PID> at the same time or immediately after the prstat execution.

The Thread Dump will allow you to correlate the native Thread Id with the Java Thread Stack Trace and understand what type of processing the high CPU Thread is currently involved in (infinite or heavy looping, GC Thread, high IO / disk access etc.).

Example: kill -3 9116

The command will generate a JVM Thread Dump in your Java process or middleware standard output log, typically starting with Full thread dump Java HotSpot(TM)… (HotSpot VM format).

Step #3 – Thread Dump and PRSTAT data correlation

It is now time to correlate the generated prstat and Thread Dump data.

First, you have to convert the PRSTAT Thread Id from decimal format to HEXA format.

In our example, Thread ID #76 (decimal format) corresponds to 0x4C (HEXA format)

The next step is simply to search within your Thread Dump data for a Thread with a native Id corresponding to the PRSTAT Thread Id (HEXA format):

nid=<prstat Thread ID HEXA format>
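
If you perform this correlation on a regular basis, the search itself can easily be scripted; find below a minimal Java sketch (the file name, arguments and blank-line delimiter are assumptions based on the HotSpot Thread Dump format):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ThreadDumpSearch {

    // Hypothetical usage: java ThreadDumpSearch server.out 0x4c
    public static void main(String[] args) throws IOException {
        String dumpFile = args[0];             // file containing the Thread Dump
        String token = "nid=" + args[1] + " "; // trailing space avoids partial matches
        BufferedReader reader = new BufferedReader(new FileReader(dumpFile));
        try {
            String line;
            boolean found = false;
            while ((line = reader.readLine()) != null) {
                if (!found && line.startsWith("\"") && line.contains(token)) {
                    found = true; // Thread header line with the matching native Id
                }
                if (found) {
                    if (line.trim().length() == 0) {
                        break;    // a blank line ends the Thread Stack Trace
                    }
                    System.out.println(line);
                }
            }
        } finally {
            reader.close();
        }
    }
}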

In our example, such a Thread was found as per below (a complete case study is also available from this Blog):

"pool-2-thread-1" prio=10 tid=0x01952650 nid=0x4c in Object.wait() [0x537fe000..0x537ff8f0]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:474)
        at weblogic.common.CompletionRequest.getResult(CompletionRequest.java:109)
        - locked <0xf06ca710> (a weblogic.common.CompletionRequest)
        at weblogic.diagnostics.archive.wlstore.PersistentStoreDataArchive.readRecord(PersistentStoreDataArchive.java:657)
        at weblogic.diagnostics.archive.wlstore.PersistentStoreDataArchive.readRecord(PersistentStoreDataArchive.java:630)
        at weblogic.diagnostics.archive.wlstore.PersistentRecordIterator.fill(PersistentRecordIterator.java:87)
        at weblogic.diagnostics.archive.RecordIterator.fetchMore(RecordIterator.java:157)
        at weblogic.diagnostics.archive.RecordIterator.hasNext(RecordIterator.java:130)
        at com.bea.wli.monitoring.alert.AlertLoggerImpl.getAlerts(AlertLoggerImpl.java:157)
        at com.bea.wli.monitoring.alert.AlertLoggerImpl.updateAlertEvalTime(AlertLoggerImpl.java:140)
        at com.bea.wli.monitoring.alert.AlertLog.updateAlertEvalTime(AlertLog.java:248)
        at com.bea.wli.monitoring.alert.AlertManager._evaluateSingleRule(AlertManager.java:992)
        at com.bea.wli.monitoring.alert.AlertManager.intervalCompleted(AlertManager.java:809)
        at com.bea.wli.monitoring.alert.AlertEvalTask.execute(AlertEvalTask.java:65)
        at com.bea.wli.monitoring.alert.AlertEvalTask.run(AlertEvalTask.java:37)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:417)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:269)
        at java.util.concurrent.FutureTask.run(FutureTask.java:123)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
        at java.lang.Thread.run(Thread.java:595)

Step #4 – Analyze the identified Thread Stack trace for root cause

At this point you have 50% of the work done and should have one or a list of Threads contributing to your high CPU problem.

The final step is for you to analyze the Thread Stack Trace (from the bottom up) and determine what type of processing the culprit Thread(s) are involved in.

Find below common high CPU Thread scenarios:

-         Heavy or excessive garbage collection (Threads identified are the actual GC Threads of the HotSpot VM)
-         Heavy or infinite looping (application or middleware code problem, corrupted & looping non Thread safe HashMap etc.)
-         Excessive IO / disk activity (Excessive Class loading or JAR file searches)

Depending on how your organization and client are structured, you can also provide this data directly to your middleware vendor (Oracle support, IBM support, Red Hat etc.) or to your development team for further analysis.

Final words

I hope this article has helped you understand how you can combine the powerful Solaris prstat command with Thread Dump analysis to pinpoint high CPU Java Thread problems.

Please feel free to post any comment or question if you need any help.

12.08.2011

PRSTAT AIX – How to pinpoint high CPU Java VM Threads

Most of you are probably familiar with the powerful Solaris OS prstat command. This command allows you to generate a snapshot of all running Java VM Threads along with CPU % for each of them.

When combined with Thread Dump analysis, it allows you to pinpoint high CPU problems such as:

·         Java Threads involved in infinite or heavy looping
·         Java Threads involved in excessive / nonstop garbage collection (GC Threads)
·         Java Threads involved in heavy logging / IO activities


This analysis strategy is extremely important for any individual involved in Java EE middleware and / or application support such as Oracle Weblogic, IBM Websphere, RedHat JBoss etc.

Ok thanks but I’m using the AIX OS; what can I do?

The prstat command is not available on the AIX OS, but fortunately there is an equivalent command & strategy that you can use to generate the same Thread & CPU % breakdown.

This is great news. Now please show me how it works

Please simply follow the instructions below:

1)   Identify the AIX process Id of your Java VM process
2)  When high CPU is observed, execute the following command: ps -mp <Java PID> -o THREAD

Example: ps –mp output captured from a Weblogic Java process running at 40% CPU utilization

USER      PID     PPID       TID ST  CP PRI SC    WCHAN        F     TT BND COMMAND
user 12910772  9896052         - A    40  60 98        *   342001      -   - /usr/java6_64/bin/java -Dweblogic.Nam
-        -        -   6684735 S    0  60  1 f1000f0a10006640  8410400      -   - -
-        -        -   6815801 Z    0  77  1        -   c00001      -   - -
-        -        -   6881341 Z    0 110  1        -   c00001      -   - -
-        -        -   6946899 S    0  82  1 f1000f0a10006a40  8410400      -   - -
-        -        -   8585337 S    0  82  1 f1000f0a10008340  8410400      -   - -
-        -        -   9502781 S    30  82  1 f1000f0a10009140  8410400      -   - -
-        -        -  10485775 S    0  82  1 f1000f0a1000a040  8410400      -   - -
-        -        -  10813677 S    0  82  1 f1000f0a1000a540  8410400      -   - -
-        -        -  11206843 S    3  82  1 f1000f0a1000ab40  8410400      -   - -
-        -        -  11468831 S    0  82  1 f1000f0a1000af40  8410400      -   - -
-        -        -  11796597 S    0  82  1 f1000f0a1000b440  8410400      -   - -
-        -        -  19070989 S    0  82  1 f1000f0a10012340  8410400      -   - -
-        -        -  25034989 S    2  62  1 f1000a01001d0598   410400      -   - -
-        -        -  25493513 S    0  82  1 f1000f0a10018540  8410400      -   - -
-        -        -  25690227 S    0  86  1 f1000f0a10018840  8410400      -   - -
-        -        -  25755895 S    0  82  1 f1000f0a10018940  8410400      -   - -
-        -        -  26673327 S    2  82  1 f1000f0a10019740  8410400      -   - -
-        -        -  26804377 S    0  60  1 f1000a0100220998   410400      -   - -
-        -        -  27787407 S    0  82  1        -   418400      -   - -
-        -        -  28049461 S    2  82  1 f1000f0a1001ac40  8410400      -   - -
-        -        -  28114963 S    0  82  1 11a835728   c10400      -   - -
-        -        -  29491211 S    0  82  1 f1000f0a1001c240  8410400      -   - -
-        -        -  29884565 S    0  78  1 f1000f0a1001c840  8410400      -   - -



3)      Immediately after, generate a Java VM Thread Dump by executing kill -3 <Java PID>. This command will generate an AIX / IBM Thread Dump (javacore.xyz format). At this point, you should have both the ps –mp output data and an AIX Java VM Thread Dump
4)      Now analyze your ps –mp output data, identify the Thread Id(s) with the highest CPU contribution and convert the TID from decimal format to HEXA format

Example: Thread TID: 9502781 >> 0x91003D (HEXA format)

5)      At this point, you are ready to pinpoint and determine why such Java Thread(s) are using so much CPU. The answer is in the JVM Thread Dump: simply search within the Thread Dump using the Thread TID in HEXA format, as shown in the sketch below. The final step is to analyze the affected Thread(s) Stack Trace and determine the root cause e.g. application code problem, middleware problem etc.
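
If you script your analysis, the conversion from step 4 and the javacore search token can be produced in a single step; find below a minimal sketch (the token format matches the 3XMTHREADINFO1 line of the example below):

public class JavacoreTokenBuilder {
    public static void main(String[] args) {
        long tid = 9502781L; // highest CPU TID from the ps -mp output above
        // IBM javacore files print the native thread ID in upper-case HEXA,
        // e.g. "native thread ID:0x91003D"
        System.out.println("native thread ID:0x"
                + Long.toHexString(tid).toUpperCase());
    }
}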

Example: In our example, the primary culprit (Thread TID: 0x91003D) was identified in the Thread Dump. As you can see, this Thread is currently involved in an infinite loop condition over a HashMap. This is a common problem when using a non-Thread-safe HashMap data structure combined with highly concurrent Threads.

3XMTHREADINFO      "[STUCK] ExecuteThread: '97' for queue: 'weblogic.kernel.Default (self-tuning)'" J9VMThread:0x00000001333FFF00, j9thread_t:0x0000000117C00020, java/lang/Thread:0x0700000043184480, state:CW, prio=1
3XMTHREADINFO1            (native thread ID:0x91003D, native priority:0x1, native policy:UNKNOWN)
3XMTHREADINFO3           Java callstack:
4XESTACKTRACE                at java/util/HashMap.findNonNullKeyEntry(HashMap.java:528(Compiled Code))
4XESTACKTRACE                at java/util/HashMap.putImpl(HashMap.java:624(Compiled Code))
4XESTACKTRACE                at java/util/HashMap.put(HashMap.java:607(Compiled Code))
4XESTACKTRACE                at weblogic/socket/utils/RegexpPool.add(RegexpPool.java:20(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.resetProperties(HttpClient.java:129(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.openServer(HttpClient.java:374(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.New(HttpClient.java:252(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpURLConnection.connect(HttpURLConnection.java:189(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/HttpOutboundMessageContext.send(HttpOutboundMessageContext.java(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/wls/HttpTransportProvider.sendMessageAsync(HttpTransportProvider.java(Compiled Code))
4XESTACKTRACE                at sun/reflect/GeneratedMethodAccessor2587.invoke(Bytecode PC:58(Compiled Code))
4XESTACKTRACE                at sun/reflect/DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37(Compiled Code))
4XESTACKTRACE                at java/lang/reflect/Method.invoke(Method.java:589(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/Util$1.invoke(Util.java(Compiled Code))
4XESTACKTRACE                at $Proxy115.sendMessageAsync(Bytecode PC:26(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/LoadBalanceFailoverListener.sendMessageAsync(LoadBalanceFailoverListener.java:141(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/LoadBalanceFailoverListener.onError(LoadBalanceFailoverListener.java(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/wls/HttpOutboundMessageContextWls$RetrieveHttpResponseWork.handleResponse(HttpOutboundMessageContextWls.java(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/AsyncResponseHandler$MuxableSocketHTTPAsyncResponse$RunnableCallback.run(AsyncResponseHandler.java:531(Compiled Code))
4XESTACKTRACE                at weblogic/work/ContextWrap.run(ContextWrap.java:41(Compiled Code))
4XESTACKTRACE                at weblogic/work/SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528(Compiled Code))
4XESTACKTRACE                at weblogic/work/ExecuteThread.execute(ExecuteThread.java:203(Compiled Code))
4XESTACKTRACE                at weblogic/work/ExecuteThread.run(ExecuteThread.java:171(Compiled Code))

Need any additional help?

I hope this short tutorial has helped you understand how you can pinpoint high CPU Thread contributors using the AIX equivalent of the prstat command.

For any question or additional help, please simply post a comment or question below this article. You can also email me directly at phcharbonneau@hotmail.com.