Process Log Analysis


During a recent webinar, a participant complained that the FDMEE process logs are too detailed and difficult for the average user to understand. This statement resonated with me, so I have decided to write this post for anyone who is struggling to understand the detailed information contained in the process log. While each situation that requires the log to be consulted is different, my hope is that this post will help you focus on the key information in the log and better diagnose and troubleshoot errors encountered in the FDMEE application.


A process log is generated for each execution of the workflow, batches, custom scripts, initializing a source system, purging application elements and offline execution of reports. The verbosity of the process log varies with the Log Level setting at the System, Application or User level, where five (5) is the most verbose and one (1) is the least. When troubleshooting a failed process in FDMEE, it is best to set the Log Level to 5 and re-execute the process that encountered the error. This ensures that the process log is generated with detailed information that can often be used to diagnose the problem without needing to consult server logs such as aif-WebApp.log or ERPIntegrator0.log.


Once a process log is generated at log level 5, the analysis can begin. At the top of the process log, basic information about the execution is written, including the point-of-view (when applicable) as well as system information like the Log Level, Jython version and File Encoding. This header is useful when investigating errors reported by other end users who fail to provide basic information such as which POV was being processed when the error was encountered – not that an end user would ever fail to give you all the information you need to resolve the problem.



After the header section, the detailed actions that the application performed – including SQL executions – are contained in the log. To the average end user, this detail is often indecipherable, and frankly, it is not intended for the average end user's consumption. It is, however, incredibly valuable to an administrator (or consultant) with access to a database client. These SQL statements can be executed in a DB client to better understand the information with which the application is working. For example, there are SQL queries that retrieve the Period Mapping information. By running the SELECT query in a DB client, the administrator may discover that the period mapping for the new fiscal year has not yet been populated for the target application for which the process is running. Moreover, analyzing the process logs helps an administrator better understand the database structure of the application, which is incredibly useful not only for troubleshooting errors but also for writing reports.
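As a sketch of the kind of check described above, the snippet below mocks up a period-mapping query against an in-memory SQLite database. The table and column names (`TPOVPERIOD`, `PERIODKEY`, `YEARTARGET`, `PERIODTARGET`) are illustrative assumptions only, not a statement of the actual FDMEE repository schema; in practice you would paste the exact SELECT statement copied from your own process log into your DB client.

```python
import sqlite3

# Illustrative only: table/column names are assumptions, not the FDMEE schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE TPOVPERIOD (PERIODKEY TEXT, YEARTARGET TEXT, PERIODTARGET TEXT)"
)
conn.executemany(
    "INSERT INTO TPOVPERIOD VALUES (?, ?, ?)",
    [("2015-12-31", "FY15", "Dec"), ("2016-01-31", "FY16", "Jan")],
)

# The kind of check an administrator might run after copying a period-mapping
# query out of the process log: is the new fiscal year populated?
rows = conn.execute(
    "SELECT PERIODKEY, YEARTARGET, PERIODTARGET FROM TPOVPERIOD "
    "WHERE YEARTARGET = ?",
    ("FY17",),
).fetchall()
if not rows:
    print("No period mappings found for FY17 - populate them before loading.")
```

An empty result set here would explain a failed process long before any FATAL message appears in the log.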


But alas, as an end user you are not interested in SQL and just want to know how to find the relevant information in the log file that helps you understand what went wrong. Well, this portion of the post is for you. Within the workflow – Import, Validate, Export, Check – there are two primary portions of the process log to investigate:


  • Import Analysis
  • FATAL Errors


The import analysis is actually a carry-forward of the Import Log of FDM Classic. This log provides row-by-row detail to explain when a record from a flat file data source failed to import into the application. The most common reason that a record is not imported into the FDMEE application is that the balance is zero. FDMEE natively suppresses zeros when importing data to the FDMEE repository unless the NZP switch is applied to the Amount row on the import format assigned to the location/data load rule being executed. In FDM Classic, this log was created as a separate file, but the information is now simply contained as part of the process log. To locate the import analysis section of the process log, open the process log in a text editor and search for PROCESSING CODES: or Rows Loaded: as shown in the below image.



In the above image, three rows were rejected from the data file. The first row failed to import for reason code TC – highlighted in yellow – which the Processing Codes legend tells us is a Type Conversion error, which is simply another way of saying that the value in the Amount field is not a number. In this example, the header row is skipped because the Amount field does not have a numeric value but instead the word Amount. The next two records failed to import for reason code ZP – highlighted in green – which indicates zero suppression. As noted previously, FDMEE will not import zero balance records unless explicitly instructed to do so. Since both of these records have a zero balance, they fail to import to the application.
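The TC and ZP rejection behavior described above can be sketched in a few lines. This is a simplified illustration of the logic, not FDMEE's actual implementation; the function name and the `suppress_zeros` flag (standing in for the NZP switch) are my own.

```python
def classify_record(amount_field, suppress_zeros=True):
    """Return an FDMEE-style processing code for a flat-file Amount field,
    or None if the record would import. Simplified illustration only."""
    try:
        amount = float(amount_field.replace(",", ""))
    except (ValueError, AttributeError):
        return "TC"  # Type Conversion: the Amount field is not a number
    if suppress_zeros and amount == 0:
        return "ZP"  # Zero suppression: zero balances skipped unless NZP set
    return None  # record imports

print(classify_record("Amount"))    # TC  - header row, not numeric
print(classify_record("0.00"))      # ZP  - zero balance suppressed
print(classify_record("1,250.75"))  # None - record imports
print(classify_record("0.00", suppress_zeros=False))  # None - NZP behavior
```

Passing `suppress_zeros=False` mirrors applying the NZP switch on the import format: the zero-balance record is no longer rejected.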


Analyzing the import results is a useful way to determine if and why any records were skipped during the import workflow process. This process log analysis can help isolate the reason for data inconsistencies. This is a critical skill for an end user of the application.


The second portion of process log analysis is identifying why the application failed to execute a process as expected. This is not to be confused with application crashes, but rather cases where a process completes with a failed status – either a failed workflow status or a failed process status in the Process Details. When these “errors” are encountered, the process log can often help identify the reason for the failed status.


When researching a failed status for a given process, simply open the process log and search for the text FATAL. This will highlight the portion of the process that failed to complete without errors. Moreover, by analyzing the process information prior to the FATAL keyword, the reason for the failure is sometimes very clear.
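The search-for-FATAL technique above can be automated for long logs. The sketch below finds each FATAL entry and returns the lines just before it, since that is where the real cause often sits. The function name and the sample log text are my own illustrations, not actual FDMEE output.

```python
def find_fatal(log_text, context=5):
    """Return (line_number, context_lines) for each FATAL entry in a
    process log, including the lines immediately before it."""
    lines = log_text.splitlines()
    hits = []
    for i, line in enumerate(lines):
        if "FATAL" in line:
            hits.append((i + 1, lines[max(0, i - context): i + 1]))
    return hits

# Hypothetical log excerpt for illustration only
sample = """INFO  Loading data to target
DEBUG Record 46: Entity 'XYZ' is not a valid member
FATAL Error in load step"""

for lineno, ctx in find_fatal(sample, context=2):
    print("FATAL at line", lineno)
    print("\n".join(ctx))
```

The context lines printed alongside the FATAL entry play the same role as scrolling up a few lines in a text editor: they usually contain the offending records or member names.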



In the above image, the load of data to HFM failed. While the error – Error In HfmData.loadData, File “<string>”, line 46, in loadData – is likely meaningless to most, we now know that the error occurred during the data load. Further, by looking just a few lines above the FATAL error, one can see the offending records that prevented FDMEE from completing the load to HFM successfully. A previously unknown problem now becomes actionable, and the end user is empowered to take corrective action or communicate the problem more meaningfully to an administrator for assistance and, ultimately, remediation.


I hope this post has helped you understand how to navigate the maze that can be the FDMEE process logs. By better understanding the information contained in the process logs and by being able to target keywords within them, you should be better equipped to diagnose issues and subsequently correct them. While process logs for different events in the application will contain different information, the guidance in this post should empower you to better utilize these logs, see their value and ultimately find them less daunting. I hope you have learned something and can apply this knowledge the next time you encounter an error in FDMEE.

March 01, 2016 in Troubleshooting

