Excessive logging from inventory closing/recalculation

In my previous post, I stated that the InventSumLogTTS table is not updated after the related inventory transaction has been physically posted. I was totally wrong.

A while ago I had to do some performance tuning for a couple of installations which suffered from poor inventory closing performance and from constant conflicts between inventory closing and regular user work. After looking closer at the problem, in both cases I found that the main reason for the performance issues was a huge number of updates to the inventSumLogTTS table coming from the inventory closing/recalculation process. Since both installations use the Weighted Average inventory model, a routine inventory recalculation was causing some 2-5 million updates to the InventTrans table and, respectively, the creation of the same number of records in inventSumLogTTS. The worst part is that the inventSumLogTTS table is locked in pessimistic mode. It means that if your users run an order explosion update or try to schedule a production order, the system starts iterating over all inventSumLogTTS records for all items which somehow participate in the explosion. Since we now have several million transactions, this takes some time. Since the whole explosion update runs as one big fat transaction, all updated records in inventSumLogTTS remain locked till the end of this transaction, and all other users trying to schedule another production order which happens to share any direct or indirect subcomponent with the order being re-scheduled have to wait till the transaction ends. Also, most probably, part of the records related to the scheduled items were inserted by inventory closing in not-yet-committed transaction(s), and every time scheduling bumps into one of these items, it has to wait till that transaction commits. In short, any attempt to run inventory closing/recalculation effectively blocks most production-related functionality.

Everything I said above relates to DAX2009. In DAX2012, the designers made some provisions to support running inventory closing/recalculation in parallel with production operations. First of all, they enabled optimistic locking for the InventSumLogTTS table. Second, they invented 'soft deletion' of inventSumLogTTS records, so the reqTrans update function can mark records as processed without causing long locks for other processes (as an actual deletion would). I have not checked this algorithm in practical usage and I am not sure that they addressed 100% of the locking issues for inventSumLogTTS, but I can believe that in version 2012 they addressed most of them.

Yet I am not sure that the approach itself is correct. Let's look at the following statements.

  1. Right now (at least as of versions 4-2012R2), InventSumLogTTS is used only for the dynamic update of net requirements (reqTrans). I do not know how it may be used in the future, but today this table has no other use.
  2. MRP deals only with inventory transactions in, so to say, 'estimated' statuses. After an inventory transaction has been registered (for receipts) or picked (for issues), from the MRP point of view it becomes part of inventory on hand: one big net requirement which is processed by MRP as a whole and which is based on inventSum data (not on inventory transactions).
  3. If an inventory transaction has been physically updated (has become part of inventory on hand), then the net requirements update logic applies this change ONLY on the first change to a physically updated status. If both the new and the old transaction status in an inventSumLogTTS record are below or equal to Picked/Registered, the record is not applied to net requirements and is simply dropped during the net requirement update process.
  4. Inventory closing/recalculation deals ONLY with inventory transactions that are well past the 'estimated' statuses. It processes transactions which have already been physically or financially updated.
  5. Moreover, inventory closing/recalculation DOES NOT change any of the quantity/dimension/transaction status fields which are in any way relevant to MRP. Well, I remember that inventory recalculation/closing for the weighted average model creates dummy inventory transfers, but they are irrelevant for MRP.

So I can simply draw the conclusion that any logging to inventSumLogTTS from inventory closing/recalculation should simply be disabled. The very fact that this was not done in the standard version of Dynamics AX can be considered a design bug.

Now let's get to the more practical part. Dynamics AX does have an API for disabling logging to InventSumLogTTS: you can call appl.inventUpdateOnhandGlobal().inventUpdateOnhand().parmSkipInventSumLogTTS() to enable or disable logging to inventSumLogTTS. I must mention that at the end of the transaction (commit or abort), logging is re-enabled automatically, so in most cases you just need to disable logging and then do nothing.

Practically speaking, to disable logging to inventSumLogTTS during inventory closing/recalculation, you need to insert a call to appl.inventUpdateOnhandGlobal().inventUpdateOnhand().parmSkipInventSumLogTTS(true); after every ttsbegin; statement (but not after connection.ttsbegin() calls) you find in the InventCostItemDim class. To disable logging for inventory closing/recalculation cancellation, you need to insert the same call (again, after every ttsbegin; statement) into the class InventCostClosingCancel_WorkInvent.
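
For illustration, the change in one of the InventCostItemDim methods would look roughly like the sketch below. It only shows the pattern, not the exact standard code around the ttsbegin:

ttsbegin;

// Added line: suppress creation of inventSumLogTTS records for everything
// that happens inside this transaction. Logging is re-enabled automatically
// when the transaction commits or aborts, so no matching 'false' call is needed.
appl.inventUpdateOnhandGlobal().inventUpdateOnhand().parmSkipInventSumLogTTS(true);

// ... the original inventory closing/recalculation logic of the method follows here ...

ttscommit;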

I made this change for two installations and had quite positive results: inventory recalculation/closing time dropped from days to hours (roughly a 15-20x improvement), and all locking conflicts between production operations and inventory closing/recalculation were gone.

The usual disclaimer applies: use this with care, try it in a TEST environment first, and I cannot vouch for compatibility with future versions or with your customizations. Anyway, even in the worst case, InventSumLogTTS is cleared out on every MRP regeneration for the dynamic plan, so skipping these updates is not a very big deal. In the worst case, just imagine that you ran an unsuccessful session of MRP regeneration. 😉


Net requirements update in MRP Module and Working Set of MRP

Most Dynamics AX specialists (especially those in support) have heard about the inventSumLogTTS table. Everyone knows that this table is somehow related to MRP; everyone knows that if you do not run MRP, your inventSumLogTTS table grows bigger and bigger and eventually you have to clean it up somehow. But since detailed documentation on the static/dynamic plan update algorithms is absent, not many people understand how Axapta actually uses this table for plan updates, what the advantages and caveats of dynamic vs static plan use are, how order explosion differs from regeneration planning, and so on…

I wrote this text mostly from my experience with Dynamics AX 2009. In the new version (DAX2012) the approach remains the same from a functional point of view, only technical details have changed, so this information can also be useful for people using DAX2012.

A bit of history

As I wrote in my blog before, the inventSumLogTTS table began its life (somewhere in the late releases of Axapta 2.5) only as a tool to track the list of changes applied to the inventSum table in an independent DB transaction. Should the main transaction be rolled back, the system would iterate over the related records in inventSumLogTTS and programmatically roll back the changes to inventSum. Starting from Axapta 3.0, the inventSumLogTTS table was also used to support dynamic updates to the MRP plan. In Dynamics AX 4.0, the whole inventory multi-transaction system was dropped from Axapta and replaced with a new approach. Nevertheless, the inventSumLogTTS table still exists in the system, but nowadays it is used only to support dynamic MRP plan updates. Today the name of the table is misleading, because it is no longer used to log inventSum updates. As of versions 4-2012, it is used only to keep track of inventTrans changes which have not yet been applied to the dynamic plan.


Two stories about inventory closing and SQL Locks

Story #1: update_recordset considered harmful.

Recently, I was invited to a customer who had started to use the inventory closing functionality about a month before. They were using DAX2009 with the recent rollup 7. Every time they ran inventory closing, the system locked all (or at least most of) the inventTrans records, effectively preventing users from doing any regular work in the logistics module. Before I came to the customer, I had already heard rumors that the new functionality for closing non-financial transfers can cause long-duration locks on inventory data. Now I had a chance to witness it with my own eyes. The SQL statement that caused the show-stopper problem in IC was very easy to find (I knew the place in the code before I went to the customer's site). It was the following query from the InventCostNonFinancialTransferHandler.updateInventTrans() method:

update_recordset inventTrans
    setting NonFinancialTransferInventClosingRecId = _inventClosing.RecId,
            DateClosed = inventTrans.DateFinancial,
            ValueOpen  = InventTransOpen::No
    where inventTrans.DateFinancial != dateNull()
       && inventTrans.DateFinancial <= _inventClosing.TransDate
       && inventTrans.ValueOpen == InventTransOpen::Yes
       && inventTrans.Direction == _direction
       // Select transactions that represent transfers:
       // Transfer journal, WMS Location refill
       && (inventTrans.TransType == InventTransType::InventTransfer
       // Transfer Order
       || inventTrans.TransType == InventTransType::TransferOrderReceive
       || inventTrans.TransType == InventTransType::TransferOrderShip
       // WMS Pallet transport
       || inventTrans.TransType == InventTransType::WMSTransport)
    join inventDim
        where inventDim.InventDimId == inventTrans.InventDimId
    exists join inventTransTransferMatch
        where ((inventTrans.InventTransIdTransfer != '' && inventTransTransferMatch.InventTransId == inventTrans.InventTransIdTransfer)
            || (inventTrans.InventTransIdTransfer == '' && inventTransTransferMatch.InventTransId == inventTrans.InventTransId))
           && inventTransTransferMatch.ItemId == inventTrans.ItemId
           && inventTransTransferMatch.Direction == transferMatchDirection
        // select transactions where the corresponding transaction has the same values of financial inventory dimensions
        #InventDimJoin(inventTransTransferMatch.inventDimId, inventDimTransferMatch, inventDim, _inventDimParm)
    join inventTable
        where inventTable.ItemId == inventTrans.ItemId
           && inventTable.DimGroupId == _inventDimGroupId
    // exclude std cost items
    join inventModelGroup
        where inventModelGroup.ModelGroupId == inventTable.ModelGroupId
           && inventModelGroup.InventModel != InventModel::StdCost
    // there should be no settlements for these transactions
    join inventTransNotSettled
        where inventTransNotSettled.InventTransRecId == inventTrans.RecId
    notexists join inventCostNonFinTransferMarking
        where inventCostNonFinTransferMarking.InventTransRecId == inventTrans.RecId;


Looks impressive, huh? If you explode the nested views, you find out that this query joins 10 or 11 tables (I lost count, to be honest). After I started looking into the query, I understood that from an application logic point of view the query is OK. It must update all non-financial transfers for a given dimension group in one shot. It should not lock users out of current operations (e.g. reserving stock for sales orders or posting a picking list), because it should update only financially updated transactions from the period being closed. I assumed that maybe this query was just causing lock escalation from record level to page level and then to table level, thus causing the total lockup. I tried to disable lock escalation for inventTrans with an alter table set(lock_escalation=disable) statement (just for a test; I think it would be a bad idea in general). It did not help. Then I tried to analyze the lock information by selecting data from the sys.dm_tran_locks view. I found out that this statement placed a U-lock on seemingly every page of inventTrans (or, at least, all pages with the given dataAreaId).

I had never had much experience with U-locks before. I had seen them in a database from time to time, but I had read somewhere in SQL BOL that they are a kind of short-duration lock issued by an update statement to maintain consistency and prevent deadlocks. When I tried to find more information on update locks on the internet, I ran into the excellent article Read Committed and Updates by Craig Freedman. It turns out that an update statement in SQL (and an update_recordset statement in Axapta, respectively) places a U-lock on EVERY row it reads. What happens next? If the update statement has simple logic (like update table set nonindex_field=constant where another_field=constant), the system either updates a row (if it fits the where condition) and replaces the U-lock with an X-lock, or it immediately releases the U-lock (if the row does not fit the condition). So in simple cases, a U-lock is held on a row for a really short time: from the reading of a page into memory until the end of the page's in-memory scan. But in complex queries, especially queries with a self-join, the U-lock is taken at the very beginning of the SQL statement's execution (when the page is read for the first time) and kept until the query reaches the home stretch, when all the records to be updated have been determined and the query starts the update phase itself. Since our query in question joins 11 tables, even in the very best case the system was taking a couple of minutes to evaluate all the necessary information and proceed to the update phase. In the worst case (and this worst case happened too often), the system chose an incorrect execution plan. In that case, the U-locks persisted for 30-40 minutes, and they covered all the pages in the table, because the system was simply reading all pages with the given dataAreaId (inventTrans does not have a more specific index suitable for the query conditions), thus leading to a total lock-up for 30-40 minutes.

BTW, there is not much point in locking here, because these updates are called only from the starting part of inventory closing, which is executed in single-user mode. The chances that these inventTrans records would be updated by some other user activity (not IC) are pretty thin: there is not much you can do with already financially updated inventory transfer transactions. The only thing that comes to my mind is the idea that some crazy user will start to change the InventDimId of inventory transactions via Item Details->Functions->Edit dimensions. But I think we can ignore this possibility 😉 I can understand SQL Server's concerns about the data being modified by a different process, but from an application logic point of view it is highly improbable.

Once I had identified the issue, I started to try different ways to fix it. The first thing I tried was to replace the update_recordset with an equivalent set of statements without update_recordset: while select table {select forupdate table2 where table2.recId==table.recId; table2.update();}. Since the outermost select does not use the 'pessimisticlock' clause (and InventTrans has OCC enabled), it does not place any locks on the table it reads. The inner select uses the 'forupdate' clause, but because of OCC it is just syntactic sugar which does not generate any additional clauses or hints in the select statement sent to SQL Server. This approach brought some progress: although the query (now a select, not an update) still took 30-40 minutes to execute, it did not place these nasty U-locks over all the records; IC was still slow, but at least it stopped locking out users. As the next stage, I tried to split the one select into two nested selects. For everyone who is into SQL statement tuning this sounds quite counterintuitive, but since we are unable to influence the query plan directly with hints in X++, we must somehow nudge SQL Server into selecting a more or less decent execution plan. I tried the following approach: on the outer level I placed a query which simply produced the list of itemIds which have non-closed transfers (any transfers, maybe even financial ones); in the nested while select I used a copy of the original select, only adding an 'itemId == itemIdFromOuterSelect' condition to every occurrence of InventTrans and inventSettlement in the query. This approach brought good results: processing of non-financial transfers in my case took only 1-1.5 minutes. Maybe in your case, if you have many items with transfers, the whole logical block will take about 30-40 minutes to execute, but that is still much better than 30-40 minutes per item dimension group. Well, maybe this approach is not the most perfect one. Maybe I should have tried to combine the outermost select (to select the item list) with the inner update_recordset; maybe these U-locks would not be so nasty after all. If the statement takes only 5-10 seconds to execute, then the benefits of fast execution may outweigh the potential lock conflicts. But for my case it was an acceptable solution and the customer kept using it.
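
Schematically, the final fix looked like the sketch below. It is heavily simplified and only illustrative: the inner loop in the real code carried over all the joins and conditions of the original update_recordset (restricted to one item), which I omit here for brevity.

static void closeNonFinancialTransfersSketch(InventClosing _inventClosing)
{
    InventTrans inventTransItems;   // outer loop: items which have open transfer transactions
    InventTrans inventTrans;        // inner loop: the transactions to close for one item

    ttsbegin;

    // Outer query: just collect candidate itemIds, one row per item.
    while select ItemId from inventTransItems
        group by ItemId
        where inventTransItems.ValueOpen == InventTransOpen::Yes
           && (inventTransItems.TransType == InventTransType::InventTransfer
            || inventTransItems.TransType == InventTransType::TransferOrderReceive
            || inventTransItems.TransType == InventTransType::TransferOrderShip
            || inventTransItems.TransType == InventTransType::WMSTransport)
    {
        // Inner query: the original statement restricted to a single item, so that
        // SQL Server has a fair chance of picking an ItemId-based index.
        // (All other joins and conditions of the original query must be added here.)
        while select forupdate inventTrans
            where inventTrans.ItemId        == inventTransItems.ItemId
               && inventTrans.ValueOpen     == InventTransOpen::Yes
               && inventTrans.DateFinancial != dateNull()
               && inventTrans.DateFinancial <= _inventClosing.TransDate
        {
            inventTrans.NonFinancialTransferInventClosingRecId = _inventClosing.RecId;
            inventTrans.DateClosed = inventTrans.DateFinancial;
            inventTrans.ValueOpen  = InventTransOpen::No;
            inventTrans.update();
        }
    }

    ttscommit;
}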

As far as I understand, the whole feature was first designed for DAX2012 (where the related statement is always executed per item) and then backported to DAX2009 without too much testing. Also, to my knowledge, MS is currently working on a patch for this feature in DAX2009.

Story #2: A wrong query execution plan can ruin a helper-based algorithm in Axapta

On another project, the customer often had performance issues with inventory closing cancellation.

In the beginning, it was merely an issue of total execution time. When they ran IC cancellation in interactive mode, it took around 3-5 hours to complete. I pointed them to the fact that if you execute IC cancellation in batch mode, it runs many times faster than in interactive mode. The trick is simple: when IC cancellation is started, it checks the execution mode. If it is running in batch mode, it checks which items have inventSettlement records in the inventory closing being canceled and spawns a batch task of class InventCostClosingCancel_WorkInvent for every such item. Next it iterates over all indirect cost journals created by the inventory closing being canceled and, for every cost journal, spawns a batch task of class InventCostClosingCancel_WorkProd. Then all these tasks start to execute (up to the batch server's maximum number of threads at a time), which brings a significant (5-10x) increase in performance. (It is worth mentioning that all these batch tasks are spawned with an empty batch group, so you have to configure at least one of your batch servers to serve batch jobs from this batch group.)

So, after they started to run inventory closing cancellation in batch mode, the time of the operation dropped to 45-60 minutes (the progress percentage in the batch screen advanced by 1-2 percent every minute). But from time to time a batch-mode cancellation went exceptionally slowly (like 0.1-0.15% per minute). I decided that SQL Server sometimes generates silly execution plans and prescribed running update statistics on inventTrans/inventSettlement/prodTableJour before running IC cancellation. For a while this well-known home remedy against spontaneous slowness of queries helped to resolve the issue. But eventually, some 8-10 months after go-live, inventory closing cancellation became very slow even after a statistics update, so I had to investigate the issue a bit more.

When I checked the list of connections, I found a lot of blocked processes from the batch server, so it was a locking issue, not an issue of a wrong execution plan (or at least that is what I decided at first glance). Next I ran SQL Profiler and found out that the system was generating several deadlocks per second. Finally I started to check the list of running queries and the resources they were waiting for. I found out that all the processes were waiting on a newly inserted key in inventSettlement.DateVoucherIdx or prodTableJour.DateVoucherIdx. (I concluded that they were waiting on a new key because, when I tried to select data from the tables using the %%lockres%% pseudo-column, I was unable to find the record being locked.) I assumed that a transaction inserts a record, then everyone waits on it, and then the transaction gets rolled back because of deadlock detection. (This was the only reasonable explanation of why all connections were waiting on a non-existent record.) The waiting statements were one of the following update_recordset statements, from inventCostClosingCancel_workInvent.duplicateSettlements() and inventCostProdCancel.prodTableJour() respectively:

update_recordset inventSettlement
    setting Cancelled            = NoYes::Yes,
            QtySettled           = -1 * inventSettlement.QtySettled,
            CostAmountSettled    = -1 * inventSettlement.CostAmountSettled,
            CostAmountAdjustment = -1 * inventSettlement.CostAmountAdjustment
    where inventSettlement.Voucher   == newVoucher
       && inventSettlement.TransDate == newTransDate
       && inventSettlement.Cancelled == NoYes::No
       && inventSettlement.ItemId    == itemId;

update_recordset prodTableJour
    setting Canceled        = NoYes::Yes,
            AmountFinancial = -1 * prodTableJour.AmountFinancial,
            AmountPhysical  = -1 * prodTableJour.AmountPhysical,
            Adjustment      = -1 * prodTableJour.Adjustment
    where prodTableJour.Voucher   == newVoucher
       && prodTableJour.TransDate == newTransDate
       && prodTableJour.Canceled  == NoYes::No
       && prodTableJour.ProdId    == prodTable.ProdId;

After I found and identified the places in the X++ code which were causing the deadlocks, I was finally able to understand the issue. The logic of IC cancellation is quite simple: it simply inserts an exact copy of the closing records, but with a reversed sign. Since the insert_recordset statement in X++ does not support expressions in the select list, the author of the code used a simple pattern to work around this: first the system inserts a copy of the old data (inventSettlement/prodTableJour) 'as is' (only the voucher id and date are changed); next, a special update_recordset statement (which does support some basic expressions in the setting clause) updates the newly inserted records, inverting the values of the fields. Since this insert and update are made in one transaction, no other user can read our half-cooked records with the un-inverted sign.
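
A simplified, illustrative sketch of this insert-then-invert pattern is shown below. It is my reconstruction, not the actual duplicateSettlements() code, which copies more fields and has more conditions:

static void duplicateAndInvertSketch(Voucher _origVoucher, Voucher _newVoucher,
                                     TransDate _newTransDate, ItemId _itemId)
{
    InventSettlement inventSettlement;
    InventSettlement origSettlement;

    ttsbegin;

    // Step 1: insert verbatim copies of the original settlements under the new
    // voucher and date. insert_recordset accepts variables in the select list,
    // but not expressions like -1 * field.
    insert_recordset inventSettlement
        (Voucher, TransDate, ItemId, QtySettled, CostAmountSettled, CostAmountAdjustment)
        select _newVoucher, _newTransDate, ItemId, QtySettled, CostAmountSettled, CostAmountAdjustment
        from origSettlement
        where origSettlement.Voucher == _origVoucher
           && origSettlement.ItemId  == _itemId;

    // Step 2: invert the sign on the copies just inserted; update_recordset
    // does support simple expressions in the setting clause.
    update_recordset inventSettlement
        setting Cancelled            = NoYes::Yes,
                QtySettled           = -1 * inventSettlement.QtySettled,
                CostAmountSettled    = -1 * inventSettlement.CostAmountSettled,
                CostAmountAdjustment = -1 * inventSettlement.CostAmountAdjustment
        where inventSettlement.Voucher   == _newVoucher
           && inventSettlement.TransDate == _newTransDate
           && inventSettlement.Cancelled == NoYes::No
           && inventSettlement.ItemId    == _itemId;

    ttscommit;
}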

From an application logic point of view, since every helper thread processes its own itemId or journalId, no lock conflicts or deadlocks should occur. But since an update statement in SQL (and an update_recordset statement in X++) has a habit of locking EVERY record it reads, the locking logic suddenly becomes non-deterministic. If SQL Server uses an execution plan that goes through the ItemId/JournalId index, the threads avoid locking conflicts, since each of them updates its own personal set of keys. But from time to time SQL Server has a bad day and selects the voucher+date index for both queries. In this case, the first helper thread inserts records for item A and locks the related key entries in the voucher+date index; then the first thread starts to iterate over ALL index entries for the given voucher and date (which are common to all helpers, since all inventSettlement records of the cancellation use the same voucher and date). Sooner or later this scan tries to read the uncommitted inserts of the second helper thread, which blocks the first thread until the second thread commits its transaction. The second thread, in turn, sooner or later starts to iterate over all the index entries for the same voucher and date, and sooner or later it hits the locks on the records inserted by the first thread. SQL Server, upon finding the classical deadlock situation (two processes waiting on each other), terminates one of the two (or several) deadlocking connections, forcing the victim threads to redo the same transaction.

In both cases, I fixed the issue by building a very specific index to prevent any potentially conflicting index reads in these pieces of code. In the first case (for inventSettlement) it was an index on TransDate+Voucher+ItemId; in the second case (for prodTableJour) it was an index on TransDate+Voucher+ProdId. Since then, SQL Server has always used these indexes for the related update_recordset statements, thus eliminating this silly deadlock issue. (It is silly because it was not caused by any real-life conflict over the same resources; it was just the result of a mis-optimization of a query in SQL Server.)

If you do not believe me or just want to reproduce the second issue on your own test installation, try disabling the ItemDateIdx index on inventSettlement and the NumDateIdx index on prodTableJour. Then run inventory closing cancellation in batch mode with a SQL Profiler deadlock trace running. Moreover, when I was preparing this article, I managed to run into the same issue on the standard CONTOSO database of a standard international version of DAX2009 RU7. It seems that the issue can be quite frequent, and even an average installation can benefit from building the two additional indexes I mentioned in the previous paragraph.

Lessons learned:

  1. Locking in SQL Server can be non-deterministic. The set of records locked can be defined not by your application logic, but by some random fluctuation of the SQL Server optimizer.
  2. Be careful with the update_recordset statement. It can place locks on records you never expected. Think about building appropriate indexes or rewriting the query if you are updating a table with a highly concurrent access pattern.
  3. Sometimes a bad query plan can bring not only performance issues, but also concurrency issues. Do not forget to do all the usual hoodoo to keep the SQL optimizer happy: update statistics regularly, rebuild indexes regularly, and so on. If that does not help, try to rewrite the query…



A bit of advertising ;)

As you probably already know, Nigel Frank International is now conducting its Dynamics Salary Survey.

Of course, the primary goal for these guys (or these ladies, which looks more like it) is to collect an HR database with contacts of potential hires, as they always ask you to fill in your contact data at the end of the survey. Yet the results of the survey, which they promise to send to everyone who submits it, can be very interesting for everyone in the Dynamics AX community. I have filled out this questionnaire already, because I am interested in three things:

  1. Quite naturally, I want to know which country has the highest salary for a Dynamics AX specialist. 😉 Especially after this salary is normalized by, say, the Expatistan cost of living index.
  2. I want to figure out the proportion between the cost of software (Dynamics AX in our case) and the cost of implementation. Say, 11 years ago, when I started my career as an Axapta specialist, a typical DAX project budget in Russia was around $100-120K for software and about $80-90K for consulting. Now, after 10 years of high oil prices and the consequent increase in dollar wages, the average salary in the Dynamics AX market in Russia has grown fourfold. Now, even with a considerable increase in the Dynamics AX price per user seat, the cost of consulting is typically about 200% of the software purchase cost. I wonder which country has the highest annual-wage to user-license-price ratio and which the lowest. Actually, it looks a bit peculiar to me that Microsoft uses almost the same sales price for all countries regardless of their GDP per capita and average salary. I guess that if Microsoft's pricing policy were more flexible, they could greatly increase their user base in less-well-to-do countries. It might pay off pretty well in the future, but it requires a more strategic kind of thinking than the typical Microsoft executive has…
  3. I also want to compare average Dynamics CRM salaries year-over-year. I think we are witnessing a kind of commoditization of the Dynamics CRM market. In most cases, Dynamics CRM is used simply as a development environment for any kind of custom development. No significant knowledge of consulting in general, or of sales management principles in particular, is required to enter this market. Most Dynamics CRM specialists (at least the people I have had a chance to meet and work with) are techies who studied Dynamics CRM in a row of other Microsoft technologies (somewhere after ASP.NET and MS SQL, but before BizTalk). So I expect a drop in average Dynamics CRM salaries in the future. I recently read that Microsoft's Dynamics CRM revenue increased by 30% in the last quarter. It might be a good indication of a forthcoming salary drop: demand for CRM specialists will increase sharply and then drop sharply as well. Sooner or later, knowledge of Dynamics CRM may become a kind of commodity knowledge, like knowledge of C# or MS SQL. It can still bring money, but this money won't be as good as it was 2-3 years ago…

So, I hope that this survey will be representative enough to provide every participant with relevant data. That’s why it is so important for everyone to fill it out…

Regards,

Denis


Multithreaded MRP and related issues

Today, I will share some experience related to setting up and fine-tuning MRP for best performance.

The most significant MRP-related improvement of DAX2009 over older versions is the ability to execute MRP in multi-threaded mode, where the planning process is spread over several independent processes executed in parallel. As usual, multi-threaded execution works only when MRP is executed in batch mode, since the helper threads are spawned as additional tasks inside the same batch job; this is the standard way to support multi-threaded execution in Axapta.

Basic setup

The number of helper threads is specified on the second tab (Scheduling helpers) of the MRP startup dialog. How many helpers should be used? In my experience, MRP scales very well. On my current project we use 15 helpers (and 1 main thread). We tried to increase the number of helpers to 31, but did not find any significant performance difference. Generally, it depends on the configuration of your batch server(s) and your database server. When you try to increase the number of helper threads, you should check resource utilization on both the batch server and the database server. If one of these servers becomes saturated after yet another increase in the number of helper threads, then it is time to stop or, maybe, time to upgrade your hardware. I also want to mention that I have had positive experience with spreading MRP over several batch servers. My personal recommendation is as follows: first, check every batch server which will participate in MRP for the number of CPU cores it has. Then configure every batch server to run 2 x number_of_cores batch threads (in Administration->Setup->Server configuration). Then specify the number of helpers as the total number of batch threads across all batch servers serving your MRP batch group MINUS 1. Remember that the main MRP thread also consumes one thread from your batch server group's capacity. Also, if you are going to run other batch processes on the same batch server(s) during the MRP run, you may want to decrease the number even further, so maybe you should set the number of helper threads to the total thread capacity minus 2 or 3.
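
For example (my own arithmetic under this recommendation, not a universal rule): a single dedicated batch server with 8 cores gives 2 x 8 = 16 batch threads, hence 15 helpers plus the main MRP thread, which is exactly the 15 + 1 configuration mentioned above.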

The next important parameter to discuss is the item distribution factor. What does it mean? At the very first stage of MRP (in the Update phase, to be precise), the system allocates all items (all items in inventTable, or all items in inventTable which fit the query specified in the MRP startup dialog) into chunks. Every chunk is a kind of unit of work which is processed by one thread. During the phases of MRP which are executed in parallel (data regeneration, coverage planning, futures planning and action messages), every thread grabs a chunk, processes it, then grabs another chunk, processes it, and so on, until all chunks for the given stage and BOM level are marked as processed.

The size of a chunk is calculated as Item_Distribution_Factor * (Number_Of_Helpers + 1).

On the one hand, smaller chunks ensure smooth processing and an even allocation of items between threads. The smaller the chunk, the lower the chance for a thread to grab a chunk with a high number of computationally intensive items. Say one item takes from 1 to 10 seconds to be coverage planned. If the chunk size is only 1 item, then in the worst case the latest thread will finish BOM-level coverage only 10 seconds after the earliest thread finished. If the chunk size is 300 items, then in the worst-case scenario the difference in processing time is 300*10 seconds - 300*1 second == 2700 seconds == 45 minutes. It means that in this worst-case scenario there is a good chance that most of the helper threads will be doing nothing, waiting 30-45 minutes for the last, unlucky thread to finish processing. This would increase planning time and also lead to non-optimal usage of hardware (since most of the threads would do nothing while waiting for the end of the BOM level). On the other hand, allocation of a chunk to a thread is a competitive process which leads to temporary database locks. Several threads often try to allocate the same chunk in parallel; only one of these threads succeeds, while the others repeat the allocation process until they grab their own chunk. So chunk allocation can become a bottleneck itself if the number of chunks is too high and the chunk size too small. In my experience, a reasonable chunk size is somewhere between 10 and 60. To find the optimal chunk size and distribution factor, you can simply run several tests with different distribution factors. Also, you can tick the 'Track item process duration' checkbox in the MRP dialog and then check typical item planning times in Master Planning->Inquiries->Unfinished scheduling processes->Inquiries->Item process duration. If the item process duration varies a lot, you can benefit from a smaller chunk size; if it does not, then an increase in chunk size will probably be more beneficial.

Another potential way to improve performance (I have never tried it, though) is to put randomly ordered items into a chunk. Currently, when the system creates chunks, it simply iterates over inventTable ordered by itemId, so a chunk contains items with similar itemIds. Since items with a similar planning complexity often have sequential itemIds, this frequently leads to an uneven distribution of items between chunks. Some chunks consist of items which are regular purchased items, while other chunks consist of complex BOM items which require complex resource planning on many work center groups. If you add a special MRPOrder field to inventTable, fill it with a random number during item creation, and then sort by this field during chunk generation, you can get a more even distribution of items between chunks.
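
If you want to experiment with this idea, the item-side part could look roughly like the sketch below. It is purely hypothetical: the MRPOrder field does not exist in the standard application, and the chunk-generation code would still have to be changed to sort by it.

// Hypothetical override of insert() on InventTable: give every new item a
// random MRPOrder value, to be used later as the sort order during chunk generation.
public void insert()
{
    RandomGenerate randomGenerate = new RandomGenerate();
    ;
    randomGenerate.parmSeed(timenow());
    this.MRPOrder = randomGenerate.randomInt(1, 1000000);   // MRPOrder is a custom field

    super();
}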

Infant mortality issue

A frequent problem of a parallel MRP run is early termination of the helper threads. Say we started our MRP as usual, but 15 minutes later we see that all our helper batch tasks have terminated with the strange message "Nothing to process". (You can see this message if you click the Log button in the Batch Task form before the main thread of the MRP batch terminates.) Then the only remaining main thread continues to run MRP in single-threaded mode (very slowly).
Here is what is happening:

  1. The main thread spawns the helper threads and continues its work
  2. The helper threads continuously check the status of the process, waiting for it to advance to "Deleting and inserting data"
  3. The main thread deletes net requirements and planned orders for the plan being regenerated (and this usually takes a lot of time)
  4. The helper threads, after waiting 15 minutes for a status update, terminate with the "Nothing to process" message
  5. The main thread runs MRP in single-threaded mode.

First of all, the standard logic of MRP has an obvious bug in the helper thread behavior. While waiting for a status update from the main thread, the system keeps reading the process status table (reqProcessList) continuously, without any pause between SELECT statements. It is easy to figure out that, say, 16 threads, each issuing 70-100 statements per second, will send about 1500 statements per second to the DB server. This amount of queries can easily consume an average DB server's capacity, so it simply won't have time for useful work (like the deletion of net requirements in the main thread). To prevent this behavior, you need to modify the run() method of the ReqProcessExternalThread class, which contains the ill-fated status waiting loop. Find the end of the while(true){} loop and insert a sleep(500); statement into it. This adds a 0.5-second delay between re-reads of the process status, giving your database server time to breathe.
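
Schematically, the modified waiting loop in ReqProcessExternalThread.run() ends up looking like this (a sketch of the change only, not the full method body):

while (true)
{
    // ... standard logic: re-read the reqProcessList status record and break out
    //     of the loop as soon as the process status has advanced ...

    // Added line: pause half a second between status re-reads, so that idle
    // helpers do not flood the database server with SELECT statements.
    sleep(500);
}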

Another issue to fix is the low timeout value for the helper threads. If you have a realistically large working set of net requirements, deletion of the plan takes much more time than the 15 minutes expected by the developers of this functionality. To increase the timeout, open the classDeclaration of the same class (ReqProcessExternalThread) and modify the definition of the WAITFORPROCESSSTATUSUPDATE macro. Try to change its value from 15 to at least 60, or maybe even 90.
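
The corresponding change in the classDeclaration amounts to a single macro value (60 shown here as an example, assuming the macro is a plain #define):

// classDeclaration of ReqProcessExternalThread (fragment)
#define.WAITFORPROCESSSTATUSUPDATE(60)   // standard value is 15; increase it so helpers survive a long plan deletion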

Slow end-of-level processing.

Often you can see in the Unfinished scheduling processes form that during the coverage phase, at the end of a BOM level, the system works very slowly. It looks like it has almost hung for 30-40 minutes, but then it suddenly advances to the next level. If you start AOS tracing on the batch server, you will see that all helper threads are doing nothing (they simply keep re-reading the process status, waiting for the advance to the next BOM level), while one unlucky thread is busy calculating coverage for a portion of the items. Also, in the Unfinished scheduling processes form, you may notice that the number of items on the level decreases! Ten minutes ago you had more than 15000 items on the level, now it is slightly less than 15000, and in a few minutes it will be less than 14500.

What's happening?

The main reason for the issue is incorrect BOM level data in inventTable. When the system allocates items between chunks, every chunk has a BOM level attached. But sometimes, during the explosion of a BOM on, say, level 5, the system finds as a component of the exploded BOM an item whose BOM level (according to inventTable) is 2. Coverage for this item has already been created. How does the system calculate coverage for this item again?

  1. The line for the item in ReqProcessItemListLine (the list of lines in a chunk) is marked as 'IsLevelChanged'
  2. The system finds or creates a special 'spare' chunk for items which were pushed to the next level because of a sudden level change. This chunk's line in reqProcessItemList is marked as 'Spare'.
  3. The system reassigns the line for the item in ReqProcessItemListLine to the spare chunk
  4. During processing of the next BOM level, the system checks whether the item has been 'pushed' from level to level. If the level of the item has changed, the system deletes the planned orders previously created for the item during planning on the incorrect BOM level.

The main performance issue with this approach is that the system does not control the size of the 'spare' chunk at all. I have witnessed cases where the size of this chunk exceeded 1000 items. If planning of every item takes, say, 4 seconds, then the unlucky thread which managed to grab this chunk will spend more than an hour processing it. Since this chunk has the highest number for a BOM level, it is usually grabbed by some unlucky thread at the very end of BOM level processing. Then one thread processes this chunk for an hour, while all other threads do nothing, simply waiting for our unlucky thread.

The first idea of how to fix the issue which comes to mind is to calculate BOM levels before every full regeneration planning. Unfortunately, it does not always help. Sometimes we have incorrect BOM level info in inventTable only because we changed our BOM structure without recalculating BOM levels. But often the reason for a level change is that the real-life nesting of items in production BOMs does not fit the theoretical BOM structure described in the master-data BOMs. Say we have a shiny brass-head bolt which is used only to screw a label with the name of our company onto the final assembled good. Naturally, after calculation, it will get BOM level 1. Say, then, during production of a deeply nested sub-BOM of our finished good, someone decides to use the same bolt for screwing a much smaller label onto one of the sub-components. He drops the line with the standard bolt from the production order's BOM and adds a new line with the shiny brass-head one. (Maybe it is not very good practice, but you simply cannot update the standard BOM structure for every possible small change requested by a customer.) If you recalculate BOM levels, this bolt will still have BOM level 1. But if you run MRP, you will find that its coverage is actually performed on level 5 or 6. By the way, one of the very last stages of MRP is the BOM level update. During this stage, the system updates inventTable with the actual BOM level data gathered during MRP processing, not with the theoretical BOM levels from the master-data BOM structures.

So the only way to resolve the issue is to change the behavior of the system to create many smaller spare chunks for a level, instead of one large spare chunk. To accomplish this, you need to modify the getSpareListNum() method of the ReqProcessItemList table:

static server ReqProcessListNum getSpareListNum(ReqProcessId        _processId,
                                                BOMLevel            _level,
                                                ReqProcessStatus    _status,
                                                Connection          _con
                                                )
{
    ReqProcessItemList          reqProcessItemList;

    ReqProcessItemListLine       reqProcessItemListLine;
    RandomGenerate              randomGenerate = new RandomGenerate();

    // #MAXLISTSIZE is the maximum size of a spare chunk; define it here or in the
    // table's classDeclaration. A value around 30-60 works well.
    #define.MAXLISTSIZE(50)
    ;

    select firstonly ListNum, RecId from reqProcessItemList
    order by listNum desc
    where
        reqProcessItemList.ProcessId == _processId &&
        reqProcessItemList.Level     == _level     &&
        reqProcessItemList.Spare     == true;
    if (reqProcessItemList)
    {
        select count(recid)
        from reqProcessItemListLine
        where reqProcessItemListLine.ProcessId==_processId &&
              reqProcessItemListLine.ListNum==reqProcessItemList.ListNum;
    }
    if (!reqProcessItemList || (reqProcessItemListLine.RecId >= #MAXLISTSIZE)) // current spare chunk is missing or already full -> create a new one
    {
        randomGenerate.parmSeed(timenow());
        try
        {
            reqProcessItemList=null;
            reqProcessItemList.setConnection(_con);

            select maxof(ListNum) from reqProcessItemList where reqProcessItemList.ProcessId == _processId;

            reqProcessItemList.ListNum=reqProcessItemList.ListNum+randomGenerate.randomInt(1,20);
            reqProcessItemList.ProcessId = _processId;
            reqProcessItemList.Level     = _level;
            reqProcessItemList.Status    = _status;
            reqProcessItemList.Spare     = true;

            reqProcessItemList.insert();
        }
        catch(Exception::DuplicateKeyException)
        {
            retry;
        }

    }

    return reqProcessItemList.ListNum;
}

The only strange thing in this piece of code is the usage of a random number generator. I use it to decrease the number of potential conflicts when several threads try to allocate a new spare chunk at once, which would lead to duplicate key exceptions.

Parallel BOM calculation in RollUp 7

Since Microsoft has released the most recent update of Dynamics AX (I mean SP1 rollup 7), I would like to point the readers of my blog to a very important update in this version, which can dramatically increase the performance of one of the critical, but slow, operations in the production module of Axapta.

I am talking about the BOM calculation process, which in previous versions could only be started in single-threaded mode. I have heard about implementations where BOM recalculation for all BOMs in the system was taking more than 2 days.

Somewhere last December, Microsoft released hotfix KB2487033, which supports multi-threaded execution of BOM calculation, and now this feature is included in Rollup 7.

This nice feature works only if the BOM calculation is started not for a given ItemId, but for all BOMs in the system (say, from the costing version form) and if the user specified batch-mode execution for this batch job. In this case, the system behaves in the following way:

  1. BOM levels are recalculated
  2. The tables bomCalcItemTask and bomCalcItemInventoryDimensionTask are populated with the item ids and item inventory dimensions to calculate. (These two steps are also executed in the standalone, single-threaded mode of calculation.)
  3. In single-threaded mode, the system then starts to iterate over the items to calculate, starting from the lowest-level components (with the numerically highest BOM level).
  4. In multi-threaded mode, the system SPAWNS a certain number of helper tasks for EVERY BOM level to be calculated. It also sets dependencies between the tasks of every BOM level, so tasks for BOM level 7 start only after the tasks for BOM levels 8 and 9 are completed, tasks for BOM level 5 depend on the tasks for BOM levels 9, 8, 7 and 6, and so on (see the sketch after this list).
  5. Then, in multi-threaded mode, the system simply calculates every BOM level in parallel. It is quite a meditative experience to watch the progress of the tasks in your BOM calculation batch job. 🙂 It also saves you a lot of time. In my case, calculation time dropped from one night to something like 20-30 minutes, and 10 minutes of that were taken by the BOM level recalculation (which, I guess, is by its nature not parallelizable).
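
Schematically, the per-level dependency setup described in point 4 can be pictured like the sketch below. This is my reconstruction using the generic BatchHeader API from memory, not the actual code of the feature; for readability it creates only one task per level, while the real feature creates several helper tasks per level and makes them depend on all tasks of the deeper levels.

// Illustrative only: BomCalcLevelTaskSketch is a made-up task class standing in
// for the real per-level BOM calculation tasks.
BatchHeader  batchHeader = BatchHeader::getCurrentBatchHeader();
RunBaseBatch currentLevelTask;
RunBaseBatch previousLevelTask;
int          bomLevel;
int          maxBomLevel = 9;    // deepest BOM level to calculate (example value)

for (bomLevel = maxBomLevel; bomLevel >= 0; bomLevel--)
{
    currentLevelTask = BomCalcLevelTaskSketch::newLevel(bomLevel);
    batchHeader.addTask(currentLevelTask);

    if (previousLevelTask)
    {
        // a task for level N may start only after the deeper level N+1 has finished
        batchHeader.addDependency(currentLevelTask, previousLevelTask,
                                  BatchDependencyStatus::Finished);
    }

    previousLevelTask = currentLevelTask;
}

batchHeader.save();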

Nevertheless, there are a few issues which I would like to share here:

  1. The BOM calculation process is not robust. If one of the BOMs has an error (quite typical if data were imported from a legacy system), then on the first BOM with an error one of the BOM calculation tasks will terminate. Then the next task will try to recalculate the same BOM and terminate again, then the next, and so on. As a result, after the first calculation you will usually find that only 20-30 percent of your items have new prices. The solution is simple: put a try/catch block somewhere inside the BOMCalcJob_All.runBOMCalculation() method, so that an error in one BOM does not terminate the whole job (see the sketch after this list).
  2. When the system calculates the number of helper threads to be used for BOM calculation, it implicitly assumes that all available batch servers will be used for the calculation. Say you have 3 batch servers and every server has a maximum of 8 batch threads specified in its batch server configuration: the system will spawn 24 helper threads for every BOM level. First of all, the number itself is too big; normally 8-12 threads should be enough. Moreover, if you are going to use only one batch server (out of three) for the BOM calculation batch group, these 24 helper threads will be executed in three waves (8 threads each). So I recommend modifying the BomCalcJobItemTaskBatchManager.numberOfBatchSessions() method to return a more reasonable number of threads if you have more than one batch server or if your batch server has too many batch threads available.
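
For the first issue, the fix can be as simple as the wrapper sketched below, placed around the per-item calculation call inside BOMCalcJob_All.runBOMCalculation(). The names bomCalc and itemId are placeholders for whatever your version of the method actually iterates over:

try
{
    // calculate the current item/BOM (placeholder call for the real per-item logic)
    bomCalc.run();
}
catch (Exception::Error)
{
    // log the failure and carry on with the next item instead of killing the
    // whole helper task on the first broken BOM
    info(strfmt("BOM calculation failed for item %1; continuing with the next item.", itemId));
}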

Also, as a kind of shameless self-promotion 🙂 I want to mention that for one of my projects I made a clone of this feature, but a clone which calculates in parallel only ONE BOM with all its dependent sub-BOMs and items.


Ledger Balance Data in DAX

I remember the first time our team developed a solution which was supposed to quickly calculate the GL balance for a specific GL account/dimension combination. It was 2001 and we were developing cost allocation functionality.

We had just started to work with Axapta 2.1, so our first approach was simply to calculate a GL balance by summing up the LedgerTrans table from the beginning of time. Soon after we went live, we realized that the brute-force approach is not always the most successful way of doing things in Axapta.

Luckily, we found the ledgerBalance and ledgerBalanceDim tables. The first table contained the summed-up GL turnover per GL account + period; the second contained the summed-up GL turnover per GL account + GL dimensions + period. After we started to use these two tables for calculating GL balances, we saw a dramatic improvement in performance, since 200-300 ledgerTrans records were packed into one record of these balance tables. (We had around 10-15 postings per GL account daily and our GL period was a month.)

A little bit later (around the fall of 2001), we found out that these two tables were more a part of the problem than a part of the solution. 🙂 Every time you design a table which holds summed-up data for quick retrieval, you should remember that an update of the data in that table will hold a lock on the table's record until the end of the transaction. So if Axapta was implemented in a company which did not have a very detailed GL account sheet (say, only 3-4 GL accounts were used to hold customer payment balances), posting of a multi-line payment journal could prevent everyone else from posting ANY other customer-related document for 5-10 minutes. Somewhere at the beginning of the transaction, the system updated the balance data, and this data then stayed locked until the end of the transaction. In combination with the mutual locking during inventory updates, which I discussed in my article "History of Inventory Locking", this had rather catastrophic consequences for the system's stability and performance. A couple of carefully planted orders/journals with the same GL accounts and inventory items could make the system unusable for a whole business day :). The only remedy was to use batch servers for posting nearly all documents, because a batch server naturally prevents any competition for resources by converting several potentially parallel processes into one sequential queue. This approach could be a solution for accountancy-centric implementations, but any real-time process, like WMS, just could not be fixed this way. That's why I never believed any of the success stories about WMS implementations in Axapta made BEFORE the release of Axapta 3.0.

The situation was relieved in Axapta 3.0. These two tables were replaced with the ledgerBalanceTrans/ledgerBalanceDimTrans tables. To prevent lock chains and deadlocks during updates of balance data, the developers of Axapta introduced two additional fields in these new tables: the transaction date and the funny-named field 'LedgerBalancesVariant'. The first thing we realized was that balance data is now stored per date, not per period. Initially, the meaning of this Variant field was not clear to me. Soon I found out that the system actually keeps up to 20 (TWENTY) GL balance records for the same date and the same account/dimension combination. During an update, the system was RANDOMLY selecting a number from 1 to 20 and then updating/inserting the balance record with that arbitrary variant number. (Actually, it was not a completely random number. It was calculated as Variant = (SessionId mod 20), so updates of the same account/dimension combination from the same session went to the same balance record. If they had used a true random() value, updates from the same session and the same DB transaction would have been scattered across several records even if it was the only transaction for that combination on that day.) Thus, the chances of long locks were greatly decreased. Even if the GL account sheet is too coarse-grained, the probability of two users updating the same record at the same time is now only about 5%. When I first studied this concept, I thought that maybe beyond 300-400 concurrent users I would increase the number of these 'variants' to 30-40, but this idea proved to be wrong, since I never saw any serious lock contention on ledger balance data after the release of version 3.0. Of course, if someone were to monitor process locking in MS SQL, he/she would see occasional waiting locks on ledgerBalanceTrans/ledgerBalanceDimTrans from time to time, but generally, balance data ceased to be the main bottleneck of the system's performance.

On the other side, this solution actually traded query performance for update performance. After balance data had been scattered across 20 variants and stored on a per-date basis, the size of the ledger balance table became comparable to the size of the ledger transactions table itself. Typically, the size of the ledgerBalanceDimTrans table came to be around 50-70% of the ledgerTrans table. Since then, usage of ledgerBalanceDimTrans instead of ledgerTrans for reporting purposes has become an unnecessary complication, since the query time in both cases is comparable, but the ledger balance data sometimes lacks details important for some particular reports. Of course, in many cases usage of ledgerBalanceDimTrans is a little more convenient than direct usage of ledgerTrans data, since one record of the balance table holds both debit and credit turnover, while fetching the corresponding data from ledgerTrans with 'group by crediting' would give us the debit and credit turnovers in different records.

When I first saw the ledger balance tables in Dynamics AX 2009, I was very surprised to see that the Variant field had gone. First of all, I had not expected any changes in this area, since it had been non-problematic since 2003. Second, I saw no way for this mechanism to exist without the Variant field.

Actually, it turns out that the designers of the financial subsystem in Axapta realized that the actual results of the Variant algorithm were satisfactory, but not perfect. On one side, we lost almost all of the DB space/query time savings from ledgerBalance* usage; on the other side, we could still get lock conflicts on ledger balance data updates (though less frequent). So the designers decided to store ONE balance record per ONE ledger voucher. In the worst case (if we post mostly one-line documents) we will have as many ledgerBalance* records as ledgerTrans records. In the realistic case (at least 50-70% of documents are multi-line) we will have about the same number of records as with the Variant mechanism. Since this balance data has not been a query performance mechanism anyway since 2003, the actual size of the ledgerBalances* tables does not matter; it is enough for them not to outgrow ledgerTrans itself. On the other side, since EVERY ledger voucher now has its own balance record, there is no chance of lock conflicts at all.

One interesting point worth mentioning about the new mechanism is the way these balance updates are made now:

  1. All balance updates made within one ledgerVoucher are kept in a RecordSortedList (thus, in memory) somewhere in the LedgerBalancesList/LedgerBalancesPostingList classes.
  2. As soon as posting of the ledgerVoucher has finished, or the list becomes too large to hold in memory, it is flushed to the quasi-temporary table LedgerBalancesTransDelta.
  3. At the end of the ledgerVoucher, the system opens a separate additional database connection and uses this separate connection to execute an insert_recordset statement which inserts the new balance records by summing up ledgerBalancesTransDelta. This update uses an optimistic locking mechanism (with transaction retry), but since it is a separate transaction with very little actual database update, it can be rolled back and retried without much overhead. (And this is the exact reason to use a separate transaction for this update! If we were to use the main database transaction, a lock conflict would cause a huge overhead: imagine a 20000-line FA depreciation journal being rolled back and posted again because of a balance data lock conflict.)
  4. Finally, the records belonging to the current session are deleted from ledgerBalancesTransDelta. (That is why this table is called quasi-temporary.)

Although, as I said, I have never seen an actual Dynamics AX 3-4 implementation which really suffered from ledger-balance-related bottlenecks, it might be that these problems would arise in a 3000-4000 user implementation (I have personally never seen implementations with more than 700-800 users). Overall, the whole history of the changing approach to balance data in DAX looks like a good example of software design and engineering.

Tracing in Dynamics AX 4 and 2009

I have spent considerable time trying to figure out how tracing works in the newer versions of Dynamics AX. Although the manual for Trace Parser seems to be quite satisfactory on topics related to the analysis of a collected trace, it is pretty terse on the topic of trace collection and setup. The Dynamics AX Performance Team blog has two very useful posts on the trace parser (http://blogs.msdn.com/axperf/archive/2008/07/18/traceparser-video-training-part-i-installation.aspx and http://blogs.msdn.com/axperf/archive/2008/07/30/traceparser-video-training-part-ii-configuration-collection-and-importing.aspx), but it seems that these trainings cover some early beta of the trace parser tool and a prerelease version of DAX2009. So I decided to write a small article about my findings related to trace parsing in Dynamics AX.

To have a full picture of the tracing subsystem in DAX, keep in mind that DAX actually supports THREE kinds of tracing:

  1. SQL Tracing
  2. Server Tracing
  3. Client tracing

SQL Tracing

This kind of tracing has been supported since the earliest version of Axapta I have met. Even in Axapta 2.1 it was possible to get a log of the SQL statements issued by your application by checking the appropriate checkboxes on the SQL tab of the user settings. I won’t describe these settings in detail; you’d better play with them yourself to understand them. I only want to mention two SQL tracing related peculiarities introduced in version 4.0:

  1. SQL tracing works ONLY if your AOS configuration has the “Allow client tracing on Application Server Instance” setting checked. In versions 2 and 3, SQL trace worked independently of any server configuration settings, so I was a little surprised when I tried to run a familiar SQL trace on DAX 4.0 without any results. It seems that, in an attempt to gain a couple of extra percent of performance, the DAX kernel team decided to make SQL tracing globally switchable starting from version 4.0.
  2. If you are trying to log a SQL trace to a file, expect to find this file ON THE SERVER SIDE. In versions 2 and 3, the SQL trace file was stored in the local log directory; starting from version 4.0, it is stored in the server-side log directory. Probably this is because the performance team decided to completely decouple the tracing process from the client. Before attempting to trace SQL statements to a file, please ensure that your AOS user (I mean the AD account under which the AOS runs) has sufficient rights to create and write files in the log directory. (Normally it is something like C:\Program Files\Microsoft Dynamics AX\50\Server\DynamicsAx\Log.)

Also, normally the results of SQL tracing are stored in a SQL table, browsable from Axapta itself. The only reason to direct SQL tracing to a log file is a special request from the Microsoft support team (because it is much easier to e-mail a log file than a log table).

Server tracing

In my experience, this is the most useful kind of tracing. First of all, I want to mention that this tracing module was introduced in Dynamics AX 4.0 and heavily depends on the Event Tracing for Windows subsystem (described in detail at http://msdn.microsoft.com/en-us/library/aa363787(VS.85).aspx). So, if you have problems starting a server trace, you should check that your AOS server has the appropriate trace provider installed. You can do that by running the “logman query providers” command in the AOS’ command prompt. The resulting output must have a line starting with “AOS Trace”. If your server does not have the appropriate trace provider, you can try to fix this by going into the server’s BIN directory and running the ‘mofcomp aos_trace.mof’ command. If your server has UAC turned on, this command must be run from a command prompt with elevated privileges.

Tracing parameters can be configured on the ‘Tracing’ tab of the Dynamics AX Server Configuration utility. I won’t go into detail about every checkbox and number on this tab, because they are described in reasonable detail in the Trace Parser manual. The funniest thing about server tracing is that the important-looking ‘Start Trace’ and ‘Stop Trace’ buttons actually DO NOT start a trace at all. They only change the value of the TraceStart parameter in the registry to 1 or 0 respectively.

Somewhere in the heart of the AOS server process there is a separate thread which wakes up every second and checks the value of the TraceStart parameter in the registry. If the value has changed from 0 to 1, this thread reads all the other tracing-related parameters from the registry (these parameters are described in more detail in the Trace Parser manual) and starts the trace. If the value has changed from 1 to 0, this thread stops the trace.

So, if you want to start a trace on a live system, you do not need to restart the AOS service. Just set the necessary tracing parameters, press the Apply button to store them in the registry (do not agree to restart the service if the Configuration utility proposes it) and then press ‘Start Trace’ to set the TraceStart parameter to 1. The AOS will notice the registry change and start tracing within a few seconds.

The very first time I tried to run a server trace, I faced a situation where my log directory, in which I had expected to see the trace file, was empty, and my event log had a rather cryptic error message from the AOS server: ”Fail to start ETW tracing session: return code is 5”. After spending several hours trying to reinstall and reconfigure everything in my system, I figured out that tracing is probably started with the standard StartTrace WinAPI function, so 5 is one of the error codes described in detail in MSDN’s description of this function (code 5 is ERROR_ACCESS_DENIED). This description is actually quite useful when you get a strange error message from Axapta’s ETW tooling and are trying to figure out what is happening. Moreover, this article has a detailed description of all these min/max buffer and buffer size registry parameters which, according to the Trace Parser manual, you should set before actually attempting to collect a trace.

From reading the mentioned article, I derived two important rules related to security and ETW in Dynamics AX:

  1. The AOS user must belong to the ‘Performance Log Users’ group if it runs on Windows Server 2003 or 2008. The event tracing subsystem allows a trace to be started ONLY by a user belonging to this group or to the Administrators group. Since the idea of including the AOS user in the Administrators group sounds completely wrong, the only solution is to include the AOS user in the ‘Performance Log Users’ group.
  2. The AOS user must have full access rights to the server’s log directory.

Once I also saw the ”Fail to start ETW tracing session: return code is 112” error message (112 is ERROR_DISK_FULL). It was caused by an incorrect tracefilesize setting in the Dynamics AX configuration in the registry. By default this setting is 10000, so on an attempt to start tracing, the event tracing subsystem checks that the target drive for the log file has 10000 MBytes (about 10 GBytes) available. If the available space is less than that, the attempt to start tracing fails with the error message mentioned. I recommend setting it to something like 1000, because the Trace Parser tool tends to choke when importing a trace file larger than several hundred megabytes.

A trace log file with the results of the tracing session is created in the server’s log directory and has a binary format. The results can be analyzed with the Trace Parser tool, available somewhere on Microsoft’s PartnerSource site.

Another small hint: the first time you work with a server trace in the Trace Parser tool, do not forget to select the correct user in the small window in the upper left corner of the trace form. It seems obvious, but on my first try I spent a couple of minutes trying to comprehend the relation between what I had been doing in Axapta and what I saw on the screen. Somehow it did not occur to me immediately that server tracing actually collects a trace for ALL current users on the server.

Also, keep in mind that if the server has around 150-200 users, a server trace will produce around 150-300 MBytes of trace data every minute, so do not keep tracing your server longer than necessary.

Now I want to discuss why I consider server tracing extremely important for every DAX implementation technician. Most developers have heard that this tracing can be used for performance profiling of an application. That is true. But there is a much more interesting application of server tracing: you can treat it as an almost non-intrusive flight recorder which can be used to peek into an already running application that has suddenly gone crazy, when you have no idea what is happening. Say you get a call from an end user who complains that scheduling of one production order has already been running for 20 minutes, while normally it takes no more than a minute. You should try to start server tracing ASAP! Even if the scheduling finishes in a minute or so, you will hopefully have enough trace data to analyze the situation later on.

In conclusion: I consider server tracing a primary tool for nailing down floating errors which do not yet have a clear reproduction scenario.

Client tracing

First of all: you may be surprised, but client tracing just does not work in a default installation. At least, none of the Dynamics AX installations I have checked had the appropriate trace provider installed. I do not know whether this is a bug or a feature of the Dynamics AX installer. Before attempting to start client tracing, please run the ‘logman query providers’ command and check whether the command’s output has a line starting with “AX Trace” in it. If it does not, you should first install the AX Trace provider by running mofcomp ax_trace.mof from the DAX client’s bin directory. (In the case of Vista/W7/WS2008, mofcomp must be run with elevated privileges.)

The approach used for starting client tracing is similar to the one used to start server tracing. The client periodically (I think in the main message loop) checks the value of the TraceStart parameter in the corresponding section of the registry. If the value has changed from 0 to 1, the client starts tracing, and so on.

So the basic approach to starting/stopping tracing on the client side is the same as for server-side tracing.

Here are several hints on client tracing:

  1. Client tracing only works if the Axapta client is started with administrative privileges. This requirement arises from the usage of the StartTrace function. I believe that including yourself in the “Performance Log Users” group would achieve the same result, but I have never tried this. When I need to run a client trace, I simply start the DAX client in “Run as Administrator” mode.
  2. Files with the client trace are located in the client’s log directory. In a default configuration it is set to \Users\Public\Microsoft\Dynamics Ax\Log\ or \Documents and Settings\All Users\Microsoft\Dynamics Ax\Log\.
  3. As I said, you can start tracing at any time by pressing the “Start Trace” button in the client configuration utility. Since the client checks the registry settings less regularly than the server does, expect some delay between pressing the button and the actual start of client tracing.

In conclusion, I want to mention that client tracing is the least useful kind of tracing in Axapta. First, normally most of the business logic is executed on the server side (so there is not much to see in a client-side trace). Second, a client trace does not contain any information about executed SQL statements (since statements are always executed on the server side).

Cost Explorer in DAX2009

It seems that not every Axapta user/consultant is aware of the powerful debugging tool for cost-related issues introduced in DAX2009. This tool is the Cost Explorer form, which is available from the Inventory->Cost Explorer button in any of the various Inventory transactions forms. Basically, this form allows you to trace down to the deepest cost sources of any inventory transaction. The most important difference between this form and the Trace form, available since ancient versions of Axapta, is that the Cost Explorer form fetches its tracing information from the inventory settlement data, while the Trace form just settles issues to receipts on the fly. This makes the Cost Explorer form a powerful cost debugging tool which allows you to deconstruct the cost of any inventory transaction. Say you open the form for a closed inventory transaction of a sales order. After expanding the cost sources for this transaction, you see that its cost originated from two receipt transactions: one is a production order and the other is a transfer order from a different warehouse. By expanding the receipt node for the transfer, you find that the transfer’s cost originates in a production order, that this production order’s cost originates from three purchases and one transfer order, and that the latter, in turn, originates from a fourth purchase order. So, after a considerable number of mouse clicks, you would find all the originating inventory transactions and understand the source of every cent of the cost price of the issue in question. Here are a few more details about the form (a small sketch of the underlying settlement data follows the list):
1. The form can be called for receipt and issue inventory transactions.
2. For an inventory receipt, the form shows not only the transaction data itself, but also all the adjustments made to the receipt. The cost structure engine can even differentiate between kinds of adjustment (on-hand adjustment/transaction adjustment, rounding during inventory closing, standard cost adjustment, other types of adjustment).
3. For a receipt from a production order, even data from route transactions (essentially the non-material part of the cost) is displayed.
4. For an inventory transaction, the cost tree node text shows the item number, the financial inventory dimensions, the item name, the transaction type, the order number (such as a PO number or an inventory journal number), the settled quantity from this transaction, the cost contribution per settled piece (basically settledValue/settledQty) and the settled value.
5. For a receipt adjustment, the cost tree node text shows the type of adjustment, the settled quantity from the adjusted transaction, the cost contribution per settled piece (basically adjustmentValue/settledQty) and the adjustment value.
6. The form displays ONLY the adjustments made before the date of the last inventory closing. If you want to analyze ALL the adjustments for a given receipt transaction, you should not rely on this info; just open the Settlements form for the given inventory transaction instead.
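
Since, as mentioned above, the form builds its tree from the inventory settlement data, you can always peek at that data directly. Below is a rough X++ sketch of listing the settlements behind a single inventory transaction; the selection criterion is arbitrary and the field names are written from memory, so verify them against the AOT before relying on this.

    static void ListSettlementsSketch(Args _args)
    {
        InventTrans      inventTrans;
        InventSettlement settlement;

        // Pick some financially updated issue; the criterion is arbitrary and
        // only serves the illustration.
        select firstOnly inventTrans
            where inventTrans.StatusIssue == StatusIssue::Sold;

        // Settlements referencing this transaction carry the quantities and
        // amounts that the Cost Explorer form aggregates into its tree nodes.
        while select settlement
            where settlement.TransRecId == inventTrans.RecId
               && !settlement.Cancelled
        {
            info(strfmt("%1: qty %2, amount %3",
                        settlement.TransDate,
                        settlement.QtySettled,
                        settlement.CostAmountSettled));
        }
    }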

When I talk about settled quantity, I mean the quantity contribution of a given transaction to the top-level transaction for which the Cost Explorer form has been opened. Say we have a purchase order for 30 pieces of raw material, a production order which consumes all 30 pieces and produces 6 pieces of finished goods, and a sales order which sells 2 pieces of finished goods. If we trace the cost structure down from the sales order to the purchase order, then for the settlements from the purchase order receipt to the production order issues we would see a settlement quantity of 10 (despite the fact that all 30 pieces from the receipt were settled to issues during the last inventory closing). This is because we are analyzing the cost structure from the sales order, which has sold only 1/3 of the finished goods, so only 10 pieces of the original purchase have contributed to the COGS of this particular issue.
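Just to make this arithmetic explicit, here is the same proportion as a tiny X++ illustration (the numbers are taken from the example above; this is not code from the form itself):

    static void CostExplorerQtySketch(Args _args)
    {
        // Quantities from the example above.
        Qty  purchasedQty = 30;   // raw material received on the purchase order
        Qty  producedQty  = 6;    // finished goods reported as finished
        Qty  soldQty      = 2;    // finished goods issued by the sales order
        real share;
        Qty  contributedQty;

        share          = soldQty / producedQty;                // 2/6 = 1/3 of the output was sold
        contributedQty = purchasedQty * soldQty / producedQty; // 30 * 2 / 6 = 10 pieces shown as settled qty

        info(strfmt("Share of output sold: %1, settled qty shown by Cost Explorer: %2",
                    share, contributedQty));
    }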
It should also be mentioned that the contributed cost value from some receipts can be LESS than 1 cent (eurocent, kopek, etc.). This is perfectly normal. If the cost value of the issue in question does not equal the total of the originating receipts’ contributions, just try to increase the number of decimals on the Setup tab of the form.

Finally, I want to mention that although this form works perfectly well, it leaves the overall impression of a kludge made to quickly fix a problem. The cost data is gathered from too many sources, and the approach to cost structure display is not aligned with the cost group and cost breakdown mechanisms introduced in DAX2009.
I hope that in some future version, Axapta will introduce a high-level engine for cost structure analysis and manipulation which would decouple cost data from all these inventory transactions, inventory settlements, indirect cost transactions, route transactions, markup transactions and so on. It would be a great advantage for the costing process in Axapta.