Recently, I was working on a particularly complex piece of code that required a lot of data manipulation, grouping, sorting, and so on, and I had a breakthrough that I hope may prove useful for other developers.
Typically, when pulling data from Acumatica, I am comfortable pulling data into PXResultSet objects and looping through the results in a foreach loop. This pattern is generally sufficient for most tasks. The new task, however, required a lot of processing of the data rather than simply enumerating through the list. Performance was also a concern, so I wanted to minimize the number of trips to the data layer. For both reasons, I wanted to keep my data independent of the cache. What I needed to do was pull data into an IEnumerable object and then loop through, filter, copy, and manipulate it via LINQ operations. Initially, I had trouble getting PXResultSets into a List; I wasn't able to translate one directly into the List I needed. Then I discovered the following in the source code.
var listOfLines = SelectFrom<SOShipLine>.View.Select(this).RowCast<SOShipLine>().ToList();
The RowCast call converts the PXResultSet into a list of the type you pass into the call. If you're not conversant in the tech, this is a good time to brush up on your C# LINQ vocabulary. I found that manipulating data in the application layer was a big efficiency gain, both in completing programming tasks and in system performance. Two scenarios are common in my code where this pattern may yield benefits.
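To make the pattern concrete, here is a minimal sketch of what you can do once the rows are in a plain List. It assumes a graph context where `this` is valid and uses the `ShippedQty` and `InventoryID` fields of the standard SOShipLine DAC; the specific LINQ operations are illustrative, not lifted from any particular project.

```csharp
// Sketch: one trip to the data layer, then pure in-memory LINQ.
var lines = SelectFrom<SOShipLine>.View.Select(this)
    .RowCast<SOShipLine>()
    .ToList();

// Ordinary LINQ now applies without further data-layer calls.
decimal totalQty = lines.Sum(l => l.ShippedQty ?? 0m);

var shippedLines = lines
    .Where(l => (l.ShippedQty ?? 0m) > 0m)
    .OrderBy(l => l.InventoryID)
    .ToList();
```

Because `lines` is detached from the cache, you can filter and re-filter it as many times as you like without touching the database again.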
Scenario One: Complicated BQL Predicates
Rather than formulate complicated BQL predicates that group by, sum, calculate, and so on, I dump a data set into a list and manipulate it via LINQ. This is especially useful if you find yourself querying the same DAC multiple times. It is certainly more readable than a long BQL statement. In the code below, Method I is a typical pattern lifted from the Acumatica source. Method II is a version that pulls data with a Where clause but leaves the grouping to the application layer. The advantage is that I was also able to get a distinct count easily and can subsequently refer back to the original ungrouped data. Depending on the load and architecture of the particular Acumatica instance, there may be performance gains as well.
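The original post's Method I and Method II listings are not reproduced here, so the following is a hedged sketch of the contrast being described. The DAC (SOShipLine), the grouping field (InventoryID), and the fluent-BQL `AggregateTo` clause are my assumptions for illustration, not the original code.

```csharp
// Method I (sketch): grouping pushed down into BQL. Compact, but the
// statement is harder to read and the ungrouped rows are lost.
var grouped = SelectFrom<SOShipLine>
    .Where<SOShipLine.shipmentNbr.IsEqual<@P.AsString>>
    .AggregateTo<GroupBy<SOShipLine.inventoryID>,
                 Sum<SOShipLine.shippedQty>>
    .View.Select(this, shipmentNbr)
    .RowCast<SOShipLine>();

// Method II (sketch): fetch once with only the Where clause, then
// group in the application layer with LINQ.
var lines = SelectFrom<SOShipLine>
    .Where<SOShipLine.shipmentNbr.IsEqual<@P.AsString>>
    .View.Select(this, shipmentNbr)
    .RowCast<SOShipLine>()
    .ToList();

var totalsByItem = lines
    .GroupBy(l => l.InventoryID)
    .Select(g => new { InventoryID = g.Key,
                       Qty = g.Sum(l => l.ShippedQty ?? 0m) });

// The ungrouped list is still on hand, e.g. for a distinct count:
int distinctItems = lines.Select(l => l.InventoryID).Distinct().Count();
```

Note that Method II keeps both views of the data alive from a single query, which is exactly the "refer back to the original ungrouped data" advantage described above.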
Scenario Two: Data Sets
I think as your code becomes complex, it's important to consider the data needs of the code from a top-down perspective. When coding complicated tasks, it's easy to get caught in the weeds and find yourself polling the same DAC multiple times to accomplish separate tasks. This can be especially true if you're diligent about writing modular code blocks. One of the easiest ways to spot this is by looking at nested loops. If you have a nested loop that pulls data, a typical foreach over a PXSelect, it will fire off the PXSelect every time the outer loop iterates. Rather than issue a data-layer call on each iteration, consider pulling a larger data set once, above the loop, and filtering that set down for each iteration. In Method II below, I pull the set of accounts with parent accounts before looping begins. But wait, you may ask, why not pull all of the accounts for both loops? That might be great, especially if you need that data for other tasks or if you have a good idea of the size of the resulting data set. It might also pull in an unnecessarily large data set that would erase any performance gains. As always, it's important to consider the topology of your data.
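Since the original Method II listing is not shown here, the sketch below illustrates the hoisting pattern using the Customer DAC and its `ParentBAccountID` field (inherited from BAccount) as a stand-in for "accounts with parent accounts"; the exact DAC and fields in the original post may differ.

```csharp
// Before (sketch): a data-layer call fires on every outer iteration.
// foreach (Customer parent in parents)
// {
//     var children = SelectFrom<Customer>
//         .Where<Customer.parentBAccountID.IsEqual<@P.AsInt>>
//         .View.Select(this, parent.BAccountID)
//         .RowCast<Customer>();
//     // ...process children...
// }

// After (sketch): one data-layer call above the loop, then
// in-memory filtering per iteration.
var childAccounts = SelectFrom<Customer>
    .Where<Customer.parentBAccountID.IsNotNull>
    .View.Select(this)
    .RowCast<Customer>()
    .ToList();

foreach (Customer parent in parents)
{
    var children = childAccounts
        .Where(c => c.ParentBAccountID == parent.BAccountID);
    // ...process children without another trip to the database...
}
```

The trade-off is exactly the one described above: the hoisted query must be scoped tightly enough (here, only rows with a non-null parent) that you aren't dragging an unnecessarily large set into memory.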
These examples are presented as a way to make code more readable, leverage native C# capabilities, and potentially find performance efficiencies. Hope this was helpful, happy coding!