Improving Performance – Part Two: C# Multithreading Operations in Acumatica

Yuriy Zaletskyy | August 1, 2019

Introduction

In my last blog post, I shared with you how asynchronous/synchronous operations work within the Acumatica Framework using C#. Today, I will continue the performance discussion, focusing on multithreading optimizations in your code.

Multithreading in Acumatica

One of my customers wanted something that works faster than Web API calls. Such optimization can be achieved with the use of multithreading.

As a way of achieving this, I considered a synthetic case of importing 18,249 records into Acumatica. The records were taken from here: https://www.kaggle.com/neuromusic/avocado-prices/kernels. Imagine that for each row in this data set you need to generate one sales order. From a C# code standpoint, you have a couple of approaches at your disposal: single-threaded and multi-threaded. A single-threaded approach is straightforward: you simply read from the source and, one by one, persist each sales order.

To begin, I created three inventory items in Acumatica: A4770, A4225, A4046. I also created one purchase receipt with 1,000,000 units received for each of the inventory items.

Before continuing, I want to show you the Performance tab of my Task Manager to use as a baseline:

[Screenshot: Task Manager, Performance tab (baseline)]

And now I'll run single-threaded insertion of sales orders into Acumatica. Here is the source code for it:

 


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using MultiThreadingAsyncDemo.DAC;
using PX.Data;
using PX.Objects.SO;

namespace MultiThreadingAsyncDemo
{
	public class AvocadosImporter : PXGraph<AvocadosImporter>
	{
		public PXCancel<ImportAvocado> Cancel;

		[PXFilterable]
		public PXProcessing<ImportAvocado> NotImportedAvocados;

		public override bool IsDirty => false;

		private Object thisLock = new Object();

		private const string AVOCADOS = "Avocados";

		public AvocadosImporter()
		{
			NotImportedAvocados.SetProcessDelegate(ProcessImportAvocados);
		}

		public static void ProcessImportAvocados(List<ImportAvocado> importSettings)
		{
			var avocadosImporter = PXGraph.CreateInstance<AvocadosImporter>();

			// Read all records that have not been imported yet.
			var avocadosRecords = PXSelect<Avocado, Where<Avocado.imported, Equal<False>>>.Select(avocadosImporter).Select(a => a.GetItem<Avocado>()).ToList();

			var initGraph = PXGraph.CreateInstance<SOOrderEntry>();
			var branchId = initGraph.Document.Insert().BranchID;

			Object thisLck = new Object();

			// Single-threaded: one graph instance, one record at a time.
			var soEntry = PXGraph.CreateInstance<SOOrderEntry>();
			for (int i = 0; i < avocadosRecords.Count; i++)
			{
				var avocadosRecord = avocadosRecords[i];
				CreateSalesOrder(soEntry, avocadosRecord, thisLck, branchId);
			}
		}

		private static void CreateSalesOrder(SOOrderEntry sOEntry, Avocado avocadosRecord, Object thisLock, int? branchId)
		{
			try
			{
				sOEntry.Clear();

				var newSOrder = new SOOrder();
				newSOrder.OrderType = "SO";
				newSOrder = sOEntry.Document.Insert(newSOrder);
				newSOrder.BranchID = branchId;
				newSOrder.OrderDate = avocadosRecord.Date;
				newSOrder.CustomerID = 7016;
				var newSOOrderExt = newSOrder.GetExtension<SOOrderExt>();
				newSOOrderExt.Region = avocadosRecord.Region;
				newSOOrderExt.Type = avocadosRecord.Type;

				sOEntry.Document.Update(newSOrder);

				var ln = sOEntry.Transactions.Insert();
				ln.BranchID = branchId;
				sOEntry.Transactions.SetValueExt<SOLine.inventoryID>(ln, "A4046");
				ln.SubItemID = 123;
				ln.OrderQty = avocadosRecord.A4046;
				ln.CuryUnitPrice = avocadosRecord.AveragePrice;
				sOEntry.Transactions.Update(ln);

				ln = sOEntry.Transactions.Insert();
				ln.BranchID = branchId;
				ln.SubItemID = 123;
				sOEntry.Transactions.SetValueExt<SOLine.inventoryID>(ln, "A4225");
				ln.OrderQty = avocadosRecord.A4225;
				ln.CuryUnitPrice = avocadosRecord.AveragePrice;
				sOEntry.Transactions.Update(ln);

				ln = sOEntry.Transactions.Insert();
				ln.BranchID = branchId;
				sOEntry.Transactions.SetValueExt<SOLine.inventoryID>(ln, "A4770");
				ln.SubItemID = 123;
				ln.OrderQty = avocadosRecord.A4770;
				ln.CuryUnitPrice = avocadosRecord.AveragePrice;
				sOEntry.Transactions.Update(ln);
				newSOrder.OrderDesc = avocadosRecord.Date + avocadosRecord.AveragePrice.ToString();
				sOEntry.Document.Update(newSOrder);

				// No lock is needed here: a single thread saves orders sequentially.
				sOEntry.Actions.PressSave();

				// Mark the source record as imported directly in the database.
				PXDatabase.Update<Avocado>(
					new PXDataFieldAssign<Avocado.imported>(true),
					new PXDataFieldRestrict<Avocado.id>(avocadosRecord.Id));
			}
			catch (Exception exception)
			{
				PXTrace.WriteError(exception);
			}
		}
	}
}

 

Now let's take a look at how Task Manager is affected after running the default single-threaded code:

[Screenshot: Task Manager during the single-threaded import]
Observe that after the import started, the processor load didn't change at all. It actually dropped slightly, which means the 40 cores will not be used to their full potential. After two hours and 45 minutes, I had 3,746 sales orders created. Not bad, but still nothing to be particularly proud of.

Next, I created multi-threaded code:

 


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using MultiThreadingAsyncDemo.DAC;
using PX.Data;
using PX.Objects.SO;
 
namespace MultiThreadingAsyncDemo
{
	public class AvocadosImporter : PXGraph<AvocadosImporter>
	{
		public PXCancel<ImportAvocado> Cancel;
 
		[PXFilterable]
		public PXProcessing<ImportAvocado> NotImportedAvocados;
 
		public override bool IsDirty => false;
 
		private Object thisLock = new Object();
 
		private const string AVOCADOS = "Avocados";
 
		public AvocadosImporter()
		{
			NotImportedAvocados.SetProcessDelegate(ProcessImportAvocados);
		}
 
		public static void ProcessImportAvocados(List<ImportAvocado> importSettings)
		{
			var avocadosImporter = PXGraph.CreateInstance<AvocadosImporter>();
 
			var avocadosRecords = PXSelect<Avocado, Where<Avocado.imported, Equal<False>>>.Select(avocadosImporter).Select(a => a.GetItem<Avocado>()).ToList();
 
			int numberOfLogicalCores = Environment.ProcessorCount;
			List<Task> tasks = new List<Task>(numberOfLogicalCores);
 
			int sizeOfOneChunk = (avocadosRecords.Count / numberOfLogicalCores) + 1;
 
			var initGraph = PXGraph.CreateInstance<SOOrderEntry>();
			var branchId = initGraph.Document.Insert().BranchID;
 
 
			Object thisLck = new Object();
 
			for (int i = 0; i < numberOfLogicalCores; i++)
			{
				int a = i;
 
				var tsk = new Task(
					() =>
					{
						try
						{
							using (new PXImpersonationContext(PX.Data.Update.PXInstanceHelper.ScopeUser))
							{
								using (new PXReadBranchRestrictedScope())
								{
									var portionsGroups = avocadosRecords.Skip(a * sizeOfOneChunk).Take(sizeOfOneChunk)
										.ToList();
 
									if (portionsGroups.Count != 0)
									{
										var sOEntry = PXGraph.CreateInstance<SOOrderEntry>();
										foreach (var avocadosRecord in portionsGroups)
										{
											CreateSalesOrder(sOEntry, avocadosRecord, thisLck, branchId);
										}
									}
								}
							}
						}
						catch (Exception ex)
						{
							PXTrace.WriteError(ex);
						}
 
					});
				tasks.Add(tsk);
			}
 
			foreach (var task in tasks)
			{
				task.Start();
			}
 
			Task.WaitAll(tasks.ToArray());
		}
 
		private static void CreateSalesOrder(SOOrderEntry sOEntry, Avocado avocadosRecord, Object thisLock, int? branchId)
		{
			try
			{
				sOEntry.Clear();
 
				var newSOrder = new SOOrder();
				newSOrder.OrderType = "SO";
				newSOrder = sOEntry.Document.Insert(newSOrder);
				newSOrder.BranchID = branchId;
				newSOrder.OrderDate = avocadosRecord.Date;
				newSOrder.CustomerID = 7016;
				var newSOOrderExt = newSOrder.GetExtension<SOOrderExt>();
				newSOOrderExt.Region = avocadosRecord.Region;
				newSOOrderExt.Type = avocadosRecord.Type;
 
				sOEntry.Document.Update(newSOrder);
 
				var ln = sOEntry.Transactions.Insert();
				ln.BranchID = branchId;
				sOEntry.Transactions.SetValueExt<SOLine.inventoryID>(ln, "A4046");
				ln.SubItemID = 123;
				ln.OrderQty = avocadosRecord.A4046;
				ln.CuryUnitPrice = avocadosRecord.AveragePrice;
				sOEntry.Transactions.Update(ln);
 
				ln = sOEntry.Transactions.Insert();
				ln.BranchID = branchId;
				ln.SubItemID = 123;
				sOEntry.Transactions.SetValueExt<SOLine.inventoryID>(ln, "A4225");
				ln.OrderQty = avocadosRecord.A4225;
				ln.CuryUnitPrice = avocadosRecord.AveragePrice;
				sOEntry.Transactions.Update(ln);
 
				ln = sOEntry.Transactions.Insert();
				ln.BranchID = branchId;
				sOEntry.Transactions.SetValueExt<SOLine.inventoryID>(ln, "A4770");
				ln.SubItemID = 123;
				ln.OrderQty = avocadosRecord.A4770;
				ln.CuryUnitPrice = avocadosRecord.AveragePrice;
				sOEntry.Transactions.Update(ln);
				newSOrder.OrderDesc = avocadosRecord.Date + avocadosRecord.AveragePrice.ToString();
				sOEntry.Document.Update(newSOrder);
 
				lock (thisLock)
				{
					sOEntry.Actions.PressSave();
				}
 
				PXDatabase.Update<Avocado>(
					new PXDataFieldAssign<Avocado.imported>(true),
					new PXDataFieldRestrict<Avocado.id>(avocadosRecord.Id));
			}
			catch (Exception exception)
			{
				PXTrace.WriteError(exception);
			}
 
		}
	}
}
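The chunk-partitioning pattern in ProcessImportAvocados (one Skip/Take slice per logical core, one task per slice, then Task.WaitAll) can be sketched in isolation. This is a framework-free illustration under my own names (ChunkDemo, ProcessInChunks); the real code works on Avocado records and Acumatica graphs:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class ChunkDemo
{
	// Splits the records into one chunk per logical core and processes
	// each chunk on its own task, mirroring the Skip/Take pattern above.
	public static void ProcessInChunks(IReadOnlyList<int> records, Action<int> process)
	{
		int cores = Environment.ProcessorCount;
		int chunkSize = (records.Count / cores) + 1; // same formula as in the graph code
		var tasks = new List<Task>(cores);

		for (int i = 0; i < cores; i++)
		{
			int a = i; // capture the loop index by value, not by reference
			tasks.Add(Task.Run(() =>
			{
				foreach (var record in records.Skip(a * chunkSize).Take(chunkSize))
					process(record);
			}));
		}

		Task.WaitAll(tasks.ToArray());
	}
}
```

Note the `int a = i;` copy: without it, every closure would observe the same loop variable and the chunks would overlap or be skipped.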

In the code sample, pay special attention to the part with the lock:


lock (thisLock)
{
	sOEntry.Actions.PressSave();
}

 

This is necessary to synchronize persisting sales orders to the database. Without this lock, several graphs try to create records in the database simultaneously, and that breaks the persist mechanism of Acumatica, which is not thread safe. I believe this is related to the fact that sales order numbers depend on previously generated records in the database, which is why I considered the lock necessary.
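My working theory can be illustrated with a stripped-down model (NumberingDemo is hypothetical, not Acumatica's actual numbering code): issuing the next sequential number is a read-increment-write sequence, and it is only safe when the whole sequence runs inside a critical section.

```csharp
using System;

public static class NumberingDemo
{
	// Mimics the "last issued sales order number" stored in the database.
	private static int _lastNumber;
	private static readonly object _gate = new object();

	// The read, the increment, and the write-back must happen atomically;
	// the lock guarantees every caller receives a unique number.
	public static int NextNumberLocked()
	{
		lock (_gate)
		{
			int next = _lastNumber + 1; // read the last issued number
			_lastNumber = next;         // write the new one back
			return next;
		}
	}
}
```

Without the lock, two threads could both read the same `_lastNumber`, both compute the same `next`, and hand out duplicate order numbers, which is exactly the kind of collision PressSave must avoid.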

I restored the database from a backup and executed the multi-threaded code, or to be more precise, the multitasking code. Take a look at how different our Task Manager looks now:

[Screenshot: Task Manager during the multi-threaded import]
And look: only 7% higher load! But what about the speed of creation?

It should be noted that in 2 hours, 35 minutes, and 26 seconds, the multi-threaded run created 18,247 sales orders. This means the single-threaded approach created about 22 sales orders per minute, while the multi-threaded approach gave us about 117 sales orders per minute, more than 5 times as fast! As another point of optimization, you can run Acumatica and MS SQL Server on two separate machines. For MS SQL Server, you should also consider splitting the database file across a couple of hard drives and placing the log file on a third drive.
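The quoted rates work out as a simple division (the helper below is mine, just a sanity check on the numbers): 3,746 orders in 165 minutes is roughly 22.7 per minute, and 18,247 orders in roughly 155.4 minutes is roughly 117.4 per minute.

```csharp
using System;

public static class ThroughputDemo
{
	// Orders created per minute for a run of a given length.
	public static double OrdersPerMinute(int orders, double minutes) => orders / minutes;
}
```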

Summary

In this blog post, I've described one of two ways of speeding up performance: multithreading. In Part One, I discussed the asynchronous/synchronous approach. Both of these approaches can improve performance significantly, but not in 100% of cases. Real performance boosts will only be gained if you have significantly large amounts of data to import, manipulate, or massage. What we are talking about here is data in the millions of records.

Before adding async/multitasking/multithreading to your code, consider adding caching (i.e., a simple enumeration of elements before the main body of the loop). If caching doesn't help with performance, consider moving your logic calculations to SQL Server. If that still doesn't provide any significant performance gain, the amount of data records is likely not the bottleneck in your testing process. Adding async/multitasking/multithreading can improve performance, and improve it significantly, but it often requires critical sections (in C#, the lock statement), which is not always straightforward.
I hope these two posts give you, the developer, an understanding of the techniques you can apply to boost performance when working with large amounts of data records.
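The caching advice above, loading lookup data once before the loop instead of querying inside it, can be sketched generically. CachingDemo and its dictionary are illustrative only, not an Acumatica API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class CachingDemo
{
	// Instead of looking a price up per iteration (one query per record),
	// load everything once into a dictionary before the main loop.
	public static decimal TotalWithCache(
		IEnumerable<(string Item, int Qty)> lines,
		IEnumerable<(string Item, decimal Price)> priceList)
	{
		// One pass up front: the "simple enumeration before the main body of the loop".
		Dictionary<string, decimal> priceCache =
			priceList.ToDictionary(p => p.Item, p => p.Price);

		decimal total = 0m;
		foreach (var line in lines)
			total += priceCache[line.Item] * line.Qty; // in-memory lookup, no query
		return total;
	}
}
```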
Yuriy Zaletskyy

Yuriy started programming in 2003 using C++ and FoxPro, then switched to .NET in 2006. Beginning in 2013, he has been actively developing applications using the Acumatica xRP Framework, creating solutions for many clients over the years. He has a personal blog, aptly named Yuriy Zaletskyy's Blog, where he has been documenting programming issues he's run into over the past six years, sharing his observations and solutions freely with other Acumatica developers.

Categories: Developers
