Testing was conducted to validate that the design would perform as expected and support the intended workflows, users, and load. System tests provide an opportunity to discover and correct problems while the system is deployed in lower environments, ideally before they appear in production. This test study focused on system performance and end-user experience.
Each component was monitored as the workflows ran against different load scenarios. Upon test completion, results were assembled and analyzed to identify both bottlenecks and over-resourced components. This information was used to determine which system components needed to be scaled up, down, or out before testing was repeated.
Manual user experience testing was conducted by capturing screen recordings of workflow testers to verify that users of the system could complete their workflows productively.
For more information, see how to design an effective test strategy.
This test study applied a pacing model to the tested workflows. The pacing model reflects how the test simulates the pace of work at a utility, where workflows are performed at some number of operations per hour across a team of staff. This approach was based on Esri customer input and aimed to match the small-to-medium electric utility scenario on which the test data was based.
The workflows were spread across a one-hour test period and staggered so they did not all start at the same time, while still overlapping with one another as real-world workflows would. This overall breakdown of workflow pacing is considered the “design load” that the system is subjected to. The load was then increased by multiplying the number of concurrent workflows until the system could no longer provide acceptable response times or support successful workflows. Note that the workflow pacing model applied in this test study might not match typical daily use at your organization.
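The staggered pacing described above can be sketched as a simple schedule generator. The workflow names, per-hour rates, and stagger rule below are illustrative assumptions, not the study's actual pacing model:

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    runs_per_hour: int  # operations per hour at design load (hypothetical)

def build_schedule(workflows, load_multiplier=1, period_s=3600):
    """Return sorted (start_offset_seconds, workflow_name) pairs for one test hour.

    Runs of each workflow are spread evenly across the period, and each
    workflow's first run is offset by a fraction of its interval so that
    workflows are staggered rather than all starting together.
    """
    schedule = []
    for i, wf in enumerate(workflows):
        runs = wf.runs_per_hour * load_multiplier
        interval = period_s / runs
        # Stagger this workflow's first start within its own interval.
        offset = (i / max(len(workflows), 1)) * interval
        for n in range(runs):
            schedule.append((offset + n * interval, wf.name))
    return sorted(schedule)

# Hypothetical workflows and rates, for illustration only.
design = [Workflow("map viewing", 12), Workflow("editing", 6)]
hour = build_schedule(design, load_multiplier=1)
```

Increasing `load_multiplier` (2x, 4x, 8x, and so on) multiplies the number of runs per hour while keeping the same even spread, mirroring how the study scaled the design load.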
Because ArcGIS is a multi-tier system, performance tests were conducted across the client, service, and data storage tiers, as well as the underlying infrastructure. In this test study, JMeter was used to simulate user workflows and measure system performance under different loads. ArcGIS Pro requests were recorded and replayed to generate load, in addition to manual workflows performed to assess end-user experience. Windows Performance Monitor and ArcGIS Monitor were used to track resource utilization across components.
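The record-and-replay idea (implemented in the study with JMeter) can be illustrated with a minimal Python sketch. The `send` callable and the request path below are stand-ins, not actual ArcGIS endpoints; in a real test, `send` would issue an HTTP request to a recorded service URL:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def replay(requests, send, workers=10):
    """Replay recorded requests concurrently and collect response times.

    `send` is a callable that issues one request (e.g. an HTTP GET to a
    service endpoint) and returns when the response arrives.
    """
    def timed(req):
        start = time.perf_counter()
        send(req)
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(timed, requests))

# Stub transport standing in for real service calls.
durations = replay(["/map/export"] * 5, send=lambda req: time.sleep(0.01))
```

A dedicated load tool such as JMeter adds pacing timers, assertions, and reporting on top of this basic replay-and-measure loop.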
For more information, see tools for performance testing.
This architecture was validated with automated load tests and manual users in four scenarios; the results of each are presented below. At a high level, the results show that, as implemented, the system is adequately resourced to support loads from the design load through 8x the design load. The tests also reinforced the importance of proper application and system configuration for performance. Across each scenario, system utilization increased proportionally with load.
While the system was under load, workflow completion times were captured as experienced by the users. These represent the time it took to complete all the steps listed in each workflow. Completion times remained consistent until the system became overloaded at 10x the design load.
While the system was under load, completion times of key steps across all eight workflows were also captured. These represent the average time it took to complete a given step.
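Averaging captured step durations across runs can be sketched as follows; the step names and timings are hypothetical, for illustration only:

```python
from statistics import mean

def average_step_times(samples):
    """Average captured durations (seconds) per workflow step.

    `samples` maps a step name to the list of durations observed for
    that step across all workflow runs under load.
    """
    return {step: mean(times) for step, times in samples.items() if times}

# Hypothetical captured timings, for illustration only.
captured = {"open map": [1.2, 1.4, 1.0], "save edits": [2.0, 2.2]}
avg = average_step_times(captured)
```

Comparing these per-step averages across load multipliers is one way to pinpoint which step degrades first as the system approaches overload.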