Replies: 1 comment
Hi @JOgbebor, thanks for running this benchmark. The large jump in computation time at N > 1500 is quite concerning. I think the best thing you can do right now is to process these structures in parallel; there is an example in the Jupyter notebook: smol/docs/src/notebooks/adding-structures-in-parallel.ipynb
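For reference, a minimal sketch of that parallel pattern using the standard library (the worker below is a stand-in so the snippet runs on its own; in practice it would call `subspace.corr_from_structure(structure)` on each structure, and the notebook linked above shows the actual smol workflow):

```python
from concurrent.futures import ProcessPoolExecutor

def corr_from_structure(structure):
    """Stand-in worker; replace the body with the real
    subspace.corr_from_structure(structure) call."""
    return sum(structure)  # placeholder computation

if __name__ == "__main__":
    # Placeholder "structures"; in practice a list of pymatgen Structure objects.
    structures = [[1, 2, 3], [4, 5], [6]]
    # One process per structure batch; each worker computes independently.
    with ProcessPoolExecutor(max_workers=4) as pool:
        correlations = list(pool.map(corr_from_structure, structures))
    print(correlations)
```

Note that process-based parallelism requires the worker and its arguments to be picklable, and it parallelizes across structures rather than speeding up any single large structure.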
Hello,
I am using smol to compute cluster concentrations from a Potts subspace. For my specific application, I would ideally like to analyze structures containing upwards of 5,000 atoms. I realize that the code is primarily intended to process DFT structures, and that my use case is out of the ordinary. Naturally, I've run into a bottleneck with regard to the speed of computation.

To quantify the scaling, I performed two tests of the `subspace.corr_from_structure()` function (plots below). In both cases, the Potts subspace contained four orbits (one-, two-, three-, and four-atom clusters) loaded from a pre-saved subspace. The number of threads was set with `subspace.num_threads = N_threads`, and timing was measured using `time.perf_counter()`. My main question is: what can I do smol-wise to make processing large structures more efficient, apart from changing the number of threads? Thanks in advance.

Runtime vs system size (8 threads):

| N [atoms] | Runtime [s] |
| --- | --- |
| 32 | 0.04037 |
| 108 | 0.24085 |
| 256 | 0.18334 |
| 500 | 13.55407 |
| 864 | 273.13657 |
| 1372 | 173.36762 |
| 2048 | 5827.09123 |
| 2916 | 13465.58817 |
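The timing setup described above can be sketched like this (the workload below is a placeholder so the snippet is self-contained; the real measured call would be `subspace.corr_from_structure(structure)`):

```python
import time

def time_call(func, *args, repeats=3):
    """Return the best wall-clock time over several repeats,
    measured with time.perf_counter()."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Placeholder workload standing in for subspace.corr_from_structure(structure)
def workload(n):
    return sum(i * i for i in range(n))

runtime = time_call(workload, 10_000)
print(f"runtime: {runtime:.6f} s")
```

Taking the minimum over a few repeats reduces noise from other processes, which matters when comparing thread counts on runs this short.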
Runtime vs number of threads (1372 atoms):
