Conversation
…. Yes, it's confusing. Yes, the OpenMP ARB know.
…they would be easy.
Rudimentary testing done with:

```cpp
#include <cstdio>
#include "quest.h"

int main(void)
{
    const int NQUBITS = 24;
    const int TPB = 32;

    initQuESTEnv();
    reportQuESTEnv();

    std::printf("Initial number of threads per block: %d\n", getQuESTGpuThreadsPerBlock());
    setQuESTGpuThreadsPerBlock(TPB);
    std::printf("New number of threads per block: %d\n", getQuESTGpuThreadsPerBlock());

    Qureg qureg = createForcedQureg(NQUBITS);
    std::printf("Initialising Qureg.\n");
    initPlusState(qureg);
    reportQureg(qureg);

    std::printf("Applying Quantum Fourier Transform.\n");
    applyFullQuantumFourierTransform(qureg, false);
    reportQureg(qureg);

    destroyQureg(qureg);
    finalizeQuESTEnv();
    return 0;
}
```
Why would …
Is there an advantage to users having to set this as a runtime hyperparameter? My (mostly undeveloped) belief is that we can use occupancy tools (alluded to here) to automate this. I definitely shy away from giving users a greater onus to optimise for their settings (like other prolific software does), which the v4 overhaul was supposed to avoid (via e.g. the autodeployer). Note too that the kernels so far are very primitive - each thread handles the updating of the minimum possible number of amplitudes (often just one!). I quite like that because it's very readable and simple (great for an open-source scientific project), but it is an obvious site for optimisation.
It's true that it will never be anywhere as big as the quantities …
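For context, this is roughly what the occupancy tools offer; a minimal sketch assuming a plain CUDA build, with a stand-in kernel (not a QuEST kernel), using the CUDA runtime call `cudaOccupancyMaxPotentialBlockSize`:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Stand-in kernel, purely for illustration: one thread updates one amplitude.
__global__ void toyKernel(float* amps, int numAmps) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numAmps)
        amps[i] *= 2.0f;
}

int main() {
    int minGridSize = 0;  // smallest grid size that can reach full occupancy
    int blockSize = 0;    // suggested threads per block for toyKernel
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, toyKernel, 0, 0);
    std::printf("occupancy-suggested threads per block: %d\n", blockSize);
    return 0;
}
```

The suggestion is per-kernel (it depends on register and shared-memory usage), so an automated scheme would presumably query it per kernel, or for a representative one.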
Hi Tyson, I just noticed the fixed value of 128 and have a feeling that it is on the large side. I just wanted a handle so I could write a benchmark, so we can easily automate performance tuning ourselves. I have not played with the occupancy tools, but I should take a proper look as this might solve this automatically. My other concern is that there are differences between Nvidia and AMD on optimal sizes due to hardware differences, so we might not be able to rely on occupancy tuning in all cases unless it becomes available on all platforms.
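A minimal sketch of the kind of benchmark meant here, reusing only the calls from the test program above; the candidate sizes and workload are illustrative, and a real benchmark would repeat and average, and ensure GPU work has completed before stopping each timer:

```cpp
#include <cstdio>
#include <chrono>
#include "quest.h"

int main() {
    initQuESTEnv();
    Qureg qureg = createForcedQureg(24);

    const int candidates[] = {32, 64, 128, 256, 512, 1024};
    for (int tpb : candidates) {
        setQuESTGpuThreadsPerBlock(tpb);
        initPlusState(qureg);

        auto start = std::chrono::steady_clock::now();
        applyFullQuantumFourierTransform(qureg, false);
        // note: assumes the call is synchronous from the caller's perspective;
        // otherwise a GPU synchronisation is needed before stopping the timer
        auto stop = std::chrono::steady_clock::now();

        double ms = std::chrono::duration<double, std::milli>(stop - start).count();
        std::printf("threads per block = %4d: %8.3f ms\n", tpb, ms);
    }

    destroyQureg(qureg);
    finalizeQuESTEnv();
    return 0;
}
```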
```diff
-    #pragma omp parallel shared(n)
-    #pragma omp single
-    n = omp_get_num_threads();
+    n = omp_get_max_threads();
```
Isn't this functionally wrong? We wish to return the number of available threads as set by the user, which is the default adopted by our OpenMP pragmas. If you call omp_get_max_threads() outside a parallel region, won't it just return 1?
From an EPCC colleague:
omp_set_num_threads() sets the value of the nthreads internal control variable, but omp_get_num_threads() does not get this value (though omp_get_max_threads() does).
Standards aren't immune to issues.
Yeah, as James indicates, omp_get_num_threads outside a parallel region will return 1, but omp_get_max_threads returns OMP_NUM_THREADS (or whatever was last set using omp_set_num_threads). Parallel regions without a num_threads clause then use that value.
Oh nice, my brain hadn't even noticed the change of num to max 😅
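A minimal standalone illustration of the distinction (assuming an OpenMP-enabled compiler, e.g. g++ -fopenmp):

```cpp
#include <cstdio>
#include <omp.h>

int main() {
    omp_set_num_threads(4);  // sets the nthreads internal control variable

    // Outside a parallel region only one thread is executing...
    std::printf("num_threads outside parallel: %d\n", omp_get_num_threads());  // 1

    // ...but the requested thread count is still retrievable.
    std::printf("max_threads outside parallel: %d\n", omp_get_max_threads());  // 4

    // A parallel region without a num_threads clause uses that value.
    #pragma omp parallel
    #pragma omp single
    std::printf("num_threads inside parallel:  %d\n", omp_get_num_threads());  // 4

    return 0;
}
```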
```cpp
qindex numThreads = qureg.numAmpsPerNode / powerOf2(qubits.size());
qindex numBlocks = getNumBlocks(numThreads);
const int NUM_THREADS_PER_BLOCK = gpu_getNumThreadsPerBlock();
```
If we opt for this, why is NUM_THREADS_PER_BLOCK capitalised like a constant? It's a runtime value.
I agree, the capitalisation here is bad.
It's const in scope 😉 Apologies, I was accidentally following my own style guide there rather than the QuEST one. I'll %s/NUM_THREADS_PER_BLOCK/numThreadsPerBlock/g it.
I guess it's very GPU specific! I think … For illustration, the next smallest size is … Of course, newer GPUs support more active blocks per SM (even when the max active threads per SM is unchanged). E.g. CC=8 supports up to 32 active blocks per SM, so we could shrink to … Certainly seems prudent to consult a CUDA runtime API, if that doesn't hurt our AMD compatibility!
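For reference, a sketch of the kind of runtime query meant here; cudaGetDeviceProperties exposes the relevant hardware limits (the maxBlocksPerMultiProcessor field needs CUDA 11+), and HIP has a broadly analogous hipGetDeviceProperties, though AMD compatibility would need checking field by field:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    std::printf("warp size:                %d\n", prop.warpSize);
    std::printf("max threads per block:    %d\n", prop.maxThreadsPerBlock);
    std::printf("max threads per SM:       %d\n", prop.maxThreadsPerMultiProcessor);
    std::printf("max active blocks per SM: %d\n", prop.maxBlocksPerMultiProcessor);  // CUDA 11+

    // A block size below maxThreadsPerMultiProcessor / maxBlocksPerMultiProcessor
    // cannot keep an SM fully occupied, however many blocks are launched.
    return 0;
}
```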
Apologies, I probably won't get to look at this again this week, but I'm very happy to set this value programmatically if it can be done! As it's architecture dependent, we definitely do need a way to adjust it, ideally both at runtime and at compile time. At compile time, so kindly HPC support teams can compile and maintain a tuned version, and at runtime, so they can scan through values without having to recompile in between. I'll have a chat with James about approaches later this week! I 100% agree that we don't really want unknowing users messing around with this. I think something like an …
Fair enough - you've convinced me! Being able to adjust it at runtime is of course extremely helpful during development of a user-friendlier adaptive system anyhow. I like the sound of …
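One possible shape for the compile-time/runtime combination being discussed; everything below is hypothetical and not part of this PR (the macro, environment variable and function names are made up):

```cpp
// Hypothetical sketch only; none of these names exist in QuEST.
#include <cstdlib>

// Compile-time default, overridable by HPC teams with e.g. -DQUEST_DEFAULT_GPU_TPB=256.
#ifndef QUEST_DEFAULT_GPU_TPB
#define QUEST_DEFAULT_GPU_TPB 128
#endif

static int gpuThreadsPerBlock = QUEST_DEFAULT_GPU_TPB;

// Called once during environment initialisation: a (hypothetical) environment
// variable beats the compile-time default; the public runtime setter added in
// this PR would then beat both.
void initGpuThreadsPerBlock() {
    if (const char* env = std::getenv("QUEST_GPU_THREADS_PER_BLOCK"))
        gpuThreadsPerBlock = std::atoi(env);
}
```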
```cpp
}

void setQuESTGpuThreadsPerBlock(const int NEW_TPB) {
    // just rely on the internal function to throw an error if there's no GPU support compiled
```
TODO: validate this is a multiple of 32 (and is positive, etc etc)
Doc to user: HIP warpsize is 64!
Should validate TPB is a multiple of 32!
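A sketch of the validation being asked for; the warp size would come from the build target (32 on NVIDIA, 64 on AMD/HIP), and the error mechanism here is a placeholder rather than QuEST's own validation machinery:

```cpp
#include <stdexcept>
#include <string>

// Warp size differs between vendors; __HIP_PLATFORM_AMD__ is defined by HIP's AMD backend.
#if defined(__HIP_PLATFORM_AMD__)
constexpr int WARP_SIZE = 64;
#else
constexpr int WARP_SIZE = 32;
#endif

constexpr int MAX_TPB = 1024;  // per-block thread limit on current NVIDIA/AMD hardware

void validateThreadsPerBlock(int tpb) {
    if (tpb <= 0 || tpb > MAX_TPB || tpb % WARP_SIZE != 0)
        throw std::invalid_argument(
            "threads per block must be a positive multiple of " + std::to_string(WARP_SIZE)
            + " and at most " + std::to_string(MAX_TPB));
}
```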
Creating a facility for users to set the GPU threads per block at runtime, for tuning the GPU implementation. NOTE: this only applies to kernels that are not handled by Thrust, which does its own thing. Resolves #735.
I considered and rejected the idea of creating a symmetric interface for the CPU, for users who don't know OMP_NUM_THREADS or omp_set_num_threads() exist, but that's much riskier as the point of truth is external (in the OpenMP runtime).

TODO:
… qindex? Probably.

QuEST/quest/src/gpu/gpu_subroutines.cpp
Line 453 in b7d4a29