Message boards : News : Maintenance is over!
The project has been restarted with new native BOINC apps and an additional computational scenario.
ID: 128
The first 18 work units ran perfectly on my GTX 1060 (Win7 64-bit, 373.06 drivers). Run times ranged from 3 to 92 seconds, averaging around 35 seconds. There were no problems with CPU support: the app reserved a core on my i7-4771 while the other cores were busy with other BOINC projects. But the weather is a little warm here, so I will let others get some work units for a while.
ID: 129
> The times ranged from 3 to 92 seconds, with the average being around 35 seconds.

Yes, very variable length on my little GPU too (from 25 seconds to 8 minutes). But all WUs seem to be OK; no bugs so far.
ID: 134
GPU apps no longer require a dedicated CPU core by default. CUDA apps still use one full core.
ID: 143
> CUDA apps still use one full core.

Yes, this happens because of the cudaThreadSynchronize() calls between kernel executions. I probably need to switch to cudaSetDeviceFlags(cudaDeviceBlockingSync). However, what I meant is that a vacant CPU core is no longer required to start the GPU app, so the client will now launch the GPU app even if all CPU cores are busy.
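For reference, the difference between the two synchronization modes comes down to one device flag set before the CUDA context is created. A minimal, untested sketch (note: cudaThreadSynchronize() is the legacy name for what newer toolkits call cudaDeviceSynchronize(), and cudaDeviceBlockingSync is likewise aliased as cudaDeviceScheduleBlockingSync):

```cuda
#include <cuda_runtime.h>

int main() {
    // Must be called before the CUDA context is created, i.e. before
    // the first kernel launch or most other runtime calls. With this
    // flag, synchronization blocks on an OS primitive instead of
    // spin-waiting, so it stops burning a full CPU core.
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

    // ... launch kernels here ...

    cudaDeviceSynchronize();  // now sleeps rather than busy-waits
    return 0;
}
```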
ID: 145
Welcome back!
ID: 150
> CUDA apps still use one full core.

I've tested it. Indeed, CPU usage drops to zero with the cudaSetDeviceFlags(cudaDeviceBlockingSync) approach. However, if you run the computations and the display on the same GPU, the system becomes almost unusable during the computations. So I don't know which is better.
ID: 152
> CUDA apps still use one full core.

You would probably have to split one long-running kernel into multiple short ones, so the GPU would have a chance to refresh the display. Unfortunately, GPUs do not support preemption the way CPUs do. By the way, some projects allow configuring the kernel size; maybe you should go that way too?
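The splitting suggested above might look something like this sketch (untested; process_chunk is a hypothetical kernel standing in for the real workload):

```cuda
#include <cuda_runtime.h>

// Placeholder kernel standing in for the real per-element workload.
__global__ void process_chunk(float *data, size_t offset, size_t len) {
    size_t i = offset + blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < offset + len)
        data[i] *= 2.0f;  // hypothetical work
}

// Launch the work in short slices so the driver can interleave
// display refreshes between kernel executions.
void process_all(float *d_data, size_t n, size_t chunk) {
    const int threads = 256;
    for (size_t off = 0; off < n; off += chunk) {
        size_t len = (off + chunk < n) ? chunk : n - off;
        int blocks = (int)((len + threads - 1) / threads);
        process_chunk<<<blocks, threads>>>(d_data, off, len);
        cudaDeviceSynchronize();  // gap between launches; the GUI can refresh
    }
}
```

Exposing `chunk` as a user-visible setting, as some other projects do, would let crunchers whose display runs on the compute GPU trade a little throughput for responsiveness.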
ID: 153
> You probably would have to split one long running kernel into multiple short ones, so GPU would have chance to refresh display. Unfortunately GPUs does not support preemption like CPU does. BTW, some projects allows to configure kernel size, maybe you should go this way too?

The kernel execution time is already set to less than 20 ms when the GPU runs both the computations and the display; otherwise the system would be unusable even with the cudaThreadSynchronize() calls. Unfortunately, if I set blocking kernel calls via cudaSetDeviceFlags() and remove cudaThreadSynchronize(), the refresh rate becomes very unstable despite the short execution time of the kernels.
ID: 154