I am running structural model estimations, and the model usually takes a long time to converge on my i7 6700K. AMD recently released an 8-core CPU, the Ryzen 1800X, and the price is quite reasonable. The tradeoff is that CPUs with more cores usually have lower frequencies. I have overclocked my 6700K to 4.6 GHz, and I'm not sure how far the 1800X can be overclocked. Should I stick with the 6700K or switch to the 1800X instead? Please advise. Thanks!
It depends on what is taking the most time in your estimation. You can run your program under the GAUSS profiler to see which lines of code are taking the most time. Then when you know the data size and the actual operations that are taking the most time, it would be easier to make a recommendation.
If you use the profiler, you will want to keep the run short. Try to reduce the number of iterations so that the profiled run takes about 10 seconds, if possible. If you have more questions about using the GAUSS profiler, please feel free to ask. However, in order to make this forum a better resource for others looking for answers, please post those as a new question.
Hi aptech, thanks for the answer. Could you explain a little more about the CPU choice under different circumstances? I'm sure many GAUSS users have the same question. In my case, I think my model spends most of its time in numerical optimization and in loops through a contraction mapping (looping until convergence).
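For reference, the inner loop looks roughly like this (a simplified sketch, not my actual code; `predShares`, `shares`, and `nprod` are placeholder names):

```
// Contraction mapping: iterate until the update is smaller than tol
tol = 1e-12;
dif = 1;
delta = zeros(nprod, 1);

do while dif > tol;
    // predShares is a placeholder for the model's predicted-shares function
    delta_new = delta + ln(shares) - ln(predShares(delta));
    dif = maxc(abs(delta_new - delta));
    delta = delta_new;
endo;
```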
I know little about hardware fundamentals, so I did a little homework. People on the forums say that if a program can take advantage of multiple cores, then it's better to choose the 8-core, lower-frequency Ryzen 1800X; video editing is one example. Otherwise, higher frequency gives better performance. For most current gaming needs, a 6700K or 7700K will perform better.
I'm not sure how efficiently GAUSS can take advantage of 8 cores. That's why I opened this post.
This really depends on your code. GAUSS will use 8 cores very efficiently for matrix operations on medium-to-large matrices: for example, matrix multiplication, linear solves, matrix inversion, or computing singular values.
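As a rough illustration, here is a timing sketch of the kind of dense matrix operations that GAUSS threads across cores (the matrix size here is arbitrary; scale it to your problem):

```
// Time some dense matrix operations; these are internally multi-threaded
n = 4000;
x = rndn(n, n);

et = hsec;

m = x' * x;        // matrix multiply
mi = invpd(m);     // inverse of a symmetric positive definite matrix
s = svd(x);        // singular values

elapsed = (hsec - et) / 100;   // hsec is in hundredths of a second
print elapsed;
```

Running something like this on both machines would tell you directly how much the extra cores help for the matrix-bound part of your estimation.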
Or, if you have an algorithm where chunks can be processed independently, you can use 'threadfor', and that will certainly scale to more than 40 CPUs. However, if there is not enough work for each thread to do, the thread setup and teardown overhead will take away much (if not all) of your speedup.
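A minimal 'threadfor' sketch, assuming the loop body is a stand-in for your own per-chunk work (each iteration must be independent, and each should write to its own slot of the output):

```
// Each iteration is independent, so the loop can be split across threads
n = 1000;
out = zeros(n, 1);

threadfor i(1, n, 1);
    x = rndn(500, 1);
    out[i] = meanc(x .* x);   // placeholder for real per-iteration work
threadendfor;
```

The per-iteration work here is tiny, so on a real problem you would want each iteration to do substantially more computation than this before the threading pays off.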
I probably would not make that upgrade unless I knew my code was constrained by large matrix operations or was easy to parallelize, like a Monte Carlo simulation.