1. Programming GPUs is a problem. The ratio of CPUs to CPU programmers versus GPUs to GPU programmers is massively out of whack. Not because GPU programming is less valuable or lucrative, but because GPUs are weird and the tools are weird.
2. We are more interested in leveraging existing libraries than in running existing binaries wholesale (mostly within a warp). But running GPU-unaware code leaves a lot of space for the compiler to move stuff around and optimize things.
3. The compiler changes are not our product, the GPU apps we are building with them are. So it is in our interest to make the apps very fast.
Anyway, skepticism is understandable, and we are well aware that code wins arguments.
I can't help but get the feeling you have a use-case end-goal in mind that's opaque to many of us who are GPU-ignorant.
It could be helpful if there were an example of the type of application that would be nicer to express through your abstractions.
(I think what you've shown so far is super cool btw)
These are the details we intend to insulate people from so they can just write code and have it run fast. There is a reason why abstractions were invented on the CPU and we think we are at that point for the GPU.
(for the datacenter folks I know hardware topology has a HUGE impact that software cannot overcome on its own in many situations)
Why is it also that the terminology is so all over the place? Subgroups, wavefronts, warps, etc., all referring to the same concept. That doesn't help either.
Unfortunately, ATI/AMD slavishly imitated many things initiated by NVIDIA, but soon after created their own jargon, replacing every word used by NVIDIA with a different word, also different from the traditional term, enhancing the confusion. Worse, the NVIDIA jargon and the AMD jargon sometimes reuse traditional terms with different meanings, e.g. an NVIDIA "thread" is not what "thread" normally means.
Later standards, like OpenCL, attempted a compromise between the GPU vendors' jargons instead of going back to a more traditional terminology, so they only increased the number of possible confusions.
So to be able to understand GPUs, you must build a dictionary of word equivalences: traditional => NVIDIA => ATI/AMD (e.g. IBM 1964 task = Vyssotsky 1966 thread => NVIDIA warp => AMD wavefront).
- Wavefront: AMD, comes from their hardware naming
- Warp: Nvidia, comes from their hardware naming for largely the same concept
Both of these were implementation details until Microsoft and Khronos enshrined them in the shader programming model, independent of the hardware implementation, so you get:
- Subgroup: Khronos' name for the abstract model that maps to the hardware
- Wave: Microsoft's name for the same
They all describe mostly the same thing, so they all get used, and you get the naming mess. It doesn't help that the API specs use wave/subgroup while the vendor profilers use warp/wavefront in the names of their hardware counters.
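To make the mapping concrete, here's a minimal sketch in CUDA of one operation expressed under each vendor's name for the same concept. The kernel sums a value across the 32 lanes of a warp with CUDA's shuffle intrinsics; the comments note the equivalent HLSL and GLSL spellings (`WaveActiveSum`, `subgroupAdd`). The kernel name `warp_sum` is just for illustration.

```c
#include <cstdio>
#include <cuda_runtime.h>

__global__ void warp_sum(const float* in, float* out) {
    float v = in[threadIdx.x];

    // Tree reduction across the 32 lanes of a warp.
    // CUDA:   __shfl_down_sync(...)  (a "warp" of 32 threads)
    // HLSL:   WaveActiveSum(v)       (a "wave", Shader Model 6.0+)
    // GLSL:   subgroupAdd(v)         (a "subgroup", Khronos/Vulkan)
    // AMD's docs call the hardware unit a "wavefront" (often 64 lanes).
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);

    if (threadIdx.x == 0) *out = v;  // lane 0 holds the warp-wide sum
}

int main() {
    float h_in[32], h_out, *d_in, *d_out;
    for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;
    cudaMalloc(&d_in, sizeof(h_in));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);
    warp_sum<<<1, 32>>>(d_in, d_out);
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("warp sum = %f\n", h_out);  // expect 32.0
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```

Same hardware behavior in all three APIs; the only thing that changes is what the 32-lane (or 64-lane) group is called.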
Besides, full redesign isn't so expensive these days (depending).
> It seems like a solution in search of a problem.
Agreed, but it'll be interesting to see how it plays out.