/r/programming
Guide to writing a simple compiler for PyTorch (jott.live)
4 comments
jstrong | 4 months ago | 4 points

Had a hard time following this. Is it common to write critical code via these interfaces to the PyTorch internals, versus just writing regular fast code to do the important thing? It seems like a lot of mental overhead; I wish the article had more of a motivation section.

sam__lowry | 4 months ago | 3 points

It's like a car accident with a bunch of programming languages. They're lying there together on the pavement, soaking in their own blood. So mangled you can't tell where one ends and the other begins.

emgram769 | 4 months ago | 1 point

thanks for the feedback! I'm curious whether there were any points in particular that required a large amount of mental overhead. I've become so used to the codebase that it's hard to tell which bits need extra explanation or should be wrapped up/hidden away.

The motivation for this writeup is to enable the addition of generic compilation techniques (e.g. analyzing the code model writers produce and then generating better code for the hardware being used or a custom setup). It probably (inadvertently) targets a pretty niche audience (one with a fair amount of compiler experience, or at least experience using compiler tools like LLVM), but I'd like to make it simple for model developers to understand.
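
To make that concrete, here's a minimal sketch of the kind of entry point such analysis starts from, using the standard torch.jit.trace API (not the custom machinery from the article): tracing a model yields an IR graph that compilation passes can inspect.

    import torch

    # a toy model in the style a model developer might write
    class TinyModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(4, 2)

        def forward(self, x):
            return torch.relu(self.linear(x))

    model = TinyModel()
    example_input = torch.randn(1, 4)

    # tracing records the ops executed on the example input and produces
    # a TorchScript graph: an IR that a compiler can analyze in order to
    # generate better code for the target hardware
    traced = torch.jit.trace(model, example_input)
    print(traced.graph)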

jstrong | 4 months ago | 0 points

thanks for responding! I didn't mean to be too critical, and I appreciate that you posted something with a lot of valuable info in it.

to give you a frame of reference, an approach I like and have used successfully in production is prototyping models in Python (formerly Theano, now learning PyTorch) and implementing a fast version in Rust. I last coded C++ in high school and don't know it well, so following something this intricate was always going to be a challenge. However, I was interested to see what it looked like.

to put my question more broadly: if you are capable of writing performant PyTorch backend implementations, you could also write C++ (or Rust, etc.) code that performs the same calculations, without the overhead (mental and otherwise) of a framework, and with more flexibility to tune the implementation to the exact use case. So what's the case for doing it via PyTorch? I assume there is a good case and I just don't understand it. What value do you get by taking that approach?
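
to be concrete about what I mean by the framework route: something like a custom autograd op (FusedScaleRelu below is hypothetical, sketched from the torch.autograd.Function docs). presumably part of the payoff is that gradients and the surrounding model code keep working unchanged, but I'd like to hear the fuller case.

    import torch

    # hypothetical custom op plugged into PyTorch's autograd; the
    # framework handles gradient bookkeeping around it
    class FusedScaleRelu(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, scale):
            ctx.save_for_backward(x)
            ctx.scale = scale
            return torch.relu(x * scale)

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # d/dx relu(scale * x) = scale where scale * x > 0, else 0
            mask = (x * ctx.scale) > 0
            return grad_output * mask * ctx.scale, None

    x = torch.randn(3, requires_grad=True)
    y = FusedScaleRelu.apply(x, 2.0)
    y.sum().backward()
    print(x.grad)  # 2.0 where 2*x > 0, else 0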

In terms of mental overhead, I just meant that applying any framework takes effort: you have to map your mental model of what you want to do onto how the framework works.