All of my interests (Dynamicland, tinygrad, ORI) are the same question asked at different layers of the stack: Can we make a normally-opaque system legible enough to actually reason about, collaborate on, and trust? Dynamicland asks it about computation, tinygrad asks it about ML frameworks, ORI asks it about epistemics and trust networks.
And tinygrad is potentially a project that gets me funded.
Let's see what happens when we run `DEBUG=N python3 program.py` at increasing levels of N:
DEBUG=2 adds per-kernel names, shapes, timings, and GFLOPS. This is the sweet spot for "what is my program actually doing."
DEBUG=3 adds the scheduled UOp graph.
DEBUG=4 prints the generated C source for each kernel before clang compiles it. Very educational for seeing what the CPU/GPU backend actually produces.
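The mechanism behind the knob is plain: one integer environment variable gates progressively more verbose output. Here's a minimal stdlib-only sketch of that pattern; the `getenv_int` and `log` names are mine, not tinygrad's (tinygrad has its own env-parsing helpers), and the messages are placeholders, not real tinygrad output.

```python
import os

def getenv_int(key: str, default: int = 0) -> int:
    # Parse an integer knob from the environment, falling back on bad input.
    try:
        return int(os.environ.get(key, default))
    except ValueError:
        return default

# Simulate launching the process as `DEBUG=2 python3 program.py`.
os.environ["DEBUG"] = "2"
DEBUG = getenv_int("DEBUG")

messages = []
def log(level: int, msg: str) -> None:
    # Emit only when the knob is at least this verbose.
    if DEBUG >= level:
        messages.append(msg)

log(2, "per-kernel stats")     # shown at DEBUG>=2
log(3, "scheduled UOp graph")  # needs DEBUG>=3, suppressed here
log(4, "generated C source")   # needs DEBUG>=4, suppressed here
print(messages)  # → ['per-kernel stats']
```

The nice property of this design is that each level strictly adds to the one below it, so turning the knob up never hides anything you could already see.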
Now we've learned what the DEBUG=N knob exposes. You can watch a Python expression become a UOp graph, become scheduled kernels, become C source, become assembly. That's a Dynamicland-ish property—you can see the thing. Most ML frameworks don't expose this.
tbc...