art with code

2016-01-21

Easy 3D on the web

Problem: can't do 3D graphics on the web. Solution: WebGL. New problem: no, I mean, I want to put this 3D model onto a web page. Solution: Sketchfab / Three.js / Babylon.js / ShaderToy. New new problem: I need to download libraries and code stuff or host the files on a SaaS or develop a third brain to model using modulo groups of signed distance fields moving along Lissajous curves.

Could easy 3D be part of the web platform? And how? With images, the solution was simple. The usual image file is a 2D rectangle of static pixels, and all you really needed to figure out was how to lay out, size, and composite it relative to the rest of the web page. When you step outside of simple static 2D images, all hell breaks loose.

Animated GIFs have playback timing, which isn't easy to get right, so animated GIFs are pretty broken. Videos need playback controls with volume and scrubbing, potential subtitle tracks, and a bunch of codecs on both the video and audio sides, so they were pretty broken as well. (Then YouTube started using HTML5 video because mobiles don't do Flash, and magically the video issues were fixed~)

SVGs also need to handle input events, and they've morphed from "umm, put this in an embed and it'll maybe work" to their current (pretty awesome!) state, where an SVG can be used as a static image, an animated image, an embedded document, or an inline document. Something for everyone!

Displaying a 3D model on a web page is a bit like mixing SVG with video elements. You'd like to have controls to turn the model around - it is 3D, after all. And you'd like to have the model animate - again, it's 3D, and 3D's good for animations. It'd also be nice to have the model be interactive. I mean, we were promised a 3D data matrix by a bunch of fiction writers in the 80s, and that's going to require some serious animated 3D model clicking action (to be fair, they also promised us a thermonuclear war, but let's not go there right now).

So. 3D model. Web page. <3d src="castle_in_the_sky.3d" onclick="event.target.querySelector('#gate').classList.add('open')"> Right?

What file format do you standardize on? How do you load in textures and geometry? How do you control the camera? How do you do shaders for the materials? How do you handle user interaction? What are the limits of this 3D model format? How do you make popular 3D packages output this file format (plus animations, plus semantic model for interactivity)? How do you compress the thing and progressively stream it?

2016-01-19

ShaderPaint

I wrote a small painting program on ShaderToy, using the new multipass rendering feature. It's called ShaderPaint and it was a lot of fun to write.

The fun part about writing programs in ShaderToy is the programming model it imposes on you. Think of it as small pixel-sized chunks of memory wired together in a directed graph. Each chunk of memory runs a small program that updates its contents. The chunks are wired together so that a chunk can read the contents of the chunks it's interested in. A read gives you the contents from the previous time step, so there's no concurrent crosstalk happening.
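
To make this concrete, here's a minimal sketch of one such chunk-program in ShaderToy GLSL - not ShaderPaint's actual code, and the update rule is invented for illustration. It's a buffer pass whose iChannel0 is wired back to the buffer itself, so every read returns the previous frame's contents:

    // Buf A, with iChannel0 wired to Buf A itself. Every pixel runs this
    // program once per frame; texture() reads return values from the
    // previous frame, never the current one.
    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        vec2 uv = fragCoord / iResolution.xy;   // this chunk's address
        vec4 prev = texture(iChannel0, uv);     // my contents on the last time step

        // Illustrative update rule: lay down ink near the mouse,
        // decay everything else a little each frame.
        float ink = 1.0 - smoothstep(0.0, 12.0, distance(fragCoord, iMouse.xy));
        fragColor = max(prev * 0.99, vec4(ink));
    }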

This is... rather different. Your usual programming model is all about a single big program, executing on a CPU, reading and writing to a massive pool of memory, modifying anything, anywhere, at any time. Going from that to ShaderToy's memory-centric approach is a big shift in perspective. Instead of thinking "the program is going to do this, then it's going to do that, then it's going to increment that counter over there", you start to think "if the program is running on this memory location, do this; if it's running on that memory location, do that; if it's running on the counter's memory location, increment the value". You go from a single megaprogram to an army of small programs, each assigned to a memory location.
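
In ShaderToy terms, that army of small programs is a single shader that branches on its own pixel coordinate. A sketch of the idea - the cell layout is made up, and texelFetch assumes the current WebGL2-backed ShaderToy:

    // Buf A: each pixel is one memory cell. The same program runs on
    // every cell and branches on its own address.
    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        ivec2 cell = ivec2(fragCoord);              // which cell am I?
        vec4 me = texelFetch(iChannel0, cell, 0);   // my value on the last time step

        if (cell == ivec2(0, 0)) {
            fragColor = me + vec4(1.0, 0.0, 0.0, 0.0);  // the counter cell: increment
        } else if (cell == ivec2(1, 0)) {
            fragColor = iMouse;                         // mouse-state cell: latest input
        } else {
            fragColor = me;                             // every other cell: hold its value
        }
    }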

The data flow of an output pixel goes like this. First, the UI inputs coming in through the shader uniforms modify the UI state layer, which the stroke layer reads to draw a brush stroke on itself. When the brush stroke ends, the stroke layer is composited onto the draw layer. The final step composites the stroke layer and the draw layer onto the output canvas and draws the UI controls on top, based on the values in the UI state layer.
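
As a sketch, the final compositing step in the Image pass could look something like this - the channel assignments and the UI state layout are assumptions for illustration, not ShaderPaint's real ones:

    // Image pass. Hypothetical wiring: iChannel0 = draw layer,
    // iChannel1 = stroke layer, iChannel2 = UI state layer.
    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        vec2 uv = fragCoord / iResolution.xy;

        vec4 drawn  = texture(iChannel0, uv);   // committed strokes
        vec4 stroke = texture(iChannel1, uv);   // brush stroke in progress

        // Composite the in-progress stroke over the draw layer.
        vec3 color = mix(drawn.rgb, stroke.rgb, stroke.a);

        // Draw the UI controls on top, driven by the UI state layer
        // (here: a brush-color swatch read from cell (0,0)).
        vec4 brush = texelFetch(iChannel2, ivec2(0, 0), 0);
        if (all(lessThan(fragCoord, vec2(20.0)))) {
            color = brush.rgb;
        }

        fragColor = vec4(color, 1.0);
    }

Note that because a pass can only read the previous frame, "composite the stroke when it ends" can't be an imperative copy; it has to be written as a state transition inside the draw layer's own program.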
