art with code

2014-09-06

In which I make videos

Edited a video with After Effects.

Proceeded to follow a motion graphics tutorial.

And then I started making a website. Country life!

What the hey

Recently: Writing a web app backed by a bunch of CGI programs written in Haskell, running on Mighttpd2, a web server written in Haskell. And a whole lot of JavaScript to make it run. It's using PostgreSQL to store its stuff.

What the app does: It's a website with a shop component. You can edit the pages by writing HTML into text fields. Yeah. Let's shake our collective heads together.

How's it going: I like writing Haskell, though I don't like the Cabal dephell. I'll probably have to use a different web server, as the Mighttpd2 version in Ubuntu 14.04 doesn't do HTTPS and I haven't had luck installing an HTTPS-supporting version from Cabal. Eh. JavaScript is, as usual, difficult to keep sane. CSS helps keep the JS less gnarly. I kinda like the little experiments in the page structure. First I had all the site content inlined into a single page, then I moved it out to HTML files that the JS pulls in (and inlined the front page using a build script). Then I moved the HTML files into a database, and now I'm fetching all of them in a JSON array at page load, so it's sort of going back to the inline-everything thing.
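For a rough idea of what the page-load fetch looks like on the client, here's a sketch. The /cgi-bin/ListPages path and the rendering are illustrative guesses, not the site's exact code; the ListPages CGI itself appears further down.

// Hypothetical sketch: pull every published page as one JSON array at load.
var r = new XMLHttpRequest();
r.open('GET', '/cgi-bin/ListPages'); // Assumed path to the ListPages CGI below.
r.onload = function() {
  var pages = JSON.parse(r.responseText); // [{page_id, bullet, body, published}, ...]
  pages.forEach(function(page) {
    var section = document.createElement('section');
    section.innerHTML = page.body; // The page bodies are raw HTML, remember?
    document.body.appendChild(section);
  });
};
r.send();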

How about that shop: The shop part builds shopping carts for PayPal checkout. That does work, though a completed payment should also ping the server to update the inventory.
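For illustration, building a cart for PayPal checkout can be done with PayPal's classic cart-upload form variables (cmd=_cart, upload=1, item_name_N and friends). This is a generic sketch, not the site's actual code, and the seller address is a placeholder:

// Hypothetical sketch: build and submit a PayPal cart-upload checkout form.
function checkout(items) {
  var form = document.createElement('form');
  form.method = 'POST';
  form.action = 'https://www.paypal.com/cgi-bin/webscr';
  var fields = { cmd: '_cart', upload: '1', currency_code: 'EUR',
                 business: 'seller@example.com' }; // Placeholder account email.
  items.forEach(function(item, i) {
    fields['item_name_' + (i+1)] = item.name;
    fields['amount_' + (i+1)] = item.price;
    fields['quantity_' + (i+1)] = item.quantity;
  });
  for (var name in fields) {
    var input = document.createElement('input');
    input.type = 'hidden';
    input.name = name;
    input.value = fields[name];
    form.appendChild(input);
  }
  document.body.appendChild(form);
  form.submit();
}

// e.g. checkout([{name: 'Print', price: '25.00', quantity: 1}]);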

How it does what it does: The client does all the interesting bits. The server just serves JSON from the database. The admin client edits the JSON objects it receives from the server and sends them back to save them. The cool bit is that Aeson, a Haskell JSON library, typechecks the whole thing, decoding and encoding the JSON with a minimum amount of code on my part.

-- ListPages.hs
-- GET -> JSON

-- Returns a JSON array of pages.

import Network.CGI
import Data.Aeson
import PageDatabase
import SiteDatabase

instance ToJSON Page

main = runCGI $ handleErrors cgiMain

cgiMain = do
  setHeader "Content-Type" "application/json"
  setHeader "Access-Control-Allow-Origin" "*" -- set CORS header to allow calls from anywhere
  liftIO listJSON >>= outputFPS

listJSON = fmap encode $ withDB listPublishedPages

-- listJSON :: IO Lazy.ByteString
-- listPublishedPages :: Connection -> IO [Page] -- fetches the list of pages from the database where published = true
-- Data.Aeson.encode turns [Page] into a ByteString of JSON.
-- See the "instance ToJSON Page" above?
-- That is all I need to do to get type-safe JSON encode and decode.
-- As long as I use "deriving (Generic)" in my type definition, that is.

Here's the Page data type from the PageDatabase module:

{-# LANGUAGE DeriveGeneric #-}
-- (The pragma and imports here are what the Generic derivation needs.)
import GHC.Generics (Generic)
import GHC.Int (Int64)

data Page = Page {
  page_id :: Int64,
  bullet :: String,
  body :: String,
  published :: Bool
} deriving (Show, Generic)

This is how I deal with the JSON that the client sends:

-- EditPage.hs
-- POST (Page) -> JSON true | false

import Control.Monad
import Network.CGI

import SiteDatabase
import PageDatabase
import SessionDatabase
import GHC.Int
import Data.Aeson

instance FromJSON Page -- Make Data.Aeson.decode parse JSON into Page objects.

main = runCGI $ handleErrors cgiMain

cgiMain = do
  body <- getInputFPS "body"
  authToken <- getAuthToken -- helper that deals with session cookies, CSRF tokens and login user/pass params
  msg <- liftIO (maybe noPage (editPageDB authToken) (body >>= decode)) -- decode turns the body JSON into a Page object
  setHeader "Content-Type" "application/json"
  output msg

noPage = return "false" -- No body param received or the body param failed to typecheck in decode.

editPageDB authToken page = do
  rv <- withDB (\conn -> authenticate conn authToken (editPage conn page)) :: IO Int64 -- authenticate runs editPage if the authToken is OK
  case rv of
    1 -> return "true"  -- Edit successful
    _ -> return "false" -- Page not found or auth failed
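On the client side of this, the admin page serializes the edited Page object and POSTs it as the body form field that getInputFPS reads. A minimal sketch, assuming the CGI is served at /cgi-bin/EditPage (session cookie and CSRF handling omitted):

// Hypothetical sketch of the admin client's save call.
function savePage(page, callback) {
  var r = new XMLHttpRequest();
  r.open('POST', '/cgi-bin/EditPage');
  r.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  r.onload = function() {
    callback(JSON.parse(r.responseText)); // The server replies with JSON true | false.
  };
  r.send('body=' + encodeURIComponent(JSON.stringify(page)));
}

savePage({page_id: 1, bullet: 'Hi', body: '<p>Hello</p>', published: true},
         function(ok) { console.log(ok ? 'saved' : 'save failed'); });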

Isn't CGI kinda slow: Dunno. Testing on an EC2 micro instance, an HTTP request for a JSON array of all ten pages in the DB takes about 10 ms.

Couldn't you just use Weebly / Squarespace / Wix / Whatever: Hey! Watch it! No! Of course not!

2014-09-03

Social things

Public service announcement: I shut down my Google+, Twitter and Facebook accounts. I wasn't using them correctly and wasted too much time on them. So if you saw me suddenly disappear, no worries.

2014-08-31

Spinners on the membrane

I wonder if you could do image loading spinners purely in CSS. The spinner part you can do by using an animated SVG spinner as the background-image and inlining that as a data URL. Detecting image load state, now that's the problem. I don't think images have any attribute that flicks to true when the image is loaded. There's img.complete on the DOM side, but I don't think that's exposed to CSS. And it's probably tough to style a parent element based on child states. And <img> elements are fiddly beasts to begin with.

If you had a :loading (or some sort of extended :unresolved) pseudo-class that bubbled up the tree, that might do it. You could do something like this:

.spinner-overlay {
  background: white url(spinner.svg) 50% 50%;
  width: 100%;
  height: 100%;
  opacity: 0;
}
*:loading > .spinner-overlay { opacity: 1; transition: 0.5s; }

Now when your image is loading, it'd have a spinner on top of it, and the spinner would go away when the image stopped loading. And since :loading doesn't care about what is loading, it'd also work for background-images, video, audio, iframes, web components, whatever.
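Until something like :loading exists, the closest you can get is flipping a class from JS when the image load events fire, and keying the spinner overlay off that class. A sketch of that workaround (the class name is made up):

// Fallback: mark each image's parent as loading until the load event fires.
var imgs = document.querySelectorAll('img');
for (var i = 0; i < imgs.length; i++) {
  (function(img) {
    if (img.complete) return; // Already loaded, e.g. from cache.
    img.parentNode.classList.add('loading');
    var done = function() { img.parentNode.classList.remove('loading'); };
    img.addEventListener('load', done);
    img.addEventListener('error', done); // Don't leave spinners stuck on broken images.
  })(imgs[i]);
}

Then .loading > .spinner-overlay { opacity: 1; } stands in for the *:loading selector above.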

2014-08-30

Sony A7, two months in

I bought a Sony A7 and a couple of lenses about two months ago to replace the Canon 5D Mk II that I had before. Here are my experiences with the A7 after two months of shooting. I've taken around 1500 shots on the A7, so these are still beginner stage thoughts.

If you're not familiar with it, the Sony A7 is a full-frame mirrorless camera at a reasonable (for full-frame) price point. It's small, it's light, and it's full-frame. The image quality is amazing. You can also find Sony E-mount adapters to fit pretty much any lens out there. If you have a cupboard full of vintage glass or want to fiddle with old Leica lenses or suchlike, the A7 lets you shoot those at full-frame with all the digital goodness of a modern camera. You probably end up adjusting the aperture and focus manually on adapted lenses though. There are AF-capable adapters for Sony A and Canon EF lenses, but the rest are manual focus only.

It's not too much of a problem though, since manual focusing on the A7 is nice. You can easily zoom in the viewfinder to find the right focus point, and the focus peaking feature makes zone focusing workable. It's not perfect ergonomically, as your MF workflow tends to go "press zoom button twice to zoom in, focus, press again for even closer zoom, focus if need be, press zoom button again to zoom out, frame." What I'd like to have is "press zoom button, focus, press zoom button, frame." Or a zoom that magnifies just one area of the viewfinder and leaves the frame borders visible, so that you could frame the picture while zoomed in.

Autofocus is fiddly. When taking portraits with a shallow depth of field, the one-point focus mode consistently focuses on the high-contrast outline of the face, which tends to leave the near eye blurry. Switching to the small movable focus point, you can nail focus on the near eye, but now the AF starts hunting more. And I couldn't find a setting to disable hunting when the camera can't find a focus point (like on Canon DSLRs), so the AF workflow is a bit slower than I'd like. Moving the focus point around is also kinda slow: you press a button to put the camera into focus selection mode, then slowly d-pad the focus point to where you want it.

The camera has a pre-focus feature that autofocuses the lens when you point the camera at something. I guess the idea is to give you a slightly faster shooting experience. I turned the pre-focus off to save battery.

The camera's sleep mode is defective. Waking up from sleep takes several seconds. Turning the camera off and on again seems to get me into ready-to-shoot state faster than waking the camera by pressing the shutter button. Because of that and the short battery life, I turn the camera off when carrying it around.

The Sony A7 has a built-in electronic viewfinder. The EVF is nice & sharp (it's a tiny 1920x1200 OLED display!). When you lift the camera to your eye, it switches over to the EVF. Or if you get your body close to the EVF. Or if you place the camera close to a wall. This can be a bit annoying, but you can make the camera use only the rear screen or only the viewfinder. Note that the viewfinder uses a bit more battery than the rear screen. Probably not enough to show up in your shot count though.

If you have image preview on, it appears on the EVF after you take a shot. This can be very disorienting and slows you down, so I recommend turning image preview off.

The rear screen can be tilted up and down. That is a huge plus, especially with wide-angle lenses. You can put the camera on the ground or lift it above your head and still see what you're shooting. You can also use the tiltable screen to shoot medium-format style with a waist-level viewfinder. The rear screen has great color and resolution as well; it's just lovely to use.

The glass on the rear screen seems to be rather fragile. After two months of use, I've got two hairline scratches on mine. Buy a screen protector and install it in a dust-free place. Otherwise you'll have to buy another one, ha! The camera is not weather-sealed either, so be careful with it. Other than the rear screen glass and the battery cover, the build quality of the body is good. It feels very solid.

The A7 has some weird flaring, probably due to the sensor design. Bright lights create green and purple flares radiating outwards from the center of the frame. This might improve center sharpness for backlit shots, but for nighttime shots with a streetlight in the corner of the frame it's rather ugly.

One nice feature of the A7 is the ability to turn on APS-C size capture. This lets you use crop-factor lenses on the A7. And it also lets you use your full-frame lenses as crop-factor lenses. For instance, I can use my 55mm f/1.8 as an 83mm f/2.5 (in terms of DoF; in T-stops it's still T/1.8, i.e. it has the same amount of light hitting each photosite). I lose a stop of shallow DoF and a bunch of megapixels, but get a portrait lens that uses the sharpest portion of the frame.

Speaking of lenses, my lineup consists of the kit lens (a 28-70mm f/3.5-5.6), the Zeiss FE 55mm f/1.8 and an M-mount 21mm f/1.8 Voigtländer, mounted using the Voigtländer VM-E close focusing adapter. If I use the primes with the APS-C trick, I can shoot at 21mm f/1.8, 32mm f/2.5, 55mm f/1.8 and 83mm f/2.5. Wide-angle to portrait on two lenses!

The Zeiss FE 55mm f/1.8 is an expensive, well-built lens that looks cool and makes some super sharp images. Sharp enough that for soft portraits I have to dial clarity to -10 in Lightroom. Otherwise the texture of the skin comes out too strong and people go "Oh no! I look like a tree!" Which may not be what you want. If you shoot it at f/5.6 or so, the sharpness adds a good deal of interest to the image. It's a sort of hyperreal lens with the extreme sharpness and microcontrast.

The Voigtländer Ultron 21mm f/1.8 is an expensive, well-built lens that looks cool and makes sharp and interesting images, thanks to the 21mm focal length. Think portraits with environment, making people look like giants, or dramatic architectural shots. It's manual focus and manual aperture though, so you'll get a lot of exercise in zone focusing and estimating exposure. The Ultron is a Leica M-mount lens, so you need an adapter to mount it on the A7. One minus on the lens and the adapter is that they're heavy. Size-wise the combo is very compact, but it weighs as much as a DSLR lens at 530 g.


For a wide-angle lens, the Ultron's main weakness is its 50 cm close focus distance. But on the A7, you can use the VM-E close focus adapter and bring that down to 20 cm or so, which nets you very nice bokeh close-ups, but blurs out infinity, even when stopped down to f/22.


The Voigtländer on the A7 has some minor purple cast on skies at the edges of the frame. It can be fixed in Lightroom by tweaking purple and magenta hues to -100. The lens has a good deal of vignetting, which is fixable with +50 lens vignetting in Lightroom. When fixing vignetting at high ISOs, take care not to blow the corner shadows into the land of way too much noise.

The kit lens is quite dark at f/5.6 on the long end, and it isn't as sharp as the two primes, but it's quite light and compact. And it has image stabilization, so you can get decently sharp pictures even at slow shutter speeds. Coupled with the good high-ISO performance of the A7 and the autofocus light, the kit lens is usable even in dark conditions. I haven't used it much though; I like shooting the two primes more.

Battery life is around 400 shots per charge. Not good, but I manage a day of casual snapping. The battery door seems to be a bit loose; I've had it pop open a couple of times when taking the camera out of the bag.

The camera has built-in WiFi that the PlayMemories software uses for transferring images to your computer or smartphone. You can even do tethered shooting over WiFi, but I haven't tried that. Transferring RAW photos from the camera to a laptop over WiFi is very convenient but quite slow. Transferring JPEGs to a smartphone is fast, though you want to batch the transfers, as setting up and tearing down the WiFi connection takes about 20 seconds. When you're shooting, you probably want to switch the camera to airplane mode to save battery.

I ended up shooting in RAW. I couldn't make the JPEGs look the way I wanted straight out of the camera, so hey, make Lightroom presets to screw with the colors. The RAWs are nice! Lots of room for adjustment, good shadow detail, and the compressed size is 25 megs. You have to be careful about clipping whites (zebras help there), as very bright whites seem to start losing color before they start losing value. The RAW compression supposedly screws with high-contrast edges, but I haven't noticed the effect.

Due to shooting primarily in RAW, I find that I'm using the A7 differently from my previous cameras. I used the Canon 5D Mk II and my compact Olympus XZ-1 as self-contained picture-making machines. I shot JPEGs and did only minor tweaks in post, partially because 8-bit JPEGs don't allow for all that much tweaking. The A7 acts more as the image capturing component in a picture creation workflow. Often the RAWs look severely underexposed and require a whole lot of pushing and toning before the final image starts taking shape. The colors, contrast, noise reduction and sharpening all go through a similar process to arrive at something more like what I want to see. I quite like this new way of working; it requires more of me and further disassociates the images from reality.

The camera has a 16:9 cropped capture mode, which is nice for composing video frames. The 16:9 aspect also lends the photos a more movie-like look. The resulting RAW files retain the cropped-out part of the 3:2 frame, so you can de-crop the 16:9 photos in Lightroom. Note that this doesn't apply to the APS-C crop: the APS-size photos don't retain the cropped image data.

I found that I can shoot usably clean (read: noise doesn't completely destroy skin tones) photos at ISO 4000, which is really nice. ISO 8000 and up start losing color range, picking up a magenta cast in the shadows and light leaks when shooting in dark conditions. Still usable for non-color-critical work and very usable in B/W.

Overall, I don't really know why I got this camera and the lenses. I shoot too few images and video to justify the expense. The quality is amazing, yes. And it's not as expensive as a Canon 5D Mk III or a Nikon D800 either. But it's still expensive, especially with the Zeiss FE lenses. I feel that it's a luxury for my use.

The Sony A7 is too large and heavy for a walk-around camera strapped across your back and too non-tool-like for a total pro camera. I was trying to replace my old Canon 5D Mk II with its 50 f/1.4 and 17-35L f/2.8 with something lighter. But this is not it. The A7 has great image quality but the workflow is slower. And while it's light enough to throw in the backpack and go, it's still large and heavy enough to make me not want to dig it out. For me it sits in the uncomfortable middle area. Not quite travel, not quite pro. For occasional shooting it's great! Especially if you get a small camera bag.

2014-02-17

Saving out video frames from a WebGL app

Recently, I wanted to create a small video clip of a WebGL demo of mine. The route I ended up going down was to send the frames by XHR to a small web server that writes the frames to disk. Here's a quick article detailing how you can do the same.


Here's the video I made.

Set up your web server

I used plain Node.js for my server. It's got CORS headers set, so you can send requests to it from any port or domain you want to use. Heck, you could even do distributed rendering with all the renderers sending finished frames to the server.

Here's the code for the server. It reads POST requests into a buffer and writes the buffer to a file. Simple stuff.

// Adapted from the WikiBooks OpenGL Programming video capture article.
var port = 3999;
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
    res.writeHead(200, {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Headers': 'Content-Type, X-Requested-With'
    });
    if (req.method === 'OPTIONS') {
        // Handle OPTIONS requests to work with jQuery and other libs that cause preflighted CORS requests.
        res.end();
        return;
    }
    var idx = req.url.split('/').pop();
    var filename = ("0000" + idx).slice(-5)+".png";
    var img = new Buffer('');
    req.on('data', function(chunk) {
        img = Buffer.concat([img, chunk]);
    });
    req.on('end', function() {
        fs.writeFileSync(filename, img);
        console.log('Wrote ' + filename);
        res.end();
    });
}).listen(port, '127.0.0.1');
console.log('Server running at http://127.0.0.1:'+port+'/');

To run the server, save it to server.js and run it with node server.js.

Send frames to the server

There are a couple of things you need to consider on the WebGL side. First, use a fixed-resolution canvas instead of one that resizes with the browser window. Second, make your timing values fixed as well, instead of using wall-clock time. To do this, decide the frame rate for the video (probably 30 FPS) and the duration of the video in seconds. Now you know how much to advance the time each frame (1/FPS seconds) and how many frames to render (FPS × duration). Third, turn your frame loop into a render & send loop.

To send frames to the server, read PNG images from the WebGL canvas using toDataURL and send them to the server using XMLHttpRequest. To successfully send the frames, you need to convert them to binary Blobs and send the Blobs instead of the dataURLs. It's pretty simple, but can cause an hour of banging your head against the wall (as experience attests). Worry not, I've got it all done in the snippet below, ready to use.

Here's the core of my render & send loop:

var fps = 30; // Frames per second.
var duration = 1; // Video duration in seconds.

// Set the size of your canvas to a fixed value so that all the frames are the same size.
// resize(1024, 768);

var t = 0; // Time in ms.

for (var captureFrame = 0; captureFrame < fps*duration; captureFrame++) {
 // Advance time.
 t += 1000 / fps;

 // Set up your WebGL frame and draw it.
 uniform1f(gl, program, 'time', t/1000); // uniform1f: a small helper that sets a float uniform on the program.
 gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
 gl.drawArrays(gl.TRIANGLES, 0, 6);

 // Send a synchronous request to the server (sync to make this simpler.)
 var r = new XMLHttpRequest();
 r.open('POST', 'http://localhost:3999/' + captureFrame, false);
 var blob = dataURItoBlob(glc.toDataURL()); // glc is the WebGL canvas element.
 r.send(blob);
}

// Utility function to convert dataURIs to Blobs.
// Thanks go to SO: http://stackoverflow.com/a/15754051
function dataURItoBlob(dataURI) {
 var mimetype = dataURI.split(",")[0].split(':')[1].split(';')[0];
 var byteString = atob(dataURI.split(',')[1]);
 var u8a = new Uint8Array(byteString.length);
 for (var i = 0; i < byteString.length; i++) {
  u8a[i] = byteString.charCodeAt(i);
 }
 return new Blob([u8a.buffer], { type: mimetype });
}

Conclusion

And there we go! All is dandy and rendering frames out to a server is a cinch. Now you're ready to start producing your very own WebGL-powered videos! To turn the frame sequences into video files, either use the sequence as a source in Adobe Media Encoder or use a command-line tool like ffmpeg or avconv: avconv -r 30 -i %05d.png -y output.webm.

To sum up, you now have a simple solution for capturing video from a WebGL app. In our solution, the browser renders out frames and sends them over to a server that writes the frames to disk. Finally, you use a media encoder application to turn the frame sequence into a video file. Easy as pie! Thanks for reading, hope you have a great time making your very own WebGL videos!

Addendum

After posting this on Twitter, I got a bunch of links to other great libs / approaches to do WebGL / Canvas frame capture. Here's a quick recap of them: RenderFlies by @BlurSpline - it's a similar approach to the one above but also calls ffmpeg from inside the server to do video encoding. Then there's this snippet by Steven Wittens - instead of requiring a server for writing out the frames, it uses the FileSystem API and writes them to disk straight from the browser.

And finally, CCapture.js by Jaume Sánchez Elias. CCapture.js hooks up to requestAnimationFrame, Date.now and setTimeout to make fixed timing signals for a WebGL / Canvas animation then captures the frames from the canvas and gives you a nice WebM video file when you're done. And it all happens in the browser, which is awesome! No need to fiddle around with a server.

Thanks for the links and keep 'em coming!
