art with code

2015-12-09

Flat cameras

Well, this is really cool. It's a flat camera. A camera sensor with a special mask in front of it that makes it possible to computationally deconvolve the sensor data into an image. Imagine credit-card-slim mobiles and mobile cameras with a very large image sensor for shooting in low light. Or front-facing cameras embedded into the display.

In a similar vein, this Light L16 thing looks pretty nifty: jam 16 small cameras into a smartphone and do computer vision magic to treat the ensemble as a single big sensor.

2015-12-08

Display tech

I was playing with a Kindle the other day. It's got an e-ink display that's quite different from the usual mobile displays. For one, it works as a reflective display, much like the pages of a paper book. That means that you can read it without a backlight, unlike the LCD display you may have in your phone. The second big difference is that it's black & white only. Third, the screen can stay on and display an image without using any power. And finally, the refresh rate is very slow. So what's going on here? Why does it work that way?

E-ink displays have a tiny capsule for each pixel. This capsule is filled with black ink particles and white ink particles. The different colors have different charges. There's a charged plate underneath the capsule that can change the sign of its charge so as to attract one color and repel the other.

Let's say that you have black particles with a positive charge and white particles with a negative charge. When you set the charged plate to a positive charge, the white particles fly towards the charged plate at the bottom of the capsule and the black particles are pushed to the top of the capsule. As you're looking at the display from above, you see that the pixel turns black.

When you flip the charge, the black particles are drawn down to the bottom and the white particles are pushed to the top. The pixel now turns white.

Because the pixels use ink particles, the way they interact with light is much like you'd see with a regular sheet of paper. Light hits the ink at the top of the screen and bounces back towards your eyes. The black ink particles absorb more of the light than the white ones, and you see the resulting contrast. Nice and simple. But kinda slow, as the charged plate needs to flip its charge and the physical ink particles need to swap places for the pixel to change color.

By comparison, LCDs are quite a different beast. An LCD pixel is made out of three layers of material that manipulate polarized light. Polarized light is light where all the waves oscillate in the same plane. In unpolarized light, the waves oscillate in every which direction. To make polarized light, you either need a light source that produces polarized light or a filter that blocks the waves oscillating in the wrong direction.

If you put two polarizing filters on top of each other, they do a bit of a trick. If the filters are pointing in the same direction, they let light through. As you rotate them relative to each other, they let less and less light through, and once they're rotated to a 90 degree angle, they don't let any light through at all. What if you could change this rotation angle with electricity?
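
The falloff follows Malus's law: the transmitted intensity drops off as the cosine of the angle between the filters, squared. A quick sketch of the numbers (this models light that's already polarized hitting the second filter, ignoring the loss in the first one):

// Malus's law: intensity of polarized light after a filter at angle theta.
function transmitted(intensity, thetaDegrees) {
  var theta = thetaDegrees * Math.PI / 180;
  return intensity * Math.pow(Math.cos(theta), 2);
}

transmitted(1, 0);  // 1, aligned filters let everything through
transmitted(1, 45); // 0.5, half the light gets through
transmitted(1, 90); // ~0, crossed filters block it all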

That's pretty much what an LCD pixel does. It's got a liquid crystal layer that rotates the polarization of light depending on how much electricity you put through it. To complete the trick, it has two polarizing filters, one below and one above the liquid crystal layer. The filters are at a 90 degree angle to each other, so when the liquid crystal layer is in the non-rotating state, the pixel doesn't let any light through and appears dark. When the liquid crystal is set to rotate incoming light by 90 degrees, light passes through the pixel, making it look bright.

Usually an LCD screen has a bright light behind it. The LCD in front blocks some of the light, creating the image that you see. Because the filters in the LCD are not perfect, some of the light hitting the dark pixels seeps through, making the darks brighter than ideal. Additionally, the black pixels are lit from behind at all times at the same intensity as the brightest white pixels, which makes LCDs relatively power-hungry. Because the liquid crystals can twist rapidly, LCDs have good refresh rates and work for displaying moving graphics.

OLEDs, then, are tiny tiny lights printed on the screen surface. The brightness of each lamp is controlled by the amount of electricity fed to it. Because the pixels are individual lights, OLEDs are more power-efficient than LCDs: a black pixel is simply a lamp that's turned off and uses no energy at all, which also gives OLEDs better black levels than LCDs. But because each pixel is a tiny lamp surrounded by circuitry, the total brightness achievable by an OLED display is worse than that of an LCD, which can put a large, powerful light behind the entire screen.

Ink, filters and lamps. That's what powers your mobile displays. Could you have something else? Maybe tiny waveguides that absorb a certain color of light - like the ones on butterfly wings - and have a controllable level of absorption? Something DLP-like where each display pixel is a mirror that either reflects incoming light back towards the viewing surface or away from it?

2015-12-04

HTTPS and HTTP2 on Apache2 with Let's Encrypt

This morning I figured I'd set up HTTPS and HTTP2 on my web server. It was pretty easy, too. And man, HTTP2 is fast, especially on silly sites like mine that have a large number of small images on the page. Good riddance to sprites.

Here's how I set up my Ubuntu Apache2 web server for HTTPS and HTTP2:

For starters, let's get an HTTPS cert. You can get one for free using Let's Encrypt, a non-profit certificate authority from the US. It has an automagical command line tool that creates certs for you and registers them with the CA. It can even automate installation for Apache. Sadly, my Apache config didn't work with the automatic tool, so I had to do it manually. Which wasn't too bad either.

First, I shut down the Apache web server with sudo service apache2 stop. Then, I used the Let's Encrypt client to fetch the cert (this needs to be run on the server pointed to by the domain name):

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto certonly --standalone -d MY.DOMAIN.NAME

If everything goes well, you should now have the certificate files in /etc/letsencrypt/live/MY.DOMAIN.NAME/. To get HTTPS running, I enabled the SSL module (sudo a2enmod ssl, if it isn't on already) and edited my Apache2 configuration to use it for my domain.

<VirtualHost *:443>
  ServerAlias MY.DOMAIN.NAME

  SSLEngine on
  SSLCertificateFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/cert.pem"
  SSLCertificateKeyFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/privkey.pem"
  SSLCertificateChainFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/chain.pem"

...

Ok, HTTPS working. Let's do HTTP2 now. If you haven't yet, you need to upgrade your Apache to version 2.4.17 or newer to get HTTP2 support. Older versions of Ubuntu don't have Apache 2.4.17, so you may need to add a custom PPA to your software sources with sudo add-apt-repository ppa:ondrej/apache2 or such.
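
For reference, the whole upgrade boils down to something like this (the exact packages may differ on your setup):

sudo add-apt-repository ppa:ondrej/apache2
sudo apt-get update
sudo apt-get install apache2
apache2 -v   # should report 2.4.17 or newer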

After upgrading Apache, turn on the HTTP2 module with sudo a2enmod http2. Almost there! The last step is to turn on HTTP2 on our HTTPS virtual host by adding h2 to the Protocols directive. I also turned on the H2Direct directive, as the description said that it'll spare the server from upgrading an HTTP/1.1 connection if the client starts out talking HTTP2.

<VirtualHost *:443>
  ServerAlias MY.DOMAIN.NAME

  SSLEngine on
  SSLCertificateFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/cert.pem"
  SSLCertificateKeyFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/privkey.pem"
  SSLCertificateChainFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/chain.pem"

  Protocols h2 http/1.1
  H2Direct on

...

That's it! Now turn on Apache again with sudo service apache2 start and you should have HTTP2 running. You can check for it in Chrome DevTools by going to the Network pane, right-clicking on the columns header and turning on the Protocol column.
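
If your curl is built with HTTP2 support, you can also check from the command line; the first line of the response should say HTTP/2:

curl -sI --http2 https://MY.DOMAIN.NAME | head -1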

Thanks for reading! Hope this helps you get your site up and running on HTTP2.

2015-10-28

Mouse event coordinates on CSS transformed elements

How to turn mouse event coordinates into element-relative coordinates when the element has CSS transforms applied to it? Conceptually it's simple. You need to get the layerX and layerY of the mouse event, then transform those with the CSS transforms. The implementation is a bit tricky.

The following snippet is what I'm using with Three.js to convert renderer.domElement click coordinates to a mouse3D vector used for picking. If you just need the pixel x/y coords on the element, skip the mouse3D part.

// First get the computed transform and transform-origin of the event target.
var style = getComputedStyle(ev.target);
var elementTransform = style.getPropertyValue('transform');
var elementTransformOrigin = style.getPropertyValue('transform-origin');

// Convert them into Three.js matrices
var xyz = elementTransformOrigin.replace(/px/g, '').split(" ");
xyz[0] = parseFloat(xyz[0]);
xyz[1] = parseFloat(xyz[1]);
xyz[2] = parseFloat(xyz[2] || 0);

var mat = new THREE.Matrix4();
mat.identity();
if (/^matrix\(/.test(elementTransform)) {
  // 2D transform: matrix(a, b, c, d, tx, ty), mapped into the
  // column-major Matrix4 elements.
  var elems = elementTransform.replace(/^matrix\(|\)$/g, '').split(/,\s*|\s+/);
  mat.elements[0] = parseFloat(elems[0]);
  mat.elements[1] = parseFloat(elems[1]);
  mat.elements[4] = parseFloat(elems[2]);
  mat.elements[5] = parseFloat(elems[3]);
  mat.elements[12] = parseFloat(elems[4]);
  mat.elements[13] = parseFloat(elems[5]);
} else if (/^matrix3d\(/i.test(elementTransform)) {
  // 3D transform: matrix3d(...) lists all 16 elements in column-major order.
  var elems = elementTransform.replace(/^matrix3d\(|\)$/ig, '').split(/,\s*|\s+/);
  for (var i=0; i<16; i++) {
    mat.elements[i] = parseFloat(elems[i]);
  }
}

// Apply the transform-origin to the transform:
// mat2 = translate(origin) * transform * translate(-origin).
var mat2 = new THREE.Matrix4();
mat2.makeTranslation(xyz[0], xyz[1], xyz[2]);
mat2.multiply(mat);
mat.makeTranslation(-xyz[0], -xyz[1], -xyz[2]); // mat is reused as scratch here
mat2.multiply(mat);

// Multiply the event layer coordinates with the transformation matrix.
var vec = new THREE.Vector3(ev.layerX, ev.layerY, 0);
vec.applyMatrix4(mat2);

// Yay, now vec.x and vec.y are in element coordinate system.


// Optional: get the untransformed width and height of the element and
// divide the mouse coords with those to get normalized coordinates.

var width = parseFloat(style.getPropertyValue('width'));
var height = parseFloat(style.getPropertyValue('height'));

var mouse3D = new THREE.Vector3(
  ( vec.x / width ) * 2 - 1,
  -( vec.y / height ) * 2 + 1,
  0.5
);
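
For completeness, here's roughly how the snippet would get wired into a click handler; pick3D is a hypothetical wrapper function around the code above:

renderer.domElement.addEventListener('click', function(ev) {
  var mouse3D = pick3D(ev); // the snippet above, wrapped into a function
  // mouse3D can then be fed to e.g. THREE.Raycaster for picking.
});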

There you go. A bit of a hassle, but tractable.

2015-02-26

Get 11 live wallpapers for the price of one! (And there's more on the way!)

Get the Opus Live Wallpaper Pack now for $0.99

These fully animated live wallpapers transport you into the strange dream-world of Opus and let you escape the daily humdrum of existence. Relax during a busy day by watching one of these ambient animations.

Available on Android today! Head on over to the Play Store and get the wallpaper now. Take advantage of the 15-minute money-back guarantee on the Play Store and use the wallpaper for a bit, risk-free! Opus Live Wallpaper Pack

Tell your friends about this amazing deal & share this post! Thanks :D

2015-02-20

New effects in Opus Live Wallpaper Pack

I added a bunch of new wallpaper effects to the Opus Live Wallpaper Pack. You can preview some of 'em on fhtr.org, have a look! If you've got a nice Android phone and want to make it nicer, get the Opus Wallpaper now for $0.99 from the Play Store ;)

2015-02-15

Bouncing Heart Live Wallpaper

Just released a new Android live wallpaper called "Bouncing Heart", since, well, it's got a heart that bounces. Very Valentine's day, no?

It's all one big fragment shader, ready to turn your phone into a wonderland of shiny pink hearts, etc. Give it a try, maybe it'll cheer you up. It's free!

If you've got the pro version of the Opus Live Wallpaper Pack, the Bouncing Heart is one of the wallpapers included in the pack. You totally want to buy it because it's awesome!

2015-01-18

Three notebook system - part 1 / 3

In the past few years, I've started to gravitate towards a work scheduling / external memory system to keep my projects rolling smoothly. What I've got going now is a three notebook system, which, as you might guess, revolves around three notebooks. Let me give you a brief overview of the system.

I use the three notebooks to record what I plan to do, what I should be doing next and what I have done. The notebooks operate on different timescales. The planning notebook is concerned with quarterly plans. It moves along at a very leisurely pace. The daily notebook sets goals for the day and the week. It's got roughly a week's worth of goals on one spread. The third notebook is the work log. It records what I've been doing, how long it took to do it, and what I learned in the process.

When I start my day, I take a quick look at the planning notebook to remind myself of my medium-to-long-term goals. Then I write the first few tidbits to the work log: simple stuff like "6:30 Woke up, breakfast, shower. 7:30 Start of day. 7:45 Made Opus SA icons in Photoshop. 8:00 -> Write daily goals [x] ...". At the start of the day, I write my goals for the day into the daily notebook. At the start of the week, I also set some higher-level goals for the week.

The level of detail in each of the notebooks is quite different. The planning notebook deals in high-level plans and their measurable results. In it, I write strategic goals with planned quarterly-level tactics on achieving those. The daily notebook has weekly goals that support the quarterly tactics and daily goals that deal with the minutiae of scheduling and achieving the weekly goals. The work log acts more as a short-term memory extension. I use it to plan my next action during the day, keep myself focused and maintain a sense of progress.

With the three notebooks I've got guidance on where I'm headed in the future, what I'm planning to do this week, what I'm going to do today and how that's working out so far. The big idea here is to try and align my short-term actions to my long-term objectives.

As I progress through time, I tweak the goals as the situation changes. Tweaking the goals in turn tweaks the daily goal planning. The goals for the previous days are not the goals for today. This flexibility gives me the ability to respond to changes rapidly without losing sight of the long-term goals.

In conclusion, the three notebooks keep me focused on what I'm doing now and how that's going to help me in the future. The notebooks act as goal-oriented external memories at different timescales. By keeping track of my use of time, they also give me a better sense for how long it takes to do things.

I'll take a closer look at each of the notebooks in part 2 and go through the practical experience in part 3. Thanks for reading! What kind of planning systems do you use to get your work done?

2015-01-13

Opus Live Wallpaper

I ported a bunch of the fhtr.org effects over to Android to use as a live wallpaper. The wallpaper is available on the Google Play Store as either a $0.99 version or an ad-funded version (hey, there's also a small fireworks wallpaper with a web preview). My total income from these over the last month has been a bit less than $5, so, well, it's been more educational than anything else :D

The effects look like the pictures below or the video above. I like having them as wallpapers; they give a nice feel to my phone. You probably need something like an Adreno 330 to run them well, so phones like the LG G2 and above.

Porting the shaders over from WebGL was easy for the most part. There were a whole lot of performance tweaks that I had to apply. One of the most important ones was rendering into an FBO at a lower resolution and then upscaling the FBO to screen res. With HiDPI displays, the difference between full res and scaled up is not very drastic.
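
Here's a rough WebGL-flavored sketch of the trick (the actual wallpaper runs on Android GLES; lowResFBO, lowResTexture and the draw helpers are made-up names):

var scale = 0.5; // half the res per axis = a quarter of the pixels to shade
var w = Math.floor(screenWidth * scale);
var h = Math.floor(screenHeight * scale);

// Render the expensive shader into a small offscreen FBO...
gl.bindFramebuffer(gl.FRAMEBUFFER, lowResFBO);
gl.viewport(0, 0, w, h);
drawWallpaperShader();

// ...then draw the FBO's texture to the screen as a fullscreen quad,
// letting linear texture filtering handle the upscale.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, screenWidth, screenHeight);
drawFullscreenQuad(lowResTexture);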

Another technique I used was rendering at a lower framerate. The problem there is that you want to keep the main UI interactive while you're rendering. If you do it naively with "render frame -> wait 2 frames -> render frame -> ...", and the "render frame" bit takes longer than 16ms, you'll cause the main UI to skip frames. Which is bad. The solution I arrived at was to render a part of the wallpaper frame during each of the main UI frames, then display the finished frame at the end: "render 1/3 of frame -> render 2/3 of frame -> render 3/3 of frame & display it -> ...". This way I can keep the main UI interactive while rendering the wallpaper frame.

But we're not out of the woods yet! Some wallpapers have a central part that is very heavy to render and edges that are cheap. If we split the frame into e.g. 3 contiguous slices, one of the slices would take the majority of the time and potentially cause the main UI to skip frames. So we need a split that distributes the work equally among the UI frames. To do the partial frame rendering, I slice the frame into 8 px vertical stripes. On each UI frame, I render the stripes belonging to that frame: stripe i belongs to the UI frame where i % numberOfUIFramesPerWallpaperFrame == frameNumber % numberOfUIFramesPerWallpaperFrame. As the UI frame number increases, the renderer moves across the wallpaper frame, drawing it in an interleaved fashion. This gives me a fairly even split of work per frame and a consistent frame rate for the UI.
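
A minimal sketch of that interleaved scheduler (JavaScript-flavored; drawStripe and displayFinishedFrame are made-up names, the real thing lives in the Android renderer):

var STRIPE_WIDTH = 8;           // stripe width in pixels
var UI_FRAMES_PER_WP_FRAME = 3; // wallpaper runs at a third of the UI rate

function renderWallpaperSlice(frameNumber, frameWidth) {
  var phase = frameNumber % UI_FRAMES_PER_WP_FRAME;
  var stripeCount = Math.ceil(frameWidth / STRIPE_WIDTH);
  // Every UI_FRAMES_PER_WP_FRAMEth stripe belongs to this UI frame, so
  // heavy and cheap regions get spread evenly across the UI frames.
  for (var i = phase; i < stripeCount; i += UI_FRAMES_PER_WP_FRAME) {
    drawStripe(i * STRIPE_WIDTH, STRIPE_WIDTH);
  }
  // The last phase completes the wallpaper frame, so display it.
  if (phase === UI_FRAMES_PER_WP_FRAME - 1) {
    displayFinishedFrame();
  }
}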

If you have a try at the wallpapers, lemme know how it goes. Would be super interested in hearing your thoughts.
