art with code

2010-04-30

Drawing with writing


The four faces on the left are made up of Hanzi/Kanji radicals, the middle face is hiragana, the right-side faces are Hangeul, Latin, and Greek. The face left of the bunny and the bunny itself are Cyrillic.

One thing that's been interesting me during the writing system study is how writing systems affect the drawings of the people who use them. I mean, if you're going to draw something, you're going to use the strokes and patterns you already know. And the patterns you know best are the ones you use every day, so the writing system should have an impact on the drawings. There seems to be a feedback effect at work here: the things you draw turn into writing and the writing guides your drawing.

To explain, drawing and writing are both ways to communicate information visually. The old writing systems were in fact formalized alphabets of simplified drawings (or not-so-simplified drawings in the case of Egyptian hieroglyphs). And these drawings were based on the natural surroundings of the time and place the writing system was made, hence you have cobras, pintail ducks, flamingos, jackals, crocodiles and hippos in the hieroglyphs. Had it been a Northern European writing system, it would've had wolves, elks and bears instead of the Nile fauna.

One example of writing systems affecting drawing that's easy to reason about is emoticons. You mostly make emoticons that are easy to type and don't get mangled in transmission. With a Latin keyboard layout you get emoticons made of Latin characters and punctuation. With a Korean keyboard, you get Korean emoticons. There's a feedback effect here as well: emoticons create an alphabet of emotions that you then use in your drawings as a shorthand for the emotions in question. And when you draw an emotion without an emoticon, someone might figure out a way to simplify that drawing into an emoticon, which is then added to the alphabet and used by people drawing emotions...

And then you have writing systems designed for drawing. Perhaps I could segue from drawing writing systems to learning circuit analysis.

To take this drawing with writing thing further, Islamic calligraphy is one famous example:

Image courtesy of Wikipedia.

And the Egyptian Book of the Dead... those big guys there are more like 180pt calligraphy than drawings. Even their sculptures veer towards being big 3D letters carved from stone.


Images courtesy of Wikipedia. Click to see the pages.

That hand-wavy movie GUI



Oblong's G-speak prototype (G stands for Gesture, as far as I can tell). They consulted on the Minority Report movie, which then inspired them to build an actual prototype. (Via Cocktail Party Physics)

Who do you sell that to? Museums? Universities? Put it in lecture halls, have the professors wave through their slide decks? Automotive design houses might buy a couple to augment their VR caves. Air traffic controllers... kind of, except that it needs to be either secondary or a proven stable system.

It is the new technology dilemma: no current applications, new hardware, no exact target market. (Or very entrenched "target markets" that want proven products which have jumped through all the regulatory hoops and are backwards compatible and have an excellent sales force to wine and dine the people holding the strings of the corporate purse.)

I suppose the largest market would be semi-wealthy individuals. Strap it onto a 50" HDTV, have them wow visitors with awesome gesture-driven slideshows of holiday pics and with the way you can change channels and surf the web from your couch, but without a remote or a keyboard. Worked for the iPad.

You can't replace the computer desktop with it, because the desktop has infinite inertia. And you need to rewrite all the apps to fit that thing. It sounds like they need a carbon copy of the iPad strategy: have an App Store and integrate media stores and the web, require creation of a Store account to use it, have it be a not-computer (to not set high expectations and to avoid having to compete with the strong competitors), make it look cool and different (again to avoid comparisons), do the cool things well and basic things passably, sell an upgrade a year later ("It fixes all your problems!").

Extract money from people entering the platform, extract money from people on the platform, extract money from people upgrading the platform, extract money from people who want to sell things to people on the platform. Fight very very hard to maintain your grip on the money pipes.

Oh, also, I drew a new picture.

2010-04-28

Early (and recent) computer art

VISUAL AESTHETICS IN EARLY COMPUTING (1950-80) (via Trivium)

It's the demoscene, 50s-style!

Or maybe something akin to modern dance with a webcam-operated CFD backdrop.

Speaking of the demoscene, here are a couple recent productions:

Fairlight Cncd - Agenda Circling Forth. Millions of particles. Nice forest scene.



Farbrausch - fr-043: rove. Good ambience.



United Force & Digital Dynamite - Wir sind Einstein. See also The Golden Path for more trippiness.



I do wonder if we'll ever have user interfaces with those kinds of production values. The recent trend has been to lock the GUI down so hard that you can't even change the theme; sometimes you can't even change the colors (OS X, I'm looking at you). On the other hand, web browsers seem to be picking up some easy themeability with those background-image themes of Chrome and Firefox. Squeeze creativity out from one place and it goes to another.

But still. There aren't many GUIs that look nice and fun. Well, except in games. In games they're the selling point. The fancier and better your game GUI looks (i.e. the thing we call "graphics"), the better your game will sell. It's all about the experience, the fancy wrapping around the game logic crack cocaine. Sure, you want to _play_ the game, but having the process of doing that look and feel awesome makes the experience so much better.

2010-04-22

Phonetic writing systems

From the 26 writing systems I've drawn thus far, some patterns emerge. There are consonant-based systems, consonant-vowel systems and syllabic systems of different kinds. These are all phonetic systems; I haven't gotten to the logographic ones yet.


Image courtesy of Wikipedia

Consonant-based systems (abjads) typically encode around 20 consonants. Examples of consonant-based systems are Phoenician, Hebrew and Arabic. Some have optional diacritics for vowels (mostly used in educational texts). They are prevalent in the Middle East, and apparently derive from the phonetic part of Egyptian hieroglyphs (also a consonant-based system) through Phoenician. The hieroglyphs were written both left-to-right and right-to-left, the characters looking towards the start of the line. And for one reason or another, the derived writing systems stuck to right-to-left.

Consonant-vowel systems (alphabets) have around 20 consonants and 5-10 vowels, each having a separate character. The Greeks took the Phoenician system, added vowels, and flipped it around to left-to-right, mirroring the letters in the process. From that you get the Greek-derived alphabets like Etruscan (RTL) -> Latin (LTR), Cyrillic and Coptic. A strange feature in these Greek-derived systems is the existence of lowercase letters. The Mongolian script is also a consonant-vowel system, but it derives from the Uyghur consonant-based system.

The Korean Hangeul is sort of a consonant-vowel system, but it's written in a syllabic fashion: take the letters in a syllable, cram them inside a box, write it down. But a syllable has at least two letters and the second letter is always a vowel. There's a no-sound first letter for doing stand-alone vowels, but no way to do a stand-alone consonant.

Alphasyllabic systems (abugidas) encode consonant-vowel-pairs. Devanagari and Ethiopian Ge'ez are examples of this. They both have a base character for each consonant and a set of diacritics (more like ligatures) to signify the vowel or the lack of one. Devanagari also has separate characters for stand-alone vowels, and some diacritics to change the pronunciation of consonants.

The syllabic Japanese kana system encodes consonant-vowel-pairs, stand-alone vowels and a stand-alone 'n'. The main difference between it and the alphasyllabics is that it uses a separate character for each consonant-vowel-pair instead of the base+modifier-system. The kana system has diacritics for modifying the voicing of the consonants but there's no diacritic for dropping vowels. There are two kana systems, hiragana and katakana. Katakana is an angular script used for transliterating foreign words (by pronunciation) whereas hiragana is a more rounded script used for everything else. They have a slight visual similarity, think of Cyrillic uppercase vs. cursive.

Speaking of cursive, the cursive scripts (e.g. Arabic and Mongolian) have three or four different letter forms, depending on whether it's the initial letter of a word, a middle letter, the last letter, or a stand-alone letter. If you know cursive handwriting with the Latin alphabet, you pretty much know how that works. Think of the lowercase 'e': on its own it looks like the typed 'e', at the beginning of a word it's an 'e' with a low tail, in the middle of a word it's a low loop, and at the end of a word it's a low loop with an upcurved tail.

2010-04-18

Eyjafjallajökull

Nice photos of Eyjafjallajökull (eyja-fjalla-jökull, island-mountain-glacier) erupting in a storm of volcanic lightning.

Nordic Volcanological Center brings us this face-detector-tripping radar image:



For more pictures of the eruption:
Volcanic lightning photos by Marco Fulle
NASA feature on Eyjafjallajökull
Boston.com pictorial
Yahoo! editor's picks gallery on Flickr

A quick guide to volcanoes (I read a geology intro book earlier this year! Go go superficial knowledge!):

Basaltic iron-rich lava flows smoothly and makes pahoehoe and aa flows and pretty pictures. Basaltic (a.k.a. mafic) and ultramafic lavas come from the mantle, and so tend to happen at hot spots (plume of hot rock rising through the mantle and punching through the crust) and divergence zones (two plates drifting apart, the gap getting filled by outflow from the mantle).

Felsic silica-rich lava doesn't really flow, so it tends to clog up magma vents and trap gases. Clogging up a mountaintop tends to build pressure under it. Once the pressure builds up enough, the mountaintop blows up in a massive explosion and releases roiling pyroclastic clouds. Felsic lava comes from subduction zones where the seafloor is pushing under a continental plate. At a depth of forty km or so, the water in the seafloor rocks starts melting the surrounding mantle, and the resulting magma rises up through the continental crust, forming a volcanic arc.

See ArsTechnica's article for an explanation of why the Icelandic divergence-zone volcano (the place sits on top of the Atlantic mid-ocean ridge) still blew up in a felsic fashion.

See the Eruptions blog for more info.

2010-04-17

Split the blog

Drawings went there, other stuff stayed here.

2010-04-13

Nature by numbers

This is nice and awesome: minimum-information encoding of geometry. (Via Cocktail Party Physics and Gurney Journey)

Also see this paper.

2010-04-12

Writing systems, part 3

I drew Glagolitic, Georgian, Armenian, Gothic, Pahlavi and Manchu since my last update.

Glagolitic is an early Slavonic alphabet, and while you can see some Greek letters there, the letterforms are quite different. Cyrillic resembles a more Greek Glagolitic, in a sense.

Georgian and Armenian are somewhat similar. Both have funky ┑ serifs (like the tail in Cyrillic sh and sha) in their letters and both have a quite distinct look to them. You have some Greek/Latin shapes there too, like the phi, h, b and S. They're both said to be the legacy of Saint Mesrob and date from around the early-to-mid 400s.

Gothic is basically Greek letters with some oddballs and two Futhark runes.

Pahlavi is an old Iranian abjad derived from Aramaic (and thus has the Phoenician letter order ABGDHWZ; the Greeks added vowels there and theirs goes ABGDEZH...)

Manchu script is a derivative of the Mongolian script, which is a derivative of the Uyghur alphabet, which is a derivative of the Sogdian alphabet, which is a derivative of Syriac, which is a derivative of Aramaic, which is a derivative of Phoenician, which is a derivative of Proto-Canaanite, which is a derivative of the phonetic set of Egyptian hieroglyphs, which look like animals and people and are a total pain in the ass to draw. Manchu is written top-down, columns advance left-to-right.

I still have Egyptian hieroglyphs and Hanzi to go. And they are a bit daunting. Maybe I should add some American writing systems to the mix so that I get to draw grimacing dudes and jaguar heads...

2010-04-11

Boot comic

THIS IS HOW COMPUTERS WORK! TRUE STORY!




2010-04-09

Intel 48-core research chip

Intel press release, technical whitepapers.

48 cores at 1GHz, peak performance around... 48 billion instructions per second. If it has SSE and mul-add, then around 190 double GFLOPS, 380 single GFLOPS? Without mul-add half that, and without SSE 48 double & single GFLOPS. Power draw is something like the top Core i7s (@ 100 GFLOPS), so it should be anywhere between 0.25-4 times better (or worse) at raw number crunching. On parallel CPU-like scalar workloads, possibly 4 times the peak performance of a Core i7 980X (depending on instruction throughput). Probably pretty bad single-threaded performance (think of an underclocked Atom).
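To make the back-of-the-envelope arithmetic explicit, here's a tiny Python sketch. The SSE width and mul-add assumptions are mine, not from the press release:

```python
# Peak-throughput guesstimate for a 48-core, 1 GHz chip.
# Assumptions: 128-bit SSE (2 doubles / 4 singles per instruction),
# and a multiply-add counting as 2 flops per element.
cores = 48
clock_ghz = 1.0

def peak_gflops(simd_width, flops_per_element=2):
    return cores * clock_ghz * simd_width * flops_per_element

print(peak_gflops(2))      # doubles, SSE + mul-add: ~192 GFLOPS
print(peak_gflops(4))      # singles, SSE + mul-add: ~384 GFLOPS
print(peak_gflops(2, 1))   # doubles, SSE, no mul-add: ~96 GFLOPS
print(peak_gflops(1, 1))   # scalar, no mul-add: 48 GFLOPS
```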

Nitpicking the press release:
Application software can use this network to quickly pass information directly between cooperating cores in a matter of a few microseconds

A few microseconds? One microsecond is 1000 cycles at 1GHz. If few means 5, that'd be 5000 cycles... I guess they really mean nanoseconds which'd give L2-like latency.

Now, this is a research chip, but let me think a bit about the commercial implementation.

What's the memory bus going to be like? Core i7 has three memory channels feeding it, and four times the computational power needs four times the bandwidth. Twelve DDR3 channels or fewer channels of something more expensive like GDDR5? The current research system has four DDR3 channels, so I guess it's not much faster than a Core i7.

And what's the price? Compared to GPUs, it'd be in the $100 bracket. Compared to CPUs, it'd be in the $4,000 bracket. From the pictures it looks like a pretty big chip (though it's at 45nm), so maybe the price reflects that? As it is, you could probably sell it at somewhere between $300 and $5000, depending on whether you're targeting heterogeneous supercomputers ($300, competing against GPUs) or x86 loads ($5000, competing against low-end Xeons and Opterons).

If it came with one or two cores with fast single-threaded performance, it could go places. As it is, it's in a bit of a bad place: worse single-threaded performance than cheap CPUs, lacking graphics drivers (?) to serve as a GPU replacement. So, x86 scientific computing and webservers as the first target?

Plus if you write code that runs fast on this chip, it's going to run fast on a GPU as well, so it's kinda hard to figure how this will pan out. There is no legacy code for 50-core+ systems (apart from real-time graphics software), so the competitive advantage of x86 ISA might be less important. Who knows, I don't.

2010-04-07

Algebraic transformations

Algebra is the study of structure and the transformations you can do to a computation while preserving its result. If you can prove that an operation fulfills an algebraic axiom, you can use that axiom to transform a tree of such operations.

With binary operators, the basic axioms are

No external side-effects: a x b = a x b (the operation is a pure function and always returns the same result for the same inputs)
Closure: the operation preserves the type; t -> t -> t
Associativity: the nesting doesn't matter; a x (b x c) = (a x b) x c
Commutativity: the order of parameters doesn't matter; a x b = b x a

A neutral element is a value that does nothing. When one of the parameters to the operator is a neutral element, the operator returns the other parameter.
Left neutral element: when applied to the left side of the operator; NL x a = a
Right neutral element: when applied to the right side of the operator; a x NR = a
Neutral element: left and right neutral elements are the same; NL = NR

An inverse is a way to make the operator return the neutral element for any value.
Left inverse: inverseL(a) x a = N
Right inverse: a x inverseR(a) = N
Inverse: inverseL = inverseR

Then there are two-operator axioms like

Left-distributivity: a x (b + c) = (a x b) + (a x c)
Right-distributivity: (b + c) x a = (b x a) + (c x a)

To use the axioms in transforming a tree of operations (say, the AST used by your calculator program), create a list of axioms matching the nodes in the tree. The new trees generated by applying the axioms in the list are the neighbor nodes of the original tree. Then you toss in a cost function and a search algorithm and emerge with a program optimizer.
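Here's a minimal sketch of that idea in Python. The axiom list is just commutativity and associativity of '+', the cost function is a toy that rewards folding two constants under one node, and the search is a plain breadth-first walk over the rewrite graph; the representation and names are made up for illustration.

```python
from collections import deque

# Expressions are tuples ('+', left, right), numbers, or variable-name strings.

def neighbors(expr):
    """Trees reachable by one application of commutativity or associativity of '+'."""
    if not isinstance(expr, tuple):
        return
    op, a, b = expr
    if op == '+':
        yield (op, b, a)                              # a + b = b + a
        if isinstance(b, tuple) and b[0] == '+':
            yield (op, (op, a, b[1]), b[2])           # a + (b + c) = (a + b) + c
        if isinstance(a, tuple) and a[0] == '+':
            yield (op, a[1], (op, a[2], b))           # (a + b) + c = a + (b + c)
    for r in neighbors(a):                            # rewrite inside subtrees too
        yield (op, r, b)
    for r in neighbors(b):
        yield (op, a, r)

def cost(expr):
    """Toy cost: a node whose children are both constants can be folded, so it's free."""
    if not isinstance(expr, tuple):
        return 0
    _, a, b = expr
    folded = isinstance(a, (int, float)) and isinstance(b, (int, float))
    return (0 if folded else 1) + cost(a) + cost(b)

def optimize(expr, max_trees=1000):
    """Breadth-first search over the rewrite graph; return the cheapest tree seen."""
    seen, queue, best = {expr}, deque([expr]), expr
    while queue and len(seen) < max_trees:
        current = queue.popleft()
        if cost(current) < cost(best):
            best = current
        for n in neighbors(current):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return best

print(optimize(('+', ('+', 2, 'x'), 3)))   # the constants end up under one node, e.g. ('+', ('+', 2, 3), 'x')
```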

Hand-waving


The nice thing about the axioms is that they're kind of stackable. For example, if you know that your operator x forms an abelian group (that is, it has closure, associativity, an inverse, a neutral element and commutativity), then an operator o, defined as a o b = a x b x C where C is a constant, also forms an abelian group (ditto for permutations of the order of operations, thanks to commutativity).

If you create a list of axiom-preserving tree transformations, I think you could create a search graph to figure out the axioms an operator fulfills. Take the AST of the operator, apply transformations until you find an already proven operator. Then automatically add the axioms to the operator's documentation and generate tests for them.
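As a sketch of the "generate tests" part, here's what a crude randomized axiom checker might look like in Python. This is empirical testing rather than proof, a property-testing library would do it more rigorously, and all the names are made up:

```python
import random

def check_axioms(op, gen, neutral=None, trials=1000):
    """Randomized checks for commutativity, associativity and a claimed neutral element."""
    results = {"commutative": True, "associative": True,
               "neutral": neutral is not None}
    for _ in range(trials):
        a, b, c = gen(), gen(), gen()
        if op(a, b) != op(b, a):
            results["commutative"] = False
        if op(a, op(b, c)) != op(op(a, b), c):
            results["associative"] = False
        if neutral is not None and (op(a, neutral) != a or op(neutral, a) != a):
            results["neutral"] = False
    return results

rand_int = lambda: random.randint(-100, 100)
print(check_axioms(lambda a, b: a + b, rand_int, neutral=0))
# addition passes all three
print(check_axioms(lambda a, b: a - b, rand_int, neutral=0))
# subtraction fails commutativity and associativity; 0 is only a right neutral
```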

Digression


Maybe you could generate time-complexity figures for a function too. Determine the maximum number of times it calls other functions with respect to its parameters, and multiply the other functions' complexities by that. For space-complexity, multiply the space-complexity of the other functions by the number of times they're called and add the current function's number of allocations.

For an empirical version of that, the IDE could call each pure function with different-size inputs and measure the CPU time and memory used. Then plot a nice graph, do some curve-fitting, embed into documentation. If you do this profiling live in a background process, you could have performance metrics that update while you code. A real-time view of the efficiency of your code...
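A rough sketch of that empirical version: time a pure function at growing input sizes and fit a power law in log-log space. numpy is assumed to be available; everything else here is illustrative.

```python
import time
import numpy as np

def measure(fn, sizes, make_input):
    """Run fn once per input size and record wall-clock time."""
    times = []
    for n in sizes:
        data = make_input(n)
        start = time.perf_counter()
        fn(data)
        times.append(time.perf_counter() - start)
    return np.array(times)

sizes = np.array([1000, 2000, 4000, 8000, 16000, 32000, 64000])
times = measure(sorted, sizes, lambda n: list(np.random.rand(n)))

# Fit log(t) = k*log(n) + log(c); k is the estimated exponent of the complexity.
k, log_c = np.polyfit(np.log(sizes), np.log(times), 1)
print(f"runtime ~ {np.exp(log_c):.2e} * n^{k:.2f}")
```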

Writing systems 2

Drew Egyptian Demotic and Hieratic scripts, and the hieroglyphs from which the Hieratic glyphs were derived. Also drew Coptic and Ogham. Ogham is nice and simple. Hieroglyphs are very slow to draw because they are complex and require accuracy (and I don't know the stroke order). Hieratic script is pretty. Coptic is derived from Greek with some Hieratic glyphs added to the mix. Coptic psi, phi and alpha look nice in the floral font.

2010-04-06

Simple math

What is math? My current thinking is that math is a system that preserves some property. This property might be called "truth", or "internal consistency", or something like that. The big idea is that if you have a system like that and if you can describe a situation in terms of that system, you can use the system to manipulate your description while preserving the system's idea of truth.

That's a bit nebulous a description, so here's an example: I have two apples and give you one. How many apples do I have now? Let's describe this situation in terms of integer addition. Integer addition is a mathematical system where you can add up integers in a way that preserves a "truth" about it. Ok, we have a binary operator "give" and we'll map it to "minus", or "add the add-inverse of". Then map apples to integers by leaving out the apple-quantifier. What we get is "add to two the add-inverse of one", i.e. 2 + -1, or, in shorthand 2 - 1. Following the rules of addition, the result is one. Then we map the result back to apples by adding the apple-quantifier and get one apple.

Right. What was the truth preserved in the above? Addition is closed over integers, so "integerness" was preserved in the addition. "Additionness" was also preserved by doing the addition correctly. And it maps to reality pretty well too, if in a limited manner. To demonstrate some limitations in the above mapping, consider these: what if I am also you, what if I have zero apples, what if giving does not map to inverse adding, what if the apples-to-integers conversion isn't that simple (e.g. suppose an "apple" means "150g of apple flesh sans peel, stem, seeds and worms"). But if you keep the limitations in mind, it works well enough for everyday use.

The real power comes from mapping the preserved properties in a mathematical system to whatever you're interested in. If you deem that "integerness" and "additionness" do generally map to reality well enough for you, and you have good enough mappings to integers from the things you're interested in, you can bring a whole lot of different things to the integer addition system and do your manipulations inside it. Abstracting from

"I have two apples and give you one apple" => "I have (two + -one) apples"

to

"A has x Ts and gives B y Ts" => "A has (x + -y) Ts".

As most mappings between systems are lossy, you often have problems and limitations caused by that. Take something simple like adding speeds together. The problem there is that "additionness" doesn't always map to reality. Instead of "x + y", reality might follow "(x+y) / (1+xy/c^2)". Or go all general relativity on you. Further, with velocities "integerness" doesn't hold either and you need to start adding spacetime four-velocities instead (why yes, I did just look that up on Wikipedia).
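To make that concrete, a few lines of Python comparing plain addition with the relativistic formula for collinear velocities (the formula is the standard special-relativity one; the function names are mine):

```python
c = 299_792_458.0  # speed of light, m/s

def add_naive(u, v):
    return u + v

def add_relativistic(u, v):
    return (u + v) / (1 + u * v / c**2)

u = v = 0.75 * c
print(add_naive(u, v) / c)          # 1.5  -- "additionness" overshoots the speed of light
print(add_relativistic(u, v) / c)   # 0.96 -- reality's version stays below c
```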

And even if you do get a reasonable reality-to-property mapping going, the element mapping might be wonky. Mapping apple mass to integers with a ±2 kg mapping error (e.g. from inaccurate scales) gets you decently accurate numbers if you're weighing apples by the ton. But if you're selling a single apple made out of gold, the error might mean the difference between you paying out 100k to get someone to accept -1.8 kg of gold and the buyer paying 100k too much.

Every operation you do on the values is also applied to the error distribution. Perhaps a better way to handle this would be to treat numbers as histograms: do a calibration run on each value to get its histogram, then when measuring an object, take the histogram for the measured value. Adding two histograms together would mean taking their Cartesian product and adding up each pair. Or - as the histograms would get huge after a couple of operations - approximate them with distribution functions with the measured value as the median of the distribution.
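Here's a tiny sketch of the Cartesian-product idea in Python (the representation and the numbers are made up for illustration):

```python
from collections import defaultdict
from itertools import product

def add_histograms(h1, h2):
    """h1, h2 map value -> probability; returns the histogram of the sum."""
    out = defaultdict(float)
    for (v1, p1), (v2, p2) in product(h1.items(), h2.items()):
        out[v1 + v2] += p1 * p2
    return dict(out)

# One apple weighed on a shaky scale: 150 g, give or take a bin.
apple = {140: 0.25, 150: 0.5, 160: 0.25}
print(add_histograms(apple, apple))
# {280: 0.0625, 290: 0.25, 300: 0.375, 310: 0.25, 320: 0.0625}
```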

Anyhow! Systems that preserve properties. Math.

2010-04-04

Writing systems

Over the past few days my background process has been drawing writing systems into my sketchbook from Wikipedia. Got Phoenician, Aramaic, Greek, Etruscan, Latin, Futharks, Hebrew, Arabic, Urdu, Old South Arabian, Ge'ez, Japanese kana, Hangeul, Brahmi, Devanagari, Gujarati. Planning to do the uni/bi/triliteral Egyptian hieroglyphs & Demotic & Hieratic, Gothic, Syriac, Meroitic, Ogham, Manchu, Georgian, Armenian, Glagolitic. If I go nuts in the head, maybe 1000 hanzi/simplified & Egyptian hieroglyphs.
