Monday, December 31, 2007

Try Git

I'm not cutting edge. I was a long time ago, but today I prefer mature stuff. That's why I cultivate the luxury of ignoring brand-new, cool software until it's not only cool but most of the wrinkles have been ironed out by those guys who like to bleed (it's not called "cutting edge" for nothing).

So when Subversion came out, I stayed with CVS until version 1.2. Then I installed it and liked it immediately. Now, some years later, even the Eclipse plugins (Subclipse and Subversive) work so well that using them is unobtrusive.

The same is true for GIT. When Linus (of Linux fame) announced it, I knew that this guy was probably smart enough to know when to start a new version control system and when to stay with the flock. Alas, I never felt the pressure to try it.

Actually, I had the pressure but I didn't notice, because I didn't know enough about GIT, specifically what it does differently than, say, Subversion (a.k.a. "the world's largest patch for CVS"). In a nutshell, GIT is a set of commands to manage one or more repositories with the same or different versioned objects.

Confused? An example: You work at a paranoid company that won't allow you to connect to some OSS CVS/SVN server to update the files of a project which you're using in your daily work. Sad but quite common. Let's ignore for a minute the relative dangers of using CVS (which you aren't allowed to use) and using HTTP (which you can use). What options do you have?

You can download the files at home, put them on a USB drive and take them to work. That works pretty well. Now, one day, you make a change to the sources at home and at work. Only, you forget to commit one of them. So you keep updating the sources and eventually, you will run into problems merging the files because of that one uncommitted change.

This happens because neither CVS nor SVN allows you to "clone" the source repository. GIT works differently: When you start, you either create an empty repository or you copy an existing one. When you do that, you get everything. If that other repository gets corrupted, it can be restored completely from your copy. (Remember? "Backups are for wimps; real men put their software on public FTP servers and let the world mirror them!")
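
A minimal sketch of what that looks like on the command line (the repository URL is hypothetical; any GIT repository will do):

    # cloning copies the complete history, not just the latest files,
    # so every clone doubles as a full backup of the repository
    git clone git://example.org/project.git
    cd project
    git log    # the entire history is available locally, no server needed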

Also, GIT will track the work in each cloned repository in an individual branch (unlike SVN, where everyone usually works on the same branch). When you check out the latest version into your working directory, you don't work on the same branch as any other developer. If you commit something, it's in your local branch only. If you break something, no one but you will notice.

Only when you're satisfied with your work do you "push" your changes over to other people. You needn't even push them to everyone at once. You can send your changes to a few colleagues first, before pushing them to the main project repository for everyone to see. Or you can keep them to yourself. GIT will happily download all the branches from the main repository without disturbing yours unless you say "Hey, I want to merge the main development into my code."

At that point, you will notice something: GIT will ask you to commit all your work first if you have uncommitted files in your working directory. In SVN this isn't possible, or at least not that simple; you'd have to set up an SVN branch for each merge you do. For GIT, it's simple because from its point of view, the changes are not related in the first place. True, they do have a common ancestor somewhere, but after that, they are as independent as they can be.

So you have the great luxury of being able to commit all the changes you made, saving all your work before you do the actual merge. You can even save away stuff that you don't want to merge, so every modified file after that is actually part of the merge. This also means that if something goes wrong, you can simply roll back all changes by checking out the head of your local development branch. Even better, you can create better diffs. It's simple to create a diff for a) everything you changed (diff common-ancestor local-branch-head), b) everything everyone else changed (diff common-ancestor global-branch-head) and c) everything you changed to do the merge (diff local-branch-head working-copy). In SVN, you usually see a mix of a) and c), at least.
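
Here is a sketch of those three diffs as GIT commands, assuming your work lives on "master" and the main repository is the remote "origin" (both names are hypothetical):

    git fetch origin                              # download their branches; yours stay untouched
    BASE=$(git merge-base master origin/master)   # find the common ancestor
    git diff $BASE master                         # a) everything you changed
    git diff $BASE origin/master                  # b) everything everyone else changed
    git merge origin/master                       # the actual merge
    git diff ORIG_HEAD HEAD                       # c) everything the merge changed on top of your work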

So, if you usually work from several places and can't keep a network connection to the central server all the time, give GIT a try when you start your next project. It's worth it.

A Quiet Computer

Does your computer sound like an F-15 turbine? Well, my old one did, and after two years I was really sick of it, so I decided to silence the beast.

The Components

  • CoolerMaster CAC-T05 Centurion 5 case. The case is small, light and offers tool-less installation. Most stuff is clip-on. All edges are folded so you won't cut yourself. Also, it doesn't come with a power supply, which is good since I'm going to install my own anyway. Again a few bucks saved (and a considerable amount of waste avoided). The case has been in production for more than a year, which tells me it's a quality product that people still buy.
  • Gigabyte GA-P35C-DS3R, Intel P35, FSB 1333, DDR2/DDR3. A high-quality board with some room to upgrade over the next two years (for example, to DDR3 when the modules become as cheap as DDR2 is today, or to a Quad Core CPU).
  • Intel Core 2 Duo E6750, Dual Core, 2.66GHz, 4MB Cache, FSB 1333. Not the latest CPU, but with a TPC (total power consumption) of 60 watts, it's easy to keep cool (which means you don't have to buy the most advanced cooler). It's the LGA775 socket version matching the mainboard.

    Here, I considered the E6650 (Dual Core, 2.33GHz) and the Q6600 (Quad Core). The smaller brother of the E6750 offers less bang for the buck, and the Quad Cores need insane coolers right now. The board can take both, so I decided to go with the E6750 and wait for Intel to come up with a Quad Core with a TPC of less than 90 watts.
  • Asus EN8600GT Silent, 2*DVI, 256MB RAM, PCI Express x16. This board comes with a passive cooler. There is a second version with 512MB RAM, but I don't game that much and Blender runs comfortably with 256MB for my projects, so there's no need to waste money on something I don't need and would then have to cool down. Every component you install draws power, which eventually gets converted into heat. The less, the better. One thing to note: This board gets hot, and when I say hot, I mean skin-burning hot. While running 3D games, the temperature can reach 100°C. This is normal; the board and the chips on it have been designed for that kind of stress. Alas, the rest of your PC is not, so it is important to get that heat out.

Unlike the other components, RAM and hard drive don't add much to the noise concept, so you can choose almost anything your dealer can deliver. I installed two Kingston HyperX DDR2 2GB kits (for a total of 4GB RAM) and a 500GB Samsung HD501LJ drive (SATA-II). The board does have an IDE slot, but to keep the cable jungle in the case to a minimum, you should use SATA drives, especially for the DVD drive. I installed a Samsung WriteMaster SH-S203N.

Noise Control

The tricky part is to select quiet components for the moving parts: fans and power supply. For all other components, you should just select something which doesn't drain insane amounts of power (because that directly translates into insane amounts of heat which you have to deal with).

  • ichbinleise® Power NT 500 Watt - the power supply. ichbinleise ("I'm silent") is a company that has specialized in silent computers and components. The big plus: If they don't have some part you need, they usually have something else which works equally well (or sometimes better, because of new developments).
  • Fan mounts made of soft silicone. A must-have. These keep the fans in place without screws and make sure the vibrations don't cross over to the case. When you buy them, look for double-tipped ones instead of ones with one flat side. The flat ones assume you have fingers which can bend at arbitrary angles in any direction while you pull the mounts behind the fan through the case and through the holes in the corners of the fan. Just say no. The double-tipped ones are much easier to install (take the fan, pull them through, then easily pull them through the case from the outside), and if you can't stand the tips sticking out, just cut them off after the installation with a sharp knife.
  • Arctic Cooling Freezer 7 Pro, Socket 775, 520g, to keep the CPU cool. Easy to install if you follow the instructions. I didn't. I still managed, though. Note: You don't need any thermal transfer paste for this cooler; it comes equipped with a pad.
  • Arctic Cooling AF12025 Fan, 120mm fan for the back of the case.
  • Papst 8412N/2GLE fan, 80mm, for the front.
  • Zalman FanMate 2 Control, fan control to silence the 120mm fan.
  • Sharkoon HDD Vibe-Fixer III to keep the hard disk away from the case (resonance control).

You might be wondering about the number of fans and the lack of case insulation. The latter suffers from two problems: If the case insulation were perfect, no noise would come out ... and no heat, either. That means your PC would die of overheating in two minutes. And if nothing in the case generates any sound you can hear, the insulation is futile. So the goal should always be to buy silent components and not to try to reduce the noise afterwards.

So the number of fans is not a problem as such. It quickly becomes a problem when the fans can be heard. To avoid this, I use two strategies: First, I install enough fans that no component ever gets so hot that any fan has to run at top speed. The slower they can go, the more silent they are. If there is always a constant flow of air through the case, the fans stay silent, so that's what I need. Second, I install quiet fans and use silicone fan mounts to make sure the inevitable vibrations of the fans can't resonate with the case. Problem solved.

If you still hear something afterwards, the noise should be very quiet. Only then should you consider adding a patch of insulation or two. But usually, insulation only makes everything worse because it blocks the flow of air, gets in the way when you install components, etc. Furthermore, all these quiet components cost only a few bucks. If one of them breaks and makes any sound, just throw it away and buy a new one.

Installation

If you have never built a PC before, google for some basic help. I'll concentrate here on the major issues I ran into.

Installation order is not a big deal. I found it hard to squeeze the hard disk past the mainboard, though. There are two problems here: The case is built for quick, tool-less installation. This means the Vibe-Fixer can't be screwed into the 5 1/4" slots. My solution was to screw the big dampers to the hard disk, then plug the tracks onto that and just stuff the whole assembly into the middle 5 1/4" slot. After locking this in place with the plastic lock of the case, the disk was held in place.

Sure, it won't stay there when you pull at it. But during normal operation, the drive won't move - the forces of moving the tiny heads around inside are just too small. You should be careful when you drag the case around, though (LAN parties, etc.). If you fear that the drive might move, just tape some thick paper around the mainboard edge. That will prevent accidental short circuits.

For the fans, install the silicone fan mounts in the fan first, and then position the fan inside the case to pull the other ends through the holes in the case. This way, you don't have to do insane finger contortions to pull the mounts through the tight holes of a fan that's already inside the case.

I plugged the fans into the fan connectors on the mainboard. You can connect them directly to the power supply, but then you'll have to open the case to check on them. If you plug them into the mainboard, you can use software to check the fan speeds.

I tried to install a silicone power supply mount, too. That didn't fit, since the case has two wide metal trays on which the power supply rests, and one of them collides with the silicone. With the mount in place, the holes for the screws weren't aligned anymore. But since the supply doesn't generate any vibrations (maybe the fan is already isolated; I didn't look inside), this is not an issue.

You might consider plugging all the case's cables into the mainboard before installing the board itself. The cables from the front go into tiny pins at the bottom of the installed board, and they can be hard to reach (especially after you've installed half of them).

First Power

An observatory has "first light" when it opens the covers for the first time. Plugging the power cord into your newly assembled PC for the first time is also a moment to remember. You plug in the power cord, flip the switch on the power supply and, if you're lucky ... nothing happens. Today's PCs don't start when power comes up, so there's nothing to worry about so far. If the lights go out and you hear a pop or a hissing sound, chances are you just blew something. Congratulations. Next time, invest a minute or two to check all the cables a second time. For now, swear and curse, then look for a black patch and replace whatever got fried.

So the big moment is when you press the big power switch at the front. At this time, it is advisable to have the case open, because you'll probably miss any acoustic cue that the PC is actually booting (unless there is no other noise). In my case, the fans didn't even start, which is always a bad sign. Note that today, PCs don't usually start to smoke and burn if you did something wrong. After twenty years of cheap back-alley assembly, manufacturers have built one or two safety features into the components, like plugs which either can't be installed the wrong way or which simply don't do anything if they are (so they don't fry anything).

To locate the problem, unplug everything from the power supply and remove all cards, leaving only the CPU and its cooler. Then start with the big 20-pin power plug for the mainboard. Plug it in, try the switch. If the fan of the power supply doesn't start turning, either there is a short circuit somewhere or it doesn't get the signal from the power switch. Note: For this basic test, you don't even need any RAM installed! Just the CPU, the CPU cooler (which doesn't have to be plugged in; just keep the running time short in this case - never start the system without a CPU cooler!) and the power supply are enough to check if the system is okay.

In my case, I had missed a pin when plugging in the main power switch. After replugging it, the main fan showed me that the system was coming up.

The front fan showed some erratic movements as did the CPU fan. Apparently, the mainboard only triggers them when needed. Nice.

Depending on your needs, you should go through the BIOS setup and disable anything you don't need. I didn't need the serial and parallel ports, and I switched the SATA connectors to AHCI for Linux. Time to boot from the DVD drive.

Ubuntu

My computer magazine came with an Ubuntu DVD for 64bit systems (which you need if you ever want to use more than 3.5GB of RAM; you can install more, but you won't be able to use it), so I gave that a try.

Since I have a big file server (1.5TB), I installed a smaller 300GB HD as the main drive and made a backup of all its data on the new 500GB drive, attached via the eSATA bracket that comes with the mainboard. Ubuntu came up with no fuss and mounted the old drive without problems. When I tried to use the partition editor on the new (external) drive, it crashed. Hm.

Back to the command line. Unfortunately, there is no console installed. Okay, Alt-F2 works and xterm is there, too. So xterm it is. A few moments later, the second drive is partitioned with cfdisk, the filesystem is there (mkreiserfs) and rsync -avHPh ... (preserve all attributes and hard links, show verbose progress in human-readable format and continue partial files) starts to copy the files to their new, bigger home.

Ten minutes later, the screen freezes. On the keyboard, Caps Lock and Scroll Lock start to blink; it's a kernel oops ("Linux crashed"). Hm. That's not nice. Did I fry the RAM while installing it? openSUSE comes with a memory test, but that can't find anything. So maybe a driver problem. Just to be sure, I do the procedure again and, again, after half an hour the system freezes with a blinking keyboard. This time, I had the kernel log open, but the system hung before it could display the error. Also, there is no way to switch to a text console with the kernel output that I could find in a few moments. Well, Ubuntu, it's been nice to meet you. Goodbye.

openSUSE

Back to openSUSE. I've been using SUSE for 15 years now, and I'm comfortable with the system. For some reason, Debian-based systems hate me (dpkg usually corrupts its database with every second install that I try; did no one ever attempt to abort an install?). Maybe that's why I couldn't get Ubuntu to fly.

The memory test shows that the RAM is probably okay, so it's got to be a driver problem in Ubuntu. Trying to install from a freshly burned openSUSE x86_64 DVD fails. Hm. In my old system, I can mount the DVD, but the new drive won't accept it. Strange. I can boot from the DVD, but I can't mount it later.

Oh well. After connecting and setting up the network (ifconfig eth0 ...ip... netmask 255.255.255.0) in the rescue system (another boot option on the openSUSE DVD), I just copied the ISO file over and unpacked it (mount -o loop openSUSE-10.3-DVD-x86_64.iso /mnt ; rsync -avP /mnt suse ; umount /mnt). Then I booted again and switched to "Source: Hard Disk" (F4). I didn't specify anything when it asked which hard disk; later, the installer presented me with a list to pick from.

The first thing in the installer is an error: It can't find the installation repository. Well, yeah, I know. So I press Return to close the dialog, select the keyboard and then "Start Installation / System", "Start Installation/Update", "Hard Disk", and finally the right partition and directory ("/suse" in my case). Houston, we have liftoff.

The rest of the installation is pretty standard. I suggest not adding any online repositories during the installation. I did once, and the installer figured it was much smarter to download all 4GB again instead of using the local files. Duh. Instead of 10 minutes, the install took five hours.

That's all, folks.

Afterword

It seems the RAM didn't work with the board after all. That would explain the random crashes I got with Ubuntu. I'd hoped that this kind of problem had been solved in 1994. Let's see if my dealer takes the RAM modules back. I'm thinking about upgrading my RAID with 500GB disks.

Finally, a Great Setup

Check it out:

I now have a 3520 by 1200 desktop on two displays, 4GB of RAM and a Dual Core E6750 CPU. That means I can have Blender and a web browser open side by side, or I can have my text editor jEdit, a thesaurus/dictionary, TreeLine and a web browser open without any overlapping windows.

Life doesn't get much better than this :-)

Saturday, December 29, 2007

Gigabyte GA-P35C-DS3R with Kingston HyperX

If you plan to buy yourself 2GB of Kingston HyperX RAM (KHX8500D2K2/2GN, to be precise) for a Gigabyte GA-P35C-DS3R mainboard, beware: I couldn't get it to work. I tried:

  • DDR2-1066/PC-8500 and DDR2-800/PC-6400 settings
  • 1.8V and 2.2V
  • Installing both modules in banks 1 and 2 (instead of 1 and 3)

With DDR2-800/PC-6400 settings and 1.8V, the RAM would pass MemTest+ V1.70 but some applications would crash reliably (comix, for example, when opening the third file).

I've now replaced the modules with 2*2GB from G.Skill and the board is rock solid.

Weird Path Twist in Blender

If you ever ran into the "Weird Path Twist" (a.k.a Z-Twist or curve singularity twist) in Blender, I've opened a bug against it: [#8022] Some operations on control points can introduce weird twists in paths

If you don't know Blender, here is what I did in two days:

It's the entrance to a public bath on the TAURUS. Since the corridor outside is perpendicular to the bath's ground, it's a gravity lock; in the center of the circular walkway, you can see the floor make a 90° turn downwards to align visitors with the gravity field of the bath. If you want to gaze, you can stay on the circular walkway and have a great view of the bath without craning your neck.

Tuesday, December 18, 2007

N&N in Eclipse 3.4M4: StringBuffer "Optimization"

Another one for the futile/harmful optimization awards: The New & Noteworthy page for Eclipse 3.4M4 says:

The new 'Convert to StringBuffer' quick assist (Ctrl+1) transforms string concatenations into more efficient, albeit less compact, code using an explicit StringBuffer:

Hello? Since when is using StringBuffer more efficient than using String concatenation? Answer: It was, up to Java 1.3 (or maybe 1.2; I'm too lazy to look it up right now).

With Java 1.4, the compiler used StringBuffer as well, so this optimization doesn't buy you anything but makes the code harder to read.

Worse, with Java 1.5, the compiler generates better code by using StringBuilder instead of StringBuffer. The builder is not synchronized; since string concatenation doesn't suffer from threading issues, this is safe and faster!

And the moral: If you optimize something, make sure you don't rely on some myth about X being faster than Y.
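
If you want to check this yourself, javap will show you what the compiler actually generates; the class below is made up for the example:

    # Concat.java (hypothetical) contains a method returning:  "x=" + x + ", y=" + y
    javac Concat.java
    javap -c Concat
    # with a 1.5 compiler, the bytecode contains "new java/lang/StringBuilder"
    # and a chain of StringBuilder.append calls - the fast version, for free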

PS: Of course, there is already a bug tracking this.

Monday, December 17, 2007

Looking for Quote

I'm looking for a quote which goes along these lines: "If we are ever visited by aliens, we'll have a lot of trouble explaining how a race smart enough to design the bomb is dumb enough to actually build it". Does anyone know who said this?

Wednesday, November 14, 2007

Mixins in Java

Mixins are a powerful programming concept in dynamic languages because they allow you to implement aspects of classes in different places and then "plug" them together. For example, the "tree" aspect of a data structure (something having parents and children) is well understood. A lot of data can be arranged in hierarchical trees. Yet, in many languages, you cannot say:

   class FileTreeNode extends File mixin TreeNode

to get a class which gives you access to all file operations and allows you to arrange the items in a tree at the same time. This means you can't directly attach it to a tree viewer. In some languages, like Python, this is trivial, since you can add methods to a class any time you want. Other languages like C++ have multiple inheritance, which allows you to do something like this. Alas, not at runtime.

For Java, the Eclipse guys came up with a solution: adapters. It looks like this:

    public <T> T getAdapter (Class<T> desiredType)
    {
        // ... create an adapter which makes "this" behave like "desiredType" ...
    }

where "desiredType" is usually an interface of some kind (note: The Eclipse API itself is still Java 1.4, this is the generics version to avoid the otherwise necessary cast).

How can you use this?

In the simplest case, you can just make the class implement the interface and "return this" in the adapter. Not very impressive.

The next step is to create a factory which gets two bits of information: the object you want to wrap and the desired API. On top of that, you can use org.eclipse.core.internal.runtime.AdapterManager, which allows you to register any number of adapter factories. Now we're getting somewhere, and the getAdapter() method could look like this:

    @SuppressWarnings("unchecked")
    public <T> T getAdapter (Class<T> desiredType)
    {
        return (T)AdapterManager.getDefault ().getAdapter (this, desiredType);
    }

This allows me to modify the behavior of my class at runtime more cheaply and safely than using the Reflection API. Best of all, the compiler will complain if you try to call a method that doesn't exist:

    ITreeNode node = file.getAdapter(ITreeNode.class);
    ITreeNode parent = node.getParent(); // okay
    node.lastModification(); // compile error: that's a File method, not part of ITreeNode

Try this with reflection: Lots of strings and no help. To implement the above example with an object that has no idea about trees but which you want to manage in a tree-like structure, you need this:

  • A factory which creates a tree adapter for the object in question.
  • The tree adapter is the actual tree data structure. The objects still have no idea they are in a tree. So adding/removing objects will happen in the tree adapter. Things get complicated quickly if you have some of the information you need for the tree in the objects themselves. Think files: You can use listFiles() to get the children. This is nice until you want to notify either side that a file has been created or deleted (and it gets horrible when you must spy on the actual filesystem for changes).
  • The factory must interact with the tree adapter in such a way that it can return existing nodes if you ask twice for an adapter for object X. This usually means that you need to have a map to lookup existing nodes.

A very simple example of how to use this is allowing equals() to be overridden at runtime. You need an interface:

interface IEquals {
    public boolean equals (Object other);
    public int hashCode ();
}

Now, you can define one or more adapter classes which implement this interface for your objects. If you register a default implementation for your object class, then you can use this code in your object to compare itself in the way you need at runtime:

    public boolean equals (Object obj)
    {
        return getAdapter (IEquals.class).equals (obj);
    }

Note: I suggest caching the adapter if you don't plan to change it at runtime. This allows you to switch it once at startup while keeping equals() fast. And you should not try to change this adapter while the object is stored as a key in a hash-based collection ... you will get really strange problems, like set.add(obj); ... set.contains(obj) -> false, etc.

Or you can define an adapter which looks up a nice icon depending on the class or file type. The best part is that you don't have API cross-pollution. If you have a file, getParent() will return the parent directory, while if you look at the object from the tree API, it will be a tree node. Neither API can "see" the other, so you will never have to rename methods because of name collisions. ITreeNode node = file.getAdapter(ITreeNode.class) also clearly expresses how you look at the object from now on: as a tree. This makes it much simpler to write reliable, reusable code.

Monday, November 12, 2007

Hollow Sphere in Blender

For a scene in one of my books (a public bath on the TAURUS), I need a ball of water suspended around the center of a large sphere. The sphere outside is the "ground" (you can walk around in it; it's like the Hollow Earth theory, but my version is just the product of careful alien design using magic a.k.a. hi-tech to control gravity). I decided to render this in Blender to get a feeling for how such a thing might look. What would my characters actually see when they step into the bath?

Boolean operators are traditionally a weak spot of Blender (they are a major strength of POV-Ray, if you like text-file driven modeling). I had some trouble getting this to work, so if you want to achieve a similar effect, here is how I pulled it off.

Inside

First, add the inner object (this makes selecting simpler). In my case, that would be an Icosphere (press "Space", select "Add" -> "Mesh" -> "Icosphere") with 3 subdivisions (to make it appear somewhat round even at the edges) and a radius of "4.000". This should look roughly like this:

Since this is supposed to be the inner volume of the object, there is a problem: Blender thinks it defines the outside. The key information is the "face normals". Open the mesh tools (press "F9") and select "Draw Normals" on the far right (in the "Mesh Tools 1" tab; use the middle mouse button to drag the tab into view if you have to - it's on the far right). Now the sphere sprouts little cyan pimples. Zoom in and rotate the sphere and you'll see that they start at the center of each face and extend outwards. This is how Blender tells "outside" from "inside": The direction in which the face normals point is "outside".

To turn this into the inner volume, all you have to do is click "Flip Normals" ("Mesh Tools" tab, third line, last button). If you have "Solid" rendering active, the face normals will become tiny dots, because the triangle faces now hide the rest of them. The object will still look the same, but now you're "inside" it. Since all objects in Blender are hollow, this doesn't mean much ... for now.

I want a ball of water, and water doesn't have edges, so I also smooth the surface ("Set Smooth" at the bottom of the "Link and Materials" tab). This doesn't change the actual geometry; it just draws the object smoothly. In my version of Blender, the object suddenly went all black at this point, probably because I hadn't assigned a material yet. Selecting "Solid" rendering helps.

Connecting Hole

I needed a connection between the two sides (there is a "hole" in the hollow water ball where you can swim from one side to the other if you don't want to dive through it or use the walking spires or elevators in the restaurants), so I cut a hole in the inner sphere by selecting one vertex and deleting it (press "X" and select "Vertices"). In the image, you can see the lighter inside of the sphere shine through the hole:

Outside

Before I can create the outside, I must make sure that nothing is selected (or Blender would add the new points to the active object): Enter "Object Mode" ("Tab") and deselect everything (press "A" until nothing is highlighted anymore).

For the outside, I create another sphere. Make sure the cursor hasn't moved, so the centers of both objects are the same. If it has, select the sphere, press "Shift+S" (Snap) and then "Cursor -> Selection". When everything is ready, add the second icosphere: "Space" -> Add -> Mesh -> Icosphere, 3 subdivisions, size "5.00". I also make that one smooth, but I leave the face normals alone (this is the outside, after all).

Again, I delete the face where the connecting hole is supposed to be: Select a point (in "Edit Mode") and press "X" -> "Vertices". Now, you might face two problems: a) the hole in the inner sphere is somewhere else, or b) the hole might be below the one you just cut but not perfectly aligned. If that is the case, you were in the wrong view.

When creating an icosphere (a sphere made of triangles instead of rectangles), the triangles don't all have the same size. If you rotate the sphere, you can see that they are somewhat uneven. I found that the triangles touching the horizontal axis are very even. The solution: Create the spheres in one view (for example YZ) and cut the holes in another (for example XZ). So after doing everything again and cutting in the right views, it should look like this:

As you can see, I erased the vertex on the Y axis. Next, shift-select both objects (use the outliner if you have problems with the inner sphere) and join them (use the menu or "Ctrl+J").

Smoothing Out the Wrinkles

After joining, it's simple to close the hole: Switch to "Edit Mode", select all twelve vertices around it (six on the inner sphere, six on the outer; the sequence doesn't matter) and "fill" them with faces (in the menu, Edit -> Faces -> Fill, or Shift+F). If you rotate the scene, you'll see that new triangles have been created, but they look ugly in the otherwise smooth surface of the ball. Even using "Set Smooth" doesn't help; the angle between the hole and the rest of the surface is just too big (mostly perpendicular). To fix this, use "Subdivide" ("Mesh Tools" tab) and "Smooth" (same tab). This splits the selected faces, creating new ones, and the smooth step evens out the angles. For me, it now looked like this:

Holy ugly! What's wrong? I've left a hint ... can you find it?

It's the face normals again. For some reason, they point in the wrong direction around the hole. After a few undo steps (press "U"), I'm back at the point where the faces have just been created (before the smooth/subdivide steps). One "Flip Normals" later, the color transitions around the hole look much smoother. Back to another round of subdividing and smoothing. After the hole looked the way I wanted, I noticed that the "horizon" of the ball still looked rough, so I selected all vertices and did another subdivide and several smooths to end up with this result:

Pretty smooth, eh?

Rendering ... With POV-Ray

After fumbling to get water that looks at least a bit realistic, I created the same scene with KPovModeler (with water but without the hole *sigh*) to give you an idea of what someone standing on the "ground" would see:

Each piece of the red-white checker pattern on the walking spires is 10x10m, the ball hovers 1250m above the observer, has a diameter of 500m, and the water is 50m thick/deep. The two blue cubes are both 100m across; one is standing on the opposite side on the "ground", the other floats on the water. Anyone want to add the water slides, diving platforms (1250m jump!), etc.? With that much water and these dimensions, we'll probably have clouds, too. The spires don't hold the water up, by the way; they are just a means of transport (if you don't want to jump or use the slides).

Monday, October 29, 2007

Heroes (TV Show)

*gasp* (the sound after emerging from a two-day Heroes Season 1 marathon). If you haven't seen this yet, you should.

As an author and SciFi fan, I'm always looking for good movies and TV shows. Here is my summary of season 1 (with a few spoilers further down below).

Overall, I'm very impressed. The show delivers depth and atmosphere like few I've seen before. It's as smart and logical as CSI or Dr. House, but the cast is much more complex, and the story is a beautiful example of an interwoven stream of events which happen independently but influence each other in a very special way. Nothing in this series is set in stone; events happen, the viewer feels he knows what is going on, just to stumble over another small piece of information which turns everything around. The same happens to the characters, who often find themselves having to make hard decisions they feel they aren't prepared for. Babylon 5 showed a glimpse of what can be done in this regard; Heroes goes the whole nine yards: storytelling at its best, rich, believable characters, super-human action without losing its grip on the special effects.

Spoiler Warning: The following text is only safe to read after seeing all of season 1.

There are a few dark spots, though, and they show a few of the problems an author/storyteller faces. Let's start with the "perfect prison". The prison itself contains almost nothing except for a few pipes which one of the heroes uses later to make an escape. I didn't notice them when Sylar was in that cell, so I'm giving the author the benefit of the doubt and assume that Sylar was in a similar cell, but one without the pipes. Alas, if you have ever seen a real prison, you'll know that surveillance is ubiquitous. Furthermore, with dangerous criminals (especially ones with special abilities), guards never visit the inmate alone. Not so in Sylar's case; no one seems to care who visits him, when, and what they take along. When Jessica Sanders is imprisoned, the authors don't make this "mistake": Guards never handle her alone; they are even afraid to come close to her in rather large groups!

I'm calling this a "mistake" because actually, it is quite easy to create a prison that no one can escape without help. Unfortunately for the show, Sylar has to escape which renders the whole "perfect prison" idea into a death trap for the writer. Authors: If you ever feel you have written yourself into a corner, take a step back and check where you came from. If you can, try to find a real instead of a cheap solution, because when Sylar escaped, I thought: "Oh, that's so silly." I didn't believe the show anymore for some time. When you write a story, the reader trusts that you produce a logical, believable world. Whenever you betray that trust, the reader will feel that your work is not worth the money she paid for it and this not what you want.

In the Sylar case, a possible solution would have been to rewrite the story to make the attack on Claire happen far away from any "Company" location. Sylar could then have escaped much more believably from a makeshift prison. Or how about having more people around? It's unlike Sylar to just slaughter everyone in his path, but he could have rendered the "normal" guards unconscious and then gone after the persuading girl (so she can have her grand moment).

The ending of season 1 is something else entirely. At first, I thought it was impossible for Sylar to be alive. Mr. Bennet knows how dangerous he is and would surely have put a few more bullets through his head if he had had any doubt that Sylar was dead. Some of that is resolved in season 2, where the writers explain why the heroes didn't notice that Sylar ... "escaped".

Just to round this up, here are a few more blunders which probably only happened because the writers had written themselves into a corner or vital information had to be cut away to fit the time slots of the show:

  • In the scene in the future when the guards smash in the door and shoot "Future Hiro": Why doesn't he stop time when he hears the door give in? Why doesn't he stop time as soon as the Haitian is taken out, to tell Hiro everything he knows just to be safe? There is no apparent reason to wait until the last moment (except to allow for a dramatic and tragic (a.k.a. stupid) death). Or why doesn't he stop time as soon as the Haitian is down, to take out the guards trying to smash down the door?
  • When abducted in Las Vegas, Nathan Petrelli can fly away despite the Haitian being close by. Oh, and if that was a sonic boom we were hearing, Nathan ought to be dead - but maybe his ability turns his skin into something more durable than steel while he flies. That only leaves the question of how his clothes make it ...
  • Again in the future: In all these years, Matt Parkman never noticed that Nathan Petrelli was in fact someone else? Never? In five years? Okay, again the benefit of the doubt: Maybe the ability to create illusions can fool a telepath, too. Still, it seems uncomfortably odd.
  • After Claire ran the car into a wall, her father Noah has the brain of the quarterback erased so he "can't make her life even more complicated than it already is". Later, the whole school knows that Claire is somehow involved in the event. Having his brain erased just makes everything worse for her. Seems like an unlikely mistake for someone like Mr. Bennet.

All this might give you the impression that the crew around Tim Kring did a sloppy job. Well, think again. If you have seen Star Wars, you probably noticed the 264 mistakes in the first movie. For a TV show with a budget that is probably close to what George Lucas spent on rubber stamps during the shooting, they did an incredible job.

Conclusion: Well done.

Lesson for authors out there: Strive for perfection and try to eliminate all logical mistakes and "easy ways out". Otherwise, the next time your readers buy a book, they will spend their money on the authors who try harder than you do.

Thursday, October 25, 2007

Five Easy Ways to Fail

It's been said over and over again and now, once more: Five Easy Ways to Fail lists five simple ways to make sure a project will fail:

  • Hire without brains
  • Sloppy schedules
  • Demand reality matches your deadlines
  • Spread tasks evenly
  • Work as many hours as you can

Another insight by Joel Spolsky

Resizing a 3ware RAID-5 Array With Linux

Ever wanted to extend the available space in your RAID-5 array? Whenever I do, I find myself missing a consistent recipe for how to do it. The following applies to OpenSuSE 10.2 and a 3ware 9550SX controller with 8 lanes. If your setup is different, adjust as necessary. Here are the steps:

  1. Add the drive in a free slot.
  2. If it doesn't show up in the web gui (Management -> Maintenance under "Available Drives"), click "Rescan Controller"
  3. Select the RAID-5 array you want to expand (not the free disk!)
  4. Click on "Migrate Unit". The web gui should offer you a list of drives to add and a few other settings you can change in the process.
  5. Click OK to start the migration. If your array is large, this can take a long time. I migrated from 1.3TB to 1.6TB. This took 24h.
  6. After the migration has completed, you'll have to reboot. Linux will see the new bytes only after the reboot, but there is no danger in using the drive in this strange state for as long as you like. You just can't claim the new space - but you can't lose any data, either.
  7. After reboot, make sure that no filesystems on the expanded RAID array are mounted. If they are, unmount them.
  8. If you run "vgdisplay" as root, it should show you the old size.
  9. Run "pvresize /dev/sdb" as root (replace the device name with yours). This will make Linux notice the new size. Note that it is safe to run this command without a reboot. It just won't do anything in this case. It will only print "1 physical volume resized" but when you run "vgdisplay", the size won't have changed.
  10. Run "vgdisplay" again to make sure the new size is correct.
  11. Run "yast2 lvm_config" to add the free space to any existing file systems or to create new ones.

That's all, folks.

Heroes (Storytelling)

As an author, you need to love your characters. You need to love them so much that you can make their lives really miserable. That doesn't mean slaughtering their families. Killing is easy. Giving them depth is hard.

Characters must have reasons for what they do. Take the doctor in "Alien". In the beginning of the movie, he opens the airlock blocked by Ripley and lets the contaminated crew members in. At that point, we think he's doing this because he's a doctor and he wants to help. Later, it turns out he is an android specifically programmed to gather alien lifeforms, even at the expense of the crew. This gives the character a depth that he doesn't have when you just make him do things to move the story on.

It's not necessary to explain everything to the reader; but every action should have a reason and at least you as the author should know that reason. Otherwise, the actions will soon start to become erratic and random. The readers will notice a pattern: There doesn't seem to be a reason why someone does something except to drive the story on. If you want to check your story against this, ask yourself: Does the character at this point in the story even know why he should do this? Or is he just making life easier for me?

Rambo is another good example of this. It also demonstrates my main point: You must make life as hard as possible for your character. When Rambo decides to stand up against the sheriff, that is the hard decision (just shrugging and walking away would have been much easier). After that event, things get out of control. The deputies handle Rambo like any other petty criminal, only Rambo is not your standard drunk picked from a gutter. Their abuse triggers the instincts that kept Rambo alive in the jungle. Blood is spilled.

Again, all characters could make the decision to step back, calm down, think. Instead, everyone tries to corner Rambo. They are driving him. Rambo escapes them as best he can and only shoots down the helicopter when his own life is in danger. Again the pattern: Take the hard way.

The "Die Hard" movies work along the same lines. John McLane has a lot of chances just to hide in a corner and wait until everything is over but he never does. He always struggles to get the upper hand. That is what makes a character into a hero.

Many authors don't get this (at least, it doesn't make it to the screen). They put big and bigger guns into the hands of their "heroes" ("Eraser", anyone?). They add bigger explosions or make the evil guys commit worse atrocities. Cameras zoom in deeper and linger longer when blood is spilled. Guts fly around. Special effects take over. When Norman Bates killed the woman in the shower in "Psycho", Hitchcock kept the camera on the drain. We don't even see the act itself, but the scene is more intense than anything I've seen in the last twenty years.

If you as an author take the easy way out, so will your character. If you put a lot of effort into making life miserable for your hero (little or no ammo, no shoes, no food, no shelter, no help, no way out) and you can still come up with believable reasons why your hero can survive against all these odds, then your hero will be great.

Or to put it another way: How could your hero be better than your effort writing about him?

Friday, October 19, 2007

Telepods of Doom 2

On Telepods of Doom, Mike P. argues:

We can only assume that a machine can reconstruct experience, consciousness and the human soul.

First of all, the machine maybe doesn't have to reconstruct the soul of the being transported. Our everyday experience shows that the soul moves along with the body. There doesn't seem to be a limit on how fast the body can move (at least not up to the speeds we can achieve) without losing contact with its soul. In fact, looking at the problem from a quantum physics view, there is no reason to believe that the soul has to care about the actual location of the body. This means that if the wave form which represents our body is teleported across the universe, the soul might just stick to it.

Of course, I might be wrong, and the soul might lose contact the moment the body is teleported. On the positive side, this would be a final proof that a soul exists (or at least something beyond the sub-atomic level). On the negative side, this would open a whole new world of tools to people who are not prepared for such power.

When someone manages to prove the existence of the soul, people will start to work on ways to measure it. To access it. To modify it. Area Denial Systems already offer convenient new ways of torturing anyone you happen to dislike without leaving traces. For the victims, this makes it impossible to prove the act in court, making their situation twice as bad.

Imagine machines which can access the soul.

Luckily, nature has laws which will make sure we become extinct unless we are able to handle the powers which we seize.

Wednesday, September 26, 2007

Telepods of Doom

On BeContrary, there is a discussion about Telepods of Doom. The question goes like this:

It is the year 2112. Telepods have been in use for a decade to instantly transport matter from one part of the universe to another. You are waiting in line with your family at a telepod station to go to Tau Ceti. In front of you in the queue you meet the inventor of the telepods. He tells you that the telepods only appear to move matter; what they actually do is create an exact duplicate at the destination and destroy the original in the process.

Do you get in the telepod?

As my math teacher would say: You're mixing up two frames of reference. In quantum physics, objects exist only once. There can be similar objects but these can never be exactly the same (they must differ in at least one attribute, for example in spin). Don't use that argument when the MPAA comes after you. ("That music isn't what was on CD! It must be different! Quantum theory says so!")

One way to make an exact copy is to destroy the original and transfer all attributes onto another object (thus destroying the other object and creating a new "original"). In the real (macro) world, this can lead to all kinds of problems: If the destruction happens before the "apply attributes" step, you lose the object. If the destruction doesn't happen at all, you suddenly have two copies. If only a part of the attributes is copied, you have an imperfect copy.

In the quantum world, none of these effects can happen. It's either all or nothing because there is no state in between. Quantum particles can move through "solid" walls because they never spend any time inside the wall. In one moment, they are on one side, the next, they are on the other. The theory doesn't ask for continuous movement. It just says "when you look several times, there is a certain chance that you'll see the particle." There is no explanation how it gets from one place to the other and how it spends the time when you don't see it.

Since no one has found a flaw in the theory so far, it seems to be an accurate description of reality. That it contradicts our view of reality means that our view of reality is imperfect, not that quantum theory is wrong. Or as Douglas Adams put it:

"There is a theory which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened."
-- Douglas Adams

Tuesday, September 25, 2007

Undo for Webapps

While AJAX webapps gain more and more functionality, one very important feature has been missing so far: Undo. Imagine Word without Undo. 'Nuff said.

Aza Raskin has a solution. Well done!

Downloading Sources for Maven2 Projects

If you ever need to download the source artifacts for a Maven2 project, there are several options: If you use the Eclipse plugin (either the Eclipse plugin for Maven2 or the Maven plugin for Eclipse, also called "Maven Integration for Eclipse"), you can use the option -DdownloadSources=true on the command line or enable the "Download sources" preferences option.

Both have some drawbacks. When you use the Maven plugin in the Eclipse IDE, it will try to download sources all the time, which blocks the workflow. If you just want to do it once, you have to enable and disable the option every time, plus you have to force the plugin to start the download (with 0.12, you can add and delete a space in the POM and save it). And it will only download the sources for a single project.

If you use the command line version to download the sources, it will overwrite the .project file and modify your preferences, etc. Also not something you will want.

There are two solutions. One would be the "-Declipse.skip=true" option of the command-line plugin. Unfortunately, this option doesn't work for this purpose: It prevents the plugin from doing anything at all, not only from writing the project files.

So if you have a master POM which includes all other projects as modules and all your projects are checked into CVS, you can run:

mvn eclipse:eclipse -DdownloadSources=true

to download all sources, and then restore the modified files from CVS. I've opened a bug which contains a patch for the skip option. After applying it, -Declipse.skip=true will just skip writing the new project files but still download the source artifacts.
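
Put together, the workaround is just two commands. A sketch, assuming your working copy is current and you're fine with cvs update -C throwing away the plugin's changes to .project and friends:

    mvn eclipse:eclipse -DdownloadSources=true   # fetches the source artifacts, but rewrites the project files
    cvs update -C                                # overwrite the modified files with clean copies from CVS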

Monday, September 10, 2007

Spammers "Cracking" Accounts on Blogger

There seems to be a recent increase in spammers "cracking" Blogger accounts and replacing the blogs with spam/porn/etc.

If you want to save yourself some hassle (like your boss asking why you advertise porn on your blog), here are a few tips:

  • Don't blog while connected via WLAN.
  • Always log out after blogging.

If you have to blog via WLAN, always assume that everyone on this planet is watching what you do. In this case, the spammers don't actually "crack" your account; they just copy the cookie which your browser uses to identify itself to the server.

Anyone who can present that cookie is "you". So they listen for it while you talk to the server over a WLAN and, after you're gone, they run a little script which deletes your blog and replaces it with spam. It takes a few seconds and is almost impossible to track down afterwards.

If you want to be safe, don't use hotspots to connect to the internet. If you have to, you must set up a VPN, otherwise it's just a matter of time until your blog will be "cracked".

Sunday, September 09, 2007

How To Write, Part 1

In this ongoing series, I'll talk about how I develop and write stories. If you're interested in writing stories, getting tips and improving your style, etc., here are two great places to start.

First, browse the OWW, the Online Writing Workshop. It's mostly for SciFi, Fantasy and Horror writers, but it contains lots of good advice, and the mailing list is a great source of background references (ever needed some ideas on time travel stories? Who has already done what, and how it worked out? Here is a good place to ask).

Next, have a look at the live journal of Joshua Palmatier. I have to admit that I haven't read any of his books, but I greatly enjoyed his explanations, for example of how scene, character and plot work together to build the complex immersion readers want to enjoy (and are therefore willing to spend money and/or time on). I've rarely seen an explanation of these topics which was so easy to understand, so much fun to read and so helpful at the same time.

Tools

Next, you'll need a set of tools. Since I want to be able to write anywhere, I travel lightweight: jEdit, TreeLine and, since English isn't my native language, Office-Bibliothek. Being a seasoned software developer, I keep my stories in a Subversion repository. I do this for backup purposes and to be able to access them with a simple web browser from anywhere on the globe.

jEdit

jEdit is a text editor written in Java. This means several things: It runs on Windows, Mac and Linux. If there is a computer where I am, chances are I can use it. The keyboard mappings are the same. The menu structure is the same. It can do all I need, and it can do it well.

Furthermore, since I write my stories in an XML dialect, I don't need a more complex text editor. Later, I'll convert the text to TeX from which I can generate PDF or DVI. Or I can use another small tool to convert the XML into HTML.

Why not use a more complex editor like Office or FrameMaker? Because it gets in the way. I want to write; I don't need a complex UI or software that crashes on me or gets in the way (do I have to say "paperclip"?).

The only drawback of jEdit is that the spell checker sucks. But if I cared, I could fix that. It's open source software.

TreeLine

If you write anything that goes beyond 30 pages, you'll eventually strangle yourself in the strands of information. What happened when? What was the name of this character? How old was she? Where was she born? Did I mention that place already?

An outliner is like a file explorer for information. To the left, you have a tree-like outline (hence the name) with characters, the time-line, places and other knowledge. On the right side, you can see the details.

There are more complex tools, better suited for writers, which allow you to move events around on the time-line, help to organize relations between characters, contain name generators and such. For some reason, they are all Windows-only. For me, that means I can't work at home, because there, I have Linux.

I can waste as much time as my employer likes as long as they pay me by the hour. At home, I need to get work done.

TreeLine is the outliner of my choice since it runs on any OS and because it's written in Python, which makes it simple to extend. For my purposes, I've added a "quicklink" extension which allows me to define keyword fields and then automatically creates links to those entries in other texts (for example, I get a link to the character description when I use the character's name in the time-line).

I'm using version 1.1.9. Don't mind the warning about "development version" on the download page, it's rock solid.

Office-Bibliothek

If you start to write, you'll find that you often repeat yourself. "He said, he said, he said." Don't worry too much about that; it will go away as you learn to use words better. On the other hand, you don't have to make your life unnecessarily difficult. Get a thesaurus, a dictionary and a good spell checker.

My choice was Office-Bibliothek, again because it runs on Linux. The prices for the data files are in the same range as what you would pay for the books, and you get a much better user experience (less hefting a big tome around, searching is faster than you can type, and you can search for words which are not in the index; try that with a paper book).

Subversion

Subversion is a version control system which will remember every change you ever made to your text. Not strictly necessary, but it makes it much simpler to keep several versions (one at home, one at work, one on my palm and one on a USB stick) in sync, and it automatically creates backups, just in case I make a mistake and delete something I shouldn't have.
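
The day-to-day usage boils down to a handful of commands. A minimal sketch with a hypothetical repository URL:

    # once per machine: get a working copy
    svn checkout https://example.org/svn/stories/trunk stories
    cd stories
    # ... write ...
    svn commit -m "chapter 5: first draft"   # record the changes
    svn update                               # on another machine: fetch the latest version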

TeX

For many people, TeX is an anachronism. Why use a tool that expects you to write "this is a chapter" in your story when you can simply select the text in Word and choose "Chapter 1" in the toolbar? Well, because "simply" isn't so simple after all.

Tools like Word, OpenOffice and FrameMaker come with a couple of price tags which many people have become so used to that they don't question them anymore. First, there is the very real price tag: unless you're working with a pirated copy, or one for which you don't really have a license anymore, you know that Word is not exactly cheap, even if you buy it alone. OpenOffice is free (as in freedom) and you only pay for the download, but it's not available everywhere. Just imagine trying to talk the owner of a cyber cafe into installing OpenOffice for you so you can work on your story for an hour.

Then, there is the issue of usability. I can work with Word and OpenOffice, which means I can enter text and format it. Unfortunately, it will look like crap when it's printed. Both tools simply have no idea what "beautiful" means, and I frankly have neither the time nor the skill to teach them.

TeX, on the other hand, knows exactly how good (or bad) a text looks and it will even tell you so:

Underfull \hbox (badness 1789) in paragraph at lines 1041--1041
[]\T1/ptm/m/n/10 ^^PLook,^^Q the gen-eral started but she in-ter-rupted her:
[7] [8] [9] [10] [11] [12] [13] [14]
Underfull \hbox (badness 2073) in paragraph at lines 2165--2165
[]\T1/ptm/m/n/10 A smile split Forne's face. ^^PThank you, Ad-viser
[15] [16] [17] [18] [19] [20] [21] [22] [23] [24]
Underfull \hbox (badness 3148) in paragraph at lines 3705--3705
[]\T1/ptm/m/n/10 ^^PCertainly,^^Q an-swered the po-lite voice from above.
[25] (./haul05en.aux) )

Notice the "badness XYZ" warnings. They are just warnings. You can still print your text. It just won't look as good as it could. And in my humble opinion, a reader should not only get something that reads well but it should also look great. If you don't understand what I mean, print a page of text formatted with TeX and one page of text from any word processor and place them next to each other. Even if you don't know anything about font design and layout, the TeX version will make the other version want to roll up in its paper in shame.

Furthermore, since all markup (that's the information telling the computer how to treat a part of my text, i.e. whether it's a heading, something someone says, a thought or whatever) is visible in the text, I never have to fight with the mouse to select just the part of my story that I want to select right now.

It's no accident that TeX is also the most stable software on this planet that I know of and use. The first version is from 1978 and the last update was in December 2002. It's not dead; there is just nothing left to improve.

XML

XML is both the greatest idea since ASCII and the worst nightmare. The concept is great: it finally allows you to store data in a format that anyone can read (ever tried to open a document you wrote ten years ago?). Unfortunately, some things had to be added so that even computers from the stone age can still read it, and some definitions make perfect sense but are hard to understand.

For my SciFi stories, I use a special XML dialect that I call "story". It won't validate but it's simple to transform into valid XML. Here is an example:

<content>
    * Surprise, Surprise

<<T I'll show them,>> he thought and jumped out of his hiding
place, waving his rifle like a madman. <<Y Back off or I'll
shoot!>>

The muarhar<fn>Marauders of the south</fn> stopped dead, unsure
how big a threat he might be.

...
</content>

The part inside the content element is the meat of the story. It's a wiki-like format which allows me to write quickly (I haven't found a single XML editor which would let me do that). The double pointed brackets enclose text that someone says or thinks; right after the opening brackets, an optional single character marks thoughts (T), yelling (Y) or foreign language (F). In addition, I'm using a few XML elements like fn for footnotes, em for emphasis and q for quotes.

Note that I don't enclose paragraphs in p elements. In the beginning, I had a special key mapping in jEdit to insert the empty element, but I got rid of it: by adding a few simple rules to my "story-to-XML" converter, I made it add these by itself, freeing my fingers and eyes from such superfluous markup.
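
To illustrate the idea, here is a minimal sketch (not my actual converter; the class name is made up) of such a rule: split the content on blank lines and wrap each block in a p element.

import java.util.regex.Pattern;

public class ParagraphWrapper
{
    // Blank lines separate paragraphs in the wiki-like story format
    private static final Pattern BLANK_LINES = Pattern.compile("\\n\\s*\\n");

    // Wrap each blank-line separated block in a p element (a real
    // converter would also skip headings and handle nested elements)
    public static String wrapParagraphs (String content)
    {
        StringBuilder buffer = new StringBuilder ();
        for (String paragraph : BLANK_LINES.split(content.trim()))
        {
            buffer.append("<p>").append(paragraph).append("</p>\n");
        }
        return buffer.toString();
    }
}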

That's it. Now you just need a great story.

Myself - My Self - Who?

All my life, I've been fascinated by the mystery we call "self". Who is that person within my body who sits now in front of this computer and formulates this text? And why does this "me" feel so helpless in my own life? Why do I often feel that my life is in control of me instead of the other way around?

Of course, I'm not the first person to ask these questions. Psychologists and, in recent years, neuroscientists have asked them too, on a much more professional level. Freud explained this paradox with the ego, the super-ego and the id. Today, we can watch ourselves think, and Benjamin Libet did: in very compact form, he found that the decision to do something has already been made by the time we (our "self") consciously make it. (If you strongly object to this because it feels obscene, please read up on it elsewhere. It's true and it makes sense, however disturbing it may sound when you hear it for the first time.)

So there is no free will? I can go around and kill anyone and say: "Oh, that wasn't me, that was my unconscious!"? Not at all! All I'm saying is that there is no free will right now. Let me give you some examples. Yesterday, I played with my cat. He was on the carpet and I was dangling the toy on the fringe of the carpet. He desperately wanted the toy and wriggled his muscles into the perfect position for the pounce but there was a threat: Beyond the carpet is a slippery floor on which he can't maintain his grip.

What happened was that he pounced and then immediately braked with all four paws to stop on the fringe of the carpet even if that reduced his chances to catch the toy. Obviously, he had noticed the danger and made a decision how to avoid it.

On a more complex scale, there is an experiment with monkeys and an upside-down U-shaped tube. In the middle of the upper horizontal part of that tube is an apple, and the monkey wants it. There is a slit along the tube, so the monkey can prod the apple with a stick and move it in either direction. Unfortunately for the monkey, one of the tube's openings is closed with a mesh.

When conducting the experiment, the scientists found three types of monkeys. One type would move the apple in the "right" direction; it would fall out onto the ground and they would eat it. The second type would move the apple in the "wrong" direction, where it would fall onto the mesh. After a lot of effort, they would bring it back up and out of the other end.

The really interesting type is the third. They would move the apple in the wrong direction at first and, just before it fell down the tube onto the mesh, they would stop. Next, they would start to move the apple in the other direction. Why?

The explanation is that these monkeys probably have created an internal representation of what will happen if they move the apple further, decided that this might not be so cool and changed their strategy. Just like the cat, they thought ahead. My theory is that my "self" is this "thinking ahead" thingy. Obviously, this is a very important part. Is it safe to cross the street, now, or will that car hit me? Maybe it will stop in time? Or is it too fast? Has the driver seen me? How about the car behind it?

If that part is so important, why is the decision made elsewhere?

Because it takes too much time. Making a decision takes time. If your life is at risk (and even today, it is at risk all the time), you don't want to waste time pondering all the possibilities. So, from a safety standpoint, it makes sense to cut some corners and start moving the muscles while Mr. Brain still wonders what might be going on.

Of course, Mr. Brain would be genuinely unhappy if it felt that someone else was making all the important decisions while it, obviously so much smarter and more important, was still working to assess the situation. Therefore, the decision-making part of ourselves cleverly pulls some strings to create the illusion: "Oh, you and you alone made that decision. Don't worry, everything is alright. Oh, look at that ... is that dangerous?"

From an evolutionary point of view, it makes less and less sense to allow a big and complicated brain to make decisions at short notice. It would reduce your chances of survival and slow you down to a crawl. Just imagine how you would walk if you had to move every muscle consciously; try it and you'll be disappointed. There are martial artists who can touch you before you notice that they even moved. That kind of speed is impossible with a conscious brain.

Things change considerably when we look at the long term. Cats and monkeys don't seem to plan far ahead at all. The cat doesn't think about the dangers outside when you leave a door open; all it sees is a new opening which it hasn't explored yet, and it will dash for it. Monkeys don't build shelters. They do use tools, but there is a distinct limit to how far they can look ahead. Not so with us humans. We can look ahead as far as we want (or at least we believe we can). We can plan for days, weeks, months, years, decades, centuries. We can plan and build big cities, and they don't fall apart with the first woodpecker.

For example, building the famous World Trade Center took seven years (1966-73), or even twelve years (1961-73) if you count from when the initial plans were made public. The dream of building it started even before that.

My favorite story is the roof of New College, Oxford. When the large oak beams of the roof were yielding to the teeth of time, the owners of the place were at a loss how to replace them. Can you imagine the price of oak beams spanning a great hall? They considered replacing them with steel, but that was also way beyond their budget. By chance, one of the college's foresters heard about this and mentioned that, 500 years ago, the builders of the place had planted oak trees just for this occasion. So in the end, they got the new roof for free. Of course, this is just an urban myth, but a nice one. The cynical version adds that they sold the forest for profit afterwards, without regard for what will happen 500 years from now.

So, while someone might not be responsible for killing a guy in that brawl last night, he is fully responsible for getting into that brawl in the first place. His self is fully capable of looking that far into the future and preparing to avoid such a situation, for example by not drowning his ability to plan ahead in alcohol, by looking for a nicer place to get drunk or by just walking away when that guy started to make trouble.

"The avalanche has already started. It is too late for the pebbles to vote." - Kosh, Babylon 5

That is why understanding how our own brain works is so important: In the heat of the moment, the conscious part (the "I") of ourselves has no chance to vote anymore, so it must make the decisions before that moment. This will influence the options our unconscious has when it must move in an instant.

Because it does listen to us but only when it can afford the time. That is why we can change our lives and why it always takes so long.

Sunday, August 19, 2007

"What's Wrong With Java" as OpenOffice Document

Since my presentation at the Jazoon is only available as a PDF (and it looks horrible, too), I've uploaded the source OpenOffice presentation to my own website. It includes all the additional comments which are missing in the PDF. You can find it here.

For all those who couldn't attend my talk: This document summarizes a few weaknesses of Java which are solved in Python and Groovy, and why I think that Java is now at its peak. From now on, it's going down. Not overnight, of course, and there is no need to rush into any kind of action. But ten years from now, Java will be where C is today: something you don't want to build your career on (that's Java the language, not Java the VM).

Unit Testing jsp:include

If you're stuck with some JSPs and need to test them with MockRunner, you'll eventually run into the problem of testing jsp:include. MockRunner doesn't come with built-in support for this, but the itch can be scratched with a few lines of code:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServlet;

import com.mockrunner.mock.web.MockRequestDispatcher;
import com.mockrunner.servlet.ServletTestModule;

/**
 * Allows tests to use jsp:include.
 */
public class NestedMockRequestDispatcher extends MockRequestDispatcher
{
    public ServletTestModule servletTestModule;
    public HttpServlet servlet;

    public NestedMockRequestDispatcher (
            ServletTestModule servletTestModule,
            Class servletClass)
    {
        this.servletTestModule = servletTestModule;
        servlet = servletTestModule.createServlet(servletClass);
    }

    @Override
    public void include (ServletRequest request, ServletResponse response)
            throws ServletException, IOException
    {
        servlet.service(request, response);
    }
}

In your test case, add this method:

    public void prepareInclude(Class servletClass, String path)
    {
        NestedMockRequestDispatcher rd = new NestedMockRequestDispatcher (createServletTestModule(), servletClass);

        getMockRequest().setRequestDispatcher(path, rd);
    }

The path is absolute but without the servlet context. So if the included JSP is named "foo.jsp" and the context is "/webapp", then the path is "/foo.jsp". If that doesn't work, print the result of getMockRequest().getRequestDispatcherMap() after the test and you'll see which paths are expected.

All that's left is to call this method in setUp() for all JSPs that you need to test. If you forget one, the jsp:include just won't do anything (i.e. you won't get an error). To make sure you don't miss any includes (especially ones which your lazy co-workers added after you wrote the test), I suggest checking the map after the test run for entries which aren't instances of NestedMockRequestDispatcher.
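
Here is a minimal sketch of both steps (FooServlet and the path are made-up examples; the map is the same one mentioned above):

@Override
protected void setUp () throws Exception
{
    super.setUp();
    // Register a nested dispatcher for every JSP the tested code includes
    prepareInclude(FooServlet.class, "/foo.jsp");
}

protected void assertNoUnexpectedIncludes ()
{
    // Call this after the test ran: anything in the map that isn't one of
    // our dispatchers is an include we forgot to prepare
    for (Object rd : getMockRequest().getRequestDispatcherMap().values())
    {
        assertTrue ("Unprepared jsp:include: " + rd,
            rd instanceof NestedMockRequestDispatcher);
    }
}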

Saturday, August 18, 2007

Blender Tutorials by Montage Studio

I've uploaded the four tutorials from Montage Studio (http://www.montagestudio.org/) to Vuze:

JWilliamson - Modeling the Female Face
JWilliamson - Modeling an Eye
JWilliamson - Modeling the Human Ear
JWilliamson - Modeling a Lowpoly Character

Unfortunately, the audio on the first one is a bit bad (you can hear the guy working on his keyboard as he speaks).

Monday, August 13, 2007

Demos Have Come a Long Way

I've been a fan of demos for a long time; that is, of real-time computer-animated demonstrations of skill. Or, as the guys themselves would probably put it: I can do more pixels per frame than anyone else! (And in 16 colors, too!)

Ah, the good old Amiga times. Of course, today demos are more like music videos or even movies, and no one has to worry anymore that Denise would eat too many cycles. Today, the main problem is probably raising the money for the production ;-) Just check this one out:

Andromeda Software Development - Lifeforce

"Lifeforce" by Andromeda Software Development, a well deserved rank #1 in the combined demo competition at Assembly 2007. Congrats!

Monday, July 30, 2007

Rating of my Talk

The rating of my talk at the Jazoon just came in: 2.74 on a scale from 1 to 5. That's even below average (3 would be average). Hm. Okay, I was sick and tried to put too much information into my 40 minutes. Anything else I can do better next time?

Wednesday, July 25, 2007

Quickly disabling tests

Ever needed to disable all (or most) tests in a JUnit test case?

How about this: Using the editor of your choice, search for "void test" and replace all of them with "void dtest" ("d" as in disabled). Now, you can simply enable the few tests you need to run by deleting the "d" again.

I'm also using "x" to take out tests that won't run for a while. Using global search in the whole project, it's also simple to find them again just in case you're wondering if there are any disabled tests left.

Monday, July 23, 2007

Testing BIRT

I'm a huge fan of TDD. Recently, I had to write tests for BIRT, specifically for a bug we stumbled upon in BIRT 2.1 that has been fixed in 2.2: page breaks in tables.

The first step was to set up BIRT so I could run it from my tests.

public IReportEngine getEngine () throws BirtException
{
    EngineConfig config = new EngineConfig();
    config.setLogConfig("/tmp/birt-log", Level.FINEST);

    // Path to the directory which contains "platform"
    String basepath = ".../src/main/webapp";
    config.setEngineHome(basepath);
    PlatformConfig pc = new PlatformConfig ();
    pc.setBIRTHome(basepath);
    PlatformFileContext context = new PlatformFileContext(pc);
    config.setPlatformContext(context);

    Platform.startup(config);

    IReportEngineFactory factory = (IReportEngineFactory) Platform
        .createFactoryObject(IReportEngineFactory
            .EXTENSION_REPORT_ENGINE_FACTORY);
    if (factory == null)
        throw new RuntimeException ("Couldn't create factory");

    return factory.createReportEngine(config);
}

My main problems here: finding all the parts necessary to install BIRT, copying them to the right places and figuring out how to set up EngineConfig (especially the platform part).

public void renderPDF (OutputStream out, File reportDir,
        String reportFile, Map reportParam) throws EngineException
{
    File f = new File (reportDir, reportFile);
    final IReportRunnable design = birtReportEngine
        .openReportDesign(f.getAbsolutePath());
    //create task to run and render report
    final IRunAndRenderTask task = birtReportEngine
        .createRunAndRenderTask(design);
    
    // Set parameters for report
    task.setParameterValues(reportParam);
    
    //set output options
    final HTMLRenderOption options = new HTMLRenderOption();
    options.setOutputFormat(HTMLRenderOption.OUTPUT_FORMAT_PDF);
    options.setOutputStream(out);
    task.setRenderOption(options);
        
    //run report
    task.run();
    task.close();
}

I'm using HTMLRenderOption here so that I can use the same code to generate HTML and PDF.
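
For HTML, only the output format changes; a sketch reusing the variables from renderPDF() above (and assuming the OUTPUT_FORMAT_HTML constant that matches OUTPUT_FORMAT_PDF):

    //set output options for HTML instead of PDF
    final HTMLRenderOption options = new HTMLRenderOption();
    options.setOutputFormat(HTMLRenderOption.OUTPUT_FORMAT_HTML);
    options.setOutputStream(out);
    task.setRenderOption(options);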

In my test case, I just write the output to a file:

public void testPageBreak () throws Exception
{
    Map params = new HashMap (20);
    ...
    
    File dir = new File ("tmp");
    if (!dir.exists()) dir.mkdirs();
    File f = new File (dir, "pagebreak.pdf");
    if (f.exists())
    {
        if (!f.delete())
            fail ("Can't delete "+f.getAbsolutePath()
                + "\nMaybe it's locked by AcrobatReader?");
    }
    
    FileOutputStream out = new FileOutputStream (f);
    ReportGenerator gen = new ReportGenerator();
    File basePath = new File ("../webapp/src/main/webapp/reports");
    gen.generateToStream(out, basePath, "sewingAtelier.rptdesign"
        , params);
    if (!f.exists())
        fail ("File wasn't written. Please check the BIRT logfile!");
}

Now, this is no test. It's only a test when it can verify that the output is correct. To do this, I use PDFBox:

    PDDocument doc = PDDocument.load(new File ("tmp", "pagebreak.pdf"));
    // Check number of pages
    assertEquals (6, doc.getPageCount());
    assertEquals ("Error on page 1",
            "...\n" + 
            "...\n" +
     ...
            "..."
            , getText (doc, 1));

The meat is in getText():

private String getText (PDDocument doc, int page) throws IOException
{
    PDFTextStripper textStripper = new PDFTextStripper ();
    textStripper.setStartPage(page);
    textStripper.setEndPage(page);
    String s = textStripper.getText(doc).trim();
    
    Pattern DATE_TIME_PATTERN = Pattern.compile(
        "^\\d\\d\\.\\d\\d\\.\\d\\d\\d\\d \\d\\d:\\d\\d Page (\\d+) of (\\d+)$",
        Pattern.MULTILINE);
    Matcher m = DATE_TIME_PATTERN.matcher(s);
    s = m.replaceAll("23.07.2007 14:02 Page $1 of $2");
    
    return fixCRLF (s);
}

I'm using several tricks here: I replace a date/time string with a constant, I stabilize the line ends with fixCRLF() (shown below) and I do this page by page so I can check the whole document.
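
fixCRLF() boils down to a single replaceAll():

private String fixCRLF (String s)
{
    // Normalize Windows line ends so the expected strings stay stable
    return s.replaceAll("\r\n", "\n");
}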

Of course, since getText() just returns the text of a page as a String, you can use all the other operations to check that everything is where or as it should be.

Note that I'm using MockEJB and JNDI to hand a datasource to BIRT. The DB itself is Derby running in embedded mode. This allows me to connect directly to a Derby 10.2 database even though BIRT comes with Derby 10.1 (and saves me the hassle of fixing the classpath which OSGi builds for BIRT).

@Override
protected void setUp () throws Exception
{
    super.setUp();
    MockContextFactory.setAsInitial();
    
    Context ctx = new InitialContext();
    MockContextFactory.setDelegateContext(ctx);
    
    EmbeddedDataSource ds = new EmbeddedDataSource ();
    ds.setDatabaseName("tmp/test_db/TestDB");
    ds.setUser("");
    ds.setPassword("");

    ctx.bind("java:comp/env/jdbc/DB", ds);
}

@Override
protected void tearDown () throws Exception
{
    super.tearDown();
    MockContextFactory.revertSetAsInitial();
}


What's Wrong With Java Part 2b

To give an idea why I needed 5 KLoC for such a simple model, here is a detailed analysis of Keyword.java:

LoC  Used by
 43  Getters and setters
 40  XML import/export
 27  Model
 27  equals()/hashCode()
 21  Hibernate mapping with annotations
 14  Imports
  2  Logging
174  Total

As you can see, boilerplate code like getters/setters and equals() needs 70 LoC or 40% (48% if you add the imports). Mapping the model to XML is more expensive than mapping it to a database. In the next installment, we'll see that this can be reduced considerably.

Note: This is not a series of articles about flaws in Hibernate or the Java VM; this is about the Java language (i.e. what you type into your IDE and then compile with javac).

Saturday, July 21, 2007

What's Wrong With Java Part 2

OR Mapping With Hibernate

After the model, let's look at the implementation. The first candidate is the most successful OR mapper combination in the Java world: Hibernate.

Hibernate brings all the features we need: It can lazy-load ordered and unordered data sets from the DB, it can map all kinds of weird relations and it lets us use Java for the model in a very comfortable way: we write plain Java (POJOs, actually) and Hibernate does some magic behind the scenes that connects the objects to the database. What could be more simple?

Well, an OO language which is more dynamic, for example. Let's start with a simple task: Create a standalone keyword and put that into the DB. This is simple enough:

Keyword kw = new Keyword();
kw.setType (Keyword.KEYWORD);
kw.setName ("test");

session.save (kw);

Saving Keyword in database

(Please ignore the session object for now.)

That was easy, wasn't it? If you look at the log, you'll see that Hibernate sent an INSERT statement to the DB. Cool. So ... how do we use this new object? The first, most natural idea, would be to use the object we just saved:

Knowledge k = new Knowledge ();
k.addKeyword (kw);

session.save (k);

Saving Knowledge with a keyword in the database

Unfortunately, this doesn't work. It does work in your test but in the final application, the Keyword is created in the first transaction and the Knowledge in the second one. So Hibernate will (rightfully) complain that you can't use that keyword anymore because someone else might have changed it.

Now what? You have to ask Hibernate for a copy of every object after you close the transaction in which you created it, before you can use it anywhere else:

Keyword kw = new Keyword();
kw.setType (Keyword.KEYWORD);
kw.setName ("test");

session.save (kw);
kw = dao.loadById (kw.getId ());

Knowledge k = new Knowledge ();
k.addKeyword (kw);

session.save (k);

How to save Knowledge with a keyword in the database with transactions

Why do we have to load an object right after saving it? Well ... because of Java. Java has very strict rules about what you can do with (or to) an object instance after it has been created. One of them is that you can't replace methods. So what, you'd think. In our case, things aren't that simple. In our model, the name of a Knowledge instance is a Keyword. When you look at the code, you'll see the standard setter. But when you run it, you'll see that someone loads the item from the KEYWORD table. What is going on?

public void setName (Keyword name) {
    this.name = name;
}

setName() method

Behind the scenes, Hibernate replaces this method by using a proxy object, so it can notice when you change the model (setting a new name). The simplest solution would be for session.save() to replace the method setName() with one that calls the original setter and notifies Hibernate about the modification. In Python, that's three lines of code. Unfortunately, it's impossible in Java.

So to get these proxy objects, you must show an object to Hibernate, let it make a copy (by calling save()) and then ask for the new copy, which is in fact a wrapper object that behaves just like your original object but also knows when to send commands to the database. Simple, eh?

Makes me wonder why session.save() doesn't simply return the new object, since it is safer to use from then on ... especially when you have a model which is modified over several transactions. In that case, you can easily end up with a mix of native and proxy objects, which will cause no end of headaches.

Anyway. This approach has a few drawbacks:

  • If someone else creates the object, calls your code and then continues to do something with the original object (because people usually don't expect methods to replace objects with copies when they call them), you're in deep trouble. Usually, you can't change that other code. You lose. Go away.
  • The proxy object is very similar to, but not the same as, the original object. The biggest difference is that it has a different class. This means that in equals(), you can't use this.getClass() == other.getClass(). Instead, you have to use instanceof (the copy is derived from the original class). This breaks the contract of equals(), which says that it must be symmetric.
  • If you have large, complex objects, copying them is expensive.
  • After a while, you will start to write factory methods that create the objects for you (see the sketch after this list). The code is always the same: create a simple object, save it, load it again and then return the copy. Apart from the copy&paste, this means that you must not call new for some of your objects. Again, this breaks habits, which leads to bugs.
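
A minimal sketch of such a factory method (using the same dao helper as in the code above):

public Keyword createKeyword (String name)
{
    Keyword kw = new Keyword();
    kw.setType (Keyword.KEYWORD);
    kw.setName (name);

    session.save (kw);
    // Never return the original instance; hand out the proxy
    return dao.loadById (kw.getId ());
}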

All in all, the whole approach is clumsy. Really, it's not Hibernate's fault, but the code is still ugly and hard to maintain (because it breaks the implicit rules we have become so used to). In Python, you just create the object and use it. The dynamic nature of Python allows the OR mapper to replace or wrap all the methods as it needs to, and you never notice it. The code is clean, easy to understand and compact.

Another problem is the XML config files. Besides all the issues with Java XML parsers, it is always problematic to store the same information in two places: if you ever change your Java model, you had better not forget to update the XML or you will get strange errors. You can't refactor the model classes anymore because there is code outside the scope of your refactoring tool. And let's not forget code completion, which works pretty well for Java but not for XML files. If you're lucky, someone has written code completion for your type of XML config file. Still, there will be problems: when a new version comes out, your code completion will lag behind.

It's like regexp: Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. -- Jamie Zawinski

Fortunately, Sun solved this problem with JPA (or at least eased the pain). JPA allows you to use annotations to store the mapping configuration in the class file itself. Apart from a few small problems (like setting everything up), this works pretty well. Code completion works perfectly, because any IDE which has code completion will be able to use the latest and greatest version of your helper JARs without any customization. Just drop the new JAR into your classpath and you're ready to go. Swell.
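
To give an idea, here is a minimal sketch of what the Keyword mapping might look like with JPA annotations (the field names are guesses based on the examples above):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Keyword
{
    @Id @GeneratedValue
    private Long id;

    private int type;
    private String name;

    // getters, setters, equals() and hashCode() omitted
}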

But there are more problems:

  • You must create a session object "somewhere" and hand it around. If you're writing a webapp, this had better be thread-safe. Not to mention that you must be able to override it for tests.
  • The session object must track whether you have already started a transaction and nest transactions properly, or you will have to duplicate code because you can't call existing methods if they use transactions.
  • Spring and AOP help a lot, but they add another layer of complexity: you'll have to learn another API, another set of rules for how to organize your code, etc.
  • JAR file size. My code is 246KB. The JARs it depends on take ... 6'096KB, almost 25 times the size of my code. And I'm not even using Spring.
  • Even with JPA, Hibernate is not simple to use because Java itself is not simple to use.

In the end, the model was 5'400 LoC. I added a small UI to it using SWT/JFace, which added another 2'400 LoC.

If you look at the model in the previous installment, the question is: why do I need 5'000 LoC to write a program which implements an OR mapper for a model which has only three classes and 26 lines of code?

Granted, test cases and helper code take their toll. I could accept that this code needs four or five times the size of the model itself. Still, we have a gap.

The answer is that there are either no defaults or bad ones. For our simple case, Hibernate could guess everything. Java could generate all the setters and getters, equals() and hashCode(). It's no black magic to figure out that Relation has a reference to Knowledge, so there needs to be a database table which stores this information. Sadly, defaults in Java are always "safe" rather than "clever". This is the main difference to newer languages: they try to guess most of the stuff and then let you fix the few exceptions that you always have. With Java, all the exceptions are handled, but you have to do the everyday stuff yourself.

The whole experience was frustrating, especially since I'm a seasoned Java developer. It took me almost two weeks to write the code for this small model, mostly because of a bug in Hibernate 3.1 and because I couldn't get my mind around the existing documentation. Also, parent-child relations were poorly documented in the first Hibernate book; the second book explains them much better.

Conclusion: Use it if you must. Today, there are better ways.

Next stop: TurboGears, a Python web framework using SQL Objects.