Jazoon Cut is a nice idea: you have a project, they give you 20 minutes to present it ("cut" as in "cutting edge"). In this Cut, we had NetKernel, iGesture, Interactive Paper, and Privacy Supporting Identity Systems. A rather interesting mix.
In the NetKernel talk, Brian Sletten again tried to sell his "RDF is the best and you should use it everywhere" line. Basically, NetKernel is a small core where you can register translation services (called ... I don't know what he called them, and I can't find the link to the actual presentation, just the abstract :/). When a service needs some data (or a "resource"), it calls the kernel, and the kernel figures out who might be able to serve that request, which may pass through several hops before the answer comes back. Nothing fancy here; Unix pipes have been doing that for ages, except that pipes don't build themselves.
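To make the idea concrete, here is a toy sketch in Java of that dispatch loop. All the names (Kernel, register, request) are made up for illustration and have nothing to do with NetKernel's actual API: services register under a URI prefix, and a request is routed to whichever service claims it, which may in turn issue further requests.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // Illustrative sketch only, not NetKernel's API: services register
    // under a URI prefix; the kernel routes each request to whichever
    // service claims it, and that service may call request() again.
    public class Kernel {
        private final Map<String, Function<String, String>> services = new HashMap<>();

        public void register(String prefix, Function<String, String> service) {
            services.put(prefix, service);
        }

        public String request(String uri) {
            for (Map.Entry<String, Function<String, String>> e : services.entrySet()) {
                if (uri.startsWith(e.getKey())) {
                    return e.getValue().apply(uri);
                }
            }
            throw new IllegalArgumentException("no service for " + uri);
        }
    }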
His demo showed how you can calculate Fibonacci numbers using a "bsh" service (BeanShell, a Java scripting language) to add the two intermediate numbers of the result. You would expect this to be slow as hell, with all that creating messages, sending them around, starting a script interpreter, and running each add. And as you may remember, Fibonacci generators are usually implemented recursively, which should kill NetKernel.
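For reference, here is the textbook recursive shape in plain Java (my own sketch, not the demo code): every call spawns two more, so the work, and in the NetKernel version the number of messages, grows exponentially with the input.

    // Plain recursive Fibonacci: fib(n) calls fib(n-1) and fib(n-2),
    // so the call tree grows exponentially with n.
    public class NaiveFib {
        static long fib(int n) {
            return n < 2 ? n : fib(n - 1) + fib(n - 2);
        }

        public static void main(String[] args) {
            long t0 = System.nanoTime();
            System.out.println(fib(40)); // already takes noticeable time
            System.out.println((System.nanoTime() - t0) / 1_000_000 + " ms");
        }
    }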
Only it doesn't. If you look at the runtime graphs, the plain Java version of the Fibonacci generator needs exponential time as the input grows: around fibonacci(30) it takes seconds to run, while the NetKernel version always needs the same amount of time. The nice thing about the design is that it can cache the results, so the call to fibonacci(30) just adds the cached results of fibonacci(29) and fibonacci(28) and is done. One level of recursion required.
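In plain Java, the equivalent of that result cache is simple memoization (again my sketch, not NetKernel code): once fib(n) has been computed, asking for it again is just a map lookup, so the exponential call tree collapses to linear work.

    import java.util.HashMap;
    import java.util.Map;

    // Memoized Fibonacci: each fib(k) is computed exactly once, so
    // fib(30) only has to add the cached fib(29) and fib(28).
    public class CachedFib {
        private static final Map<Integer, Long> cache = new HashMap<>();

        static long fib(int n) {
            if (n < 2) return n;
            Long hit = cache.get(n);   // a previous answer is just a lookup
            if (hit != null) return hit;
            long result = fib(n - 1) + fib(n - 2);
            cache.put(n, result);
            return result;
        }

        public static void main(String[] args) {
            System.out.println(fib(90)); // instant, where the naive version is hopeless
        }
    }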
While this is mighty impressive and surprising, the question remains how that will scale in reality. After all, caching a 500MB result from some service might not be feasible or even possible.
2 comments:
Sorry, this talk had nothing to do with RDF; I don't believe I even mentioned that once. Nor am I positioning it for everything. I found out a week before the talk that I had 20 minutes to give it. Hilarity ensued. NetKernel scales like a beast. You should give it a try sometime.
My problem starts at a lower level: NetKernel may scale, but if the data layer below it (a database or whatever) can't deliver the data in time because it's based on technology from the 1970s, the result will be 500 NetKernel threads on 10 machines waiting for data that never arrives.
I feel that we first need a way to store arbitrary data such that systems like NetKernel can retrieve it in a timely fashion, without having to train a DB admin for four years until he can optimize the data store.