This year has been a very enlightening one. I have peeled the onion and learnt that many things I earlier considered magic were not magic at all, yet elegant and far from trivial to implement myself. Hopefully I'm less prone to getting stuck trying to reimplement these features in the future.
I have finally learnt to see through all the abstract classes of Java, finally understood the benefits of interfaces, and figured out how Clojure generates bytecode. Maybe a bit late, but I always learn things backwards anyway.
I've been reading some really good books and articles. Among them are
Brian Goetz - Java Concurrency in Practice, in which I learnt much more about volatile and the various atomic constructs in Java.
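A minimal sketch of my own (not from the book) of why the distinction matters: volatile gives visibility across threads, but an increment is a read-modify-write that can still lose updates, which is what the atomic classes fix.

```java
import java.util.concurrent.atomic.AtomicLong;

// volatile long gives visibility of writes, but ++ is a read-modify-write
// and two threads can interleave and lose updates. AtomicLong performs the
// whole increment atomically via compare-and-swap.
public class Counters {
    static volatile long unsafeCount = 0;            // visible, but ++ races
    static final AtomicLong safeCount = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;                       // lost updates possible
                safeCount.incrementAndGet();         // atomic, never lost
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(safeCount.get());         // always 200000
        System.out.println(unsafeCount);             // often less than 200000
    }
}
```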
Fred Hébert - Learn You Some Erlang for Great Good! - apart from the marvelous illustrations, it's fun to see what Erlang's strengths really are, among them a very efficient implementation of green threads and selective message receiving, which removes much complexity from parsing-like functionality.
The JVM serialization benchmarking results - told me about the existence of Avro and Kryo, among others. I later found out about Kryonet, which I hope to try out further. I also read up on Fressian.
Pedestal.io entered my life. It will be very hard to start developing web applications in other frameworks after trying out this beast.
I re-read The Joy of Clojure (Fogus, Chouser) and realized I had missed most of it on the first read. The talk by Hugo Duncan on Debugging in Clojure helped me grasp the mind-blowing feature of starting a REPL in the middle of an error.
I entered Garbage Collection's Eden area when I visited a Lisp meetup in Gothenburg. People discussed implementing their own garbage collector and I was thinking "Impossible!". Afterwards I read up on the subject. It's not impossible at all, and it made me a somewhat better programmer to know it's not magic. I even dare to think I can play along with the garbage collector a bit better now.
Professionally I've been able to juggle matrices several gigabytes large in memory, which made my $500 laptop as performant as half a rack of ProLiant servers. Fun, and more than a bit scary.
I finally read through all of the Clojure source code. Much to say, yet little. The array-map (as well as the large hash-map) is likely much more conservative about allocating objects than the Java Collections framework's own implementations.
I learnt that TCP is quite a shitty protocol for throughput because of its strict in-order ACK mechanism. This explains why Storm and a lot of other applications chose to use UDP and implement their own ACKing mechanisms. I also learned that the SCTP protocol is quite cool, and that Java already supports it.
I thought long and hard about compilation and compilers. I read parts of Urban Boquist's PhD thesis on code optimization for lazy functional languages, and realized that inlining and register optimization share some similarities, although explicit register optimization will likely produce better results faster.
I wanted to find out whether the JIT supports SSE and other later x86 instruction sets, and it turns out it does, although it's hard to know exactly when. There's the JVM option -XX:+PrintAssembly for seeing exactly what the JIT spits out, which I hope to investigate more.
CUDA and OpenCL were getting visits from Clojure-generated guests like Sleipnir. I'm still looking for a suitable problem to squeeze into GPUs.
I also read up on PostgreSQL internals and indexing; the bitmap indexes are a cool thing. The clojure.lang.PersistentHashMap is implemented in a very similar way. Could this be used to optimize clojure.set/union and other set operations somehow?
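To make the similarity concrete, here is a toy sketch of my own (not Clojure's actual code) of the bitmap-indexed node trick that PersistentHashMap uses: a 32-bit bitmap records which of 32 child slots are occupied, and a popcount of the bits below a slot gives that child's index into a dense array.

```java
// Toy bitmap-indexed node in the style of a hash array mapped trie:
// bit i of `bitmap` set means slot i is occupied, and
// Integer.bitCount(bitmap & (bit - 1)) maps a slot to its position
// in the compact `children` array.
public class BitmapNode {
    private int bitmap;                              // occupancy bitmap
    private Object[] children = new Object[0];       // only occupied slots

    private static int mask(int hash, int shift) {
        return (hash >>> shift) & 0x1f;              // 5 hash bits per level
    }

    private int index(int bit) {
        return Integer.bitCount(bitmap & (bit - 1)); // popcount below this bit
    }

    public Object get(int hash, int shift) {
        int bit = 1 << mask(hash, shift);
        return (bitmap & bit) == 0 ? null : children[index(bit)];
    }

    public void put(int hash, int shift, Object val) {
        int bit = 1 << mask(hash, shift);
        int idx = index(bit);
        if ((bitmap & bit) == 0) {                   // insert into a new slot
            Object[] next = new Object[children.length + 1];
            System.arraycopy(children, 0, next, 0, idx);
            next[idx] = val;
            System.arraycopy(children, idx, next, idx + 1, children.length - idx);
            children = next;
            bitmap |= bit;
        } else {
            children[idx] = val;                     // overwrite existing slot
        }
    }
}
```

The appeal is the same as for a bitmap index: set membership and position both fall out of cheap bitwise operations on a single word.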
I finally discovered the ThreadLocal class in Java, which is potentially great for thread-bound processing, like CRC calculations or cryptographic state.
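A minimal sketch of the CRC case, assuming many threads computing checksums concurrently: each thread gets its own CRC32 instance, so there is no locking and no per-call allocation.

```java
import java.util.zip.CRC32;

// ThreadLocal hands each thread its own CRC32 instance, created lazily
// on first access. The instance is reused across calls on that thread.
public class Checksums {
    private static final ThreadLocal<CRC32> CRC = new ThreadLocal<CRC32>() {
        @Override protected CRC32 initialValue() { return new CRC32(); }
    };

    public static long checksum(byte[] data) {
        CRC32 crc = CRC.get();           // this thread's private instance
        crc.reset();                     // clear state from any prior call
        crc.update(data, 0, data.length);
        return crc.getValue();
    }
}
```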
Thanks to David Nolen for continuously tweeting about relational programming things I didn't even know existed.
Zach Tellman published Narrator, which is yet another stream processing library, full of clever close-to-the-JVM goodies.
Hopefully next year will be the year I visit some Clojure conference. I'm thinking a lot about state machines and network programming in my current job, but also about visualizations, so I really hope I will be able to publish something slightly valuable on these topics.
Saturday, 28 December 2013
The Java keyword volatile forces writes to a variable out to main memory, making them visible to any other thread that subsequently reads it. It's important to notice that such a write can cost hundreds of clock cycles on a modern CPU, since the write has to traverse three layers of cache to get there. The use of volatile should therefore be investigated carefully. The blog Mechanical Sympathy wrote more about the use of volatile variables two and a half years ago.
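The canonical place to pay that cost is a rarely-written flag. A minimal sketch of my own: one thread publishes a shutdown flag, another spins on it. Without volatile the JIT may hoist the read out of the loop and the worker might never see the write; with it, visibility is guaranteed.

```java
// A volatile shutdown flag: the single write in stop() is expensive,
// but it is guaranteed to become visible to the spinning worker thread.
public class Shutdown {
    private volatile boolean stopped = false;

    public void run() {
        while (!stopped) {       // re-reads the volatile field each iteration
            Thread.yield();      // busy-wait until the write becomes visible
        }
    }

    public void stop() { stopped = true; }   // the one volatile write

    public static void main(String[] args) throws InterruptedException {
        Shutdown s = new Shutdown();
        Thread t = new Thread(s::run);
        t.start();
        Thread.sleep(50);
        s.stop();                // visible to t, so the loop exits
        t.join();
        System.out.println("worker stopped");
    }
}
```

Note the asymmetry: reads of a volatile field are cheap, so the pattern fits flags that are read constantly but written once.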