Saturday, February 28, 2009

I'm no audiophile

After having used my pair of Polk Audio RTi8 speakers paired with an Onkyo 606 receiver for close to 3 months, I decided to re-calibrate the Audyssey thingie. When I did it before, I did not bother to put the calibration microphone on a tripod, although I immediately recognized the purpose of the hole at the bottom of the mic assembly. Instead I (imagine that!!!) held it in my hand. The result of that calibration was terrible: with the low frequencies cranked up so much, the system sounded like an underwater effect. So I thought the "direct" sound mode was my only option.

Armed with a wee bit of free time, I recalibrated today, but with a tripod this time. I'm not entirely sure I positioned the mic appropriately (I pointed the tip with the opening toward the space between the speakers, although for a multi-channel setup, I can imagine, it should be pointed straight up). But the result was nothing short of stunning.

The Coldplay CD (that I happened to have in my DVD player) came to life, with well-balanced mid and low sections; at the same time, the lows are still fairly robust, letting you feel the tension of the big tom. Before, the highs were so overpowering... I realize now that many negative reviews of the Audyssey EQ system may be caused by a less than diligent calibration process.

Now let's see how the Led Zeppelin's Mothership takes off :)

Sunday, February 22, 2009

"man inetd.conf " made me cry

...almost

There's something wrong with me. I keep asking questions Google knows no answers to. I realize that a desire to have the rsync daemon enabled in Solaris may be quite uncommon. Well, you demigods of organized computerland, I happen to have a use case you did not plan to address. I need to run OpenSolaris in a VirtualBox VM on a Windows XP host, and I need to share files fr,,,

F$^%k it. Why do I need to explain?!

Update: the error message is obscure beyond obscene. It actually just meant that the rsync service is not known to the system. Upon adding the appropriate entry to the services file, inetconv ran successfully. However, while rsync'ing works, accessing the guest OS using an rsync:// URL does not :(
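
For the record, the two entries look roughly like this (a sketch; the exact paths and inetd.conf field values may differ by Solaris release, and on Solaris 10/OpenSolaris the inetd.conf line only exists so inetconv can turn it into an SMF service):

```
# /etc/services -- teach the system the service name and port
rsync           873/tcp

# /etc/inetd.conf -- legacy entry, converted to SMF by inetconv
rsync   stream  tcp     nowait  root    /usr/bin/rsync  rsync --daemon
```

After running inetconv, the service shows up under SMF and can be enabled/disabled with svcadm like any other.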

Saturday, February 07, 2009

Stateless internet is dead

Obsessing over an absent-minded person would be strange. Admiring a person with either short- or long-term memory loss would be difficult. Emerging artificial intelligence, however, is encouraged to be "forgetful" for the purpose of being resilient in the face of the feeble networked silicon fabric it is destined to dwell in.

Well-balanced systems combine caches of all levels. It's unfathomable to me that analogues of L1/L2 caches are so much out of favor with modern distributed system architects. The blessings of sticky sessions, consistent hashing, and stateful conversations are not appreciated anymore.

Fine by me if the masters of these systems have deep enough pockets. Except they might not. Clusters of memcached servers counting in the 1000s... Sigh... Why are software engineers so reluctant to learn the lessons hardware engineers learned decades ago?
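
To be concrete about one of the things being thrown away: a consistent-hash ring is a few dozen lines. This is a minimal sketch (server names and virtual-node count are made up, MD5 is just a convenient hash here), not production code:

```java
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring: a key maps to the first server point
// at or after its hash (wrapping around), so adding or removing one
// server only remaps a small fraction of keys.
public class ConsistentHash {
    private final SortedMap<Long, String> ring = new TreeMap<Long, String>();
    private final int virtualNodes;

    public ConsistentHash(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    // First 8 bytes of MD5 as a long; any decent hash would do.
    private long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes("UTF-8"));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (d[i] & 0xffL);
            }
            return h;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Each server occupies several points on the ring to even out load.
    public void addServer(String server) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put(hash(server + "#" + i), server);
        }
    }

    public void removeServer(String server) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.remove(hash(server + "#" + i));
        }
    }

    public String serverFor(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        Long point = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(point);
    }

    public static void main(String[] args) {
        ConsistentHash ch = new ConsistentHash(64);
        ch.addServer("cache-a");
        ch.addServer("cache-b");
        ch.addServer("cache-c");
        System.out.println("session:12345 -> " + ch.serverFor("session:12345"));
    }
}
```

The point: when "cache-b" dies, only the keys that lived on "cache-b" move; everything else stays where it was, which is exactly the property that makes stateful caching layers survivable.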

Wednesday, June 11, 2008

Off-heap cache: ehcache vs Derby DB

I had my test like so:
- 1M properties where
- key varied between 30 and 70 characters, with average of 50
- value varied from 100 to 300 characters with average of 200
- size of file on disk 240MB

I chose to go through JMeter and its HTTP samplers, and I wrote a simple JSP, so the overhead of both JMeter and Tomcat is sizable (they run on the same machine) and completely obscures meaningful absolute values. 60 threads with constant throughput were used.

ehcache:
- configured to keep all values eternally, allowed to overflow to disk

Test 1: cache 5000 properties in memory
Test 2: cache 1000 properties -//-
Test 3: cache 10000 properties -//-
Test 4: cache 15000 properties -//-

Throughput achieved: 180 req/sec
CPU used: Tomcat 5%, JMeter 42%.
In all tests, heap usage was around 225MB (with the max steadily rising to 300-380MB), regardless of the number of entries to cache... BAD!!!
Sampler time average: 1ms, max: over 3000ms (yep, GC is a bitch). But note the caveat above: these values are not to be trusted as a measure of performance.
Size of overflow file on disk: 512MB
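
For reference, the configuration was along these lines (a sketch from memory using ehcache 1.x attribute names; the cache name is made up, and maxElementsInMemory is what varied between the four test runs):

```xml
<ehcache>
  <diskStore path="java.io.tmpdir"/>
  <cache name="properties"
         maxElementsInMemory="5000"
         eternal="true"
         overflowToDisk="true"/>
</ehcache>
```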

Derby
- simple table, with PK constraint and index on key column
- used in embedded mode
- DB size 410MB on disk (308MB table, 102MB index)
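
The page cache variants below were set via derby.properties in the database system directory (a sketch; derby.storage.pageCacheSize is the relevant knob, and it defaults to 1000 pages):

```properties
# Number of data pages Derby keeps cached in the JVM heap.
# Varied per test run below: 10000, 5000, 1000, and default.
derby.storage.pageCacheSize=10000
```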

Test 1: cache 10000 pages
Throughput achieved: 180 req/sec
CPU used: Tomcat 17%, JMeter 41%.
Heap usage: 72MB after full GC, going up to 128MB if unhindered. Not bad!!!
Sampler time average: 1 ms, max: 500 ms. But note the caveat above: these values are not to be trusted.

Test 2: cache 5000 pages
Throughput achieved: 180 req/sec
CPU used: Tomcat 17%, JMeter 41%.
Heap usage: 38MB after full GC, going up to 74MB if unhindered. Awesome!!!
Sampler time average: 1 ms, max: 180 ms.

Test 3: cache 1000 pages
Throughput achieved: 180 req/sec
CPU used: Tomcat 17%, JMeter 41%.
Heap usage: 14MB after full GC, going up to 27MB if unhindered. W00t!!!
Sampler time average: 1 ms, max: 170 ms.

Test 4: default number of pages to cache
Throughput achieved: 180 req/sec
Heap usage: 14MB after full GC, going up to 27MB if unhindered.
Sampler time average: 1 ms, max: 170 ms.
Test 4 proves that the default is 1000 pages to cache... I had my doubts about it for a while there.

So, there. Derby is extremely good at keeping a strict SLA (low pauses), but it does burn 3x the CPU compared to ehcache. In real-world scenarios, there are ways ehcache may further benefit: if small keys are used (int/long values), or when complex objects are stored (which would otherwise have required joining several tables). Caching Java properties may not be the best scenario for ehcache. However, others were successful in combining the two. I might try doing the same at some point, but maybe not, since I got what I wanted from this test: the "feel" of these approaches.

Saturday, October 20, 2007

Making quality prints (panorama for my cubicle)

I had a bunch of photographs from my 2006 trip that I wanted a panorama made of. I finally found some time. I used one of the less sophisticated tools available, the one that came with my Canon camera. Then I had trouble finding software that would let me print this big photograph over a sequence of pages that I could glue together.

The solution was pretty counterintuitive. I used Nero Digital to adjust the brightness of the image (everything looks darker on paper than it does on screen, so I made it just a little bit lighter). I then discovered that my printer's driver (I still use the cheap Epson 777 with non-Epson ink I bought off the Internet) is capable of exactly what I wanted: I set the multi-page option to print a poster over 9 pages. What I did then was just set zoom to fit the page in Nero, and when the preview came up I had to un-select the blank pages and be done. So, from my other experiences this day:
  • Clean the printer! Over the years of irregular use, too much tar-like gunk accumulated on the leads and rollers. I spoiled two pages of HP Premium Glossy paper on this...
  • Print a few drafts on plain paper to adjust brightness. No need to waste ink printing entire page, a quarter or a third of a page will give good enough impression of what the final thing will look like.
  • Print the resulting panorama one page at a time. This helps to avoid possible malfunctions where two pages get fed into the printer, any kind of misalignment, etc.
  • Don't bother to use genuine, original inks. Cheap compatible cartridges work just fine.
I would appreciate advice on how to lift the small specks of that tarry gunk I mentioned before... I could live with the small streaks of black, but the problem is they just won't dry, and I'm afraid they will get smudged as soon as I roll the panorama up to transport it to its place on my cubicle's wall.

Wednesday, August 29, 2007

Determining optimal JDK memory config, faster

If you are like me, stuck with Java 5 for one reason or another, you may be having difficulty determining how much heap to allocate to a process, and whether the ratio of young generation space should differ from the default. A shortcut can be taken if your primary goal is maximum throughput. We know about HotSpot ergonomics, but with Java 5 it takes too long for it to work its magic. HotSpot in Java 6, on the other hand, usually has the optimal parameters figured out in a matter of hours. This makes it possible to increase the turnaround of load tests and zero in on optimal settings very fast.
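
One way to read back what the JVM settled on after a load run, using the standard java.lang.management API (a sketch; the pool names differ by collector, e.g. "PS Eden Space" vs "Eden Space"):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Dump per-pool heap sizes. Run this at the end of a load test on
// Java 6; the committed sizes reflect what ergonomics converged on,
// and those numbers can then be pinned on Java 5 with flags like
// -Xms/-Xmx and -XX:NewSize/-XX:MaxNewSize.
public class GenSizes {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.println(pool.getName()
                    + ": used=" + (u.getUsed() >> 20) + "MB"
                    + ", committed=" + (u.getCommitted() >> 20) + "MB");
        }
    }
}
```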

Sunday, June 03, 2007

An OS in my pocket

I was entertaining this idea for a while: to carry an OS of choice in my pocket, on a bootable flash drive. Yesterday I decided to treat myself to the geeky pleasure of unwrapping new hardware (an 8GB PNY Attache USB 2.0), downloading an ISO (Ubuntu 6.4 Live CD), and burning it onto a CD. I found the guide at http://www.pendrivelinux.com/ easy to follow, and it "just worked" in the end. OpenOffice comes installed, so it was trivial to create a new document and save it on the same USB flash drive. It worked without a hitch on my Sun Java Workstation W1100Z. For kicks, I went on to update/upgrade the OS and installed a few applications (Xine, MPlayer).

A few things did not work right away, though. The first being Java. I went to java.com and downloaded the binary executable. It would not install for lack of some libraries... sigh. I did not spend too much time on that, as I know from past experience that resolving missing dependencies is not what I call "fun".

The second thing that did not work was installing the NVidia drivers. I decided to download the binary from NVidia's site, as I would always do for Solaris. It complained about graphical mode first; OK, I do "init 1". Then it warned about being in init mode 1 but allowed me to proceed, and then a brick wall: as it was about to compile the kernel module, it needed the LIBC headers... forget it! And yet, while browsing the list of available updates, I found it right there: the NVidia glx driver. Nice.

Lastly, I tried using this drive for Vista's ReadyBoost, and that did not work either (Vista says this thing is too slow)... Hmm... this drive goes back on the shelf at Fry's then, and next time I'm in the market for bootable OS-in-my-pocket media, I'm paying attention to speed. Lexar's lightspeed, Kingston's DataTraveler, and Corsair's Voyager GT are in the running, but I'm planning to do more research next time.