December 14th, Montréal

Louis-Philippe Huberdeau Wednesday 15 December, 2010

It has been a while since the last report. Work has been going slow. After the major milestones, I never really feel like starting a new one right away. First there was the feature-completeness of the engine, then the coding of most of the content sources and global sources I had on my list. The next big step was to test it all with real data. That's scary.

I was right to be scared. The first tests were inconclusive. Downright failures. Testing on a database dump from doc.tiki.org, the first thing the indexer did was crash. That does not start the day too well. To put things in perspective, that database is old. It has been upgraded over many years, starting long before I got involved in the project. In most cases, if something has been done in a twisted way, it was done there. When I rewrote the parser plugin, I tested it with the home page. That failed the first time too.

It turns out it was a minor issue. The redirect plugin would simply terminate the run, so the fix was making sure that one would not execute during indexing. There were a few other issues related to various plugins. I had to fix countless notices throughout the code. Turns out I had plenty of time to do that, because the indexing was painfully slow. Not that much slower than I expected, but still slow enough to be annoying to test.
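Not the actual Tiki code, just a minimal sketch of the idea, with hypothetical names: when the parser runs for the indexer rather than for a visitor, plugins with side effects like redirect get filtered out before they have a chance to execute.

```php
<?php
// Hypothetical sketch: decide whether a plugin may run during indexing.
// The function name and the plugin list are illustrative, not Tiki's real API.
function should_execute_plugin($name, $forIndexing)
{
    // The redirect plugin sends a Location header and stops execution,
    // which would kill the indexing run, so it must never run at index time.
    $skipWhileIndexing = array('redirect');

    if ($forIndexing && in_array(strtolower($name), $skipWhileIndexing)) {
        return false;
    }

    return true;
}
```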

The execution was also way too long to profile: the disk would fill up, and the resulting file was too large to open anyway. I was hoping to spot a quick win that would make the rest of the testing faster. However, the little I saw on partial runs indicated that most of the time was actually spent indexing in Lucene, not collecting data in the unified search part of the code. At the time, I didn't know whether to take that as a relief or as a sign of a terrible technological decision.

Some searching around led to the conclusion that the default values used to build the index were very conservative. I was happy the indexing kept the memory usage low, but more speed is a nice thing when you want to index data. There are three knobs you can play with in Lucene to adjust the trade-off between memory usage and speed. Sadly, none of them actually expresses a memory limit; it's all in terms of document counts and merge cycles. That makes life hard. I bumped up some of the values and got better results, but it's not optimal yet. I hope I won't have to expose those settings in our admin panels. I barely understand them myself at this time.
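Since this is all PHP, the Lucene in question is Zend_Search_Lucene, where those knobs are MaxBufferedDocs, MergeFactor and MaxMergeDocs. Tuning them looks roughly like the sketch below; the index path and the values are illustrative, not the ones that ended up committed.

```php
<?php
require_once 'Zend/Search/Lucene.php';

$index = Zend_Search_Lucene::create('temp/unified-index');

// None of these is an actual memory limit; they only control how many
// documents are buffered in memory and how often segments get merged.

// Documents buffered in memory before a new segment is flushed to disk.
// The default is a conservative 10; raising it trades memory for speed.
$index->setMaxBufferedDocs(100);

// Number of segments merged together at once. Higher values speed up
// indexing but leave more segments around until the index is optimized.
$index->setMergeFactor(50);

// Upper bound on the number of documents in a merged segment.
$index->setMaxMergeDocs(PHP_INT_MAX);
```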

Good news is, I was able to cut indexing time in half. A good first day of work.

I then tried to actually search the index. I shouldn't have done that before the weekend. It just timed out after 60 seconds. Profiling indicated a huge mess. I did not know where to start. The call counts in there made no sense at all. Terrible technological decision? The total index size was just over 2 MB at the time, for approximately 2000 documents. I was really wondering why they even bothered to mention that the maximum index size on a 32-bit system is 2 GB.
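For context, the failing call is nothing fancy; this is roughly what searching the index looks like, with a made-up query and a timer around the call that blew past the 60 second limit.

```php
<?php
require_once 'Zend/Search/Lucene.php';

$index = Zend_Search_Lucene::open('temp/unified-index');

// Illustrative query; the real searches go through the unified search layer.
$query = Zend_Search_Lucene_Search_QueryParser::parse('wiki AND plugin');

$start = microtime(true);
$hits = $index->find($query);   // this is where the 60 seconds went
$elapsed = microtime(true) - $start;

printf("%d hits in %.2f seconds\n", count($hits), $elapsed);
```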

To be continued.


Permalink: https://trunkdev.tiki.org/blogpost17-December-14th-Montreal