Hylafax Mailing List Archives
Re: Two off the wall questions
This is mainly based on general principles, as I don't have the HylaFAX
sources immediately to hand...
>
>
> On a normal day, we send out around 2000 faxes. On some days, we send out
> as many as 8000 faxes.
When you are bulk faxing, are you sending customised faxes? If not, there
is likely to be a benefit in using multiple addresses on one sendfax command,
although I don't have the system on this machine to check whether this
results in fewer send-queue files.
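If the recipients all receive the same document, batching the destinations
into a single sendfax invocation means one job submission instead of many.
A hedged sketch (the numbers and filename are placeholders; check sendfax(1)
on your installation for the exact options):

```shell
# One submission covering several destinations (numbers are made up).
# Each -d adds a destination; -n suppresses the cover page.
sendfax -n -d 5551234 -d 5555678 -d 5559012 pricelist.ps
```

Whether this actually reduces the number of files in the send queue depends
on how your HylaFAX version splits a multi-destination job internally.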
>
> When we get past that 2000 mark, HylaFAX really starts to bog down. It
> still sends OK, but submitting new faxes takes longer and longer and longer.
> We have thrown money at hardware to make it work faster, but it still causes
> us grief when the queue gets full. We think that the root of the problem
> is that HylaFAX does directory scans each time it needs information, instead
There are two sorts of directory scan: a scan for an existing file and
one for an unknown file name. The latter tends to take a time which
depends on the maximum size the directory has ever reached, rather than
its current size. Without B-tree type directories, there is not a lot one
can do about that, except prune the directory after an overload. Building
a list of all files falls into the second category.
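The distinction is easy to see on any Unix box: enumerating a directory
touches every entry, while a lookup of one known name can be answered from
the cache. A rough sketch (the directory path and file count are arbitrary):

```shell
#!/bin/sh
# Build a throwaway queue-like directory (path and count are arbitrary).
dir=/tmp/scan-demo.$$
mkdir -p "$dir"
i=1
while [ $i -le 200 ]; do
    : > "$dir/q$i"
    i=$((i + 1))
done

# "Second category": enumerating every entry scales with directory size.
entries=$(ls "$dir" | wc -l | tr -d ' ')
echo "entries: $entries"

# "First category": a lookup of one known name; repeat lookups of the
# same name are served from the kernel's name cache.
test -e "$dir/q100" && echo "q100 exists"

rm -r "$dir"
```

On a directory that once held 8000 entries, the enumeration (and any lookup
of a name that does not exist) keeps paying for that peak size until the
directory is pruned and recreated.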
Searches for existing entries should be relatively fast if only a small
number are being looked up at any one time, because of caching. However,
the default tuning of the Linux kernel would suggest 128, 256 and 512 as
the points where one would see a slowdown, not 2000. You are only going to
get anything like 2000 entries active if the system is sorting the entries
by date, which may well be the case. If you think this may be the problem
all the same, you could try enlarging the cache sizes in
/usr/src/linux/fs/dcache.c and inode.c. This may mean having to build the
kernel as a "big" image, and/or using modules to keep the basic kernel
size down. (I assume you mean a queue length of 2000; if you actually mean
a rate of 2000 a day with a smaller queue length, then it may well be
worthwhile increasing the cache sizes.)
These are the values you might want to play with:
dcache.c:30:#define DCACHE_SIZE 128
dcache.c:72:#define DCACHE_HASH_QUEUES 32
inode.c:15:#define NR_IHASH 512
(2.0.30 kernel)
> of keeping a state file.
State files have their problems, e.g. they are not so resilient in a crash.
>
Incidentally, I notice that you quote the speed of the machine but not
its memory size. I don't think you will need extremely large amounts of
memory, but it would be worth looking at the disk light - if it is on for
a large part of the time, you need more memory.
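On Linux you can make the same check without watching the light, by looking
at how much RAM is going to buffer/page cache (the exact field names in
/proc/meminfo vary between kernel versions):

```shell
# Snapshot of memory use: a box that is constantly rereading queue files
# from disk will show little free or cached memory and a busy disk.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```

If the free and cached figures stay small while the queue is full, adding
memory is likely to help more than a faster CPU.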