Precise timing (e.g. for clock / calendar functions) can only be generated with timers; only then is the time base independent of the rest of the program execution. Delay loops are usable only if the program does nothing but generate timing signals: then no interrupts and no conditional execution of other tasks are allowed.
There are 2 timers on the 51 (or 3 timers on the 52). If you need 3 timers
on the 51 you can split T0 into 2 separate 8-bit timers, TL0 and TH0 (timer
mode 3). T1 can then no longer use its interrupt, but it can still be used
as baud rate generator for the UART.
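As an illustration, a possible init sequence for this split in SDCC-style C. This is a sketch for the target chip, not host-runnable; the register names come from the usual 8051 SFR set, and the baud rate values are my assumptions, not from the original:

```
#include <8051.h>   /* SDCC header (assumption) */

void timers_init(void)
{
    TMOD = 0x23;    /* T1 mode 2 (auto-reload, baud rate), T0 mode 3 (split) */
    TH1  = 0xFD;    /* 9600 baud @ 11.0592 MHz crystal (assumption) */
    TR0  = 1;       /* run TL0, the first 8-bit timer */
    TR1  = 1;       /* in mode 3, TR1 controls TH0, the second 8-bit timer */
    ET0  = 1;       /* TL0 overflow uses the T0 interrupt */
    ET1  = 1;       /* TH0 overflow uses the T1 interrupt vector */
    EA   = 1;       /* global interrupt enable */
}
```

Note how TH0 takes over TR1, TF1 and the T1 interrupt vector, which is exactly why T1 itself is left interrupt-less and only good for baud rate generation.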
In most applications the crystal frequency is fixed and cannot be divided
by 256 or 65536 to get an exact 1-second clock. But with some additional
calculation every crystal can be used.
To reduce the maximum count you must reload the timer by software inside
the timer interrupt handler.
But pay attention !
You do not know the interrupt latency (how long after the overflow the
interrupt is actually served), since it varies for many reasons.
So you cannot reload the timer with a constant value. The solution: add the
desired reload value to whatever the timer already contains, so the latency
cancels out.
On a 16-bit timer you must stop it, make the 2 additions (low byte, then
high byte with carry), and start it again. Otherwise an overflow of the low
byte during the addition can be lost.
There is then only one timing restriction: the interrupt must be served
before the next timer overflow occurs.
To inform the main program that 1 second has elapsed I use a bit "F1sec".
The main program can test it to advance the clock. This must be done within
1 second, otherwise you can lose 1 count.
You must then also clear the bit, but only if it was set. If you clear it
unconditionally after the test, the interrupt may have set it again in the
meantime. The JBC instruction avoids such problems, since the test and the
clear are done atomically.