if (c1 == '4')                                  /* slow the clock briefly */
{
    s2[0] = 0x44;                               /* control byte: coarse trim enabled */
    s2[1] = 0x7f;                               /* trim value: subtract mode (0x80 bit clear) */
    if (!i2c_Command(0xde, 0, 0x7, 0, 2, s2))
        uart_puts("FAIL\r\n");
    Delay_ms(200);                              /* run at the trimmed rate for 200 ms */
    s2[0] = 0x43;                               /* control byte: coarse trim disabled */
    s2[1] = 0;                                  /* trim back to zero */
    if (!i2c_Command(0xde, 0, 0x7, 0, 2, s2))
        uart_puts("FAIL\r\n");
}
else if (c1 == '5')                             /* speed the clock up briefly */
{
    s2[0] = 0x44;                               /* control byte: coarse trim enabled */
    s2[1] = 0xc0;                               /* trim value: 0x40 with the add bit (0x80) set */
    if (!i2c_Command(0xde, 0, 0x7, 0, 2, s2))
        uart_puts("FAIL\r\n");
    Delay_ms(200);                              /* run at the trimmed rate for 200 ms */
    s2[0] = 0x43;                               /* control byte: coarse trim disabled */
    s2[1] = 0;                                  /* trim back to zero */
    if (!i2c_Command(0xde, 0, 0x7, 0, 2, s2))
        uart_puts("FAIL\r\n");
}
The thought here is that 200 ms at a half- or double-speed clock will shift the clock by 100 ms. I use this inside a loop that constantly reads the time and outputs it when it changes (when the second turns over). I can then compare that to time.gov and push the clock ahead or behind.
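For reference, the polling loop is just change detection on the seconds value. Here is a minimal sketch of that logic; rtc_read_seconds() is a hypothetical stand-in for the real I2C time read (here fed from a canned array so it can run anywhere):

```c
#include <stdio.h>

/* Hypothetical helper: returns the RTC seconds field.
   On the real hardware this would be an i2c_Command read;
   here it replays canned values so the sketch is self-contained. */
static int fake_seconds[] = {7, 7, 7, 8, 8, 9};
static int fake_idx = 0;
static int rtc_read_seconds(void)
{
    return fake_seconds[fake_idx++];
}

/* Poll the clock, printing only when the second turns over.
   Returns how many turnovers were seen. */
int count_turnovers(int polls)
{
    int last = -1, changes = 0;
    for (int i = 0; i < polls; i++) {
        int s = rtc_read_seconds();
        if (s != last) {            /* second turned over */
            printf("second: %d\n", s);
            changes++;
            last = s;
        }
    }
    return changes;
}
```

On the target the loop would run forever with uart_puts instead of printf; the canned array and turnover counter are only there to make the sketch testable off-target.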
Something I don't fully understand, though, is how the coarse trim operates. The 0x7f above in coarse mode should subtract 254 clocks 128 times per second, which should leave 256 clocks instead of 32768. I thought that would bring the clock to a near standstill, but it changes the 64 Hz signal to 32.12 Hz instead.
The other value I settled on was 0xc0, which is 0x40 with the 0x80 bit set. That should add 0x40 * 2 = 128 clocks 128 times per second; 32768 + 16384 is 49152 clocks per second, a 50% speedup, yet I measure a 100% speedup at 128 Hz.
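To make the expectation explicit, this little function encodes my reading of the datasheet (low 7 bits are the trim magnitude, doubled, applied 128 times per second; the 0x80 bit selects add vs. subtract). It reproduces the arithmetic above, which is exactly what the measurements disagree with:

```c
/* Expected 32.768 kHz clocks per second under coarse trim,
   per my (possibly wrong) reading of the datasheet:
   adjustment = (trim & 0x7f) * 2 clocks, applied 128 times/s;
   bit 0x80 set = add, clear = subtract. */
long coarse_trim_clocks(unsigned char trim)
{
    long adjust = (long)(trim & 0x7f) * 2 * 128;
    return (trim & 0x80) ? 32768 + adjust : 32768 - adjust;
}
```

By this model coarse_trim_clocks(0x7f) is 256 (near standstill) and coarse_trim_clocks(0xc0) is 49152 (a 50% speedup), whereas the measured outputs correspond to roughly 16384 and 65536.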
Maybe I am not following the coarse trim part of the data sheet.