[GNC] Using Gnucash to access file over flaky network

Chris Mitchell gnucash at chris.oldnest.ca
Wed Nov 24 14:38:29 EST 2021


On Wed, 24 Nov 2021 09:30:29 -0800
"Stephen M. Butler" <Stephen.M.Butler51 at gmail.com> wrote:

> On 11/24/21 07:54, Chris Green wrote:
> > On Wed, Nov 24, 2021 at 11:24:21AM -0400, Chris Mitchell wrote:  
> >> Hi all,
> >>
> >> For reasons that are complicated but not especially interesting, I
> >> would like to run Gnucash on one machine, with the data file
> >> located on a remote machine — with the added challenge that the
> >> network access available to the Gnucash "client" machine is a
> >> terrible cellular data connection that sometimes drops without
> >> warning.
> >>
> >> I have daily backups, so I don't need strong guarantees against
> >> data loss, but if it's possible, I'd like to set things up so
> >> there's reasonable resilience against a network dropout corrupting
> >> the remote Gnucash data. I'm not the only user with access to the
> >> data, so (given that multi-user is still a long way off), I need
> >> file locking to work. Having to manually delete an orphaned lock
> >> file after a network dropout is acceptable.
> >>
> >> I assume that any of the database server backends would include
> >> this kind of resilience "out of the box", and I'm not entirely
> >> unwilling to try my hand at setting that up, but I am by no means
> >> a qualified database administrator. If I can get sufficient
> >> resilience by easier means, I'd prefer to stay away from the whole
> >> database server thing.
> >>
> >> What about Sqlite over sshfs?  I realize Sqlite is not designed for
> >> access to a database residing on a different machine, but my
> >> inexpert impression is that its "atomic commit" implementation
> >> should protect against sudden disconnection between the program
> >> and the storage medium just like it protects against sudden power
> >> loss. (IE the transaction that's in the midst of being written
> >> will be lost, but the database should be fine.)
> >>
> >> Can anyone confirm whether it's reasonable to expect that Gnucash
> >> with Sqlite backend over sshfs would have working locking and
> >> decent resilience against data corruption in this scenario? Or
> >> point out any obvious "gotcha" I'm missing?
> >>  
> > I think a better approach might be to simply copy the database to
> > the machine you're working on when you start GnuCash and then copy
> > the database back again afterwards.
> >  
> Go so far as to create a script that checks for your own lock
> file before making the copy.  If the lock file is not present, create
> it and copy the database.  The lock file could be as simple as
> renaming the file with an extension like "xxx.IhaveIT".
> 
> Then the reverse script would remove the lock after copying the
> updated file back to the original name.
> 
> That way anybody else trying to update the file would know it was 
> already "checked out".

Yep, if I opt for copying the data file to local, I would definitely
use a little wrapper script around Gnucash on the offsite
"Gnucash client" machine. In that case, I guess it would make sense to
use Gnucash's own *.LCK naming convention so that a stock, local,
on-site Gnucash instance would recognize the lock.

So:
1. Check for *.LCK and *.LNK files on the remote fileserver and error
   out if found; otherwise
2. create my own LCK file on the fileserver;
3. rsync the remote data file to local;
4. launch Gnucash and wait until it exits;
5. check for local *.LCK and *.LNK files, which would indicate Gnucash
   crashed, and error out if found (perhaps deleting my own remote
   lock file first); otherwise
6. rsync the local data file back to remote;
7. remove my own LCK file on the fileserver.
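As a rough sketch, the checklist above might look something like this
(the host name, directory paths, and file name are all hypothetical
placeholders, and the actual run is left commented out since it needs
a real fileserver and GnuCash install):

```python
#!/usr/bin/env python3
"""Sketch of the lock-and-copy wrapper described above.

Assumptions: a host "fileserver" reachable over ssh, hypothetical
paths, and that on-site GnuCash instances honour the *.LCK naming.
"""
import subprocess
import sys
from pathlib import Path

REMOTE = "fileserver"            # hypothetical host
REMOTE_DIR = "/srv/books"        # hypothetical path
LOCAL_DIR = Path.home() / "books"
DATA = "accounts.gnucash"        # hypothetical file name


def remote_locks_present() -> bool:
    """Step 1: look for *.LCK / *.LNK next to the remote data file."""
    out = subprocess.run(
        ["ssh", REMOTE,
         f"ls {REMOTE_DIR}/*.LCK {REMOTE_DIR}/*.LNK 2>/dev/null"],
        capture_output=True, text=True).stdout
    return bool(out.strip())


def local_locks_present(directory: Path) -> bool:
    """Step 5: a leftover local lock file means GnuCash crashed."""
    return any(directory.glob("*.LCK")) or any(directory.glob("*.LNK"))


def main() -> int:
    if remote_locks_present():
        print("Remote data file appears to be in use; aborting.")
        return 1
    # Step 2: create my own lock on the fileserver.
    subprocess.run(["ssh", REMOTE, f"touch {REMOTE_DIR}/{DATA}.LCK"],
                   check=True)
    # Step 3: pull the data file down.
    subprocess.run(["rsync", "-a", f"{REMOTE}:{REMOTE_DIR}/{DATA}",
                    str(LOCAL_DIR)], check=True)
    # Step 4: run GnuCash and wait for it to exit.
    subprocess.run(["gnucash", str(LOCAL_DIR / DATA)])
    if local_locks_present(LOCAL_DIR):
        print("GnuCash seems to have crashed; not pushing changes back.")
        return 1
    # Step 6: push the updated file back; step 7: drop the lock.
    subprocess.run(["rsync", "-a", str(LOCAL_DIR / DATA),
                    f"{REMOTE}:{REMOTE_DIR}/"], check=True)
    subprocess.run(["ssh", REMOTE, f"rm {REMOTE_DIR}/{DATA}.LCK"],
                   check=True)
    return 0


# sys.exit(main())  # uncomment to run against a real setup
```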

All of which is pretty easy to script. But it does seem like a bit of
a hack, which leads me back to the original question. Is there any
reason to think that SQLite's built-in "atomic commit" mechanism (as
accessed by Gnucash) would *not* handle sudden disconnection of the
storage medium properly? *If* SQLite's ACID implementation already
handles this, why resort to the manual lock-and-copy hack?

Can anyone with knowledge of Gnucash's SQLite storage backend code
weigh in on whether it's generally reasonable to assume that a sudden
disconnection of the storage medium would be handled fairly well?
(E.g. trying to add transactions results in an error message, and when
the storage medium comes back online any half-written transactions are
rolled back.) Or is that expectation way out to lunch?
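For what it's worth, the atomic-commit half of that expectation is
easy to demonstrate locally. The sketch below shows only that an
uncommitted SQLite transaction is rolled back when the connection dies;
it says nothing about whether sshfs honours the file locking and fsync
semantics SQLite's journal depends on, which SQLite's own documentation
flags as the usual corruption risk on network filesystems:

```python
# Minimal local illustration of SQLite's atomic-commit behaviour:
# an uncommitted transaction vanishes if the connection is lost
# before COMMIT, leaving the database intact.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

con = sqlite3.connect(path)
con.execute("CREATE TABLE txns (id INTEGER PRIMARY KEY, amount REAL)")
con.execute("INSERT INTO txns (amount) VALUES (42.0)")
con.commit()                     # this row is durable

con.execute("INSERT INTO txns (amount) VALUES (99.0)")
# Simulate a sudden disconnect: close without committing.
con.close()                      # pending transaction is rolled back

con = sqlite3.connect(path)
rows = con.execute("SELECT amount FROM txns").fetchall()
print(rows)                      # only the committed row survives
con.close()
```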

Cheers!
 -Chris
