[GNC] Using GnuCash to access file over flaky network

Geoff cleanoutmyshed at gmail.com
Thu Nov 25 04:47:42 EST 2021


Hi Chris

In my opinion this is really an SQLite question, not a GnuCash question.

To quote the SQLite dev team:
"Rely upon it at your (and your customers') peril. See How To Corrupt 
Your Database Files.

Choose the technology that is right for you and your customers. If your 
data lives on a different machine from your application, then you should 
consider a client/server database. SQLite is designed for situations 
where the data and application coexist on the same machine. SQLite can 
still be made to work in many remote database situations, but a 
client/server solution will usually work better in that scenario."

https://www.sqlite.org/draft/useovernet.html


Good luck!

Regards

Geoff
=====

On 25/11/2021 6:38 am, Chris Mitchell wrote:
> On Wed, 24 Nov 2021 09:30:29 -0800
> "Stephen M. Butler" <Stephen.M.Butler51 at gmail.com> wrote:
> 
>> On 11/24/21 07:54, Chris Green wrote:
>>> On Wed, Nov 24, 2021 at 11:24:21AM -0400, Chris Mitchell wrote:
>>>> Hi all,
>>>>
> >>>> For reasons that are complicated but not especially interesting, I
> >>>> would like to run GnuCash on one machine, with the data file
> >>>> located on a remote machine, with the added challenge that the
> >>>> network access available to the GnuCash "client" machine is a
> >>>> terrible cellular data connection that sometimes drops without
> >>>> warning.
>>>>
>>>> I have daily backups, so I don't need strong guarantees against
>>>> data loss, but if it's possible, I'd like to set things up so
>>>> there's reasonable resilience against a network dropout corrupting
> >>>> the remote GnuCash data. I'm not the only user with access to the
>>>> data, so (given that multi-user is still a long way off), I need
>>>> file locking to work. Having to manually delete an orphaned lock
>>>> file after a network dropout is acceptable.
>>>>
>>>> I assume that any of the database server backends would include
>>>> this kind of resilience "out of the box", and I'm not entirely
>>>> unwilling to try my hand at setting that up, but I am by no means
>>>> a qualified database administrator. If I can get sufficient
>>>> resilience by easier means, I'd prefer to stay away from the whole
>>>> database server thing.
>>>>
> >>>> What about SQLite over sshfs?  I realize SQLite is not designed for
> >>>> access to a database residing on a different machine, but my
> >>>> inexpert impression is that its "atomic commit" implementation
> >>>> should protect against sudden disconnection between the program
> >>>> and the storage medium just as it protects against sudden power
> >>>> loss (i.e. the transaction that's in the midst of being written
> >>>> will be lost, but the database should be fine).
>>>>
> >>>> Can anyone confirm whether it's reasonable to expect that GnuCash
> >>>> with the SQLite backend over sshfs would have working locking and
> >>>> decent resilience against data corruption in this scenario? Or
> >>>> point out any obvious "gotcha" I'm missing?
>>>>   
>>> I think a better approach might be to simply copy the database to
>>> the machine you're working on when you start GnuCash and then copy
>>> the database back again afterwards.
>>>   
>> Go so far as to create a script that would check for your own lock
>> file before making the copy.  If lock file not present, then create
>> it and copy the database.  Lock file could be as simple as renaming
>> file with an extension of "xxx.IhaveIT".
>>
>> Then the reverse script would remove the lock after copying the
>> updated file back to the original name.
>>
>> That way anybody else trying to update the file would know it was
>> already "checked out".
> 
> Yep, if I opt for copying the data file to local, I would definitely
> use a little wrapper script around GnuCash on the offsite
> "GnuCash client" machine. In that case, I guess it would make sense to
> use GnuCash's own *.LCK naming convention so that a stock, local,
> on-site GnuCash instance would recognize the lock.
> 
> So:
> 1. Check for *.LCK and *.LNK files on the remote fileserver and error
>     out if found; otherwise continue on to
> 2. create my own LCK file on the fileserver;
> 3. rsync the remote data file to local;
> 4. launch GnuCash and wait until it exits;
> 5. check for local *.LCK and *.LNK files, which would indicate GnuCash
>     crashed, (delete my own remote lock file?) and error out if found;
>     otherwise continue on to
> 6. rsync the local data file to remote;
> 7. remove my own LCK file on the fileserver.
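
[Editor's note: the seven steps above could be sketched as a small
wrapper script. Everything below is illustrative, not a tested setup:
the REMOTE/LOCAL paths and BOOK filename are made up, a plain local
directory stands in for the ssh/rsync fileserver so the sketch stays
runnable, and `cp` stands in for `rsync -a` over ssh.]

```shell
#!/bin/sh
# Sketch of the lock-and-copy wrapper described in the steps above.
# REMOTE, LOCAL, BOOK, and GNUCASH are assumptions for illustration.
set -eu

REMOTE="${REMOTE:-$(mktemp -d)}"   # stands in for the fileserver path
LOCAL="${LOCAL:-$(mktemp -d)}"     # local working directory
BOOK="${BOOK:-books.gnucash}"      # hypothetical data file name
GNUCASH="${GNUCASH:-gnucash}"      # command to run in step 4
# Fall back to a no-op so the sketch runs even without GnuCash installed.
command -v "$GNUCASH" >/dev/null 2>&1 || GNUCASH=true

# Demo seed: make sure a data file exists on the "fileserver".
[ -f "$REMOTE/$BOOK" ] || echo "demo data" > "$REMOTE/$BOOK"

# 1. Error out if anyone else already holds a lock.
for lock in "$REMOTE/$BOOK.LCK" "$REMOTE/$BOOK.LNK"; do
    if [ -e "$lock" ]; then
        echo "Locked: $lock exists; aborting." >&2
        exit 1
    fi
done

# 2. Claim the lock, reusing GnuCash's own *.LCK naming convention.
touch "$REMOTE/$BOOK.LCK"

# 3. Copy the data file down (rsync -a over ssh with a real remote;
#    cp keeps this sketch self-contained).
cp "$REMOTE/$BOOK" "$LOCAL/$BOOK"

# 4. Launch GnuCash on the local copy and wait until it exits.
"$GNUCASH" "$LOCAL/$BOOK"

# 5. A leftover local lock file suggests GnuCash crashed: leave the
#    remote copy untouched and error out.
if [ -e "$LOCAL/$BOOK.LCK" ] || [ -e "$LOCAL/$BOOK.LNK" ]; then
    echo "GnuCash left a local lock file; not uploading." >&2
    exit 1
fi

# 6. Copy the (possibly updated) data file back to the fileserver.
cp "$LOCAL/$BOOK" "$REMOTE/$BOOK"

# 7. Release the remote lock.
rm -f "$REMOTE/$BOOK.LCK"
```

[A real version would also want to verify that the upload in step 6
actually succeeded before releasing the lock in step 7, since the whole
premise is a connection that can drop mid-transfer.]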
> 
> All of which is pretty easy to script. But it does seem like a bit of a
> hack, which leads me back to the original question. Is there any
> reason to think that SQLite's built-in "atomic commit" mechanism (as
> accessed by GnuCash) would *not* handle sudden disconnection of the
> storage media properly? *If* SQLite's ACID implementation is built to
> handle this already, why resort to this manual lock-and-copy hack?
> 
> Can anyone with knowledge of GnuCash's SQLite storage backend code weigh
> in on whether it's generally reasonable to assume that a sudden
> disconnection of the storage media would be handled fairly well?
> (e.g. trying to add transactions will result in an error message, and
> when the storage media comes back online any half-written transactions
> will be rolled back.) Or is that expectation way out to lunch?
> 
> Cheers!
>   -Chris
> _______________________________________________
> gnucash-user mailing list
> gnucash-user at gnucash.org
> 

