Where does Gnucash save its backup files when using the MySQL data source

L. D. James ljames at apollo3.com
Sun Jul 5 17:48:59 EDT 2015


On 07/05/2015 05:17 PM, AC wrote:
> On 2015-07-05 13:42, L. D. James wrote:
>
>> When I was using Peachtree/Pervasive, the option was given to install on
>> a server or client.  I chose the server option.  Then I used clients to
>> login to and use the data on the server.  I stopped using Peachtree
>> because I didn't want to retain the Windows server as a main part of my
>> network.  If I could have easily afforded the $1500.00 to install it on
>> my Linux server, I would have.  I couldn't justify the expense.  So I
>> ran Windows on a virtual machine for Peachtree (for several years) and
>> in recent months stopped using it altogether and switched to Gnucash.
>>
>> I'm saying that to say, I was running it on a Windows server.  That was
>> one of the install options that I chose.
>
> You're not really understanding how a database like MySQL works in the
> background and that's where we're having difficulty communicating.  It
> has nothing to do with how large the database is on disk, it has to do
> with how the storage is utilized, protected, and synchronized (I also,
> in a previous email, did not specify that my large 400GB backup was a
> database, it was just a plain system backup of normal work files...not
> the same thing, it was merely a description of how image versus
> differential backups work).
>
> The Pervasive database service, even as a server, uses flat files for
> its storage system.  This is not how MySQL stores data.  Therefore, the
> file locking mechanisms in place in an operating system work fine for
> Pervasive databases but do not work at all for a MySQL database.  It
> works for Peachtree/Pervasive because they used the way Pervasive
> behaves on a low level to achieve the goals of performing backups,
> saves, etc.  This isn't the same with Gnucash using something like MySQL
> because MySQL does not behave the same way, and its behavior can not be
> controlled by Gnucash, unlike the tight integration between Pervasive
> and Peachtree.
>
>
>> The backup feature already exists with MySQL the way it is.  I use it
>> all the time with Gnucash, at this time.  It works flawlessly.  It takes
>> a fraction of a second to back up the data.  I have the option of
>> backing it up to XML or SQLite.  In my environment I can't back it
>> up (save as) to MySQL because I have chosen not to give the userID that
>> access.
> Yes, I do understand it exists as is.  However, what I am saying is the
> feature you are enjoying right now ("Save As" on a MySQL backed
> database) works in this application because it is only one user at a
> time.  This will not work properly, and will be potentially dangerous,
> in a multiuser environment.  It doesn't matter that you haven't given
> permissions to the user, the process is still the same, the originating
> database must be locked prior to this activity.  For a single user, that
> isn't a problem, for a multiuser environment it is a big problem.
>
>> Again, the data is stored on a server, on a different machine.  Any of
>> the machines in my shop, as well as any of the offsite locations can
>> access the accounting system and manually click "Save as" and have the
>> complete data on their local machine.
> The location doesn't matter, it's the engine that matters.  MySQL could
> be installed locally on the same machine as Gnucash, on a network server
> in the closet, or on a cloud server halfway across the world.  The
> behavior is the same.
>
>> I don't understand why you're so adamant on stopping the "save as" just
>> because the data might grow large.  I can't imagine it growing so large
>> that it would ever take more than a few seconds to "save as" two years
>> of accounting data.  You can save a couple of thousand pages of
>> Microsoft Word documents in a couple of seconds.
> Again, this has nothing to do with data size, it's data access,
> concurrency, and integrity.  Please ignore any mentions of file size,
> they're irrelevant.  For the purposes of my statements, the "save as"
> feature is inherently dangerous to a multiuser environment which Gnucash
> currently does not support.  Please also note that "multiuser" is
> specifically referring to multiple concurrent users.  It is true that,
> under the current implementation, multiple unique installations of
> Gnucash on different computers could read data from a central MySQL
> server.  However, those copies can not be running simultaneously or
> there will be data corruption.  It is still a single user program which
> happens to support remote data storage using a database engine.
>
>> So again, it's working now.  What I have asked for is already
>> implemented.  It is just lacking a feature to specify the directory
>> and to automatically append incrementing numbers to the
>> user-configured filename.
>>
>> At present the multiuser feature isn't available with MySQL.  But
>> again, the saving (backing up) of the actual data is there and works
>> flawlessly.
>>
>> You're saying it won't work when multiuser is implemented because the
>> data might change.  However, there are APIs for SQL to lock the files
>> so that the data doesn't change.  This is done all the time with file
>> locking when working with any files.  I can't see why it would be so
>> complicated for the database to be unavailable for two seconds while the
>> "save as" functionality that already works, is happening.
>
> The API to "lock a file" doesn't exist (because there are no files to
> lock, file locking is an operating system feature).  There are methods
> to lock records, tables, and whole databases but they are different for
> each database engine.  I mentioned these previously (transactions,
> mutexes, semaphores, etc.).  If you have ever worked with locking on
> databases, you would realize that locks are not instantaneous.
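The lock behavior being described can be demonstrated in a few lines
(using SQLite, which ships with Python, purely for illustration; MySQL's
locking differs in detail but not in principle).  A second connection's
attempt to take a write lock does not succeed instantly; it waits and
may fail outright:

```python
import sqlite3
import tempfile

# Two connections to the same database file, standing in for two
# Gnucash instances.  isolation_level=None lets us issue BEGIN ourselves.
path = tempfile.mktemp(suffix=".db")
conn_a = sqlite3.connect(path, isolation_level=None, timeout=0.2)
conn_b = sqlite3.connect(path, isolation_level=None, timeout=0.2)
conn_a.execute("CREATE TABLE accounts (name TEXT)")

conn_a.execute("BEGIN IMMEDIATE")          # conn_a takes the write lock

try:
    conn_b.execute("BEGIN IMMEDIATE")      # conn_b tries for the same lock
    lock_error = None
except sqlite3.OperationalError as exc:    # times out: "database is locked"
    lock_error = str(exc)

conn_a.rollback()                          # release the lock
print(lock_error)
```

conn_b spends the whole 0.2 second timeout retrying before giving up,
which is the "locks are not instantaneous" point above.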
>
> Here is an example: If a copy of Gnucash (let's call this GC_A) is
> issuing the Save As, this will require a total database lock to ensure
> the data is static.  A database lock will first ask any other threads if
> the lock can be placed.  If another instance of Gnucash (we'll call this
> one GC_B) has a record open, the database lock will fail.  So then
> GC_A's choices are to wait until the record lock by GC_B is released,
> abort and close without performing the save, or abort and inform the
> user that it has failed and then wait for the user to decide what to do
> next.
>
> The first two choices will result in software that looks like it had an
> unexpected failure (the wait looks like a hang and the abort and close
> looks like a crash).  A user may be very confused about what's happening
> and this results in a lot of support requests.  So we'll take option
> three and pop up a dialog box that says the lock failed, would you like
> to try again or give up.  If the user chooses to try again, there is
> still no guarantee that the lock will succeed.  The process repeats.
> Eventually you reach a point where GC_A finally gets the lock or the
> user gives up.
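The retry loop just described might be sketched like this (a minimal
sketch only: the lock, the timeout, and the user prompt are all
stand-ins, not actual Gnucash code):

```python
import threading

# A threading.Lock stands in for the whole-database lock; the user
# prompt is simulated with a print.
db_lock = threading.Lock()

def save_as_with_retry(max_prompts=3):
    """Option three from above: on lock failure, tell the user and let
    them decide whether to try again."""
    for attempt in range(max_prompts):
        if db_lock.acquire(timeout=0.1):    # GC_A asks for the database lock
            try:
                return "saved"              # data is static; do the export
            finally:
                db_lock.release()
        # Lock held elsewhere (GC_B has a record open): inform the user.
        print(f"attempt {attempt + 1}: database is busy; try again?")
    return "gave up"                        # user ran out of patience

print(save_as_with_retry())
```

Note that nothing in the loop guarantees progress: every iteration can
lose the race again, exactly as described above.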
>
> Now, suppose GC_A finally succeeded.  The problem now is that GC_B wants
> to make an update (a scheduled transaction happens, a sudden need for a
> report, etc.).  GC_B can't complete any of its work because GC_A has the
> database locked in its entirety.  So once again, like in the case of
> GC_A, GC_B must wait, abort, or inform the user and then, if the user
> tries again, there's still no guarantee GC_A is done and the database
> lock released.
>
> This situation is exactly why banks and other institutions use known
> solutions such as replication servers, snapshots, views, logging, and
> scheduled downtime to perform backups, exports, and the like.  It is a
> known and mostly solved problem but it does require experience and
> software outside the routine client (Gnucash in this case).  In the end you can
> have your export if you really want it but you should not expect it to
> come from inside Gnucash if it is to be safe for the data.  It has to
> come from a utility that runs on the outside.  I never said you can't
> export your data in general, I only said that it would be a very bad idea
> to do it within Gnucash for a multiuser (concurrent) environment.
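The kind of outside utility being described could be as simple as a
scheduled mysqldump run (a sketch only; the database name, user,
password, and paths are placeholders, and --single-transaction gives a
consistent snapshot only for InnoDB tables):

```shell
# Hypothetical crontab entry: dump the Gnucash database nightly at 2:00.
# --single-transaction takes a consistent InnoDB snapshot without holding
# a long-lived lock, so running Gnucash clients are not blocked.
0 2 * * * mysqldump --single-transaction --user=backup \
    --password=SECRET gnucash > /var/backups/gnucash-$(date +\%F).sql
```

The percent sign is escaped because cron treats a bare % as a newline.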
I understand everything that you just described.  This is what I was 
going to suggest to the list before I discovered that MySQL had been 
implemented.  When a new user opens the XML file, there is currently an 
option to override the lock and open it anyway.  Many people may do 
this.  I was going to propose a separate option (I was going to provide 
the wording) so that user "B", on opening the XML file, could request 
access.  The request would go to user "A", who could grant or ignore 
it.  User "B" could then either use read-only mode or fall back on the 
ignore-and-override feature.  There are other details, but that is the 
gist.  I was going to offer this description and help the group 
implement it for XML.  There are very specific reasons for offering 
it, which I won't go into in vivid detail just now.

You appear to have concern for the integrity of the data and 
transactions.  So do I.  Currently there is a feature to ignore the 
lock and open a locked file.  That means two people have the single 
file open at the same time.  The users are confused (even me) about 
whose data will be preserved.  I have a fix for that.

When I saw the MySQL feature, I thought I'd bring up this functionality 
here instead.

I didn't realize it was you who brought up the 400GB as a reason not to 
work with my concern.  I don't always read or remember the name behind 
the input and suggestions, but I always remember what is being said.

It appears that you continue to bring up unrelated exaggerations to 
suggest that we shouldn't bother with this concern.  I agree that we 
should ignore that distraction.

I made a reference to file locking, as something that people do with 
single files.  It's done with the XML file.

Now you have described what has to happen to "save as" the data: stop 
everyone from working for two seconds.  Or, as you mentioned, inform 
the user that it can't lock the database and allow the user to wait or 
exit.  If you can't figure out a way to make that clear without 
confusing the users, I'll be glad to help.  I'll even help with the 
documentation and wikis.

It takes a fraction of a second to update a record, or to request an 
update and wait.  If a user doesn't want to wait a fraction of a 
second, they can exit without saving, trusting that the other person 
with the given access will use the default save when they exit (or 
when the last person exits).

I appreciate that you took the time to describe this algorithm.  I was 
trying to focus on the things you were saying and respond kindly so 
that I would have an ear when bringing up what you have now described, 
which I see as very solid functionality.

Giving the user an opportunity to wait and save, or exit, would be a lot 
better than no "save as" at all.

I really don't think it's as complicated for any user as you suggest.  
You might have thought it would be complicated at first, but it appears 
that you are starting to see what I'm suggesting and the feasibility of 
it.

If you would hold off on raising unrelated objections, like the 400GB 
of data or how complicated it is to install the MySQL software, the 
path would be clearer.

Installing the MySQL software is just as simple as installing the other 
libraries and tools, such as the GTK interface (or whatever GUI toolkit 
is being used).

Again, I'll assist in writing the script to check for, install, and 
test the MySQL support needed.  I already do this for lots of people.  
The user doesn't have to understand any more about using MySQL than 
they do about using the GTK libraries.

-- L. James

-- 
L. D. James
ljames at apollo3.com
www.apollo3.com/~ljames


More information about the gnucash-user mailing list