SVN->GIT link broken?

Derek Atkins derek at ihtfp.com
Tue Mar 26 14:49:20 EDT 2013


On Tue, March 26, 2013 2:24 pm, John Ralls wrote:
[snip]
>
> I think these two are the result of trying to run two instances.

Yeah, looking at the email timestamps this is likely; those two messages
were dated Fri, 22 Mar 2013 17:40:03 -0400 and 17:41:01.  So yes, I can
believe there were two copies running simultaneously.

>> Running git svn fetch
>> remote: ssh: connect to host github.com port 22: Connection timed out
>>  remote:
>> remote: fatal: The remote end hung up unexpectedly
>> remote: ssh: connect to host github.com port 22: Connection timed out
>>  remote:
>> remote: fatal: The remote end hung up unexpectedly
>> remote: Update script found and executable
>
> And this is probably why: The timeouts caused the script run time to
> exceed 1 minute.

Unfortunately this isn't the case.  The timestamp on this message is from
March 23rd, so it is unrelated to the previous issues.  But it certainly
did cause a failure to push data to GitHub that did not get corrected
until the next commit happened in SVN.
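
Something like the following is roughly what I have in mind for that
case: retry the push a few times instead of leaving the mirror stale
until the next SVN commit.  The remote name, branch, and retry counts
below are just placeholders for whatever the mirror actually uses:

use strict;
use warnings;

# Retry the GitHub push a few times rather than waiting for the next SVN
# commit to trigger it again.  Assumes the git svn fetch already succeeded.
my $max_tries = 3;
for my $try (1 .. $max_tries) {
    # system() returns 0 when the command exits successfully
    last if system('git', 'push', 'github', 'master') == 0;
    warn "push attempt $try failed, retrying in 60s\n";
    sleep 60 if $try < $max_tries;
}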

[snip]
>
> Here's the "knocker" script I use:
> #!/usr/bin/perl -w
> use strict;
>
> use Fcntl qw(:DEFAULT :flock);
> use Sys::Syslog;
> use Tie::Syslog;
>
> $ENV{PATH} = "/usr/local/bin:/usr/bin:/bin";
> my $line = <>;
> chomp $line;
>
> my $wlock = "/home/john/wait.lock";
> my $plock = "/home/john/proc.lock";
>
> my ($waitfd, $lockfd);
> close STDIN;
> close STDOUT;
> close STDERR;
> my $logger = tie *STDERR, 'Tie::Syslog', 'local6.info', 'Gnucash_knocker', 'pid', 'unix';
> $logger->ExtendedSTDERR();
>
> sysopen($waitfd, $wlock, O_RDWR | O_CREAT)
> 	or die "Unable to open lockfile $wlock:$!";
> my $wait = flock($waitfd, LOCK_EX | LOCK_NB);
> if (not $wait) {  #Didn't get the wait lock, someone's already waiting
> 	close $waitfd;
> 	exit 0;
> }
>
> sleep (2); #Wait a little while for other requests
>
> sysopen($lockfd, $plock, O_RDWR | O_CREAT)
> 	or die "Unable to open lockfile $plock:$!";
> flock($lockfd, LOCK_EX) or die "Failed to acquire the lock: $!";
> close $waitfd; #release the wait lock; next request will be queued to wait
> print STDERR "Received keyword $line";
> my $typere = qr/(gnucash|gnucash-docs|gnucash-htdocs)/;
>
> if ($line && $line =~ /$typere/) {
> 	$line = "gnucash-trunk" if $line eq "gnucash";
> 	print STDERR "Processing Directory $line";
> 	open STATUS, "/home/john/git-svn-mirror update /home/john/$line 2>&1 |";
> 	while (my $out = <STATUS>) {
> 		chomp $out;
> 		print STDERR $out;
> 	}
> }
> else {
> 	foreach my $dir (qw(gnucash-trunk gnucash-docs gnucash-htdocs)) {
> 		print STDERR "Processing Directory $dir";
> 		open STATUS, "/home/john/git-svn-mirror update /home/john/$dir 2>&1 |";
> 		while (my $out = <STATUS>) {
> 			chomp $out;
> 			print STDERR $out;
> 		}
> 	}
> }
> close *STATUS;
> close $lockfd;
> undef $logger;
> untie *STDERR;
>
> It's perhaps a bit more involved than what you need because it has to
> decide which repo to update. You also might prefer to get emails than to
> use syslog.
>
> Regards,
> John Ralls

I'm trying to understand why you use two locks; I think the first lock
coalesces incoming requests (so at most one request is ever waiting) and
the second lock makes sure only one copy runs at a time.  So yes, I can
add a "proc lock" in there so that only one instance runs at a time.
Then the only thing I need to do is handle the case where the push to
GitHub fails.
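
Roughly what I'm picturing for the proc lock, adapted from your script
but without the wait lock (the lock path is just a placeholder):

use strict;
use warnings;
use Fcntl qw(:DEFAULT :flock);

# Block until any other running instance finishes, so only one copy of
# the mirror update ever runs at a time.
my $plock = "/home/me/proc.lock";
sysopen(my $lockfd, $plock, O_RDWR | O_CREAT)
    or die "Unable to open lockfile $plock: $!";
flock($lockfd, LOCK_EX) or die "Failed to acquire the lock: $!";

# ... run the svn->git update and the GitHub push here ...

close $lockfd;  # releases the lock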

-derek

-- 
       Derek Atkins                 617-623-3745
       derek at ihtfp.com             www.ihtfp.com
       Computer and Internet Security Consultant


