gnucash-htdocs master: Replace obsolete <tt> mostly by <kbd>

Frank H. Ellenberger fell at code.gnucash.org
Sat Oct 10 21:08:25 EDT 2020


Updated	 via  https://github.com/Gnucash/gnucash-htdocs/commit/2376fae8 (commit)
	from  https://github.com/Gnucash/gnucash-htdocs/commit/8126925e (commit)



commit 2376fae85e196d6496b89c19606b74cb61f6edfd
Author: Frank H. Ellenberger <frank.h.ellenberger at gmail.com>
Date:   Sun Oct 11 03:08:09 2020 +0200

    Replace obsolete <tt> mostly by <kbd>
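
In HTML5, <tt> is obsolete as a purely presentational element: <kbd> now
marks text the user types (commands, paths), <samp> marks sample program
output, and <b> covers visual emphasis that carries no input/output
meaning.  A rough sketch of the mapping applied below, reusing fragments
from the changed files:

    <!-- program output -> <samp> -->
    <samp>status error: status=0x58 { DriveReady SeekComplete DataRequest }</samp>

    <!-- commands and paths the reader types -> <kbd> -->
    Edit <kbd>/etc/apt/sources.list</kbd>, then run <kbd>apt-get update</kbd>.

    <!-- purely visual emphasis -> <b> -->
    the German <b>gnucash-de</b> mailing list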

diff --git a/news/030429-outage.news b/news/030429-outage.news
index 175e1cb..f633027 100644
--- a/news/030429-outage.news
+++ b/news/030429-outage.news
@@ -5,7 +5,7 @@ The GnuCash.org Server is back online, and should now be fully operational.
 However, there has been some data loss: if you subscribed (or unsubscribed)
 to any mailing list, between 12 December 2002 and 28 April 2003, your 
 membership info has been lost.  Furthermore, <b>all</b> configuration
-info for the German <tt>gnucash-de</tt> mailing list has been lost.
+info for the German <b>gnucash-de</b> mailing list has been lost.
 (My sincerest apologies, Christian).  However, most of the mailing list 
 archives should be intact (possibly excepting Jan-April 2003, which might
 be damaged).  All web pages should work at least as well as before,
@@ -18,18 +18,18 @@ It was the classic server-failure triple-whammy.
 This server has RAID disk mirrors to minimize down-time due to a 
 failed disk, and is backed up nightly in order to safeguard against 
 catastrophic data loss.  Hard-drive status was monitored with 
-<tt>smartmontools</tt> and reported regularly with <tt>logcheck</tt>.
+<kbd>smartmontools</kbd> and reported regularly with <kbd>logcheck</kbd>.
 So how could this belt-and-suspenders system be down so long, and
 result in lost data before it's all over?
 <br></br><br></br>
-Over the last few months, <tt>smartmontools</tt> was reporting occasional
+Over the last few months, <kbd>smartmontools</kbd> was reporting occasional
 disk status changes,  but none of these seemed to be in the form of warnings,
 or had any hint of being dire.  At the same time, there were increasing
-numbers of <tt>status error: status=0x58 { DriveReady SeekComplete DataRequest }</tt> 
+numbers of <samp>status error: status=0x58 { DriveReady SeekComplete DataRequest }</samp> 
 messages showing up in the system log.  In mid-April, these messages started
 showing up at least hourly, and were coupled with the cryptic S.M.A.R.T. messages
-(it didn't help that I was running the older, more cryptic <tt>smartsuite</tt>,
-not the new, improved <tt>smartmontools</tt>).  Finally, the server locked
+(it didn't help that I was running the older, more cryptic <kbd>smartsuite</kbd>,
+not the new, improved <kbd>smartmontools</kbd>).  Finally, the server locked
 up, waiting for a DMA to complete, that never would.  Reboot. Locks up.
 Reboot again, locks up (warlord calls by phone to point this out).  
 I disabled DMA, went to PIO-mode for the disk in question, and things 
@@ -41,16 +41,16 @@ My logic was this:  there are two disks in the raid array; both are exact
 duplicates of each other.  Therefore, if I replace the failed disk, the 
 contents of the good disk will be restored onto the blank disk automatically.
 Easy as pie.  I've done it many times before.   It didn't work this time.
-Upon reboot, I got a gazillion <tt>fsck</tt>'ing errors, the file system was corrupted. 
-In addition, I was getting a <i>lot</i> of <tt>status error: status=0x58 
-{ DriveReady SeekComplete DataRequest }</tt> from what used to be the 'good' disk.
+Upon reboot, I got a gazillion <kbd>fsck</kbd>'ing errors, the file system was corrupted. 
+In addition, I was getting a <i>lot</i> of <samp>status error: status=0x58 
+{ DriveReady SeekComplete DataRequest }</samp> from what used to be the 'good' disk.
 I plowed on.  At this time, I assumed that maybe both disks were bad, 
 a reasonable assumption; these were the infamous IBM-lawsuit drives.
 I guessed that the raid array was hiding the badness from me: 
 whenever one disk had trouble, the RAID would go to 
 the other disk, and all was well in the kingdom, even though anarchy seethed
 just below the surface.  Oh well.  I procured a second hard drive, and 
-replaced that.  With more <tt>fsck</tt>'ing error in the process.  Then I notice
+replaced that.  With more <kbd>fsck</kbd>'ing error in the process.  Then I notice
 that I'm still getting SeekComplete's in the syslog, even with the new 
 disks. Now, the replacement disks are the same lawsuit-brand and model number 
 as the old disks, so woe is me, this is my third mistake, I assume, incorrectly, 
@@ -67,20 +67,20 @@ one plugs in or removes controllers, enables or disables controller
 ports, etc.  This can be overcome, but it provides a steady stream
 of hurdles to jump: one must boot a rescue diskette first, then
 mount, then re-write the boot sector, then reboot, then edit 
-<tt>/etc/fstab</tt>, and then try again. Over and over and over.
+<kbd>/etc/fstab</kbd>, and then try again. Over and over and over.
 It didn't help that my rescue diskette didn't have RAID on it:
 so that was one more thing to hack around.   Finally build
 a stable system, and now it comes time to restore the data
-files that were <tt>fsck</tt>'ed out of existence.  To restore 
-<tt>/usr</tt>, I decide that reinstall of the OS is appropriate.
+files that were <kbd>fsck</kbd>'ed out of existence.  To restore 
+<kbd>/usr</kbd>, I decide that reinstall of the OS is appropriate.
 I then restore the FTP site, which was badly corrupted.  Restore
 the mailing lists; no problems, only October 1998 was lost and 
 restored.  Restore the website; only minor damage there. 
 Then restore the mailing list subscriber info in 
-<tt>/var/lib/mailman/lists</tt> ... Uhh ... whoops.  That directory
+<kbd>/var/lib/mailman/lists</kbd> ... Uhh ... whoops.  That directory
 was <i>not</i> backed up nightly.   I had falsely assumed that 
-everything in <tt>/var/lib/mailman/lists</tt> was stuff that could
-be recovered by re-installing <tt>mailman</tt>.  I had no idea that it
+everything in <kbd>/var/lib/mailman/lists</kbd> was stuff that could
+be recovered by re-installing <kbd>mailman</kbd>.  I had no idea that it
 kept subscriber info there.  Mistake number four (number zero?):
 this critical directory was not one that was backed up nightly.
 I was lucky to find a December 2002 backup of it;  it could 
diff --git a/news/031106-debian.news b/news/031106-debian.news
index 72faec9..b8edebb 100644
--- a/news/031106-debian.news
+++ b/news/031106-debian.news
@@ -8,12 +8,12 @@ I have just updated the gnucash package on people.debian.org.
 All dependencies should work now and after upgrading it should work
 'out of the box'.
 <br></br><br></br>
-Edit <tt>/etc/apt/sources.list</tt> and add <br></br>
-<tt>
+Edit <kbd>/etc/apt/sources.list</kbd> and add <br></br>
+<kbd>
 deb http://people.debian.org/~treacy/gnucash.woody ./
-</tt>
+</kbd>
 <br></br>
 then<br></br>
-<tt>apt-get update ; apt-get install gnucash</tt>
+<kbd>apt-get update ; apt-get install gnucash</kbd>
 </i>
 </p>



Summary of changes:
 news/030429-outage.news | 32 ++++++++++++++++----------------
 news/031106-debian.news |  8 ++++----
 2 files changed, 20 insertions(+), 20 deletions(-)


