Linux Gazette... making Linux just a little more fun!
Copyright © 1996-98 Specialized Systems Consultants, Inc.
_________________________________________________________________
Welcome to Linux Gazette! (tm)
_________________________________________________________________
Published by: Linux Journal
_________________________________________________________________
Sponsored by: InfoMagic, S.u.S.E., Red Hat
Our sponsors make financial contributions toward the costs of publishing Linux Gazette. If you would like to become a sponsor of LG, e-mail us at sponsor@ssc.com. Linux Gazette is a non-commercial, freely available publication and will remain that way. Show your support by using the products of our sponsors and publisher. To subscribe to Linux Journal, click here.
_________________________________________________________________
Table of Contents
January 1998 Issue #24
_________________________________________________________________
* The Front Page
* The MailBag
  + Help Wanted
  + General Mail
* More 2 Cent Tips
  + Followup to PostScript and VC Key Sequences (LG#23)
  + PostScript $0.02 follow-up
  + Yet another cheap tip
  + 2 cent tip - dosemu
  + Re: 2c Tip "Finding What You Want with find"
  + Re: Finding What You Want with find
  + Finding What You Want with find Part III
  + More on finding
  + Another way to find
  + Yet another way to find
  + A final(?) way to find
  + Re: I need some help
  + Spinning Down Unused HDs
  + LG Tips and Tricks (Netscape)
  + Easter Eggs in Netscape
  + Calculator Tip
  + Security script
  + Controlling cron.hourly
  + Syslog and ping
* News Bytes
  + News in General
  + Software Announcements
* The Answer Guy, by James T. Dennis
  + Netscape Mail Crashing
  + Slackware Help
  + Netscape /var/spool/USER
  + Getting Rid of Virtual Screens
  + diald's niche
  + Upgrade to Red Hat 5.0?
  + Red Hat Linux and WABI and other things
  + Linux as a PDT
* Quick autofs Tutorial, by Mark Nielsen
* Buying A Laptop, by Joel Jaeggli
* Copying Files Using Mirror, by Gerd Bavendiek
* Linux Benchmarking: Part 3 -- Interpreting Benchmark Results, by André D. Balsa
* LXNY at UNIX EXPO '97, by Michael E. Smith
* More Adventures with SAMBA, by Dave Nelson
* My Linux Revolution, by Ylian Saint-Hilaire & Erik Campo
* New Release Reviews, by Larry Ayers
  + KDE and Gnome
  + Updates and Correspondence
* Product Review: Applixware, by Gary Moore
* A Bit About Security, by Marcus Berglund
* The Standard C Library for Linux, Part one, by James M. Rogers
* xscreensaver, by Jamie Zawinski
* The Back Page
  + About This Month's Authors
  + Not Linux

The Graphics Muse will return next month. The Weekend Mechanic will return next month.
_________________________________________________________________
The Whole Damn Thing 1 (text) and The Whole Damn Thing 2 (HTML) are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
_________________________________________________________________
Got any great ideas for improvements? Send your comments, criticisms, suggestions and ideas.
_________________________________________________________________
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
The Mailbag!
Write the Gazette at gazette@ssc.com
Contents:
* Help Wanted -- Article Ideas
* General Mail
_________________________________________________________________
Help Wanted -- Article Ideas
_________________________________________________________________
Date: Sun, 30 Nov 1997 14:08:24 +0000
From: David Stern kotsya@mailhost2.cac.washington.edu
Subject: Help Wanted -- Article Ideas

I'm at that point in my Linux development where I'm comfortable with the basics, and am venturing out to learn and discover the myriad of alternatives which exist. While I appreciate the many alternatives, it can be difficult for someone with little experience to decide which MUA, MTA, proxy... is most suitable. When each individual must personally begin anew to evaluate the field, unnecessary repetition of effort results, and often a selection is made based on incomplete information. When the user later discovers a more suitable alternative, and possibly later another, a glut of inefficiency results.

On that note, I'd like to suggest "head-to-head" comparison articles on similar programs. A chart whose columns and rows represent the programs and their features would be invaluable for Linux users ranging from completely new to advanced, so I would consider that a necessity. Optionally, different recognitions could be given for exceptional achievement. Notes on individual programs or categories, and a brief summary, would probably be required. If thorough analytical evaluations were performed, this might exceed the resources and other limitations of Linux Gazette, but I'm not asking for that much depth. I'm just looking for a cursory examination of programs with a comparison of features in a "side-by-side" format. While I appreciate the reviews of individual programs, and enjoy the deeper attention which can be given, there is an ever-increasing number of alternatives available to the Linux user; summary comparisons of programs are now a very real need, and their importance will only increase with time. Please consider adding a side-by-side summary comparison of programs as a feature article. I think this would not only make Linux Gazette better than it already is, but also expand the readership.

Thanks and sincerely, David Stern
_________________________________________________________________
Date: Fri, 28 Nov 1997 08:35:04
From: Erwin Penders ependers@cobweb.nl
Subject: passwd shadow convert problem

I have been running Red Hat 4.2 with normal passwords for a couple of months. Now I have read the Shadow Password HOWTO and wanted to get this working on my system too. After reading the manual, I went to a 'blank' Red Hat system with a couple of users and ran /usr/sbin/pwconv5, and shadow passwords were up and running fine. BUT on another system (the same as the first, but with a lot more users) pwconv5 runs and won't stop. It makes an empty shadow file, and I have to kill pwconv5 because it isn't doing anything. I then copied the passwd file from the second system to the first and tried it on the first system... and got the same problem... no shadow. Can anybody tell me what I am doing wrong? Thanks everybody.

Erwin Penders
_________________________________________________________________
Date: Fri, 21 Nov 1997 01:39:51
From: Manish Oberoi oberoi@coeibm.rutgers.edu
Subject: printing problems

If anyone can help me, I'd love to hear it. I try running lpr, but every time I get "no name for local machine". How do I set this, and/or what is the problem?
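(lpr usually complains like this when it cannot resolve the name of the local host. Check that the name printed by the hostname command has an entry in /etc/hosts -- for example a line such as
127.0.0.1 localhost mybox
where "mybox" stands in for your own hostname; this is a sketch of the usual fix, not a diagnosis of your particular setup. --Editor)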
Manish Oberoi
_________________________________________________________________
Date: Wed, 10 Dec 1997 20:33:38 -0800
From: Nolan Zak nzak@uniserve.com
Subject: Help Wanted!

I'm running RH 4.2 with kernel 2.0.30 on an Intel P90. I've only been at this for a couple of weeks, so go easy on me if this is a stupid question. :) I'm trying to get ppp working for dialing into my ISP, but no matter what I do it disconnects. Here's a small description of what is going on: I set up /dev/modem --> /dev/cua1 and enabled full permissions on both. Set the jumpers on my modem for com2, irq3. Ran setserial to set up the proper device settings. Re-compiled the kernel for ppp. Checked my modem init strings. Now, no matter what I use (minicom, shell scripts, netcfg), the chat script goes through the proper procedure and starts up ppp on the server side, then passes control to pppd, which attaches ppp0 to /dev/modem, and after that the modem hangs up (within about 2 seconds). I've tried all kinds of command line args to pppd, with no luck. Can anyone help me out here?

Later, Nolan
My webpage--> http://users.uniserve.com/~nzak/welcome.htm
_________________________________________________________________
Date: Thu, 11 Dec 1997 22:20:54 -0500
From: "atm" atm@dapa.com
Subject: Linux and routing

I have heard that you can connect a LAN to the Internet via just one assigned IP address. This is what I am planning on doing; however, I do not know how one would go about it, and I would like to ask you if you could do an article about it.

(Any takers among our readers? The usual approach is IP masquerading: build a 2.0 kernel with IP forwarding and CONFIG_IP_MASQUERADE enabled, then add forwarding rules along the lines of
ipfwadm -F -p deny
ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0
This is only a rough sketch with an example network address; the IP Masquerade mini-HOWTO has the details. --Editor)

I plan on getting a cable modem soon, so the bandwidth would be pretty high; that is why I have decided to try to make this connection serve my whole house via a LAN in my home. What I have read is that you can use the private IP ranges, meaning 10.x.x.x, 192.168.x.x and some others, for the addresses on the LAN, and have these connect to some box (the Linux box?) that shares its Internet connection with the LAN behind it. Is the problem that you would have to route between the assigned address and the private IPs used on the LAN? I have also read that this would slow down the connection a bit, but that is a price I am willing to pay. So, in summary, the question is: how would I be able to connect many computers to the Internet via just one assigned IP address? I would like to be able to do it using my Linux box connected to the Internet via cable modem, and to my LAN via an Ethernet link. Any help is much appreciated, thanks.
_________________________________________________________________
Date: Fri, 12 Dec 1997 10:41:54 +0000
From: John Fisher john@Atropos.apana.org.au
Subject: Of Mouse and Men (no Cheese)

A very new boy to all this I am :-)
Problem -- using a 486 PC & Slackware, I'm unable to use my mouse due to this error:
Too many symbolic links encountered /dev/console
Would very much appreciate some help.

Regards, John Fisher
_________________________________________________________________
Date: Fri, 12 Dec 1997 09:23:14 -0500
From: Wenhao_Meng@dadebehring.com
Subject: try to use a 386 computer

I am new in the Linux world. How new? I am so new that I have just ordered Red Hat release 5.0. Though this is a new world, I am very glad I am one of you, the Linux lovers. I used to have a 386 25 MHz computer. Not long ago I bought a Pentium 200 MHz computer. Since then I have not played with the 386.
Is there any easy and economical way to connect the 386 to the Pentium computer where I will install release 5.0? If so, what can I do with it, or at least what can I learn from it? Thank you very much. I look forward to talking with you.
_________________________________________________________________
Date: Tue, 16 Dec 1997 08:44:15 -0700
From: Doug Milligan doug@nwrks.com
Subject: Help Wanted: RedHat 5.0 sound

I have installed Red Hat 5.0 and configured the sound card using sndconfig. All went well and I heard the demo sound bite of Linus. However, I have never heard another sound since. When browsing web sites with sound, no audio is played. Anyone have any ideas?

Doug Milligan
_________________________________________________________________
Date: Thu, 25 Dec 1997 12:45:29 -0800 (PST)
From: karl rossing unixb0y@yahoo.com
Subject: LINUX AS A PDT

I was wondering if it is possible to get Windows 95/NT to authenticate to Linux (using NIS or NIS+). I'm really getting tired of adding accounts on the NT boxes for the Linux boxes (for smb)... Is there any commercial software available? I know of d-sync [http://www.m-tech.ab.ca/psynch/index.html] and NSGINA [http://www.dcs.qmw.ac.uk/~williams/], which seem a bit of work to set up... I'm not really looking for password synchronisation; I'd like to consolidate it on the Linux box, because the users use both Linux/95/NT. nuff said

Thanks, Karl Rossing
_________________________________________________________________
Date: Sun, 04 Jan 1998 23:54:37 +0100
From: Gabriele Giansante gvgsoft@madnet.it
Subject: Perl and HTML

Please pardon my bad English. I need help for an exam at my university. I have to write a CGI script in Perl and call it from HTML. I have done all this. Perl compiles the script without errors, but when I open the HTML page and choose the link to the script, I obtain only a listing of the script. Why is this? I put the line #!/usr/local/bin/perl in the Perl script; I know this is used to indicate the Perl interpreter. I work on Red Hat Linux 4.1, trying to execute the script with the Arena and Netscape browsers. I would be glad if you can help me. I am seeing Linux Gazette now for the first time and like it because I find much help for my questions. Pardon my English, and thank you.

(When a browser shows the source of a CGI script instead of its output, the web server is usually not configured to execute CGI programs at that location. With Apache, for example, the script normally lives in the cgi-bin directory named by a ScriptAlias directive, or the script's directory needs Options ExecCGI together with a line like "AddHandler cgi-script .cgi"; the script itself must also be executable (chmod +x). A sketch of the usual causes only -- your server may differ. --Editor)
_________________________________________________________________
General Mail
_________________________________________________________________
Date: Sun, 7 Dec 1997 11:15:36 PST
From: Marty Leisner leisner@sdsp.mc.xerox.com
Subject: some requests

When including more than a few lines of code, please include a link to the code (i.e., the original source files). In issue 22, I had problems with the line breaks when cutting and pasting a program from Netscape into a window, saving and recompiling.

Marty
_________________________________________________________________
Date: Wed, 10 Dec 1997 13:36:05 -0600 (CST)
From: Justin Dossey dossey@ou.edu
Subject: Help for trivial problems

I notice that a lot of people write the Gazette with fairly trivial problems that are difficult to solve via non-interactive media (e-mail). I'd like to remind some and inform others of the Linux Internet Support Cooperative. An excerpt from the LISC home page (http://www.linpeople.org) says: "Since 1994, a small and somewhat foolish group of Linux system users and administrators have been giving free technical support for Linux under the name LinPeople, on Internet Relay Chat (IRC).
With Linux being a free operating system, it only seemed appropriate to provide a free means of supporting it."

It sometimes seems to Linux users with problems that no one is interested in helping them. They post to news and don't get a reply; they send e-mail to the Gazette and feel ignored. When you have a problem, especially if you suspect that others might have the same one, try LISC. With most Internet-connected Linux boxen, it's just a matter of typing:
ircii irc.linpeople.org
/join #linpeople
and then asking the question. The people at LISC will do what they can to solve your problem, teaching you about Linux at the same time.

Justin Dossey
_________________________________________________________________
Date: Thu, 11 Dec 1997 06:35:00 -0500 (EST)
From: Benjmin Lee Adamson ladamson@itd.nrl.navy.mil
Subject: Ah... Goodstuff...

I just found the Linux Gazette... I haven't read all of them yet, but I really dig what I've found so far... :) Really really really really goodstuff. :)
_________________________________________________________________
Date: Fri, 12 Dec 1997 01:03:04 +0100
From: Diego Cortassa cortassa.diego@usa.net
Subject: Netscape Hidden tips

I saw Ivan Griffin's 'Netscape Hidden "Easter Eggs"' tip in Linux Gazette Issue 23, and I've got one more cool special URL:
about:mozilla
Read the message, and take a look at the N animation while downloading a web page! :-)

P.S. Linux Gazette is GREAT !!!!!!!!!

Diego Cortassa
_________________________________________________________________
Date: Sat, 20 Dec 1997 06:13:59 -0700
From: Sengan Baring-Gould senganb@cyrix.com
Subject: Loading times

Hi, thanks for the great work at linuxgazette.com. I'd like to suggest an improvement: that loading not be paused while the massive Linux Gazette banner at the top of the page gets loaded... it's a pain on a slow link.

Thanks, Sengan

(You might try turning off the display of graphical images using your browser. Alternatively, you can wait a short while and then stop the loading. Although the graphics may be incomplete, the text should be there. --Editor)
_________________________________________________________________
Date: Sun, 21 Dec 1997 20:03:32 +0100
From: Ingo Oeser ioe@informatik.tu-chemnitz.de
Subject: Kewl new cover image

The subject just says what I would like to tell you: the cover image (the one with "Linux Gazette" inside) really looks great!

cu Ingo
_________________________________________________________________
Date: Tue, 30 Dec 1997 09:52:10 -0500 (EST)
From: Kragen Sittler sittler@erim-int.com
Subject: http://www.operasoftware.com/alt_os.html

Using WinNT at work, I discovered this fabulous browser called Opera. It's the fastest web browser I've ever used, including Lynx, but it has most of the features I want from Netscape. Also, it's fairly small -- right now, my Opera process is under 5000K, even though it has six fairly heavy web pages open, and the download size has just grown past one megabyte. They're doing this funky pledge-drive thing where they ask people to promise to buy copies of Opera for $35 for other platforms -- Mac, Be, OS/2, and Linux -- before they've started developing Opera for those platforms.
They say they haven't gotten much support from the Linux community -- perhaps it's because not many people have heard of them? (I learned about Opera from Borland's web pages, btw.)

Kragen
_________________________________________________________________
Date: Tue, 30 Dec 1997 20:37:26 +0530
From: Sudhir Krishnan sudhir@kaveri.tifr.res.in
Subject: Can I help you?

I have been using Linux for more than a year now. There's no other OS that fascinates me more than Linux! I have been programming on Linux using gcc. I have made my own text editor for Linux, with various modes for emulating emacs, vi and Turbo C editor keystrokes. Also, there are modes for C, C++ and Pascal programs so that the keywords are highlighted. My home page's location is: http://www.geocities.com/SiliconValley/Pines/9147/ Here I have an entire page dedicated to Linux tips and help, meant for the Linux newbie. There are sections on PPP configuration, kernel compilation, installation and partitioning, etc. Please let me know if any of these could be of help in any way.

regards, - sud -
_________________________________________________________________
Date: Wed, 31 Dec 1997 14:23:26 PST
From: Marty Leisner leisner@sdsp.mc.xerox.com
Subject: Troff/Tex debate

I read Andrew Young's (aty@mintaka.sdsu.edu) letter in December, then read Larry Ayers' Issue 22 column... I take issue with two of Larry's statements:
"Groff is the epitome of the non-user-friendly and cryptic unix command-line tool."
Larry also says:
"Learning to use Groff on a Linux system might be an uphill battle, though Linux software developers must have learned enough of it at one time or other, as most programs come with Groff-tagged man-page files. Groff's apparent opacity and difficulty make LaTeX look easy in contrast!"
I'm not sure LaTeX is an improvement in these areas. I've used troff for over 10 years and recently started to use LaTeX (in fact, many times LaTeX is far more obscure than troff). And there are a few troff features I miss in LaTeX, like:
.sy stat \n(.F | fgrep Change >change.time
.so change.time
.sy rm change.time
to get the document's modification time into the document (as opposed to the "TeX" time, or having to manually change the date, which is invariably wrong; I have a makefile which puts this in a file, then includes the file).

The biggest problem is the lack of reference materials for troff. I've used Unix Text Processing (Dougherty/O'Reilly), which Andrew mentions... I've never seen or heard of the other book... There are some references for troff/pic/indexing tools at: http://cm.bell-labs.com/cm/cs/cstr.html (including the one mentioned later). There are also good papers on troff in the 4.4BSD User's Supplementary Documents (I'm not sure which ones can be redistributed and which ones are only in the book).

FYI, from Eric Raymond's online jargon file:
:troff:: /T'rof/ or /trof/ /n./ [Unix] The gray eminence of Unix text processing; a formatting and phototypesetting program, written originally in PDP-11 assembler and then in barely-structured early C by the late Joseph Ossanna, modeled after the earlier ROFF which was in turn modeled after Multics' RUNOFF by Jerome Saltzer (*that* name came from the expression "to run off a copy"). A companion program, {nroff}, formats output for terminals and line printers. In 1979, Brian Kernighan modified troff so that it could drive phototypesetters other than the Graphic Systems CAT. His paper describing that work ("A Typesetter-independent troff," AT&T CSTR #97) explains troff's durability.
After discussing the program's "obvious deficiencies -- a rebarbative input syntax, mysterious and undocumented properties in some areas, and a voracious appetite for computer resources" and noting the ugliness and extreme hairiness of the code and internals, Kernighan concludes: None of these remarks should be taken as denigrating Ossanna's accomplishment with TROFF. It has proven a remarkably robust tool, taking unbelievable abuse from a variety of preprocessors and being forced into uses that were never conceived of in the original design, all with considerable grace under fire. The success of {{TeX}} and desktop publishing systems have reduced `troff''s relative importance, but this tribute perfectly captures the strengths that secured `troff' a place in hacker folklore; indeed, it could be taken more generally as an indication of those qualities of good programs that, in the long run, hackers most admire.

marty
_________________________________________________________________
Date: Fri, 02 Jan 1998 15:33:15 +0100
From: Wolfgang Laun Wolfgang Laun@aut.alcatel.at
Subject: Linux Gazette/GNU/Linux Benchmarking

A recent LG article by André Balsa on benchmarking provided interesting material for me. Having a little experience with benchmarks myself (some of it gained while checking optimization efforts on a compiler), I have found that caching on Intel CPUs can significantly distort results. In the first part you mention that caching can be disabled via the BIOS setup. If you write your own benchmarks (or have the source), you could also consider using the pertinent CPU instructions, easily inserted using gcc's __asm__. This could be used to keep caching on while running the code you want to measure, and to flush between cycles, in order not to "carry over" a cache bonus from the previous iteration.

Wolfgang
_________________________________________________________________
Published in Linux Gazette Issue 24, January 1998
_________________________________________________________________
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
More 2¢ Tips!
Send Linux Tips and Tricks to gazette@ssc.com
_________________________________________________________________
Contents:
* Followup to PostScript and VC Key Sequences (LG#23)
* PostScript $0.02 follow-up
* Yet another cheap tip
* 2 cent tip - dosemu
* Re: 2c Tip "Finding What You Want with find"
* Re: Finding What You Want with find
* Finding What You Want with find Part III
* More on finding
* Another way to find
* Yet another way to find
* A final(?) way to find
* Re: I need some help
* Spinning Down Unused HDs
* LG Tips and Tricks (Netscape)
* Easter Eggs in Netscape
* Calculator Tip
* Security script
* Controlling cron.hourly
* Syslog and ping
_________________________________________________________________
Followup to PostScript and VC Key Sequences (LG#23)
Date: Thu, 4 Dec 1997 16:43:47 +0000 (GMT)
From: Ivan Griffin ivan.griffin@ul.ie

I just wanted to point out that some of my 2-cent tips in Issue 23 of the Linux Gazette (December, 1997) were a little funky in their appearance.
While it doesn't really matter at all with the VC key sequences, it may affect someone's understanding of the bad (imho) PostScript generated by the Microsoft PS driver. In this, the PostScript should have been pre-formatted using the appropriate HTML tags. Basically, the line
30000 VM?
is on its own, and not part of any other line. All that you have to do to remove this artificial restriction on viewing/converting the PostScript with ghostscript is to delete this line.

On another note, someone asked me where those key sequences come from. If you check either keyboard.c or keyb_m68k.c, you will find an array of function pointers called spec_fn_table[]. This array contains a list of functions to execute when certain key combinations are received... The key combinations listed in the 2-cent tips execute the functions show_state(), show_mem() and show_regs(). You will find the source for show_state() in /usr/src/linux/kernel/sched.c; show_mem() is in /usr/src/linux/arch/i386/mm/init.c, and show_regs() is in /usr/src/linux/arch/i386/kernel/process.c.

Best Regards, Ivan.
_________________________________________________________________
PostScript $0.02 follow-up
Date: Wed, 3 Dec 1997 13:51:48 -0500 (EST)
From: Kyle Ferrio kbf@phy.duke.edu

In the December issue of LG, Ivan Griffin suggests using pstops from the psutils package to accomplish two-up printing, gives a helpful example for A4 paper, and points out that the command line needs to be tweaked for US letter. If you're using US letter paper, then psnup (also part of psutils) already does the job nicely with no uncomfortable thinking. It might even work for A4, but I haven't checked. The psutils are generally very handy, so folks might want to have a look. An RPM is available in /contrib at ftp.redhat.com, for instance. Be advised that there seem to be at least two very distinct packages called psutils floating around Net-space.
_________________________________________________________________
Yet another cheap tip
Date: Sun, 30 Nov 1997 03:48:40 -0800 (PST)
From: Gary Johnson gjohnson@season.com

Sorry if it has been mentioned before; I thought I would throw it in the Gazette pile just in case it hasn't...

Cat-proof keyboard. Switching to an unused virtual console is a quick way to blank the screen and disable the keyboard. To make one available, try
setterm -clear > /dev/tty12
on startup. ALT-F12 flips to it, or CTRL-ALT-F12 from X. Because there (probably) isn't a login running on that VC, it doesn't do much, which can be a feature. A smart cat may still luck into a troublesome key sequence.
_________________________________________________________________
2 cent tip - dosemu
Date: Fri, 5 Dec 1997 00:55:55 -0500
From: Joey Hess joey@kitenet.net

I occasionally use dosemu, mainly to run some games I can't live without, but I hate seeing the C:\> prompt. So I thought it'd be nice if there were a way to tell dosemu what DOS command to run, and have it run that command on bootup. Here's a perl script that does just that. Read the comments at the top; they explain some changes you need to make on the DOS side of this. The basic idea is: make a ~/dos_do.bat file that contains the command you want to run, and use lredir to let dosemu see your home directory. Then run the batch file.

#!/usr/bin/perl
#
# This runs dosemu.
#
# Any parameters specified after "--" will be passed in to dosemu to be
# run as dos commands.
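#
# Usage sketch (the wrapper name "rundos" and the game path below are
# only examples, not part of the original tip):
#     rundos -- "c: ; cd \games ; game.exe"
# Everything after "--" is written to ~/dos_do.bat with ";" turned into
# line breaks, and exitemu is appended so dosemu quits when it finishes.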
#
# Setup: add to autoexec.emu:
# lredir.com h: linux\fs\${home}
# if exist h:\dos_do.bat call h:\dos_do.bat
#
# GPL Copyright 1996, 1997 Joey Hess

# Split params into dosemu parameters and dos commands.
while ($a=shift @ARGV) {
        if ($a =~ /--/) { last }
        $dosemu_command_line.="$a ";
}
$dos_command_line=join(' ',@ARGV);
$dos_command_line=~s/;/\r\n/g;
open (OUT,">$ENV{HOME}/dos_do.bat") || die "$ENV{HOME}/dos_do.bat: $!";
if ($dos_command_line) {
        print OUT "$dos_command_line\r\n"; # note dos CR LF
        print OUT "exitemu\r\n";
}
close OUT;
system "/usr/bin/dos $dosemu_command_line";
unlink "$ENV{HOME}/dos_do.bat";
_________________________________________________________________
Re: 2c Tip "Finding What You Want with find"
Date: Wed, 03 Dec 1997 16:03:30 +0100
From: Mike Neuhauser mike@gams.co.at

Jon Rabone, jkr@camcon.co.uk, wrote in the December 97 issue of LG:
> In the October 97 issue, Dave Nelson suggests using
> find . -type f -exec grep "string" /dev/null {} \;
> to persuade grep to print the filenames that it finds the search
> expression in. This starts up a grep for each file, however. A
> shorter and more efficient way of doing it uses backticks:
>
> grep "string" `find . -type f`
>
> Note however, that if the find matches a large number of files you
> may exceed a command line buffer in the shell and cause it to complain.

To avoid an overflow of the command line buffer, use:
find . -type f | xargs grep "string"
This may give problems if filenames contain white space (e.g., touch "test file") -- to avoid this, use:
find . -type f -print0 | xargs -0 grep "string"
Note also that find doesn't follow symbolic links to directories by default. Using find with the option -follow does the trick (find . -follow ...).
_________________________________________________________________
Re: Finding What You Want with find
Date: 5 Dec 1997 17:47:50 -0000
From: Dale K. Hawkins dhawkins@mines.edu

find . -type f -exec grep "string" /dev/null {} \;
That is how I used to run things too, but a friend showed me the xargs program. Very nice. So one could turn the above statement into something like:
find . -type f | xargs fgrep "string" /dev/null
Again, the /dev/null will force the name of the file to be printed (in the unlikely case that find only finds one file name). This has the benefit of not invoking a new grep process for each file.

But for a really slick (and much faster) search, try this:
locate $PWD | grep "^$PWD" | xargs fgrep "string" /dev/null
This assumes that your locate database is current for the directory to be searched. It does have a problem, though: it tries to grep everything, including directories!
locate $PWD | grep "^$PWD" | xargs -ifilename sh -c \
   "if [ -f filename ]; then echo filename; fi " | \
   xargs fgrep "string" /dev/null
And as an exercise for the reader: take a look at lesspipe.sh (if it is installed; download it otherwise!). See if you can create a shell script called supercat (or something) which preprocesses the input to prevent grep'ing binary files, etc.

You gotta love UNIX and especially Linux!
-Dale K. Hawkins
_________________________________________________________________
Finding What You Want with find Part III
Date: Thu, 11 Dec 1997 17:12:46 +0100 (MET)
From: Axel Dietrich Axel.Dietrich@neuroinformatik.ruhr-uni-bochum.de

>In the October 97 issue, Dave Nelson suggests using
>
> find . -type f -exec grep "string" /dev/null {} \;
>
>to persuade grep to print the filenames that it finds the search
>expression in.
Besides Jon Rabone's "shorter and more efficient" version in the December 97 issue using backticks:
grep "string" `find . -type f`
the following variant can be used without the danger of exceeding the command line buffer limit:
find . -type f -exec grep -l "string" {} \;
The "-l" switch tells grep to show the name of the file in which "string" was found. To limit such a search to selected files, I use a combination of the -type and -name switches:
find . \( -type f -name "*\.html" \) -exec grep -l "string" {} \;
This searches all files with the suffix "html" for the string "string" and outputs the name(s) of the file(s) in which "string" was found.

Axel
_________________________________________________________________
More on finding
Date: Tue, 16 Dec 1997 14:12:57 +0100 (MET)
From: Alexander Larsson alla@lysator.liu.se

In the December 97 issue Jon Rabone wrote:
------------------------------------
This starts up a grep for each file, however. A shorter and more efficient way of doing it uses backticks:
grep "string" `find . -type f`
Note however, that if the find matches a large number of files you may exceed a command line buffer in the shell and cause it to complain.
------------------------------------
A better way would be to use:
find . -type f | xargs grep "string"
which starts up a new grep every time the command line buffer is full.

/ Alex
_________________________________________________________________
Another way to find
Date: Sat, 27 Dec 1997 12:06:47 -0500
From: rchandra@letter.com

In an article in the LG, it was suggested that, in order to cut down on having to fork(2)/exec(2) for each grep when you're searching through a tree of files, you use the shell's capability of command substitution (for the file name parameters to the grep command) with "backquotes," "grave accents," "backticks," etc., as they are commonly called ("`"). In that little tidbit, it is noted that this has the limitation of the system-wide limit on the number of arguments, and I suspect there might be a length issue as well (too many total bytes).

Enter xargs(1). The job of the xargs command is to read its stdin and use the resultant strings as arguments to some command prefix (such as "grep -n somestring"), much like backquotes work. However, the xargs program is "aware of" the limitations imposed by the system, and will run the command prefix as many times as necessary to exhaust the list provided on stdin, while on each run giving the command only the maximum number of arguments and the maximum byte count that an exec(2) call can handle. Thus, provided that the program named in the command prefix follows the UNIX convention of iterating over its non-option arguments, one can search one, hundreds, thousands, even millions of files with a line like:
find / -type f -print | xargs grep -n 'where is that string?'
As usual, consult your favorite source of documentation, such as your local man pages, for ways to get even craftier with xargs.
_________________________________________________________________
Yet another way to find
Date: Mon, 29 Dec 1997 10:49:52 +0100
From: Guido Socher eedgus@aken104.eed.ericsson.se

In recent Linux Gazette issues there have been a couple of ideas on how to recursively grep around files and directories. Very useful, but it can cause problems when you have binaries (e.g., some executable) in the directories that happen to contain the string you are looking for.
The result is, most of the time, an unusable terminal, because some control character from the binary file has set it to graphics mode. There are, of course, ways to make the terminal readable again, but the best is to avoid the problem in the first place. Let's just remove the unprintable characters. They are unreadable anyway! The command
sed -e 's/[^ -~][^ -~]*/ /g'
removes multiple occurrences of non-printable/control characters and replaces them with a single space. The [^ -~] matches all characters not in the ASCII range from space to tilde. This command can easily be combined (using a pipe) with find and grep. Here is a little script, I called it grepfind, that does it all:

#!/bin/sh
#save this in a file called grepfind and do a "chmod 755 grepfind"
#
if test $# = 0 -o "$1" = "-h" -o "$1" = "--help" ; then
   echo ' grepfind -- recursively descends directories and egreps all files '
   echo ''
   echo ' Usage: grepfind [--help][-h][start_directory] egrep_search_pattern'
   echo ''
   echo ' The current directory is used as start_directory if parameter'
   echo ' start_directory is omitted. The search is case insensitive.'
   echo ' Multiple occurrences of control characters are replaced by a single'
   echo ' space. This makes it possible to grep around in files that contain'
   echo ' binary data and strings without setting the terminal accidentally'
   echo ' to graphics mode.'
   echo ''
   echo ' Example: grepfind /home "hello world" '
else
   if [ "$2" = "" ]; then
      find . -type f -exec egrep -i "$1" /dev/null {} \; | sed -e 's/[^ -~][^ -~]*/ /g'
   else
      if [ -d "$1" ]; then
         find $1 -type f -exec egrep -i "$2" /dev/null {} \; | sed -e 's/[^ -~][^ -~]*/ /g'
      else
         echo "ERROR: $1 is not a directory"
      fi
   fi
fi
#__END__OF_grepfind
_________________________________________________________________
A final(?) way to find
Date: Wed, 31 Dec 1997 14:31:57 PST
From: Marty Leisner leisner@sdsp.mc.xerox.com

In the last few months, there have been a few letters (by Dave Nelson, Jon Rabone and others) on how to grep with file names. Instead of using the trick:
find . -type f -exec grep "string" /dev/null {} \;
and other variants, or doing
grep "string" $(find . -type f)
1) use the -H option of grep 2.1 (to print file names; not in 2.0)
2) use xargs to overcome problems with buffer size:
find . -type f | xargs grep -H "string"

Marty
_________________________________________________________________
Re: I need some help
Date: Sun, 7 Dec 1997 14:25:25 +0100 (MET)
From: Roland Smith rsmit06@ibm.net

Javier, in response to your letter in the mailbag of the Dec. 97 Linux Gazette: you need to set the environment variable MOZILLA_HOME to the directory containing Netscape's files. There are two ways of doing this: you can type `export MOZILLA_HOME=/usr/local/netscape' every time you start your computer, or you can edit /etc/profile. This is a file read by the bash shell. Add the following to this file (assuming the Netscape stuff is in /usr/local/netscape):
MOZILLA_HOME="/usr/local/netscape"
export MOZILLA_HOME
You also need to add an entry for Netscape to your window manager's initialization file, so it shows on the toolbar and/or menu. How to do this depends on the window manager you use.
If you're using fvwm2-95, add the following to the .fvwm2rc95 file in your home directory:
# add to this menu:
AddToMenu "Utilities" "Utilities" Title
+ "Netscape%mini-nscape.xpm%" Exec netscape -geometry 931x683+54+9 &

Regards, Roland
_________________________________________________________________
Spinning Down Unused HDs
Date: Mon, 08 Dec 1997 14:21:31 -0500
From: Peter S Galbraith galbraith@mixing.qc.dfo.ca

In the December issue of LG tips, you discuss the hdparm command to spin down disks. I tried this on my old SCSI disk:
bash-2.01# hdparm -S6 /dev/sdb
/dev/sdb:
 operation not supported on SCSI disks
Too bad! I use the `scsi-idle' kernel patch to do this very same thing on SCSI, and I was eager to try your trick to finally stop having to patch the kernel at every upgrade. Too bad it doesn't seem to work on SCSI disks (it's also strange that the man page doesn't say this...)
_________________________________________________________________
LG Tips and Tricks (Netscape)
Date: Fri, 12 Dec 1997 02:56:16 -0600
From: Christian J Carlson carlson@means.net

* Date: Sun, 9 Nov 1997 22:00:31 +0000 (GMT)
* From: Ivan Griffin ivan.griffin@ul.ie
*
* These special URLs do interesting things in Netscape Navigator and Communicator.
*
* about:cache gives details on your cache
* about:global gives details about global history
* about:memory-cache
* about:image-cache
* about:document
* about:hype
* about:plugins
* about:editfilenew
*
* view-source:URL opens source window of the URL
*
* Ctrl-Alt-F takes you to an interesting site :-)

This appeared in Linux Gazette, December 1997. There is one more way cool easter egg in Netscape. First, type "about:mozilla" to get the Mozilla easter egg. Then watch the "N" in the upper right-hand corner of Netscape. Whenever you access a web site, Mozilla himself will appear instead of the boring flying stars, etc. As far as I know, this has been in every version of Netscape since at least version 2.0. Of course, this only works in the Linux version (I don't know about other *nix versions) of Netscape, NOT Windows 95 :).

Christian J. Carlson
_________________________________________________________________
Easter Eggs in Netscape
Date: Tue, 30 Dec 1997 0:47:12 +1100 (EADT)
From: Michael Lake mikel@BlueSky.com.au

I have just been reading Linux Gazette Issue 23 about the easter eggs in Netscape and thought that I would try some URLs of my own. I am using Netscape 3.01 for Linux. A little experimentation found the following --
about:foo
The message that is returned is: Whatchew talkin' 'bout, Willis? Instead of foo you can use anything that is not understandable to Netscape. A more interesting one that I tried is --
about:mozilla
This gives a very interesting quotation which I will leave to the reader to discover.

Enjoying the Linux Gazette immensely,
Best Regards, Michael Lake
Sydney, Australia
_________________________________________________________________
Calculator Tip
Date: Sun, 14 Dec 1997 19:23:55 -0500
From: Michael McLay mclay@nist.gov

The tips columns in issues #21 and #23 gave tips for doing calculations without having to fire up a heavyweight GUI calculator. It is very handy to be able to do all the number entry through the command line, but I was surprised to see perl and awk used in the two examples. bc has been around forever in Unix and would be the logical first choice for many oldtimers. And bc can do the calculations to any precision desired, if that is important. Another good option is Python.
Python can be run in an interactive mode like bc, so previous calculations can be saved as variables and reused. Python can also be built so that past lines can be edited using the standard GNU readline library editing operations. For instance, in the following interactive sequence the previous-line key will restore the last executed line to the prompt, and the line edit keys, such as backward-char and delete, can then be used to edit the line.

~: python
Python 1.5b2 (#2, Dec 12 1997, 16:13:12) [GCC 2.7.2] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> a = (10+3)/7
>>> a
1
>>> a = (10.+3)/7
>>> a
1.85714285714
>>> a/32
0.0580357142857
>>> "it takes %7.2f percent" % a
'it takes    1.86 percent'
>>> "it takes %-7.2f percent" % a
'it takes 1.86    percent'
>>> from Numeric import *
>>> b = array(arange(12))
>>> b.shape = 3,4
>>> b
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> b/a
array([[ 0.        ,  0.53846154,  1.07692308,  1.61538462],
       [ 2.15384615,  2.69230769,  3.23076923,  3.76923077],
       [ 4.30769231,  4.84615385,  5.38461538,  5.92307692]])
>>>

This example also shows the Python Numeric[1] module being used at the command line. Any Python module that is installed with the interpreter can be imported and used in the interactive mode. Of course, if you want to make Python do a one-liner from the command line, that is possible also:
~: python -c "print 34./33"
1.0303030303
or, to format the output:
~: python -c "print 'eat %3.4f %s' % (1.444e5/32,'more fish')"
eat 4512.5000 more fish

[1] The Numeric module in this example is not built into the standard distribution. See the matrix-sig page for details on how to add it to the module library if you can't find it on your system.
_________________________________________________________________
Security script
Date: Mon, 15 Dec 1997 20:49:53 -0600 (CST)
From: Corey G cgaff@interaccess.com

Often when I leave my machine connected to the Internet for prolonged periods, I worry about hackers. I wanted a program that would know if a process was started by anyone, including root, that was not originally on the machine. So I wrote this script. I don't know if something similar exists, but I have tested this very thoroughly and it works rather well. It can be frustrating at times when you are active on the machine, but it works very well for idle times.

HOW IT WORKS: This script grabs all the processes when first invoked and saves them to a temporary file. After a default of 10 seconds, the process table is checked for any new processes that were started. If these processes are not listed in the "TRUSTED_ITEMS" variable, they will be killed immediately.

USAGE: Once you have all the necessary processes running on your machine, start the script as root. It will make the necessary directories on the machine in a safer place than just /tmp. I have created two variables named "TRUSTED_ITEMS" and "TRUSTED_USERS". These can be used to ignore some users or programs that you never want killed. Be careful, since sometimes you will need to include more than one item for some programs. For example, if you don't want xterms killed, you must add "xterm" and also "bash" if you are running bash as your default shell. Note: when testing this script, make certain that nothing important is running. I take no blame for any wrongdoing from this script. To start the script:
nohup ./secmach &
I am always looking for ways to improve this script, so feel free to e-mail your comments or suggestions to me. Good Luck!!!
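(Before trusting a script like this with kill -9, you can preview what its whitelist pattern will and won't catch -- a dry run only, using a shortened example pattern in place of the script's full TRUSTED_ITEMS list:
ps -aux | sed -e '1d' | egrep -v 'pppd|chat|xterm|bash' | awk '{print $1, $2, $11}'
This prints the owner, PID and command of every process the pattern does not whitelist; the script below would kill such a process only if it appeared after startup. --Editor)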
#!/bin/sh
# Secmach - security program
# v1.0 12-14-97
# By: Corey Gaffney
export PATH=/usr/bin:/bin:/sbin
COUNTER=0
LOCATION=/usr/secmach
CHECK_TIME=10
TRUSTED=/usr/secmach/trusted
UNTRUSTED=/usr/secmach/untrusted
DIFFKILL=/usr/secmach/diffkill
TRUSTED_USERS="johndoe"
TRUSTED_ITEMS="$TRUSTED_USERS|pppd|chat|netscape|xterm|egrep|ps|sed|secmach|awk"

# Create the working directory on the first run.
if [ ! -s $LOCATION ]
then
   mkdir $LOCATION
   chmod 700 $LOCATION
fi

while :
do
   COUNTER=`expr $COUNTER + 1`
   # On the first pass, snapshot the PIDs of everything now running.
   if [ $COUNTER -eq 1 ]
   then
      ps -aux | sed -e '1d' | awk '{print $2}' > $TRUSTED
   fi
   sleep $CHECK_TIME
   # List current PIDs, minus anything matching the trusted pattern.
   ps -aux | sed -e '1d' | egrep -v "$TRUSTED_ITEMS" | awk '{print $2}' > $UNTRUSTED
   diff $TRUSTED $UNTRUSTED > $DIFFKILL
   # Lines marked ">" are new, untrusted PIDs; kill them.
   KILL=`grep ">" $DIFFKILL | awk '{print $2}'`
   [ -n "$KILL" ] && kill -9 $KILL
done
_________________________________________________________________
Controlling cron.hourly
Date: Sun, 21 Dec 1997 10:36:16 -0500 (EST)
From: Jeff Johnson jbj@JBJ.ORG

According to Gary Turkington:
> I know this is one of those *really* simple ones, but it's beating me. How
> do I stop the cron.hourly setup mailing a 'fortune' to root? This used
> to happen daily, no biggie, but when I upgraded to 5.0, it's hourly.. spam
> :)

This is a variant of the "not a typewriter" error that causes loss of hair when using rlogin :-) The analysis goes like this:
1) Cron runs an hourly job as root.
2) To run the job, a non-interactive shell (i.e., stdin/stdout are *not* connected to a tty but to cron) is started.
3) The shell reads its init files: /etc/profile, ~/.bashrc, whatever.
4) The init files execute fortune.
5) The job is performed.
6) Cron detects output, so it mails it to root.
Fix by identifying which shell init file is executing fortune and avoiding fortune when not interactive. There are a couple of techniques for doing this, often by checking whether PS1 is set -- for example, by wrapping the call in something like
if [ -n "$PS1" ]; then fortune; fi
(a sketch; adjust to your shell).

73 de Jeff
_________________________________________________________________
Syslog and ping
Date: Tue, 30 Dec 1997 17:58:59 -0500 (EST)
From: Andrew Tucker andrew.tucker@kplus2.aces.k12.ct.us

Hi, this is a small hack I did to allow logging of users' use of ping through syslog. With more and larger systems running Linux, and more and more instances of ICMP abuse, any shortcut a system administrator can use to prevent such abuse is helpful. Click here to download the scripts.

-- Andrew
_________________________________________________________________
Published in Linux Gazette Issue 24, January 1998
_________________________________________________________________
This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
News Bytes
Contents:
* News in General
* Software Announcements
_________________________________________________________________
News in General
_________________________________________________________________
February Linux Journal

The February issue of Linux Journal will be hitting the newsstands this week. The focus of this issue is databases. Also included is an article on the making of the movie Titanic -- Digital Domain used Linux extensively in generating the graphics for this movie. Check out the Table of Contents.
_________________________________________________________________
Apache Webserver Serves Over Half the Internet
Date: January 5, 1998

The Apache Web Server, from the Apache Group (http://www.apache.org/), now serves over half the domains on the Internet, according to the latest Netcraft Web Server Survey (http://www.netcraft.com/). According to the January 1998 survey, the Apache server and its derivatives serve 50.24% of the 1,834,710 sites found by Netcraft. For the complete press release, click here.
_________________________________________________________________
Linux on Celsius Workstations
Date: November 28, 1997

Siemens Nixdorf (SNI) is selling their new "Celsius" workstations with WinNT AND Linux. SNI is the first major producer to offer workstations with Linux on board. Linux support is provided by the well-known Linux distributor SuSE GmbH. The machine is targeted at universities and is already in use there with Linux. The "Celsius" is based on a Pentium II at 266/300 MHz, with 512KB cache, 32MB ECC SDRAM, a 2GB drive, a PCI/AGP-based Matrox Millennium II with 4MB, Fast Ethernet, a disk drive and a keyboard. The price will be between 7300 and 8200 DM, but it is expected to fall. According to SNI, the "Celsius 1000" (300 MHz version) gets 11.9 SPECint95 and 8.6 SPECfp with WinNT (no Linux values yet).
_________________________________________________________________
Freeware OCR Project

A mailing list has been started for discussion of the design and implementation of a freeware Optical Character Recognition program for Unix. Scanner support for Unix is under rapid development thanks to projects such as SANE, and applications such as GIMP are beginning to benefit from this support. OCR is one major application missing from the freeware Unix world, and has long been on the FSF's project list as well. The project is going to have to create both a user interface and a character recognition engine. The project Web page is at http://starship.skyport.net/crew/amk/ocr/; look there for announcements about the project's status.
For more information: Andrew Kuchling, amk@magnet.com
_________________________________________________________________
5th International Linux Kongress 1998

The 5th International Linux Kongress 1998 will be held from June 3 to June 5 at the University of Cologne (Köln). It follows the tradition of the Linux conference series (Heidelberg 94, Berlin 95/96 and Würzburg 97), which has evolved into the most important meeting for Linux experts and developers. The conference will offer up to 25 talks during two days (June 4 and 5) in two parallel tracks. The first track will be dedicated to Linux development; the second track is intended to accommodate talks about Linux applications and usage. Each talk will be 40 minutes (possibly including discussion). The language of the talks will be English or, exceptionally, German.
For more information: GUUG office, Ms. Bester, info@linux-kongress.de http://www.linux-kongress.de/
_________________________________________________________________
Linux in the News

Articles by Charles J. Fisher worth checking out in Unixworld On-line:
Creation and Maintenance of Virtual Hosts
Linux, PPP and Windows 95
-------------------------------
More Linux articles:
"Linux Grows Up" -- Red Hat's commercial Linux beats NT at its own game.
Computer Currents
"Intel-Based Unix: A Great Solution for Intranets"
Federal Computer Week
-------------------------------
Better late than never: On November 15 Linus announced that the F00F bug had been fixed in the 2.0.x kernel. Click here to read what he had to say.
_________________________________________________________________
Linux Bibliography

After a three-month hiatus, the Linux Bibliography is back on line. The bibliography has grown a lot in size since it was last "live": there are now almost 50 Linux-specific titles listed. While the site is back up, it is still in beta mode. You'll need a JavaScript-compatible browser to see it, and some of the JavaScript appears to be a bit buggy with older browsers. Most of the forms are working, but not all of them just yet.
For more information: Mark Stone, markst@shell.nanospace.com
_________________________________________________________________
Linux Business Solutions

Announcing: a project to generate and improve documentation of Linux business solutions. The project is only a few days old so far. http://linas.org/linux/web-project.html. There is a separate mailing list for this project. Web page owners, writers, marketing types & document management hackers are encouraged to join.
For more information: linas@linas.org
_________________________________________________________________
The Linux Project Server

The Linux Project Server has a new home page providing counters, guestbooks and many other such scripts.
For more information: Gerhard Poul, gerhard@shadow.ccc.at
_________________________________________________________________
Freely Redistributed Software Track - USENIX'98

Come hear the latest about the full range of freely redistributable software--with pointers to the code--including Linux, FreeBSD, GNU, NetBSD, OpenBSD, Samba, and more. FREENIX, The Freely Redistributed Software Track, at the USENIX 1998 Conference, held June 15-19, 1998 in New Orleans, Louisiana.
For more information: http://www.usenix.org/events/no98/
_________________________________________________________________
Linux Software Database

The Linux Software Database is now up and running at http://www.txcc.net/~cl/. Unlike software sites that list only what software the author has found, the LSDB allows Linux users connecting to the database to add software they have found or use. This way, both authors of software and users of the site can add their titles on-line.
For more information: cl@txcc.net
_________________________________________________________________
S.u.S.E. Discussion Groups

S.u.S.E. has set up two lists for discussions and announcements concerning the S.u.S.E. Linux distribution:
suse-announce-e -- Announcements of S.u.S.E. LLC (English)
suse-linux-e -- Discussions about S.u.S.E. Linux (English)
To subscribe, send an email to majordomo@suse.com with 'subscribe listname' in the body.
For more information: Bodo Bauer, bb@suse.de http://www.suse.de
_________________________________________________________________
The Linux Resource Kit

The Linux Resource Kit has been expanded to include current news and information on:
* Web Servers
* Groupware
* Networking/Routing/WAN
* Client/Server Database
* Microsoft/Novell/Apple/NFS file and print sharing
* System Administration
For more information: Joe Royall, joe@secretagent.com
_________________________________________________________________
IACT & Linux users

A message from the Land of Beyond web site!
We have devoted a new section to IACT, the "International Alliance for Compatible Technology." Users of major platforms such as OS/2, Linux, FreeBSD, BeOS, MacOS, Unix, DOS etc. are invited to sign on to IACT's Declaration! IACT is working for greater freedom of choice in software, for better access to computer-based services & technology, and for open standards, including higher standards of compatibility. To achieve such important goals for all people, IACT will join in anti-monopoly efforts against Micro$oft, while we find positive ways to help consumers & users regardless of their chosen platforms. Join us at... http://www.trailerpark.com/moonwalk/moonwolf/iactpage.html
For more information: Diane Gartner, dgwhiz@earthling.net IACT Co-ordinator
_________________________________________________________________
Software Announcements
_________________________________________________________________
Dotfile Generator 2.2
Date: December 24, 1997

Jesper Pedersen has announced the release of The Dotfile Generator version 2.2. New in this release:
* Added the ipfwadm module for configuring IP firewalling, forwarding and masquerading.
* Added the procmail modules for configuring mail filters.
* The menu system, from which one selects the configuration pages, has been rewritten, so it's now much more intuitive.
* Some bugs have been fixed.
* Made it work with Tcl/Tk 8.0, which is much faster than the older versions of Tcl/Tk.
The source is available from the home site of TDG: ftp://ftp.imada.ou.dk/pub/dotfile/dotfile.tar.gz
For more information: Jesper Pedersen, blackie@imada.ou.dk
_________________________________________________________________
TeamWave Workplace 2.1
Date: November 27, 1997

TeamWave Software Ltd. today announced the release of version 2.1 of TeamWave Workplace, its popular software that lets teams work together in shared electronic rooms across the Internet. For teams within an organization who are sometimes physically separated yet still need to work closely together, TeamWave Workplace provides an Internet forum where teams collaborate, communicate and share information.
For more information: info@teamwave.com http://www.teamwave.com/
_________________________________________________________________
Red Hat Linux 5.0, HURRICANE
Date: December 1, 1997

Red Hat Software, Inc. has announced the availability of Red Hat Linux release 5.0 for Intel and Alpha computers. This release is a major update to the Red Hat Linux 4.X series. Red Hat Linux is available on their FTP site: ftp://ftp.redhat.com/pub/redhat/redhat-5.0/
For more information: http://www.redhat.com/ info@redhat.com
_________________________________________________________________
Nighthawk
Date: December 19, 1997

Nighthawk is an X Window System game for Linux (freeware, GPL), based on Andrew Braybrook's C-64 classic "Paradroid", written by Jason Nunn (JsNO). Nighthawk 1.0 has been uploaded to the following sites:
* http://www.downunder.net.au/~jsno/rel/1997/nighthawk-1.0.tgz
* ftp://ftp.cdrom.com/pub/unixfreeware/incoming (asked to be put into games)
* ftp://sunsite.unc.edu/pub/Linux/games/arcade (uploaded to /incoming/Linux)
* ftp://ftp.funet.fi (uploaded to /pub/Linux/incoming)
For more information: Jason Nunn, jsno@dayworld.net.au
_________________________________________________________________
C++ GUI IDE
Date: November 28, 1997

A simple IDE for Qt C++,
It includes a GUI dialog editor, and was developed entirely on Linux/X11R6. It is available at: http://qtez.commkey.net/
All distributions of QtEZ include documentation, a set of tutorials, and many demonstration programs to illustrate features in the environment. I hope QtEZ is well received.
For more information: Sam Magnuson, zachsman@commkey.net
_________________________________________________________________
r2d2 - easy but powerful boot-concept
Date: December 3, 1997
r2d2 is a newly developed boot concept for Unix. In some ways r2d2 combines the ease of use of the BSD-style scripts (e.g. in Slackware Linux) with the power and flexibility of the traditional SysV scripts (e.g. in RedHat, Debian or S.u.S.E. Linux). r2d2 is available at ftp://sunsite.unc.edu/pub/Linux/system/daemons/init/
For more information: Winfried Truemper, winni@xpilot.org
_________________________________________________________________
TkSTEP 8.0p2
Date: December 5, 1997
This is Tk 8.0p2 with some modifications to make it look like N*XTSTEP's interface, plus some new widgets inspired by N*XTSTEP and the ability to utilize the full power (and more) of the OffiX Drag'n'Drop protocol. The programming aspects of Tk have not been changed much, so you can run your old Tk apps normally with it, although some options will not do anything (like -relief for scrollbars). To get TkSTEP go to http://www.fga.de/~ograf/TkStep.shtml
For more information: Oliver Graf, ograf@fa.de Steve Murray, stevem@eng.uts.edu.au
_________________________________________________________________
t1lib-0.4-beta: A rasterizer for Adobe Type 1 fonts
Date: December 5, 1997
t1lib is a library for generating character and string glyphs from Adobe Type 1 fonts under UNIX. t1lib uses most of the code of the X11 rasterizer donated by IBM to the X11 project, but some disadvantages of the rasterizer included in X11 have been eliminated. You can get t1lib by anonymous ftp at:
ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/t1lib/t1lib-0.4-beta.tar.gz
For more information: Rainer Menzner, Rainer.Menzner@neuroinformatik.ruhr-uni-bochum.de
_________________________________________________________________
SGML-Tools 1.0.1
Date: December 5, 1997
After a year of mostly code restructuring and bug fixing, the time finally arrived to let SGML-Tools 1.0 see the light. This means the final demise of Linuxdoc-SGML. Loads and loads of features were added. Most of the code was redone in Perl 5, which adds a lot of flexibility for people who want to override certain parts of the functionality (but at the same time be able to install newer versions); uncountable bugs were fixed (more were introduced, of course :-)), lots of small enhancements were made, etc. Visit the SGML-Tools homepage, at http://pobox.com/~cg/sgmltools/, for the latest news, pointers to locations, etcetera. Note that I haven't been able to get this stuff on SunSite yet, so you'd better go via the homepage.
For more information: Cees de Groot, cg@pobox.com http://pobox.com/~cg/sgmltools/
_________________________________________________________________
LinuxThreads 0.7 -- POSIX thread library
Date: December 5, 1997
LinuxThreads is a thread library for Linux implementing the POSIX 1003.1c API and based on the "one thread = one process" model.
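Programs are built against the library in the usual Unix way; a typical compile-and-link line looks something like this (a sketch only -- "myprog" is a hypothetical program name, and the LinuxThreads README documents the exact flags for your installation):
gcc -D_REENTRANT -o myprog myprog.c -lpthread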
The sources are available through the LinuxThreads home page: http://pauillac.inria.fr/~xleroy/linuxthreads/ or directly on our FTP server:
ftp://ftp.inria.fr/INRIA/Projects/cristal/Xavier.Leroy/linuxthreads.tar.gz
For more information: Xavier Leroy, xavier.leroy@inria.fr
_________________________________________________________________
TkDesk 1.0b5 released
Date: December 9, 1997
TkDesk 1.0b5, a desktop and file manager for UNIXes running X11, has been released. TkDesk is a graphical desktop and file manager for several brands of UNIX (such as Linux) and the X Window System. It offers a very rich set of file operations and services, and gives the user the ability to configure most aspects of TkDesk in a powerful way. The reason for this is the use of Tcl/Tk as the configuration and (for the biggest part of TkDesk) implementation language. You can get TkDesk from its homepage at: http://people.mainz.netsurf.de/~bolik/tkdesk/
For more information: Christian Bolik, Christian.Bolik@mainz.netsurf.de
_________________________________________________________________
MP3Blaster V2.01 -- MPEG audio player for Linux
Date: December 9, 1997
A new release of MP3Blaster, an MPEG layer 1/2 audio player for Linux, is available. This player has a smart and intuitive user interface that allows a user to split up a playlist into sections. An interesting use of this feature is that you can put about 10 of your audio CDs in MP3 format on one CD-ROM and have each section in the playlist represent an album. Then, you can have the player play the songs in every imaginable order you want. Available at:
ftp://sunsite.unc.edu/pub/Linux/apps/sound/players/mp3blaster-2.0b1.tar.gz
http://www.stack.nl/~brama/src/mp3blaster-2.0b1.tar.gz
For more information: Bram Avontuur, brama@mud.stack.nl
_________________________________________________________________
X-CD-Roast 0.96c
Date: December 9, 1997
X-CD-Roast 0.96c, a CD writer package for X, has been released. Really new stuff:
* based on Tcl/Tk 8.0
* includes cdrecord-1.5 and mkisofs 1.11.1
* lots of fixes.
Check out the web page: http://www.fh-muenchen.de/rz/xcdroast/
For more information: Thomas Niederreiter, p7003ad@sunmail.lrz-muenchen.de
_________________________________________________________________
ADABAS D Workgroup Edition
Date: December 11, 1997
ADABAS D Workgroup Edition 10.0 for Linux is now available. Visit http://www.IoS-online.de
For more information: Bjoern Bayard, Bjoern.Bayard@ios-online.de
_________________________________________________________________
Linbot 0.1 web site management tool
Date: December 12, 1997
Linbot 0.1 is a tool that can examine a web site for broken links, outdated pages, and other useful information. It is based on the commercial LinkBot from TetraNet Software (http://tetranetsoftware.com/), though Linbot will be free. You can get Linbot at http://home1.gte.net/marduk/linbot/index.html
For more information: Marduk, marduk@gte.net
_________________________________________________________________
Published in Linux Gazette Issue 24, January 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
The Answer Guy
By James T.
Dennis, answerguy@ssc.com Starshine Technical Services, http://www.starshine.org/
_________________________________________________________________
Contents:
* Netscape Mail Crashing
* Slackware Help
* Netscape /var/spool/USER
* Getting Rid of Virtual Screens
* diald's niche
* Upgrade to Red Hat 5.0?
* Red Hat Linux and WABI and other things
* Linux as a PDT
_________________________________________________________________
Netscape Mail Crashing
From: Jim Kelley, the-jim@swbell.net
I've been running Netscape Communicator 4.04 on my RedHat Linux system since the day it came out, and yesterday it started crashing whenever I check for new messages or try to go to a newsgroup. I've deleted NC4.04 and re-installed it, but to no avail. Any suggestions?
When you removed NC4.04 (which I didn't know had been released yet -- I'm still using 4.03) did you also remove your ~/.netscape directory tree? What happens if you try it from another account on the same system? What happens if you use different e-mail and newsreaders? (elm, pine, emacs' mh-e and/or tin, nn, trn, or emacs' Gnus for respective examples). I would suspect data or configuration file corruption. Many of the programming practices that I most loathe and detest have to do with a lack of robustness and of simple error messages with regard to corrupted input. Is it really that hard to have a switch that logs each input source like: "Reading configuration from ~/.netscape/foorc....." and "Bounds error at ~/.netscape/bar.conf offset 0x2AFF9A77"? I haven't looked at the innards of NS Comm's data files -- I wouldn't use their mail and newsreaders since I'm very particular about my mail and netnews -- and I need extensible, configurable, text-mode capable systems (so I use emacs' mh-e and Gnus).
--Jim
_________________________________________________________________
Slackware Help
From: Ralph, RPMAXEDGE@aol.com
HELP!!!!!!!!!!!!! Recently I installed Linux Slackware version 3.2. Everything loaded fine, but I really ran into problems when I set up the password for root; for some reason the one I registered doesn't allow me to log in. I have problems with how to mount the /root and remove the existing password from /etc/shadow. I installed Linux with partitions on a SCSI hard drive, sda1 and sda2. I would really appreciate a few hints to solve this problem.. Thank YOU VERY MUCH.
--Ralph
I haven't used any of the newer Slackware systems -- so I don't know if there's anything specific to their system that would contribute to or cause this problem. The general way to fix this sort of problem (lost root password) is to boot from a rescue diskette (or from an "alternate root partition"), mount your normal root partition (let's say you put it under /mnt/oldroot) and then simply edit the /mnt/oldroot/etc/passwd file (and maybe the corresponding shadow file). Another trick is to use the chroot command instead of editing the passwd files directly. Basically you can follow the mount with a command sequence like:
cd /mnt/oldroot/ && usr/sbin/chroot . bin/sh
... and then:
passwd
(allows you to use the normal passwd command -- and forces it to update the passwd files *under* the (chroot)/etc directory rather than the ones on the root diskette or the alternative root partition). A properly maintained alternative root partition should have extra copies (mirrors) of the whole /etc directory from your production root partition. This makes recovery from errors *much* easier.
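Put together, a complete recovery session might look something like this (a minimal sketch, assuming your root filesystem is on /dev/sda1 and you have booted from a rescue diskette):
# boot from the rescue diskette, then:
mkdir -p /mnt/oldroot
mount /dev/sda1 /mnt/oldroot
cd /mnt/oldroot && usr/sbin/chroot . bin/sh
passwd          # updates etc/passwd and etc/shadow under the chroot
exit            # leave the chroot shell
cd / && umount /mnt/oldroot
# remove the diskette and reboot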
On some systems you may be able to use a removable drive (LS120, Zip, Jaz, magneto-optical, Bernoulli, Syquest, DynaMO, whatever) as your alternative root system.
-- Jim
_________________________________________________________________
Netscape /var/spool/USER
From: John Liebenrood, k7ro@teleport.com
I'm running Redhat 4.2 with Netscape 3.01. I can send mail fine but I can't receive mail. I don't understand how to configure my /var/spool/USER. I've used chown mail.mail and chmod 01777 on the USER file. But still can't get mail to come in... all talk no listen
--john
Do you have a POP account? Are you trying to read your mail from the localhost? Where is your localhost supposed to get its mail from? Normally 'sendmail' (or smail, deliver, or procmail) appends messages to your spool file. 'sendmail' gets your mail via SMTP or from your UUCP system (depending on your configuration and your ISP/account type). There are a bunch of factors that anyone would need to know before answering your question. Netscape is normally configured to fetch mail via the POP protocol and to send it directly via SMTP. If your system is already configured to send and receive mail (i.e. you can use other mail user agents (MUA's) like elm, pine, mh, etc.) then you should be able to configure Netscape to use "localhost" (the loopback interface -- internal to your own system). Personally I won't use NS Nav. (or Comm.) as a mail or news client. I absolutely require a text mode interface.
-- Jim
_________________________________________________________________
Getting Rid of Virtual Screens
From: bmiles@intergate.bc.ca
Hi, how do I get rid of the virtual screen under X Windows? It's bloody annoying!! I've tried disabling it under install and I've even tried resetting my resolution size, please help!!!
Assuming you're using XFree86, you'd edit your XF86Config file (which might be /etc/X11/XF86Config or might be something more like /usr/X11R6/lib/.... or something). Find the Screen section for the device driver and mode you're using, look for the Display subsection that applies to your monitor, and modify the "Virtual" line thereunder. If you've tried that (if that's what you mean by "resetting my resolution" or "disabling it under install") then it's possible that you're using a different X server than I think you are (Xig, or Metro-X or something) or that your installation or distribution is using a config file in a different location than you think it is. The man pages for 'X', 'startx' and 'xinit' may help -- in particular the XFree86 servers allow you to specify an option of -xf86config file -- which allows you to explicitly override the system-wide configuration (and is a great reason for security-conscious sysadmins to limit the execution of X to some trusted local users -- and maybe use xdm). I found this option in XFree86(1) (that's the XFree86 man page in chapter one of my man pages). Don't confuse it with the -config option referred to in the XServer(1) man page. That option just says that more *command-line options* are stored in a file. The XServer man page refers to options and configuration information that should apply to any Unix X Windows display server. The XFree86(1) page refers to the extensions which apply to the servers written as part of the XFree86 project. The various XF86_* man pages refer to the features that are specific to specific servers (that is, the ones which are compiled for a given video card or chipset).
You're only running one binary -- yet three man pages apply to whichever one you're using. This is unnecessarily confusing -- and none of the books I've perused (on Unix, X, or Linux) covers this sort of thing. So you have to wander through lots of different man pages and /usr/doc/* files without a clear roadmap. I'm hoping that some future distribution (maybe the Red Hat 5.0 that's supposed to be shipping in the next couple of weeks, or the next Debian) will have a really good set of HTML (lynx clean!) documents, served by default off of the initial localhost webserver, which takes a top-down, organized approach to educating and informing us about all the power and choices we have in Linux.
-- Jim
_________________________________________________________________
diald's Niche
From: Pollywog, caldera-users@rim.caldera.com
Put it anywhere you'd like. The .rpm version installs into defined sub-directories, but keep the .rpm file anywhere you like. I have /usr/local/packages for tarballs and .rpms....
I recommend that SA's use a /usr/local/from directory tree. You can then have a set of directories named after your favorite FTP and web sites and after the volume names of your favorite CDs. When you download/fetch a file -- put it into a directory that reminds you where you got it from, such as:
/usr/local/from/sunsite/
/usr/local/from/redhat/
/usr/local/from/ftp.replay.com/
... etc.
Now when you want to upgrade a package you can see where you got the previous version from -- and consequently you have a head start on where to look for upgrades. This technique is also handy if you read an alert about a compromised (trojan) package -- you can easily see where *you* got your copy from. For files you install off of the CD -- create a directory name that reminds you of which CD it is, like:
/usr/local/from/cd/MOP/ (Mother of Perl)
/usr/local/from/cd/rharchive.fall97/
... etc.
Now you just put symlinks from that directory to your usual CD-ROM mount point (ln -s /mnt/cdrom/dir/foo foo). This creates a tree of "broken" links -- but it tells you where to find the source file if/when you need to rebuild. As you can probably see, this technique is really a "poor man's HSM" (hierarchical storage management system). You can also extend this idea to migrating data files from your home directories to removable media (such as Zip, Syquest, MO, and CDR devices). It's also a very handy form of self-documentation in businesses where you may have many sysadmins or you may have various consultants "stomping" around on your systems.
-- Jim
_________________________________________________________________
Upgrade to Red Hat 5.0?
From: Jason Welsh, jwelsh@mci.net
Hey, I'm running an older 4.something version of Red Hat and was wondering, if I just wanted to upgrade it to 5.0, do I need to download certain RPMS or do I need to get the whole thing? Or get it on CD.. just curious if there was a shortcut I could take..
--Jason.
I would get the CD (in fact I did -- but I haven't run the upgrade yet -- I want to finish some work and do an extra backup first). CDs save lots of bandwidth and lots of time. If you don't need the commercial packages that come with Red Hat (the BRU and Metro-X) you can wait a month or so for the "Archives" set to come out -- which is about $20 (less than half the full version).
-- Jim
_________________________________________________________________
Red Hat Linux and WABI and other things
From: peter@trimatrix.net
I have RedHat Linux v5.0 (kernel 2.x) ($49) and (silly me) found out that Caldera OpenLinux Standard ($399) supports WABI for Windows 3.1 apps, FreeDOS (or is it DOSEMU??) for DOS apps, and a NetWare 3.x/4.x client supporting NDS (Network Directory Services).
WABI/Linux (a.k.a. the Windows 3.1 Applications Binary Interface) is available separately from Caldera. It is a commercial package -- and it should install on most Linux systems without much trouble. There's also WINE ("WINdows Emulator" or "Wine Is Not an Emulator" -- take your pick of acronym expansion). This is a freeware project to implement enough support for the lower-level Windows APIs to allow Linux (and other Unix) users to install MS-Windows and run Windows programs. I've heard that it is also possible to run Windows 3.1 in "standard mode" under DOSEMU/MS-DOS. I'm not sure if that works under other DOS variants running under DOSEMU. OpenDOS is Caldera's release of "Novell DOS" (which was formerly "DR DOS" -- from Digital Research). Caldera acquired the licenses and rights to Novell DOS when Novell decided to "refocus on its 'core' markets" (and practically gutted itself in the process). In fact Caldera's "Network Desktop" (their distribution that preceded the OpenLinux/Base and OpenLinux/Standard) was originally a research project at Novell. OpenDOS is partially commercial -- it is free for personal use (or for students, or something like that -- read their web pages for details). It has little to do with DOSEMU. OpenDOS is available on CD for about $30 (US). DOSEMU is really a bit of a misnomer. Technically it's a BIOS emulator which can be used to run any x86 "real mode" operating system (such as CP/M-86 or some versions of Forth). When you install DOSEMU you also have to install some copy of DOS (MS-DOS, PC-DOS, OpenDOS, whatever) to get any practical use out of it. DOSEMU includes several DOS programs which connect to the underlying Linux system. This allows one to access Linux directories and NFS mounts as DOS "network" drive letters, and do other things like that. FreeDOS is a different project -- you can learn more about it at http://www.freedos.org. They are quite Linux friendly -- but I haven't played with their release (and I've barely touched DOSEMU or Caldera's OpenDOS) so I can't say much about it.
Regarding Netware access from Linux: Caldera's Netware client access (with bindery and NDS support) is only available as part of their OpenLinux Standard (as far as I know). I've heard that some people have successfully installed the clients on other Linux distributions. However it appears that you are legally required to purchase the commercial COL/Standard (Caldera OpenLinux is often called COL by participants of its mailing lists). There are also a couple of free packages that implement some Netware protocols for Linux. 'ncpfs' is one that allows you to 'mount' Netware partitions in a way that is similar to NFS. This is a system-wide mount (unlike the Caldera Netware clients, where each user has unique "virtual" mountings that are not visible from other concurrent processes on the system). There's also mars_nwe (a Netware emulator) that implements a subset of the Netware fileserver protocols -- allowing DOS and other systems to access portions of your Linux filesystem(s) using the Netware clients (similar to what 'samba' does for Windows for Workgroups/LANMan/NT functionality).
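For example, once ncpfs is installed, attaching and detaching a Netware volume typically looks something like the following (a sketch only -- the server and user names are hypothetical, and options vary between ncpfs versions, so check ncpmount(8)):
# mount the volumes of server NWSERVER, as Netware user JDOE, under /mnt/netware:
mkdir -p /mnt/netware
ncpmount -S NWSERVER -U JDOE /mnt/netware
# ... work with the files, then detach:
ncpumount /mnt/netware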
I would like to know if this support can be added to RedHat Linux by downloading this stuff from somewhere and recompiling the kernel? Can you help? I'm new to RedHat and have not yet gotten an answer from them.... If not, can you direct me to where I can inquire? This is a quest to make use of our 486 computers at work (and I just need to know) by running Linux, and still support some things like Windows 3.1x, Lotus SmartSuite, etc., which Caldera claims to do, but I would like to find out how much trouble it would be to add support to Redhat.....
In general, anything you can run under Caldera you can get to run under Red Hat or any other reasonably recent Linux distribution. Since Caldera, Red Hat, Craftworks, and a couple of other Linux distributions use the same package management system (RPM -- the "Red Hat Package Manager") sharing packages among them is somewhat simpler than installing a Debian package or Slackware "tarball" would be. I'd look at any FTP mirror of Red Hat's "contrib" directory for the latest dosemu "rpm" and install that. You'll probably find one of these on your set of Red Hat CDs in addition to the ones online. You can check the online directory for updates and recent additions.
-- Jim
_________________________________________________________________
Linux as a PDT
From: Karl Rossing, gtivr6@pangea.ca
Linux as a PDC (Primary Domain Controller), NIS/NIS+ Master -- etc. I was wondering if it is possible to get Windows 95/NT to authenticate to Linux (using NIS or NIS+). I'm really getting tired of adding accounts on the NT boxes for the Linux boxes (for smb)... Is there any commercial software available?
It sounds like you're asking fundamentally different questions here. In your subject you refer to using Linux as a PDT, by which I presume you meant a PDC (Primary Domain Controller). Here you refer to using NIS/NIS+ -- which would involve adding client support (third-party software) to all of the NT/'95 boxes. A broader question is: What network authentication and directory services system/model/architecture should you use? This is a sticky question with no easy answer. A simpler question is: How can I configure my MS client machines (NT and '95) to use my Linux system's account information for access control and authentication? I'll provide some thoughts on each of these questions after commenting on the rest of your message:
I know of p-sync [http://www.m-tech.ab.ca/psynch/index.html]
I glanced at their web pages and was not impressed. They have almost no text and are almost completely unreadable for Lynx users. They also don't offer any functionality in their demo -- which is just a mockup of the GUI (crippleware).
and NSGINA [http://www.dcs.qmw.ac.uk/~williams/] which seems a bit of work to setup...
That would be "NISGINA". This is by Nigel Williams -- apparently derived from work by Doug Scoular(*). It is apparently released under a BSD'ish license. So this is much more promising than the p-sync package right off the bat.
* (http://www.arch.usyd.edu.au/~doug/gina.html)
GINA (graphical identification and authentication) is the NT DLL that manages logins at the NT console -- there are several different GINAs -- one from Novell for NDS, one from MIT for Kerberos, another similar one for NT-AFS (Transarc's distributed filesystem -- which uses a Kerberos 4 authentication model), etc. Here are some related URLs:
NT GINA related information: http://web.mit.edu/cartel/ntgina.html -- very informative -- leads to all the rest that I found.
ND_GINA - An alternative authentication method for Windows NT: http://www.nd.edu/~dobbins/ntarch/nd_gina_doc.html
NT/UNIX Integration with Doug's GINA replacement: http://www.arch.usyd.edu.au/~doug/gina.html
The problem with GINA is that it doesn't appear to be available for '95 (or earlier versions of Windows or DOS). That may be a show stopper for this approach. I can't recommend in good faith that you upgrade your Win '95 systems to NT (since that just buys you further into this proprietary OS model -- and just worsens your dilemma when '98 and NT 5.x ship).
I'm not really looking for password synchronisation; I'd like to consolidate it to the Linux box, because the users use both Linux/95/NT. nuff said, thanks.
I don't think nearly 'nuff's been said about this topic. There are a large number of directory service and authentication methods that are vying for control of your network. Each has its own security implications -- making them co-exist is difficult from the start, and a constant drain on administrative time and resources -- and having them running concurrently usually means that the weakest link prevails in your security model. There's an excellent white paper about this at Cygnus Solutions: http://www.cygnus.com/product/unifying-security.html
That aside, some of your choices are:
* Microsoft's WINS (and its PDC/BDC domain model)
* Kerberos and Cygnus Kerbnet
* NIS/NIS+
* RADIUS/TACACS
* LDAP (lightweight directory access protocol)
* Netware NDS, Banyan StreetTalk, etc.
* Host-based security -- custom synchronization scripts.
So far I don't like any of them. NIS/NIS+ is usually used with NFS. Kerberos is the model that's used in conjunction with AFS/DFS (and CODA, if they ever finish it). CIFS/SMB filesharing can be done with very weak authentication (workgroups style) or with the WINS Microsoft model. Overall I think I'll like CODA best when we have a reasonable Linux server and client for it. For more info on the CODA project at CMU, browse through their mailing list archives at: http://www.coda.cs.cmu.edu/maillists/linux-coda/0175.html
I could rant for some time about the security models of the various network/shared filesystems -- but it's late, so let it suffice to say I like them even less than the choices for DS and authentication. So far I think I like TCFS (transparent cryptographic filesystem) best for security -- though I'm quite concerned about its performance costs. I presume you're using Samba on your Linux server(s) to provide file services to your Windows clients. From a glance at the Samba Meta-FAQ and some of its other pages, it looks like you could just let Linux/Samba manage the accounts for your whole network. Here are some links that relate to that:
Samba: User accounts http://samba.anu.edu.au/samba/docs/smb_serv/html/smb_se-4.html#ss4.1
Samba Server HOWTO http://samba.anu.edu.au/samba/docs/smb_serv/html/smb_se.html
(*Note: if I read this correctly -- Samba can't currently be a "password server." This seems to mean that it can't act as a PDC/BDC (backup domain controller) for NT systems to refer client authentication requests through.)
It looks like the future will hold some sort of LDAP and Kerberos -- for NT and many other OSes and packages. This would be fine -- if it weren't for the inevitable politicking and kneebiting that the various commercial vendors are going to do.
The problem is that everyone's version of LDAP (directory services) and Kerberos (authentication) will be just different enough that each vendor's OS will just *need* to be *the* server for *their* domain. They'll all make press releases about their "interoperability" -- and most will refuse to release enough details about their "extensions" for any other vendor (or freeware programmers) to implement them elsewhere. I guess it will take a few years after the initial deployment for enough of this proprietary info to leak out (and/or be reverse engineered) to allow system administrators to actually have any semblance of a unified directory service and authentication system. The bugs and security problems will probably keep popping up for a long time after that (they've been popping up in Unix for 27 years -- and many of them are reappearing in NT now). Meanwhile we're going to see a continuing explosion of servers and network applications (client-server systems) that each require different user account information (with associated group, token, and other information) and authentication information. Worse yet, the various layers of management above us are already hearing the marketeers' lies -- that the solutions are "already shipping" or "just around the corner." This is just what management wants to hear -- so many of them are believing it -- and planning their budgets and project schedules accordingly. A system administration disaster in the making. Sorry I can't offer a brighter hope for the new year -- but I'm no marketeer.
-- Jim
_________________________________________________________________
Copyright © 1998, James T. Dennis
Published in Issue 24 of the Linux Gazette January 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Quick autofs Tutorial
By Mark Nielsen
_________________________________________________________________
What is autofs? Autofs uses automount to mount CD-ROM and floppy drives (and other things) on your computer as you need them, and to umount them when they are not being used.
What is mount and umount? Mount and umount are programs to mount a device on top of a directory. For example, this command would mount the first floppy drive (drive a: in DOS terms) onto the directory "/mnt/floppy", so that every time you access "/mnt/floppy" you access your floppy drive:
mount /dev/fd0 /mnt/floppy
And this command unmounts or frees up the floppy drive from being used:
umount /dev/fd0
Now, in DOS terms, your floppy disk is accessed as you need it. For normal users, that is what they would expect. They wouldn't want to use mount and umount because it is time consuming and because they might not get it right. I assume "/dev/cdrom" is your CD-ROM drive and "/dev/fd0" is your floppy drive. I am also assuming you will back up your "/etc/auto.master" file. Use this script to create the following three files and restart autofs.
Login as "root", goto your home directory, copy whatever between the next two lines to a file called "CreateAutofs.script" and execute the script with the command source CreateAutofs.script _________________________________________________________________ cd echo "Please ignore any errors when making directories" ### Let us make sure the two directories exist, ignore errors mkdir /mnt/floppy mkdir /mnt/cdrom ### Let us backup the auto files in case they haven't mv /etc/auto.master /etc/auto.master_old mv /etc/auto.floppy /etc/auto.floppy_old mv /etc/auto.cdrom /etc/auto.cdrom_old ### Create the files for autofs echo "/mnt/cdrom /etc/auto.cdrom --timeout 10" > /etc/auto.master echo "/mnt/floppy /etc/auto.floppy --timeout 1" >> /etc/auto.master echo "floppy -user,suid,fstype=msdos :/dev/fd0" > /etc/auto.floppy echo "cdrom -fstype=iso9660,ro :/dev/cdrom" > /etc/auto.cdrom ### Create the links to the floppy drive and cdrom drive ln -s /mnt/floppy/floppy a: ln -s /mnt/cdrom/cdrom d: ln -s /mnt/floppy/floppy floppy ln -s /mnt/cdrom/cdrom cdrom ### Lets retstart autofs, you might have to reboot cd /etc/rc.d/init.d ./autofs restart ### If it didn't work, you might have to reboot ### Try "./autofs start" if the restart claims autofs has not been #### started cd _________________________________________________________________ The end of your results should look something like the following if it was sucessfull Start /usr/sbin/automount --timeout 10 /mnt/cdrom file /etc/auto.cdrom Start /usr/sbin/automount --timeout 1 /mnt/floppy file /etc/auto.floppy Now put a floppy disk formatted for MSDOS and a cdrom in and execute the commands ls a: ls d: to see if there is anything on them. Hopefully you don't get any error messages. Personally, my /etc/auto.floppy file looks like floppy -fstype=auto,defaults,user,suid :/dev/fd0 and my /etc/auto.cdrom file look like this cdrom -fstype=iso9660,user,suid :/dev/cdrom The reason why I gave conservative values in the script was the fact the my values might be security hazards. But since I am the only person using my computer, I wanted to make sure my personal account had full access to the floppy and cdrom drives. Also, -fstype=auto doesn't seem to work quite right when your disk is formated for MSDOS, but works fine with ext2. "-fstype=auto" tries to autodetect the file format. If you noticed the timeout value for the floppy drive is 1 second. This makes it so that by the time the floppy drive light has gone out, your floppy disk is unmounted and a normal user can take the floppy disk out and "nothing bad happens". I made the timeout value for the cdrom 10 seconds because it wasn't working really well at 1 second, and I figured it was because the drive didn't have enough time to "warm up" before it was being shut down. You might want to test what the timeout value for your cdrom drive should be. To get more information about autofs, do the commands man 5 autofs man autofs and look the the directory "usr/doc" for the directory "autofs-0.3.14" or something similar to it. Now to explain it Okay, here is my brief explanation for what is happening. Read the man pages and any docs first. Your "/etc/rc.d/init.d/autofs" script first looks at "/etc/auto.master". That file usually has three things on each line. It has the directory which all mounts will be located at. Then next to that value is the filename which contains the configuration(s) for what devices you want mounted. We will call these filenames the "supplemental" files. 
Next to that value is the timeout you want to occur after so many seconds of inactivity. The timeout will free or umount all devices specified in the supplemental files after that many seconds of inactivity. Now, the supplemental files can have more than one entry, but for my purposes I don't do that. Read below for the explanation. The supplemental files can be named anything you want them to be named. They also have three values for each entry. The first value is the "pseudo" directory. I will explain this later. The second value contains the mount options. The third value is the device (like "/dev/fd0", which is the floppy drive) which the "pseudo" directory is connected to.
The "pseudo" directory is contained in the directory which is defined in "/etc/auto.master". When people try to access this "pseudo" directory, they will be rerouted to the device you specified. For example, the above script will generate a link called "a:" which, if you list it with the command "ls a:", will give you a list of files in the floppy drive. Or, a similar command would be "ls /mnt/floppy/floppy". But if you do the command "ls /mnt/floppy", you don't see anything even though the directory "/mnt/floppy/floppy" should exist. That is because "/mnt/floppy/floppy" doesn't exist as a file or directory, but somehow the system knows that if you specifically ask for "/mnt/floppy/floppy", it will reroute you to the floppy drive.
Now as to the reason why I didn't combine the floppy drive and cdrom drive into the same supplemental file: each definition in the "/etc/auto.master" file will have its own "automount" program running for it. If you have several devices running on the same automount program and one of them fails, it could force the others not to work. That is why I want every device running on its own automount program, which means there is one device per supplemental file per entry in the "/etc/auto.master" file.
_________________________________________________________________
Copyright © 1998, Mark Nielsen
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Buying A Laptop
By Joel Jaeggli
_________________________________________________________________
Having just read Micheal Shappe's review of the Fujitsu Lifebook 420d in the Nov issue of LJ, I thought I would relate a similar experience I had in purchasing a new laptop for my organization. Academic User Services at the University of Oregon provides support for end users on timeshare systems running Digital Unix, VMS, Solaris, and Linux. We have a wide variety of desktop hardware, including PCs running Linux, FreeBSD, NT and 95, as well as DEC Alphas, Sparcs, SGIs and Macs. We needed a portable computer to send with people to do workshops, attend conferences and demo software for people. Because our members use a number of varieties of Unix but only a little better than half of them use Windows in the course of their work, solid UNIX support of some variety was imperative. Our requirements were that it should have sufficient space to dual boot Windows95 and Linux; moreover, all the subsystems should work equally well under either Windows or Linux.
The laptop needed to provide support for an external display at 24-bit color depth, as well as support for an external keyboard and mouse. It should above all meet those goals and be relatively inexpensive, as well as reasonably compact and durable. We eventually settled on a Toshiba Satellite Pro 220cds. The Toshiba has a complete complement of ports on the back: PS/2, serial, parallel, IRDA, VGA, docking station and, interestingly enough, a USB port. It comes with a Pentium 133 processor, 16MB of RAM and 2MB of video DRAM to drive the 12.1" passive matrix display at 8, 16 and 24 bit color. It has a small but sufficient 1.4GB disk. The audio jacks and the volume control are located on the front of the case. The Satellite Pro 220cds is overall similar to its more expensive cousins, the 440cds and the 460cds, which will undoubtedly replace it eventually. They are distinguished by having active matrix displays, larger hard disks and, in the case of the 460, an internal modem. The keyboard is quite large; the feel is closer if anything to a desktop keyboard than anything you would expect to find on a laptop, except for the travel of the keys, which is fairly short. The Toshiba laptops have a ThinkPad-style trackpoint pointing device rather than the ubiquitous trackpads that seem to be on almost all portables these days. It does not, regrettably, have a third mouse button like the new and very expensive ThinkPad 770; but the buttons are located on top of each other rather than adjacent to each other as on most trackpads. This makes them easier to press and hold down with your thumb while moving the trackpoint with your index finger, so you can drag a scrollbar in X, for example, without using two hands.
Once I got over the fact that Windows95 OSR2 uses a FAT32 filesystem, which cannot be modified by fips, and used Partition Magic, a utility by PowerQuest, to resize the Windows95 partition, I was able to install Redhat 4.2 without trouble. The alternative would have been to repartition the disk by hand using fdisk and then reinstall Windows off of the supplied CD-ROM. I chose to partition the disk 800MB for Windows and 600MB for Linux, since I needed to install some large Windows applications such as Adobe Frame and Microsoft Office. Rather than have separate /var, / and /home filesystems, I opted for a single large Linux partition with a 64MB slice reserved for swap. This made sense since I didn't expect /var to grow too much, the computer wouldn't be serving much, and everyone using the portable under Linux would be logging in as the same user.
The Toshiba 220cds uses a Chips and Technologies 65555 chipset for the video display. While AcceleratedX and MetroX support this chipset, it is also well supported by XFree86's SVGA X server. If you choose to run the free server you can expect to get 640x480 or 800x600 on the internal display at 8, 16 or 24 bits per pixel. Because it has 2MB of video memory you can drive an external monitor at up to 1280x1024 in 8-bit color.
Networking was actually easier to configure under Linux than it was in Windows. The 3Com Elink 3 ethernet card and Megahertz 33.6 PC Card modem that we purchased were detected by Redhat's install disks, which was fortunate because I installed Redhat 4.2 via NFS using the network card. Because it is a portable I haven't configured it with a static IP; rather, I DHCP the portable under both Windows and Redhat, which facilitates dragging it back and forth between subnets on our campus a great deal.
The Python-based Redhat network control panel is particularly well suited to adjusting your network configuration on the fly.
Configuring sound support turned out to be a pretty interesting exercise. I chose to use the commercial OSS-Linux sound driver to support the Yamaha OPL3-SA chipset that the portable has. OSS-Linux could not autodetect the settings of the sound chipset as it had done with my desktop Linux machine. It worked fairly well once I figured out which IRQs and memory addresses were in use by the sound chipset, which was pretty easy using the Windows95 System control panel.
The Toshiba 220cds is not, by the standards of the current high-flyers in the laptop world, an exceptionally fast machine. It is, however, full-featured and, even given all the accessories that we added, extremely cheap. The laptop itself can be had for as little as $1600. The 3Com network card and the Megahertz modem were $120 and $220 respectively. A spare lithium-ion battery was an additional $199, and an additional 16MB of RAM was $90. In all, the fully configured laptop cost less than $2300. I have been very happy with how well the new laptop has worked out. It is a compact and elegant package, which is similar in function and design, if not performance, to the cream of the Toshiba notebook crop such as the Tecra 740. The moderate sacrifice in performance results in a great machine at a fraction of the cost. I would recommend such a device to anyone looking to run Linux on a laptop.
_________________________________________________________________
Copyright © 1998, Joel Jaeggli
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Copying Files Using Mirror
By Gerd Bavendiek
_________________________________________________________________
Do you have a laptop? Do you frequently want to copy files to another system? Use mirror!
I frequently have to exchange files between my laptop and other systems, e.g. my home desktop. This can be conveniently done using rdist(1). I wrote a small script called mirror, which basically contains a call of rdist, setting up a small Distfile using the shell's here-document syntax:
rdist -d PWD=`pwd` -f - << EOF
${PWD} -> mirror
        install -oyounger ${PWD};
        except_pat ( ~\\$ );
EOF
This is not the place to deal with rdist syntax in greater detail, so see rdist's manpage if you like. Files will be copied to the host mirror. Of course you have to set up /etc/hosts appropriately. So, working on a project with files in ~/wsp/pbd/os-tools, I can simply say
nana:/home/bav/wsp/pbd/os-tools> mirror
mirror: updating host mirror
mirror: /home/bav/wsp/pbd/os-tools/main-window.tcl: updating
mirror: /home/bav/wsp/pbd/os-tools/os-tools.tcl: updating
mirror: /home/bav/wsp/pbd/os-tools/popups.tcl: updating
mirror: updating of mirror finished
and mirror will copy new or changed files to the very same directory on the other node. This is done recursively. Files on the other node which are younger than the files on the node I started mirror on will be mentioned, but remain untouched. Emacs backup files will not be copied.
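For reference, the whole script need not be much more than the rdist call above plus a little option handling. Here is a minimal sketch (the option names follow the examples below; the mapping of -verify and -f onto rdist's verify and remove install options is my assumption, as is the /floppy mount point, which should be set up in /etc/fstab):
#!/bin/sh
# mirror -- copy the current directory tree to host "mirror" (a sketch)
OPTS=younger
case "$1" in
-verify) OPTS=younger,verify ;;   # only report what would be updated
-f)      OPTS=younger,remove ;;   # also remove extraneous files
-floppy) mount /floppy && cp -ruvp . /floppy && umount /floppy
         exit ;;                  # local mirror to a floppy instead
esac
rdist -d PWD=`pwd` -f - << EOF
${PWD} -> mirror
        install -o${OPTS} ${PWD};
        except_pat ( ~\\$ );
EOF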
Using the option -verify, you can check what would be done without really doing anything:
nana:/home/bav/wsp/pbd/os-tools> mirror -verify
mirror: updating host mirror
mirror: /home/bav/wsp/pbd/os-tools/os-tools.tcl: need to update
mirror: /home/bav/wsp/pbd/os-tools/popups.tcl: need to update
mirror: updating of mirror finished
The option -f will remove extraneous files on node mirror. This is useful to get a real mirror:
nana:/home/bav/wsp/pbd/os-tools> mirror -f
mirror: updating host mirror
mirror: lulu: /home/bav/wsp/pbd/os-tools/qqq: removed
mirror: lulu: /home/bav/wsp/pbd/os-tools/otto: removed
mirror: /home/bav/wsp/pbd/os-tools/main-window.tcl: updating
mirror: /home/bav/wsp/pbd/os-tools/popups.tcl: updating
mirror: updating of mirror finished
Besides mirroring to another system's disk, mirror can be used to mirror the current directory to a floppy. This comes in handy for a quick kind of backup. There is no real advantage to using rdist when operating locally, so if there is enough space available, I use cp with the options -ruvp. To do so, call mirror with the option -floppy:
nana:/home/bav/wsp/pbd/os-tools> mirror -floppy
./main-window.tcl -> /floppy/./main-window.tcl
./os-tools.tcl -> /floppy/./os-tools.tcl
./popups.tcl -> /floppy/./popups.tcl
As with rdist, only new or changed files are copied. Mounting and unmounting the floppy is done by the script. Right now there is no handling of extraneous files implemented for the floppy case. In case you like these ideas, you can find my mirror utility here.
_________________________________________________________________
Gerd Bavendiek
Last modified: Thu Sep 24 22:24:59 MET DST
_________________________________________________________________
Copyright © 1998, Gerd Bavendiek
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Previous Next Contents
_________________________________________________________________
Linux Benchmarking: Part III -- Interpreting Benchmark Results
by André D. Balsa (writer/coordinator), andrewbalsa@usa.net
v0.82, 31 December 1997
_________________________________________________________________
This is the third article in a series of articles on Linux benchmarking, to be published by the Linux Gazette. The first two articles showed how to successfully run synthetic or application benchmarks to produce accurate, significant and relevant data. The present article deals with the correct interpretation of this data, and it also presents nbench-byte 2.1, a modern CPU benchmark suite.
_________________________________________________________________
1. Contributors
2. Benchmarking vs. benchmarketing
* 2.1 The scientific/quantitative approach
* 2.2 The benchmarketing approach
3. Benchmarks for SMP systems
* 3.1 Description of the problem
* 3.2 Runtime issues
* 3.3 Scheduling issues
* 3.4 Further reading/links
* 3.5 Benchmark availability
4. GNU/Linux specifics
5. An example of correct/incorrect interpretation of results
* 5.1 Example
* 5.2 Pitfalls
6. In the next article
7. Notes
_________________________________________________________________
Previous Next Contents
_________________________________________________________________
Copyright © 1998, André D.
Balsa
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
Previous Next Contents
_________________________________________________________________
1. Contributors
Without the help of the following gentlemen, this article would not have been completed in time, would certainly be much shorter and would probably be filled with errors. In no particular order:
* Uwe F. Mayer, mayer@tux.org (nbench-byte and general contributions/revision),
* Andy Kahn, kahn@zk3.dec.com (SMP Linux kernel 2.0.0 compilation benchmark idea, updates and results),
* Joseph Yao, jsdy@tux.org (comments and contributions),
* François Laagel, f.laagel@ieee.fr (researched and contributed most of the SMP section), and
* David Niemi, niemi@tux.org (comments and contributions).
_________________________________________________________________
Previous Next Contents
_________________________________________________________________
2. Benchmarking vs. benchmarketing
There are two basic approaches to benchmarking in the field of computing: the "scientific" or quantitative approach, and the "benchmarketing" approach. Both approaches use exactly the same tools, however with a slightly different methodology, and of course with widely diverging objectives and results.
2.1 The scientific/quantitative approach
The first approach is to think of benchmarking as a tool for experimentation. Benchmarking is a specific branch of Computer Science, since it produces numbers which can then be mathematically processed and analyzed; this analysis will later be used to draw relevant conclusions about CPU architectures, compiler design, etc. As with any scientific activity, experiments (benchmark runs and reporting) must follow some basic guidelines or rules:
* A good dose of modesty/humility (don't be too ambitious to begin with) and common sense.
* No bias or prejudice.
* A clearly stated objective related to advancing the state-of-the-art.
* Reproducibility.
* Accuracy.
* Relevance.
* Correct logical/statistical inference.
* Conciseness.
* Sharing of information.
* Quoting sources/references.
Of course, this is an idealized view of the scientific community, but these are some of the basic rules for the experimental method in all branches of Science. I should stress that benchmarking results in documented quantitative data. The correct procedure for GNU/Linux benchmarking under this approach is:
1. Decide what issue is going to be investigated. It is very important to execute this step before anything else gets started. Stating clearly what we are going to investigate is getting half the work done.
2. Also note that we are not out to prove anything: we must start with a clean, Zen-like mind. This is particularly difficult for us GNU/Linux benchmarkers, since we are all utterly convinced that:
1. GNU/Linux is the best OS in the universe (what "best" means in this context is not clear, however; probably the same as "coolest"),
2. Wide-SCSI-3 is better than plain EIDE (idem),
3. Digital's 64-bit RISC Alpha is the best platform around for GNU/Linux development (idem), and
4. X Window is a good, modern GUI (no comments).
3. After purifying our minds and souls ;-), we will have to select the tools (i.e. the benchmarks) which will be used for our benchmarking experiments.
You can take a look at my previous article for a selection of GPLed benchmarks. Another way to get the right tool for the job at hand is to devise and implement your own benchmark. This approach takes a lot more time and energy, and sometimes amounts to reinventing the wheel. Creativity being one of the nicest features of the GNU/Linux world, writing a new benchmark is recommended nonetheless, especially in the areas where such tools are sorely missed (graphics, 3D, multimedia, etc). Summarizing, selecting the appropriate tool for the job is very important.
4. Now comes the painstaking part: gathering the data. This takes huge amounts of patience and attention to detail. See my two previous articles.
5. And finally we reach the stage of data analysis and logical inference, based on the data we gathered/analyzed. This is also where one can spoil everything by joining the Dark Side of the Force (see subsection 2.2 below). Quoting Andrew Tanenbaum: "Figures don't lie, but liars figure".
6. If relevant conclusions can be drawn, publishing them on the appropriate mailing lists, newsgroups or in the Linux Gazette is in order. Again this is very much a Zen attitude (known as "coming back to the village").
7. Just when you thought it was over and you could finally close the cabinet of your CPU after having disassembled it more times than you could count, you get a sympathetic email that mentions a small but fundamental flaw in your entire benchmarking procedure. And you begin to understand that benchmarking is an iterative process, much like self-improvement...
2.2 The benchmarketing approach
This second approach is more popular than the first one, as it serves commercial purposes and gets more subsidies (i.e. grants, sponsorship, money, cash, dinero, l'argent, $) than the first approach. Benchmarketing has one basic objective, and that is to prove that equipment/software A is better (faster, more powerful, better performing or with a better price/performance ratio) than equipment/software B. The basic inspiration for this approach is the Greek philosophical current known as Sophism. Sophistry has had its adepts at all times and in all ages, but the Greeks made it into a veritable art. Benchmarketers have continued this tradition with varying success (also note that the first Sophists were lawyers (1); see my comment on Intel below). Of course with this approach there is no hope of spiritual redemption... Quoting Larry Wall (of Perl fame) as often quoted by David Niemi: "Down that path lies madness. On the other hand the road to Hell is paved with melting snowballs." Benchmarketing results cover the entire range from outright lies to subtle fallacies. Sometimes an excessive amount of data is involved, and in other cases no quantitative data at all is provided; in both cases the task of proving benchmarketing wrong is made more arduous.
A short history of benchmarketing/CPU developments
We already saw that the first widely used benchmark, Whetstone, originated as the result of research into computer architecture and compiler design. So the original Whetstone benchmark can be traced to the "scientific approach". At the time Whetstone was written, computers were indeed rare and very expensive, and the fact that they executed tasks impossible for human beings was enough to justify their purchase by large organizations. Very soon competition changed this.
Foremost among the early uses of benchmarketing was justifying the purchase of very expensive mainframes (at the time called supercomputers; these early machines would not even match my < $900 AMD K6 box). This gave rise to a good number of now obsolete benchmarks, as of course each different architecture needed a new algorithm to justify its existence in commercial terms. This supercomputer market issue is still not over, but two factors contributed to its relative decline:
1. Jack Dongarra's effort to standardize the LINPACK benchmark as the basic tool for supercomputer benchmarking. This was not entirely successful, as specific "optimizers" were created to make LINPACK run faster on some CPU architectures (note that unless you are trying to solve large scientific problems involving matrix operations -- the usual task assigned to most supercomputers -- LINPACK is not a good measure of the CPU performance of your GNU/Linux box; anyway, you can find a version of LINPACK ported to C on Al Aburto's excellent FTP site).
2. The appearance of very fast and cheap superminis, and later microprocessors, and the widespread use of networking technologies. These changed the idea of a centralized computing facility and signaled the end of the supercomputer for most applications. Also, modern supercomputers are built with arrays of microprocessors nowadays (notably, the latest Cray machines are built using up to 2048 Alpha processors), so there was a shift in focus.
Next in line was the workstation market issue. A nice side effect of the various marketing initiatives on the part of some competitors (HP, Sun, IBM, SGI among others) is that they spawned the development of various Unix benchmarks that we can now use to benchmark our GNU/Linux boxes! In parallel to the workstation market development, we saw fierce competition develop in the microprocessor market, with each manufacturer touting its architecture as the "superior" design. In terms of microprocessor architecture, an interesting development was the performance issue of CISC against RISC designs. In market terms the dominating architecture is Intel's x86 CISC design (cf. Computer Architecture, a Quantitative Approach, Hennessy and Patterson, 2nd edition; there is an excellent 25-page appendix on the x86 architecture). Recently the demonstrably better-performing Alpha RISC architecture was almost wiped out by Intel lawyers: as a settlement of a complex legal battle over patent infringements, Intel bought Digital's microelectronics operation (which also produced the StrongARM (2) and Digital's highly successful line of Ethernet chips). Note however that Digital kept its Alpha design team, and the settlement includes the possibility for Digital to have present and future Alpha chips manufactured by Intel. The x86 market attracted Intel competitors AMD and more recently Cyrix, which created original x86 designs. AMD also bought a small startup called NexGen, which designed the precursor to the K6; and Cyrix had to grow under the umbrella of IBM and now National Semiconductor, but that's another story altogether. Intel is still the market leader, since it has 90% of the microprocessor market, even though both the AMD K6 and Cyrix 6x86MX architectures provide better Linux performance/MHz than Intel's best effort to date, the Pentium II (except for floating-point operations). Lastly, we have the OS market issue.
The Microsoft Windows (R) line of OSes is the overwhelming market leader as far as desktop applications are concerned, but in terms of performance/security/stability/flexibility it sometimes does not compare well with other OSes. Of course, inter-OS benchmarking is a risky business and OS designers are aware of that. Besides, comparing GNU/Linux to other OSes using benchmarks is almost always an exercise in futility: GNU/Linux is GPLed, whereas no other OS can be said to be free (in the GNU/GPL sense). Can you compare something that is free to something that is proprietary? (3) Does benchmarketing apply to something that is free? Comparing GNU/Linux to other OSes is also a good way to start a nice flame war on comp.os.linux.advocacy, especially when GNU/Linux is compared to the BSD Unices or Windows NT. Most debaters don't seem to realize that each OS had different design objectives! These debates usually reach a steady state when both sides are convinced that they are "right" and that their opponents are "wrong". Sometimes benchmarking data is called in to prove or disprove an argument. But even then we see that this has more to do with benchmarketing than with benchmarking. My $0.02 of advice: avoid such debates like the plague.

Turning benchmarking into benchmarketing

The SPEC95 CPU benchmark suite (the CPU Integer and FP tests, which SPEC calls CINT95/CFP95) is an example of a promising Jedi that succumbed to the Dark Side of the Force ;-). SPEC (Standard Performance Evaluation Corporation) originated as a non-profit corporation with the explicit objective of creating a vendor-independent, objective, non-biased, industry-wide CPU benchmark suite. Founding members were some universities and various CPU and systems manufacturers, such as Intel, HP, Digital, IBM and Motorola. However, technical and philosophical issues, partly historical, have developed that make SPEC95 inadequate for Linux benchmarking:

1. Cost. Strangely enough, SPEC95 benchmarks are free but you have to pay for them: last time I checked, the CINT95/CFP95 cost was $600, and the quarterly newsletter was $550. These sums correspond to "administrative costs", according to SPEC.

2. Licensing. SPEC benchmarks are not placed under the GPL. In fact, SPEC95 has a severely limiting license that makes it inadequate for GNU/Linux users. The license is clearly geared to large corporations/organizations: you almost need a full-time employee just to handle all the requirements specified in the license, you cannot freely reproduce the sources, new releases come every three years, etc.

3. Outright cheating. Recently, a California court ordered a major microprocessor manufacturer to pay back $50 for each processor sold of a given speed and model, because the manufacturer had distorted SPEC results with a modified version of gcc, and had used such results in its advertisements. Benchmarketing seems to have backfired on this occasion.

4. Comparability. Hennessy and Patterson (see reference above) clearly identify the technical limitations of SPEC92. Basically these have to do with each vendor optimizing benchmark runs for its specific purposes. Even though SPEC95 was released as an update that would work around these limitations, it does not (and cannot, in practical terms) satisfactorily address this issue. Compiler flag issues in SPEC92 prompted SPEC to release a 10-page document entitled "SPEC Run and Reporting Rules for CPU95 Suites".
It clearly shows how confident SPEC is that nobody will try to circumvent specific CPU shortcomings with tailor-made compilers/optimizers! Unfortunately, SPEC98 is likely to carry over these problems to the next generation of CPU performance measurements.

5. Run time. Last but not least, the SPEC95 benchmarks take about 2 days to run on the SPARC reference machine. Note that this in no way makes them more accurate than other CPU benchmarks that run in < 5 minutes (e.g. nbench-byte, presented below)!

Summarizing, if you absolutely must compare CPU performance for different configurations running GNU/Linux, SPEC95 is definitely not the recommended benchmark. On the other hand it's a handy tool for benchmarketers.
_________________________________________________________________

Previous Next Contents
_________________________________________________________________

3. Benchmarks for SMP systems

3.1 Description of the problem

SMP (Symmetric MultiProcessing) has been implemented in the Linux kernel for Intel Pentium, Pentium MMX, Pentium Pro and Pentium II processors (4) and more recently for SPARC architectures. SMP systems are usually more expensive than their uniprocessor counterparts because they are frequently used to implement heavy-duty (possibly fault-tolerant) servers. For this reason potential buyers of such systems often want to make sure that applications, OS and hardware platform will be able to satisfy their needs in terms of overall performance before deciding on an expensive purchase. This is precisely where a Linux SMP benchmark would be useful. As this series of articles focuses on using current and stable 2.0.x kernels, we will only deal with what can be done for benchmarking Linux SMP systems with current Linux distributions.

Taking advantage of the additional computing power brought to the end-user by an SMP hardware platform puts constraints on almost all layers of the software involved: application, runtime libraries and operating system. Basically two approaches are possible, depending on how the application being considered is designed:

1. The application uses multiple simultaneously running processes. Those processes are very likely to communicate with each other using standard IPC (Inter-Process Communication) mechanisms. (A trivial shell illustration of this approach appears after the table below.)

2. The application is multi-threaded: for some of the related processes, multiple instances of sequential execution exist in the same address space.

The table below summarizes the impact of these two designs on the software layers involved, on the programming complexity and on the expected performance improvement (relative to a comparable uniprocessor system):

Multiple single-threaded processes:
  - Runtime library requirements: none.
  - Operating system requirements (load balancing): smart assignment of processes to processors must be implemented (static or dynamic).
  - Example: make -j 4 vmlinux
  - Additional programming complexity: none.
  - Expected performance improvement: average to poor.

Multi-threaded application:
  - Runtime library requirements: libraries must be thread safe and should preferably offer some POSIX control over the threads.
  - Operating system requirements (load balancing): an assignment mechanism of kernel threads to processors must be supported.
  - Example: none available AFAIK.
  - Additional programming complexity: greater than for single-threaded applications, but it can be done by us mere mortals.
  - Expected performance improvement: high (close to linear speedup) for CPU-bound applications, but it can also degrade to as low as single-processor performance for system-call-intensive applications.
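Here is the promised illustration of the first approach, a trivial multi-process load generator in shell. The workload (a pure CPU loop in awk) is only a placeholder of mine; the point is that the processes are completely independent, so Linux SMP can distribute them across processors without any special programming:

#!/bin/sh
# Launch NPROC independent CPU-bound processes and wait for all of them.
# The kernel's SMP scheduler spreads them over the available processors.
NPROC=2
i=1
while [ $i -le $NPROC ]; do
    # placeholder workload: a pure CPU loop, no I/O involved
    awk 'BEGIN { for (j = 0; j < 1000000; j++) x += j }' &
    i=`expr $i + 1`
done
wait                      # block until all background jobs have finished
echo "all $NPROC processes finished"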
How do those issues relate to current stable Linux kernels? Good results obtained from a Linux multi-threaded benchmark would be very interesting for power users.

3.2 Runtime issues

Threads can be implemented at the user level as coroutines, or as kernel threads, i.e. threads running in user mode but scheduled by the kernel (the approach taken by the LinuxThreads package). Until the very recent release of Glibc 2.0, which Red Hat 5.0 includes as its standard C library, finding a thread-safe runtime library could be a tough job.

3.3 Scheduling issues

The issue here is the way scheduling is implemented on SMP platforms by the current stable kernels. Quoting its implementor Alan Cox (in a paper he wrote in 1995):

"A single lock is maintained across all processors. This lock is required to access the kernel space. Any processor may hold it and once it is held may also re-enter the kernel for interrupts and other services whenever it likes until the lock is relinquished. This lock ensures that a kernel mode process will not be pre-empted and ensures that blocking interrupts in kernel mode behaves correctly. This is guaranteed because only the processor holding the lock can be in kernel mode, only kernel mode processes can disable interrupts and only the processor holding the lock may handle an interrupt."

So a correct interpretation of this is: right now, no more than a single process may be executing in kernel mode (i.e. executing a system call) at any given time. But efforts are underway to improve the granularity of locking in the future 2.2.x kernels. We should also soon be able to take interrupts without having to take a lock. This should result in much better performance of system-call-intensive applications on SMP systems running GNU/Linux.

3.4 Further reading/links

1. "An Implementation Of Multiprocessor Linux", Alan Cox, 1995. I found this TeX article in the Linux source tree (kernel 2.0.33 source in /Documentation/smp.tex).
2. A FAQ about the clone() Linux system call.
3. A clone() utilization example.
4. LinuxThreads: a package that implements POSIX threads under Linux.

3.5 Benchmark availability

If we stick to our guideline of simple, quick-running, readily available benchmarks (or more simply, K.I.S.S. benchmarks), we can use a modified version of the Linux kernel 2.0.0 compilation benchmark (described in article II), now for SMP systems. Andy Kahn provided us with this test and some very interesting results. Quoting directly from some email we exchanged on this subject:

"...actually, it's pretty simple. GNU "make" has an option you can specify to use multiple processes (either a default number or a user specified number). I don't have the man page handy right now, but i'm pretty sure it's either the -j option or the -p option (actually, i think both options have some importance to multiple processes). Once you specify multiple make processes, each process will have gcc compiling something (so in effect, it's just multiple gcc processes).

(later) "Andre Derrick Balsa" wrote:
-> Great news :-)
->
-> Thanks to Andy who actually tried this on a dual PPro SMP system and
-> explained the whole thing to me, I am pleased to announce a version of
-> the Linux 2.0.0 kernel compilation application benchmark for SMP
-> systems:
->
-> Just replace the "make vmlinux" (was "make zImage") by "make -j n
-> vmlinux". Replace n by 2, 3 ... and make will launch 2, 3 ... processes
-> in parallel. Since Linux SMP will transparently distribute processes
-> between the SMP processors, there is no need to program anything special
-> in terms of message-passing, clone(), etc...
->
-> Andy doesn't have any exact figures available, but it seems this would
-> provide a 30% decrease in compilation time (over a single serialized
-> process). Thanks, Andy. :-)

and because I don't have any exact figures, I decided that I would go and get some exact figures. :)

The system tested was:

Dual Pentium Pro 180MHz overclocked to 200MHz
64MB EDO RAM
Linux 2.0.27
gcc v2.7.2.1
libc v5.3.12
hda: QUANTUM TRB850A, 810MB w/96kB Cache, LBA, CHS=823/32/63

This is more or less your "standard" PC from about 13-14 months ago. I'm not at liberty to upgrade the software on this system, so this is as good as it gets from me with this setup. Also, instead of doing a "sync" before issuing the final "make" command, I propose that if the circumstances allow it (you have root access), then umount the file system, remount it, then go back to that directory and build the kernel.

--- THE RESULTS! ---

"time make vmlinux"
107.32user 149.01system 4:27.91elapsed 95%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (143472major+167951minor)pagefaults 0swaps

"time make -j 2 vmlinux"
131.13user 177.77system 3:28.34elapsed 148%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (169498major+168582minor)pagefaults 8903swaps

Ugh, the results are terrible (only a 22% improvement)!! Note that in the SMP case, CPU usage was only 148%. From this, we can see that the 2nd CPU wasn't really used all that much (efficiently)."

I really appreciated Andy's attitude: not only did he improve on my previous test procedure, but he went right ahead and produced some nice experimental data to go with it! Plus one can feel how enthusiastic he was about doing some hands-on experimentation! Another nice feature of this simple SMP benchmark is that it provides a basis for performance comparisons between uniprocessor and SMP GNU/Linux systems; a ready-to-run version of the procedure is sketched below.
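If you want to try the comparison on your own machine, the whole procedure fits in a few lines. I assume an already-configured kernel tree under /usr/src/linux; adjust the path, and the -j count, to taste:

#!/bin/sh
# Compare a serialized and a parallel kernel build on the same tree.
cd /usr/src/linux || exit 1

make clean
sync                                  # flush pending writes before timing
echo "--- make vmlinux (one process) ---"
time make vmlinux

make clean
sync
echo "--- make -j 2 vmlinux (two processes) ---"
time make -j 2 vmlinux

On a uniprocessor machine the -j 2 run should be no faster (and may even be marginally slower); on an SMP box, the difference between the two elapsed times is the payoff.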
Two more benchmarks would deserve a thorough description, but I will just mention them here:

1. UnixBench 4.1 has some tests that will launch simultaneous processes.
2. A rather complex, but complete Unix benchmark suite developed in France, called SSBA. François is working on a Linux port of the latest 2.4F revision.
_________________________________________________________________

Previous Next Contents
_________________________________________________________________

4. GNU/Linux specifics

One should be aware of some specific details when benchmarking GNU/Linux systems, as compared to benchmarking on other OSes. GNU/Linux is a multitasking, multiuser system, so system load will obviously skew results. On the other hand, this may be exactly the behavior that we want to test: how will a GNU/Linux system perform under heavy use? There is no simple answer to this question. Again, careful data gathering and analysis may reveal interesting opportunities for GNU/Linux improvement. Note that system load particularly affects latencies, so one should be very careful about the conceptual differences between latency and throughput. A short example, provided by Jeremy Chatfield (Xi Graphics): some X servers will freeze for several seconds under heavy load, resulting in mouse movements that can become quite jerky. This behavior is totally undesirable, and yet it is not measured by any X server benchmarking tool available. Also, in a multiuser, multitasking system, the time function reports various items that must be analyzed separately: CPU time vs. system time vs. elapsed time (mentioned in article II). So, for a CPU benchmark, we should use CPU time, since the time spent during I/O or system functions is irrelevant. In the case of a system benchmark, it is probable that we will try to write a benchmark that spends most of its time in the kernel, so we will use system time. On the other hand, for our Linux kernel 2.0.0 compilation benchmark, we used elapsed time. There is no general rule to be followed here; one must use one's good sense.

Some caveats also apply to NFS benchmarks. The present Linux NFS implementation runs in user space, not in kernel space as in the BSD Unices. Similarly, comparing the performance of Linux as a router against dedicated hardware would be an example of comparing apples and oranges. Even though Linux networking performance is very good, particularly with some DMA bus-mastering Ethernet controllers/drivers, it cannot possibly be compared to dedicated routing hardware.
_________________________________________________________________

Previous Next Contents
_________________________________________________________________

5. An example of correct/incorrect interpretation of results

We finally get to the practical part of this article. As usual, I propose a different benchmark as a practical example, only this time we will be seeing a more complex benchmark, in fact a CPU benchmark suite: we'll use the latest version of nbench-byte (version 2.1) as our example. You can download it from Uwe Mayer's new Web site or from the Linux Benchmarking Project. What we are going to measure this time can be described as "general CPU performance". So: this is not processor performance for matrix operations, this is not MMX performance, this is not the ability of a processor to decode an MPEG stream. Also, this is not a measure of a processor's interrupt response time, peak MIPS, etc. There are two basic ways to get this wrong:

1. Wrong ways to benchmark.
2. Wrong ways to analyze benchmarking data.

Now let's take a look at a correct procedure, following all the steps of the scientific approach recommended earlier.

5.1 Example

Stating our objective

For this short example we just want to compare the performance of two different CPUs: the AMD K6 and the Cyrix 6x86MX. This is comparative benchmarking, so we should keep all conditions fixed and vary just this single variable: the CPU. This is not too ambitious, and I have no bias for/against either of these two chips. Also, since both CPUs are widely available at reasonable prices, such comparative benchmarking may be of interest to GNU/Linux users wanting to upgrade and/or choose the CPU for their next system.

Choice of a benchmark

Nbench-byte is an improved, updated version of the BYTEmark benchmark suite developed at BYTE magazine by Rick Grehan. Uwe F. Mayer did the port to Linux and is its present maintainer/developer. The latest version is 2.1, dated December 97. Like SPEC95, this modern CPU benchmark suite uses 10 different algorithms that are representative of common CPU-intensive tasks (the file bdoc.txt included with the source has a description of each algorithm). Note that Rick has stopped development of BYTEmark (neither Uwe nor I managed to contact him), but you can see that this is not a committee-designed benchmark; in this respect its lineage fits the GNU/Linux style of development quite well. Nbench-byte 2.1 also goes one step beyond SPEC95 in that it generates three index figures: an Integer Index, a Floating-Point Index and a Memory Index.
The Memory Index reflects the fact that on most modern CPUs, the memory subsystem represents a major performance bottleneck. For more information on this topic, you can check the Web site for STREAM, a new benchmark specifically created to address this issue.

One of nbench-byte's nicest features is that it calibrates itself. For each of the tests it determines a minimum amount of work that needs to be done to be able to accurately measure the time needed. Then it runs that test five times and does a statistical analysis (using the Student t-distribution) to see if the results are consistent (meaning that the probability is at least 95% that the true mean of the results is within 5% of the calculated mean of the results). If not, then nbench runs the test up to twenty-five more times, doing the statistical analysis after each additional run. If consistency cannot be achieved within a total of thirty runs, a warning is issued when the score gets reported. In terms of raw data statistical processing, nbench-byte 2.1 goes beyond all the other benchmarks I have ever come across.

Another very interesting feature of this benchmark suite is its portability across a wide range of OSes and platforms. However, because of fundamental differences in compilers/libraries/memory management in different OSes, this benchmark should not be carelessly used to compare results across platforms. This is not an OS benchmark, it's a CPU benchmark (see the pitfalls subsection below). You have been warned.

Benchmark setup

We are doing comparative benchmarking, so we will be using exactly the same hardware for our benchmark runs. All that will change between runs is:

1. The processor (one run with a 6x86MX, the other run with a K6).
2. A small cyrix.rc file that was added to the rc.local script. This calls set6x86 to set up a few internal 6x86MX registers. The K6 does not need this file.

Also note that we are using the precompiled nbench executable, as shipped in the tar.gz package. To describe our hardware setup, we resort to the Linux Benchmarking Toolkit Report Form:

LINUX BENCHMARKING TOOLKIT REPORT FORM

CPU
===
Vendor: AMD/Cyrix
Model: K6-166/6x86MX-PR200
Core clock: 166 MHz (2.5 x 66 MHz)
Motherboard vendor: ASUS
Mbd. model: P55T2P4
Mbd. chipset: Intel HX
Bus type: PCI
Bus clock: 33 MHz
Cache total: 512 KB
Cache type/speed: Pipeline burst 6 ns
SMP (number of processors): 1

RAM
===
Total: 32 MB
Type: EDO SIMMs
Speed: 60 ns

Disk
====
Vendor: IBM
Model: IBM-DCAA-34430
Size: 4.3 GB
Interface: EIDE
Driver/Settings: Bus Master DMA mode 2

Video board
===========
Vendor: Generic S3
Model: Trio64-V2
Bus: PCI
Video RAM type: 60 ns EDO DRAM
Video RAM total: 2 MB
X server vendor: XFree86
X server version: 3.3
X server chipset choice: S3 accelerated
Resolution/vert. refresh rate: 1152x864 @ 70 Hz
Color depth: 16 bits

Kernel
======
Version: 2.0.29
Swap size: 64 MB

gcc
===
Version: 2.7.2.3
Options: (default nbench)
libc version: 5.4.38

Test notes
==========
Two processors tested. The 6x86MX was configured with a special rc.cyrix file.

RESULTS
========
Linux kernel 2.0.0 Compilation Time: N/A
Whetstone Double Precision (FPU) INDEX: N/A
UnixBench 4.10 system INDEX: N/A
Xengine: N/A
nbench-byte integer INDEX: 6x86MX - 0.686; K6 - 0.713
nbench-byte memory INDEX: 6x86MX - 0.753; K6 - 0.793
nbench-byte floating-point INDEX: 6x86MX - 0.655; K6 - 0.802

Comments
=========
With the CPU case open, it took me 30 minutes to run nbench-byte on the two processors!
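For readers who want to reproduce these figures, a complete session with the precompiled binary is short. The archive and report file names below are my assumptions based on version 2.1; check them against what you actually downloaded:

#!/bin/sh
# Unpack nbench-byte 2.1 and run the precompiled executable,
# keeping a copy of the report for later comparison.
tar xzf nbench-byte-2.1.tar.gz
cd nbench-byte-2.1
./nbench | tee k6-166.report      # add -v for the detailed per-test output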
Detailed benchmark results

One can get very detailed benchmark results with nbench-byte 2.1 by specifying the -v option. However, here we are only showing the normal output from a standard run, first on the 6x86MX, then on the K6:

6x86MX results:

BYTEmark* Native Mode Benchmark ver. 2 (10/95)
Index-split by Andrew D. Balsa (11/97)
Linux/Unix* port by Uwe F. Mayer (12/96,11/97)

TEST                : Iterations/sec.  : Old Index   : New Index
                    :                  : Pentium 90* : AMD K6/233*
--------------------:------------------:-------------:------------
NUMERIC SORT        :          80.681  :       2.07  :       0.68
STRING SORT         :          11.107  :       4.96  :       0.77
BITFIELD            :      2.1997e+07  :       3.77  :       0.79
FP EMULATION        :          8.5349  :       4.10  :       0.95
FOURIER             :          881.21  :       1.00  :       0.56
ASSIGNMENT          :         0.71582  :       2.72  :       0.71
IDEA                :          147.28  :       2.25  :       0.67
HUFFMAN             :          58.095  :       1.61  :       0.51
NEURAL NET          :         0.70897  :       1.14  :       0.48
LU DECOMPOSITION    :          27.869  :       1.44  :       1.04
==========================ORIGINAL BYTEMARK RESULTS==========================
INTEGER INDEX       : 2.861
FLOATING-POINT INDEX: 1.181
Baseline (MSDOS*)   : Pentium* 90, 256 KB L2-cache, Watcom* compiler 10.0
==============================LINUX DATA BELOW===============================
C compiler          : gcc version 2.7.2.3
libc                : libc.so.5.4.38
MEMORY INDEX        : 0.753
INTEGER INDEX       : 0.686
FLOATING-POINT INDEX: 0.655
Baseline (LINUX)    : AMD K6/233*, 512 KB L2-cache, gcc 2.7.2.3, libc-5.4.38
* Trademarks are property of their respective holder.

K6 results:

BYTEmark* Native Mode Benchmark ver. 2 (10/95)
Index-split by Andrew D. Balsa (11/97)
Linux/Unix* port by Uwe F. Mayer (12/96,11/97)

TEST                : Iterations/sec.  : Old Index   : New Index
                    :                  : Pentium 90* : AMD K6/233*
--------------------:------------------:-------------:------------
NUMERIC SORT        :          82.229  :       2.11  :       0.69
STRING SORT         :           10.57  :       4.72  :       0.73
BITFIELD            :      2.0672e+07  :       3.55  :       0.74
FP EMULATION        :          6.4842  :       3.11  :       0.72
FOURIER             :          1117.1  :       1.27  :       0.71
ASSIGNMENT          :         0.93388  :       3.55  :       0.92
IDEA                :          158.42  :       2.42  :       0.72
HUFFMAN             :          81.407  :       2.26  :       0.72
NEURAL NET          :          1.0764  :       1.73  :       0.73
LU DECOMPOSITION    :          26.521  :       1.37  :       0.99
==========================ORIGINAL BYTEMARK RESULTS==========================
INTEGER INDEX       : 2.990
FLOATING-POINT INDEX: 1.445
Baseline (MSDOS*)   : Pentium* 90, 256 KB L2-cache, Watcom* compiler 10.0
==============================LINUX DATA BELOW===============================
C compiler          : gcc version 2.7.2.3
libc                : libc.so.5.4.38
MEMORY INDEX        : 0.793
INTEGER INDEX       : 0.713
FLOATING-POINT INDEX: 0.802
Baseline (LINUX)    : AMD K6/233*, 512 KB L2-cache, gcc 2.7.2.3, libc-5.4.38
* Trademarks are property of their respective holder.

Data analysis

We will concentrate on the Linux data, for obvious reasons. As we can see, the 6x86MX outperforms the K6 on a few tests, by a narrow margin (approx. 5-7% on string sort, bitfield and LU decomposition) and by a wide margin on FP emulation, while the K6 vastly outperforms the 6x86MX on the remaining tests.

Conclusion

On our synthetic test nbench-byte version 2.1, the K6 has shown better overall performance than the 6x86MX, running at the same 166MHz (2.5 x 66MHz) clock rate on exactly the same hardware.

5.2 Pitfalls

The basic pitfall that one should be warned against concerning nbench-byte applies similarly to all benchmarks: one should not try to use this tool for something it was not designed for. Since this is a CPU benchmark, do not use it to test OS performance, video bandwidth, or any other feature that implies I/O activity. Also, it is not an adequate tool for comparing compilers and/or C and math libraries. This is less obvious than it seems at first.
For an accurate, thorough, documented discussion of this particular pitfall, you are referred to one of Uwe's excellent pages on benchmarking. Another pitfall would have been to compare the two processors running on widely different machines. Motherboard, cache and RAM timing setup can skew results by as much as 10%. Compilation options and libraries can also skew results by 25% or more.
_________________________________________________________________

Previous Next Contents
_________________________________________________________________

6. In the next article

I had initially estimated that four articles would be enough for an overview of GNU/Linux benchmarking, but recently more issues have been raised and more questions have been asked than I could address in just four articles. The next article will deal with exactly this problem: Available Linux Benchmarking Data and Open Issues.

* The Linux Benchmarking Project.
* Available Linux benchmarking data on the Web.
* Suggestions for further Linux benchmarking.
* A system benchmark: UnixBench.
_________________________________________________________________

Previous Next Contents
_________________________________________________________________

7. Notes

(1) Basic objective of the Sophists: win all arguments by any means available. Truth, logical coherence and argument transparency did not matter to the Sophists. They were fought (on intellectual grounds, of course) by Socrates and Plato, and later by Aristotle. A quick search on Yahoo/Altavista with the keywords Sophism or Gorgias will turn up some information on the subject.

(2) StrongARM: this is an architecture developed at Digital, based on the Advanced RISC Machines design. It provides roughly the same CPU processing performance as a 133 MHz Pentium at a fraction of the cost and with a ridiculously low power consumption (< 0.5W). Intel's purchase of Digital's microelectronics operation includes the design rights to the StrongARM, so the future of this architecture is unknown (as of December 97). Note that Linux has been ported to some implementations of the ARM architecture. The Corel Java NC is based on Linux/ARM and uses the latest version of the StrongARM SA-110 CPU. The following is strictly rumour: Digital didn't quite know what to do with the StrongARM design, since as a product it didn't fit Digital's corporate culture of high-margin minis and workstations. On the other hand, it was just a nice bargain for Intel, who was looking for exactly that sort of CPU to address the future "appliance PC" market, ideally complementing the Pentium family. Since the details of the Digital/Intel deal have been kept secret, one can only guess how much Intel paid for the StrongARM design, but rumour has it that Digital's negotiators were not even aware of how strategically important this CPU was to Intel.

(3) To get a clearer idea of what free means in the GNU/Linux world, you are referred to the various interviews with and articles by Richard Stallman (founder of the Free Software Foundation), to the GNU Manifesto, and also of course to the text of the General Public License (GPL). In a speech in 1986, RMS said: "I want to establish that the practice of owning software is both materially wasteful, spiritually harmful and evil. All these three things being interrelated."

(4) Intel did not license its APIC SMP specifications/technology to either Cyrix or AMD.
Even though Cyrix and AMD agreed on a different, open specification called OpenPIC for multiprocessing, no motherboard was ever released to that standard (and no OS has ever implemented it, obviously). An excellent description of OpenPIC and x86 multiprocessing issues can be found in IBM's Application Note 40208, available in PDF format from IBM's Microelectronics Web site.
_________________________________________________________________

Previous Next Contents

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

LXNY at UNIX EXPO '97

By Michael E. Smith, LXNY
_________________________________________________________________

The meeting notice on Usenet (including comp.os.linux.announce) set the tone for the event: Richard M. Stallman's original announcement of the GNU project (Subject: Free Unix!) was the centerpiece. The event: the traditional LXNY meeting at UNIX EXPO (this year a part of Miller Freeman's IT Forum '97) in New York City. The speaker: Bryan Sparks, Founder and CEO of Caldera, Inc. The setting: a corner room at the Javits Center seating 75, with a crowd of approximately 100 enthusiasts overflowing into the corridor.

Jay Sulzberger of LXNY delivered a rousing introduction to the speaker, establishing a martial-revolutionary theme. After regaling the assembled troops with stories of RMS at MIT, he echoed Linus Torvalds' call for more applications and strongly cautioned Linuxers on the need to avoid internecine conflict -- the war is to be waged on many fronts in many ways, even including alliances with commercial vendors. (Caldera offers both free and commercial versions of Linux, as well as (DR) OpenDOS.)

Sparks spoke about Caldera's international operation, non-desktop environments (especially kiosks), their partnerships, their lawsuit against Microsoft, the continuing phenomenal growth of Linux and the need for perseverance. He mentioned that Caldera has sold 2,000,000 copies of OpenDOS. Caldera brought some of their partners to the meeting, including Corel (WordPerfect for Linux), Enhanced Software Technologies (bru), AppGen Business Software, FacetCorp, TwinCom and ICentral. Also represented were VAResearch and Non-Profit Computing. Caldera provided food, T-shirts, flashlights and OpenLinux Lite CDs to all present. LXNY's booth at the show, which contained a display of Linux publications, was as big a success as the meeting.
_________________________________________________________________

Copyright © 1998, Michael E. Smith
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

More Adventures with SAMBA

By Dave Nelson
_________________________________________________________________

If you haven't networked your small office/home office computers, Ethernet prices are now so cheap you can't afford not to! I recently installed a three-computer LAN, including my dual-boot Linux/Win95 box (named Dave), my wife's WfW 3.11 box (named Kathy), and my Linux/Win95 portable. We share files among the three computers and print to the laser printer on Dave from within Windows. The SMB protocol (Server Message Block) does all this over IP. SMB/IP is built into Win95 and WfW; Linux uses the Samba program to talk SMB.
A special challenge was to get Dave to look the same to Kathy whether Dave is booted in Linux or Win95. This technique could also be helpful if you are changing from a Windows server to a Linux server and you don't want to redo the settings on each client. All the software is free, once you own the operating systems. My total hardware costs were under $100, including a five-port 10Base-T hub, network interface cards, and twisted-pair cables.

John Fisk's article on Samba in LG issue 20 was a great introduction to Samba. I used it to get started. Then I added printing from Windows to Linux, solved some file permission problems, and figured out how to make Dave look the same to Kathy under Linux or Win95. I'm sure what I did could be improved on -- I am new to Samba and only a journeyman at Linux, but this way works. If you worry about security, you may want to add passwords. Between my wife and me security isn't a problem ;-)

An example of Samba's power: my wife runs Quicken on Kathy under Windows. She transparently uses the Quicken data file stored in a DOS partition on Dave running Linux. She transparently prints from Quicken to the laser printer on Dave. She doesn't have to change any settings on Kathy when I switch Dave between Linux and Win95. And my settings on Dave are handled automatically on bootup. Way cool! Here's how I did it:

I used System Commander to dual-boot my box (Dave) to Win95 or Linux. I installed networking on both operating systems, using the same IP addresses. I named my SMB group "home." Fisk's article shows how to do most of this.

My Linux release (Caldera) comes with Samba installed. Probably your release does too. Samba runs as two daemons: smbd and nmbd. Find them by typing

which smbd; which nmbd

If they are installed, they are probably in /usr/sbin. If not, install them. Caldera Linux starts them on bootup by running the script /etc/rc.d/init.d/smb. Note that if you change the Samba configuration file, it isn't necessary to reboot (at least using Caldera or Red Hat). Just issue the commands

/etc/rc.d/init.d/smb stop
/etc/rc.d/init.d/smb start

and Samba will be reconfigured. Fisk's article points out that Samba may also be started by init.d. You don't want to start Samba twice, so check your settings after reading Fisk.

I created the following /etc/smb.conf file on Dave:

[global]
   workgroup = home
   printing = bsd
   printcap name = /etc/printcap
   load printers = yes
   guest account = dos

[printers]
   comment = All Printers
;  print command = cp %s /tmp/tmp.print
   print command = lpr -Pepson -b %s
   browseable = yes
   printable = yes
   public = yes
   writable = no
   create mode = 0700

[d]
   comment = DOS Disk d:
   path = /mnt/diskd/
   public = yes
   writable = yes
   printable = no
   guest ok = yes

The [global] section of smb.conf tells Samba that my workgroup is called "home," the printer description file is /etc/printcap, and the user (or guest account) for DOS services is "dos." To set up the user "dos" run the program "adduser dos" or just edit the /etc/passwd file. I had to edit /etc/passwd after running adduser to get things right. My /etc/passwd file has the following line for the user dos:

dos::501:500:DOS files:/home/dos:/bin/false

In order of fields this line says the user is dos; dos needs no password; its user number is 501; its group number is 500; it is called DOS files (this field is just a comment); its home directory is /home/dos; and it has no shell privileges. The user and group number were assigned by adduser; they don't have to be 501 and 500.
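Before moving on to the login test described next, it's worth sanity-checking the pieces so far. A quick session might look like this (testparm ships with Samba; the host name is of course the one from my setup):

#!/bin/sh
# Quick sanity checks for the configuration described above.
testparm /etc/smb.conf          # parse smb.conf and report any errors
grep '^dos:' /etc/passwd        # the guest account line should be there
grep '^DOS:' /etc/group         # and the group that adduser created
smbclient -L dave -N            # list the shares Samba actually exports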
To test that the user dos is set up right, change /bin/false to /bin/bash and log on as dos. You shouldn't need a password and should get a bash shell prompt. Then change back to /bin/false to close the security hole. When I ran adduser, I told it that dos belongs to group DOS, and it added the group DOS to the /etc/group file with the line

DOS::500:

The [printers] section sets up printing for DOS. The commented-out line "print command = cp %s /tmp/tmp.print" is a great way to debug Samba printing. I found this in the help file "Printing.txt" that comes in the Samba package. If this line is uncommented and the next one commented out, the print file from Kathy appears on Dave as /tmp/tmp.print rather than being sent to the printer. You can check whether it arrived OK and try printing it by running lpr. The line "print command = lpr -Pepson -b %s" does the actual printing. The option "-Pepson" says to use the "epson" printer description in /etc/printcap. My laser printer on Dave is called "epson" under Win95, and Kathy expects to see the same name under Linux. The option "-b" tells lpr to accept the binary print files that Windows produces. Otherwise lpr chokes, because its default is to expect ASCII files, and the printer does nothing. (Maybe this fix is the same as what is called raw mode printing?) The "%s" parameter represents the file name being sent to Samba. I created a section in /etc/printcap for the epson printer:

epson:\
    :sd=/var/spool/lpd/lp:\
    :mx#0:\
    :lp=/dev/lp1:\
    :sh:

Notice there is no "if=" line, i.e. no input filter that processes the binary print file. My printer is an Epson 7000, basically an HP IIP clone, so it expects the DOS convention of CRLF at line's end. If I tried to use this printer description when printing under Linux, which only sends the Unix-standard LF, I would see the dreaded staircase effect.

The [d] section of smb.conf describes the shared disk that Kathy expects to be called "d," the same as drive D: under DOS. I mount it as /mnt/diskd in Linux. I ran into a puzzling problem with user permissions (probably either my ignorance of standard Unix practice, or something weird about the msdos filesystem type). The user dos needs to have write privileges to the directory /mnt/diskd and all its files. But I couldn't make that happen using chmod, chown, or chgrp. As soon as I would reboot and mount the file system, /mnt/diskd would revert to the following privileges:

drwxr-xr-x  44 root  root  18432 Dec 31  1969 diskd/

The missing "w" for group and others did me in as long as root owned the directory. I fixed this by editing the line in /etc/fstab for /mnt/diskd to be the following:

/dev/hda5 /mnt/diskd msdos user,noauto 0 0

The important field is user,noauto, which means the file system is mountable by a user and is not mounted automatically on bootup. Then I added a line to /etc/rc.d/rc.local to mount diskd as user dos:

mount -ouid=501,gid=500 /mnt/diskd

This says mount /mnt/diskd with options (-o) of user id 501 and group id 500, which correspond to the user dos. If your adduser gives dos a different uid and gid, just change this line appropriately. If you have trouble mounting diskd on bootup, try logging in as dos (after changing the /etc/passwd line for dos to /bin/bash) and mounting diskd manually. When that works, go back and get the rc.local line to work right. As John Fisk wrote: one thing about Linux, it hones your problem-solving skills.

If you have problems, look in Samba's logs and message files. On my system the logs are in /var/log/smbd and /var/log/nmbd, and messages are in the directory var/samba.
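When something fails, the quickest diagnostic I know is to watch the logs while reproducing the problem from a client. Here is a sketch; with the debugging "cp %s /tmp/tmp.print" print command enabled, the test job should land in /tmp/tmp.print instead of reaching the printer:

#!/bin/sh
# Watch the smbd log while sending a test print job to the printer share.
tail -f /var/log/smbd &
TAILPID=$!

# Send any small text file as a print job; /etc/hosts is just a handy example.
smbclient //dave/epson -N -c 'print /etc/hosts'

kill $TAILPID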
That's all it took for me to set up Samba. The Samba documentation shows a wealth of different configurations. At first I found them all daunting, but by chipping away, one problem at a time, things came more easily. I hope this article helps you get started. Who needs NT servers anyway?
_________________________________________________________________

Copyright © 1998, Dave Nelson
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

My Linux Revolution

By Ylian Saint-Hilaire & Erik Campo
_________________________________________________________________

For a long time, I lived in my cave, doing my thing. One day, I got out of my nice, very functional cave and saw that my fellow man had built a house just outside. It had all the same functions that my cave had but was much more comfortable and livable (for one, you didn't have to push a boulder to close the door every night).

Many years later, I had Linux installed on my personal computer, doing my thing. One day, surfing the net, I saw my fellow man programming a new version of Linux, which had a great user interface, was easy to install and possessed many new features. For a long-time user of Linux on an i386 like me, all Linux was good for was routing packets. This was a shock.

Of course, the new user interface I am talking about is called the KDE project, which, along with the new Red Hat distribution version 5.0, the very active www.linux.org site and the many applications available, forms an incredible package. To my great astonishment, Linux development is in full acceleration, and Linux is starting to be viewed as a real contender to Microsoft Windows NT.

Not too long ago, people installing Linux on a computer were viewed as computer gurus, or mystic "roots". With the arrival of user friendliness, the question I ask myself is: is Linux going to become an operating system for the general public? Some people will feel bad about losing the respect of being the only ones in their social group able to install Linux. Unlike them, I welcome this new age. I can't wait to install Linux at my grandma's place. The operating system of the people will finally come back to the people.

This will, however, change many things. If less technically inclined people jump on the Linux wagon, new demands will be generated for easier-to-use software and better support and help files. My grandma will ask for drag-and-drop support and very large fonts. The new people on the wagon will not be of much help in moving it forward (have you seen your grandma code lately?). Still, they will bring new respect to the OS, and, well, why not, perhaps new ideas.

Some time ago, word from "Santa Cruz" was that we had to upgrade out of Linux. This of course was funny, and it highlighted the maturity of Linux. Not only is Linux free, but it compares better (on my scorecard) to almost any other operating system. And unlike some "other" operating system, Linux is soon to become a general-public operating system (hello grandma!). So, I finally decided I was going to move out of my cave and into a much more respectable house.
But the most important thing of all is that I must start keeping track of the developments and start pushing the wagon myself. By the way, Linux makes a great Christmas gift!
_________________________________________________________________

Copyright © 1998, Ylian Saint-Hilaire & Erik Campo
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

KDE and Gnome

by Larry Ayers
_________________________________________________________________

Introduction

Watching the Linux operating system begin to mature is interesting these days. A couple of years ago much attention was devoted to incompatibilities with various hardware components, networking, and the development of the kernel itself. Though these activities continue, it's no longer necessary to follow these development efforts as closely in order to run a dependable Linux system. Distributions have improved immensely, and now more free-software developers are turning their attention towards refinement and integration of the user interface.

Two separate projects have arisen in the past year, KDE (the K Desktop Environment) and GNOME (the GNU Network Object Model Environment). Both of these projects include among their stated goals the desire to make the administration and usage of a Linux system easier for beginners, in part by employing a uniform look-and-feel for the most commonly used applications and utilities, as well as interoperability of the system components. It's difficult to make much of a comparison between the two, as KDE is much farther along than GNOME, but I'll make an attempt.

Commonalities

There is one common structural aspect to these two projects. They each rely on a group of shared libraries, which provides the interface to basic OS operations, such as file reading and saving, as well as basic display and appearance functions. The end result of this is that an installation will populate a directory with a variety of shared libraries, which in turn supports another directory of fairly small executables. The Gimp works this way as well; the individual plug-ins tend to be small, but rely on the services provided by both the GTK and the Gimp shared libs. This approach facilitates contributions by programmers not directly involved with a project, as many of the low-level and window-display functions are already written, allowing a contributed application or extension to "hook" into them.

KDE

The first of the two to gain momentum was KDE. About a year ago a group of developers, mainly European, began coding the components of this ambitious project. They chose the Qt toolkit (from TrollTech in Norway) as the GUI framework, a decision which has since led to some controversy. Qt has a few licensing restrictions which, though not onerous for end-users, can cause problems for the creators of CDROM-based distributions. Advocates of GNU-style free software tend not to favor Qt, a circumstance which led to the creation of the GNOME project.

Setting aside the thorny licensing issues, the KDE developers have managed to pull together quite a remarkable system in the past year, though numerous bugs still remain evident. The second public beta was released in November of 1997, and I compiled and installed it soon after.
(I had briefly tried the initial beta, but found it too unstable to evaluate.) This second release still has flaky aspects, but enough of it works to give the user an idea of what the developers are planning to accomplish.

In effect KDE is a sort of GUI wrapper around an existing Linux system, which attempts to simplify system-administration tasks and offer interoperating, compatible utilities and applications. Kfm is at the core of the system, as it is intended to be left running in the background and serves as the help-viewer for all of the KDE components. Kfm is also a file-manager (icon-based, with some resemblance to xfm and moxfm) and serves creditably as a web-browser. Kfm is an impressive application, and in itself justifies trying out KDE. Many of the other applications are replacements for programs which most Linux users probably already have, and would only be desirable if a complete KDE system is the goal. KDE has its own window-manager, kwm, which had some display faults on my system. Due to these video artifacts I didn't use it much, but it did appear stylish and well-designed. It seems that these display bugs don't show up on most systems; I suspect that it depends upon the video card and X server in use.

A new Linux user (especially someone accustomed to Windows or Macintosh systems) might appreciate the relative ease of configuration and use which KDE offers. In a sense, KDE extends the scope of the tasks traditional distributions perform. One drawback might be the very comfort of the KDE environment itself; the various system-administration tasks outside of KDE's abilities might seem too daunting or unapproachable without a KDE interface. This won't be seen as a drawback by prospective users who lack the fascination with internals and configuration which in the past has typified Linux users.

Some KDE users have reported that they find the system usable and useful, but with my particular setup this wasn't the case. I have to say, though, that my extensively customized Linux installation seems perfectly satisfactory as is, and I probably lack the motivation to spend the time learning to adapt KDE to my needs. If KDE had existed back when I first booted up a Slackware system some years ago, who knows...

GNOME

Miguel de Icaza (head of the Midnight Commander development group) also seems to be at the helm of the new GNOME development project, which has goals similar to those of KDE, with one difference: the project is composed completely of GNU-style free software. This project is based upon the GTK toolkit, the free successor to Motif in the Gimp development efforts. The project arose as a direct response to the KDE project, and the GNOME developers have borrowed some code from KDE for a few of the applets. As of late December (version 0.10) GNOME as a whole isn't really suitable for actual use, but several of the applets function well and the future looks bright for the project. Miguel de Icaza is in the process of porting the Midnight Commander file-manager to GTK, which will let it fit in with the remainder of the GNOME applications.

The Panel applet, written primarily by Federico Mena Quintero, is an icon-bar and program-launcher which locates itself at the bottom edge of the screen. It features cascading menus which could be a substitute for the usual window-manager root menus. Most of the GNOME applets have been included in the default menu of Panel, allowing this applet to serve as an entry point to the GNOME installation.
It takes a little fiddling around to get the hang of using Panel, so don't give up if at first glance it seems like nothing is working. The provided applets include a desktop-manager (which in part serves as an interface to the Xlockmore screensaver), CroMagnon (an interface to the crontab utility), an audio mixer, an interface to the elaborate LinuxConf configuration manager, several nicely-done games (some of which were adapted from KDE), and several others. One major difference between GNOME and KDE is that KDE includes a window-manager, whereas GNOME doesn't, and is designed to cooperate with a user's current window-manager. This may make GNOME more appealing to seasoned users who have extensively customized their window-manager resource files.

Conclusion

As I write this only the source code is available for GNOME 0.10, and it's tricky to compile. An Intel-Linux binary archive of the 0.9 release is available from this site, but I would recommend waiting a while for either an updated binary release or an easier-to-build source release. The developers are hard at work these days (judging by their mailing-list postings) and I think that, given time, something both interesting and usable will appear.

Though KDE is closer to being "finished" (if such a state even exists in the realm of software), it still has a ways to go. Development is proceeding rapidly, and I imagine that sometime this year a more polished release will become available.

The fate of a free-software project is interesting because of its inherent unpredictability. Anyone can start one, but whether it comes to fruition or withers on the vine is up to the inscrutable software gods. The timing may be just right (i.e., it addresses many users' (and developers'!) needs), but attracting enough programmers with the time and inclination to become involved just can't be forced or foretold. These two projects seem to have attained that essential momentum, and hopefully we shall see them evolve further.

Last modified: Sun 4 Jan 1998
_________________________________________________________________

Copyright © 1998, Larry Ayers
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

Updates and Correspondence

by Larry Ayers
_________________________________________________________________

Here in north-east Missouri we are currently afflicted with a heavy, wet snowfall, so the time is ripe to write a page updating some of my past Gazette articles, along with some e-mail which has come my way.

More Text-Processing

I've received quite a few messages concerning my article in LG #22, Word-Processing vs. Text-Processing. Eric Marsden sent this message concerning the Lout text-formatting system:

From: Marsden Eric
To: layers@marktwain.net
Subject: [LG] Lout-mode for Emacs

Hello Mr Ayers,

In the October Linux Gazette you wrote an article comparing different document formatters, and mentioned Lout in passing. I noticed that you regretted the lack of an Emacs mode for Lout code. I agreed with you, and set out to write one. Indeed, there are now two Emacs modes, since another Lout user had also set about writing one, independently of my effort. Both are available on my site, where you'll also find the Lout FAQ/HOWTO.
You were right to mention that "The Lout system is still maintained and developed"; Jeff Kingston is quite receptive to suggestions and requests for new features in the formatter, which is far from being the case for TeX (now frozen) and LaTeX (whose development group seems very closed). I agree that Lout is very much less widely used than LaTeX, which is a definite disadvantage. I believe that things will change over time, in particular given Lout's very strong capabilities for mixing graphics and text. I might even write an article for the Linux Gazette myself to spread the word.

Eric Marsden
emarsden @ mail.dotcom.fr
It's elephants all the way down.

[LA: sounds like a Terry Pratchett quote!]

I tried out the XEmacs mode found at the above link, and though it's still under development, it works well here and is well worth investigating.
_________________________________________________________________

Here's another interesting message:

From: oliver@fritz.co.traverse.com (Christopher Oliver)
Subject: TeX/GROFF
To: layers@marktwain.net

I noticed in your defense of mark-up systems, you didn't touch on issues regarding quality of output. If you write on this in the future, you might find "What has WYSIWYG done to us?" by typographer Conrad Taylor quite interesting. I think he makes quite a case that word processors aren't suitable if the user cares about the quality of the typesetting. I think there is a lot of good thinking there for folk involved with document production at any level. This link will take you to the article.

Regards,

Christopher Oliver            Traverse Communications
Systems Coordinator           223 Grandview Pkwy, Suite 108
oliver@traverse.com           Traverse City, Michigan, 49684
_________________________________________________________________

Another message concerning Emacs and LaTeX:

To: layers@marktwain.net
Subject: Word Processing and Text Processing
From: Peter S Galbraith

Nice summary!

> Emacs provides excellent syntax highlighting for LaTeX files,
> which greatly improves their readability.

I also wrote a better syntax-highlighting Emacs package for LaTeX files, called font-latex.el. I think I should make it part of Emacs. I doubt you knew about it, even though it's distributed as a contributed package with AUC-TeX.

> Xtem has one feature which is useful for LaTeX beginners:
> on-line syntax help-files for the various LaTeX commands.

There are a few add-on packages to AUC-TeX that do this by interfacing with latex info files:

* http://www.ifi.uio.no/~jensthi/word-help.el
* ftp://ftp.phys.ocean.dal.ca/users/rhogee/elisp/func-doc.el

Peter Galbraith, research scientist
Maurice-Lamontagne Institute, Department of Fisheries and Oceans Canada

I confess I'd seen Mr. Galbraith's font-latex.el in the AUC-TeX distribution files, but didn't realize that it is an extra package which must be explicitly loaded. The version included with AUC-TeX is an older one; I recommend downloading the latest revision from this FTP site. Along with font-latex.el, a file called font-latex.tex is available at the site. This file is a sort of demo for font-latex.el, illustrating its capabilities. Some people won't like the very colorful approach to highlighting this LISP file provides, but with judicious selection of colors (now so much easier using the Customize facility!) the readability of TeX files is much enhanced.
_________________________________________________________________

I recently happened across a message posted to the comp.editors newsgroup which eloquently expresses a plea which I'm sure will resonate with many Linux users:

From: Des Small
Newsgroups: comp.editors
Subject: Re: writing an editor
Date: 17 Dec 1997 22:09:22 +0000
Organization: Southampton University
X-Newsreader: Gnus v5.5/Emacs 20.2

(discussion of editor internals snipped)

Given that the Unix world has already more editors than could possibly be required, and a dearth of even modest word-processing type apps, I would urge you at least to consider allowing multiple fonts in one document. You don't have to rewrite Word; we don't even have a competitor for Notepad (simple wysiwyg-ish thing with RTF output). And no, XEmacs does not count.

(more discussion snipped)

But I really, really think that this particular wheel has been reinvented often enough. What I want is a toy word processor/editor, like wordpad on steroids, say, which could be used for writing simple letters to Auntie (for the Windoze crowd), and to make programming more pleasant (different faces for different bits of syntax), and a nice HTML mode. XML (a sort of SGML-lite) is coming, and its facilities for structured documents could make it a snap to develop literate programming environments, without locking you in to one set of tools, or using (eek!) embedded TeX. You could have DTD's for almost every application, and finally supersede the (admittedly powerful) Unix "everything is a stream of bytes" philosophy with a universally understood set of conventions for structured documents. Word processors could use XML for storage! You could share files across platforms! You could even still use Emacs or vi or joe, if you wanted to!

I know XEmacs can (probably) do all this, but I want a small, fast, cute version, that doesn't eat all my RAM. I want to use proportional fonts to edit text (which Sam and Wily allow) without changing my entire world-view (which they tend to insist on), and I don't care if it won't run on a vt100. It's 1997 for heaven's sake! I have a windowing system! (Admittedly, it's only X, but it's still a window system!)

At the moment, my desktop has a bunch of terminal emulators, and a couple of GNU Emacs frames open. These are powerful tools, but they hardly constitute a rich GUI environment that would make me the envy of all my friends (they lust after the stability of my system, but they run away screaming when confronted with the tools. And they are neither stupid nor technophobic). I don't want an ultra-heavy power tool, and I don't care about slow serial lines: I already have tools for those jobs. I just want a sprinkling of nice fonts, and an interface which doesn't scare off Windows or Mac users. Context-sensitive pop-up menus might be nice, and a reasonable (in terms of looks and functionality) menu bar, too. Real-time spell-checking along the lines of Word is quite a nice idea, too -- spellchecking email and Usenet posts is overkill if it takes any effort at all. (Even M-x ispell-buffer is effort: I have to remember to bother.)

Notepad on steroids, is all I want, really. And it has to be free (as in freedom-not-price, that is). Does anyone have one, or do I really have to roll my own? And does anyone have any info on (or pointers to) suitable data structures for such a thing?

Standard Unix tools are very powerful, and for some things I find them indispensable. But, for me at least, vt100 compatibility is a legacy issue.
Sometimes I use remote systems, and then I telnet in and use vi, and I'm happy to do so. But most of the time I'm on my own Linux box, with 16Mb and clock cycles to burn. I can afford some luxuries, but I don't want a whole XEmacs. Is this really so weird, this late in computing history? Or did I swear a vow of allegiance to xterms and non-proportional fonts when I signed on as a Unix user? Am I the only person who finds the current situation imperfect? Do I have to wait for GNUstep to combine the robustness and programmability of Unix (which I love) with a halfway-sane GUI-fied environment for those "I want to use a tool but I haven't got a month spare to master the interface" moments?

Sorry to rant on like that, but I feel strongly that the many things Unix does well should not (and in the eyes of the Heathen do not) excuse its barely-half-hearted embracing of the possibilities of the new-fangled (ahem...) bit-mapped screen.

Des, who sometimes feels trapped in a 1980's timewarp.

After reading the above posting, I began to think about editors which can display proportional fonts. Off-hand I can think of three which offer this option: XEmacs, the semi-commercial Edith editor, and NEdit. All three display PostScript fonts well only if bitmapped versions are available, which limits you to the fonts (such as Times Roman and New Century Schoolbook) supplied with X. XEmacs will attempt to scale other Type 1 fonts, but they are not anti-aliased and look unsightly. Edith and NEdit don't even try to scale the fonts, and only the 12-point size is offered in the font dialogs. This isn't the fault of the editors I've mentioned; they just do what X will allow them to do. There are several Type 1 font rasterizers under development for Linux, and perhaps this deficiency in the X environment will eventually be addressed. This could be helpful in attracting Windows users to Linux, as Win95 and NT, for all their faults and annoyances, do display scalable fonts well. If anyone knows of a technique for generating bitmapped fonts in various sizes from standard Type 1 PostScript fonts, I'd love to hear about it!

Both the Gimp and SDCorp's WordPerfect 7 port will scale PostScript fonts flawlessly; I assume they have their own internal font-display engines. I imagine that StarOffice and Applix can do the same.
_________________________________________________________________

A floridly-worded letter from Harry Baecker was printed in issue 23 of LG, to which I felt compelled to respond: he seems to think my opinions on text-processing are "a ritual obeisance to received wisdom" and a "requisite Unixworld denigration of word-processors and their users". On the contrary, I have little interaction with Unix users, and my expressed opinions are a direct result of my experiences using both word-processors and text-formatting systems. So there!

The folks at SDCorp in Utah have made a welcome change to the licensing scheme used in their Linux port of WordPerfect 7. When the port was first released, a licensing daemon had to be running in order to run the word-processor, which would only work on the original machine on which WP was installed. Now the daemon isn't necessary, and the application isn't limited to one machine. Rumor has it that WordPerfect 8 will be ported to Linux sometime next year if sufficient interest is shown in the Linux community.
_________________________________________________________________

New Editor Versions

There have been new releases of several editors lately.
Those of you who are of the VI persuasion will be glad to hear of new versions of all three of the actively-maintained VI clones.

Vim is probably the most featureful of the VI-style editors. Judging by newsgroup postings, it may be the most popular as well. With the release of vim-5.0s, vim 5 has finally reached a beta rather than alpha state. This revision has a really well-implemented syntax-highlighting system for many programming and shell-script languages, and it's not too difficult to adapt to new file-types and languages. The down-side is that vim is growing larger, and is beginning to lose the quickness and low memory-usage that have been a hallmark of VI-style editors. Of course, memory is cheap and machines are more powerful these days, so this isn't as much of a factor as it used to be.

I tend to use XEmacs as my primary editor, due to its excellent programming modes, with vile/xvile as an adjunct for quick editing tasks, such as config files and e-mail messages. Vile 7.3 was released recently, and it is in my opinion the ideal vi-style editor for an Emacs user. It incorporates several of the most common Emacs keystrokes, such as control-x-1 and control-x-0, which softens the transition between the two.

Recently Paul Fox, who several years ago modified the Microemacs code until it became the first version of the vile vi-like editor (sounds improbable, but it's true!), posted an interesting response in the comp.editors newsgroup. He was responding to a query concerning the differences between vile and vim:

From: pgf@foxharp.boston.ma.us (Paul G. Fox)
Newsgroups: comp.editors
Subject: Re: Vile 7.3 Announcement
Date: 29 Dec 1997 00:21:07 GMT

brian moore (bem@news.cmc.net) wrote:
: > How about portability? I use Vim under NT at work, Linux at home.
:
: vile runs on both, and OS/2 and a bunch of other stuff. Hell, it even
: works on Solaris! :)

and VMS and Win95. i'm not sure which i've used less. :-)

as the original vile author, i'll chime in here, but a) i'm biased :-), and b) i've never used vim much, except when looking at a specific feature implementation.

vile's design goal has always been a little different than that of the other clones (and i mean elvis, vim, and nvi here -- i don't know enough about any of the others). vile has never really attempted to be a "clone" at all, though most people find it close enough. i started it because in 1990 i wanted to be able to edit multiple files in multiple windows, i had been using vi for 10 years already, and the sources to Micro-EMACS came floating past my newsreader at a job where i had too much time on my hands. i started by changing the uemacs keymaps in the obvious way, and ran full-tilt into the "hey! where's 'insert' mode gonna come from?!" problem. so i hacked a little more, and hacked a little more, and eventually released in '91 or '92. (starting soon thereafter, major version numbers tracked the year of release: 7.3 was the third release in '97. i don't know what tom is going to do about the Y2K problem. ;-)

but my goal has always been to preserve finger-feel, as opposed to display visuals, and, selfishly, to preserve finger-feel most for the commands i use. :-) i've never used ex mode much, so vile doesn't have much of an ex mode. actually, it has quite an amazing ex mode, that works very well -- it just looks really odd, and a couple of commands ("t", and "m", which are beyond the scope of the current parser) are missing.
for the same reasons, it also won't fully parse existing .exrc files, since i don't really think that's very important -- it does simple ones, but more sophisticated ones need some tweaking. when you toss vile's built-in command/macro language, you quickly forget you ever cared about .exrc.

just for bragging rights, i think vile had X11 support earlier than the others, thanks to work by Dave Lemke from NCD, and Kevin Buettner who made it really functional. i take no credit -- i never use xvile. on the other hand, vile wasn't real useful under DOS, since it doesn't use a swap file, and the memory limits got in the way pretty quickly. (of course, this isn't a practical problem under real OSes.)

unfortunately, since none of the "vi rewrite" authors were collaborating much in the early years (if we knew about one another at all :-), i think we all made different choices for the extension commands. vile tends to follow an emacs-like model, and uses ^X and ^A as built-in (but rebindable, of course) command prefixes, and indeed uses emacs bindings directly for some commands: like ^X-2 to split a window in half. another typical difference: i insist that ":q" should quit the editor, and not just close the current window. both nvi and vim got this wrong, imho. another one: vile does infinite undo the way nvi does, and not the way vim does. small differences, but ones that can make a user prefer one over another.

as someone else said in this thread -- if you're choosing a new version of vi, you owe it to yourself to try them all, for half an hour of real work with each, and make your choice based on that. vim has lots and lots of support, and having this nice "comp.editors.vim" newsgroup helps :-), but the others have things to offer too, and you might like one of the others better. i do. ;-)

btw, i'm only peripherally involved in vile maintenance anymore -- Tom Dickey does most of it (thanks tom) these days -- i just run the mailing lists. current versions can be had from:
ftp://ftp.clark.net/pub/dickey/vile

paul
paul fox, pgf@foxharp.boston.ma.us (arlington, ma, where it's 23.5 degrees)

Steve Kirkendall's Elvis editor has also been updated recently. The X version, like vim's, has good syntax-highlighting support, and also like vim, the Windows (95/NT) version is well-supported, for those users who need to work in that environment. I confess I haven't spent as much time with elvis as I have with vile and vim; perhaps another user might care to contribute?

XEmacs development continues apace. Versions 19.16 and 20.3 are available (from the home site and its mirrors) but one of the most interesting developments is taking place in the 20.5 series of betas. A common complaint about XEmacs is its bulk and lengthy loading time. A full distribution is huge, with many bundled packages for which most users have little use. Trying out a new version was not to be undertaken lightly, as the download time was long and a large block of disk space needed to be available for compilation. The XEmacs team is in the process of unbundling packages, which are now available individually. The base source archive is now around eight megabytes, while the compiled LISP (*.elc) archive is only one and one-third megabytes. The packages are independent, and when this beta of XEmacs is compiled it finds whichever packages you have installed and loads their documentation, menu-items, and keybindings.
The package subdirectory is independent of the version-specific binary and LISP directories, so unchanged package files need not be downloaded when upgrading to a new XEmacs version. The easiest way to try this out is to compile the base source archive without any packages installed and see what doesn't work. Then packages can be incrementally installed until the desired functions are once again available. After a new package has been unpacked, the /lib-src/DOC file should be deleted. Run make again and the new package should be found and incorporated into the editor. In other words, it's a good idea to keep the built source-tree on your disk until you've generated an XEmacs which meets your particular needs. Of the subset of the available packages which I installed, only the Ediff package initially failed to work, but after some experimentation I found that the line (require 'ediff-hook) in the XEmacs init file caused it to be loaded. The release version of 20.5 should be available in the late spring of 1998.

A new version of NEdit has been released recently. NEdit has become popular with programmers and general users due to its nicely-designed interface, equally-useful mouse and keyboard control, and relatively small size. Version 5.0 adds very configurable (via dialog-boxes) syntax-highlighting and a new macro language. It's one of the easiest editors to learn, and it's nearly as powerful as Emacs without being as large and memory-hungry. If you like mouse-based editing, the ability to highlight a selection and drag it to another location in the file will be appealing. This function isn't found, as far as I know, in any other editor available for Linux. NEdit is strictly X-based; if you like to edit in a console session (admittedly a minority view these days) this editor may not be to your taste. There are now two versions available: the main version is maintained by Mark Edel at the Fermi National Accelerator Laboratory, and both source and binaries are available from ftp.fnal.gov, the home site. Max Vohlken has made a number of patches to NEdit 5 which for various reasons haven't been accepted into the main distribution. He has been packaging his patched version into an alternate release, and it can also be obtained from the above site, in the /KITS/pub/nedit/v5_0/contrib/max/5.0 directory. Both versions have adherents, it seems.
_________________________________________________________________

File-Managers

Christian Bolik recently released the first new version of the desktop- and file-manager TkDesk in nearly a year. Version 1.0b5 has some nice new features (check out the Be-style icon-bar!), and is well worth looking into. This beta is supposed to be the last; as soon as [incr tcl] (an object-oriented extension of Tcl) is updated to work with Tcl8.0, TkDesk will also be able to make use of the latest Tcl/Tk releases. The TkDesk web-page has recently become inaccessible, but the new version is still in Sunsite's /pub/Linux/Incoming directory.

Henrik Harmsen's FileRunner is now (with version 2.4.1) a GNU General Public License application, which means that it will be more easily included in distributions such as Red Hat and Debian. FileRunner is small, quick, and efficient, and if you have installed Tcl/Tk 8.0 you should give it a try. It can be obtained from this site.

The Midnight Commander, the versatile text-mode file-and-archive-manager, is still under continual development.
Version 4.1.19 is the latest beta, and although none of the X versions are yet ready for prime time, these recent betas are well worth installing, as useful new features are continually added. Source (which generally will compile without a hitch) is available from this site.

I must confess that the numerous icon-based file-managers, many of which have been released or updated lately, don't really suit my needs. Several are based on the venerable xfm manager, or its (apparently abandoned) successor moxfm. Surely one of you readers out there prefers this type of file-manager? Why not write an article or review for the Gazette? There's room here for all sorts of views, after all!

Xlock and XScreensaver

Jamie Zawinski is a programmer and hacker currently working for Netscape. He has written several useful free software programs, including xkeycaps and several of the screensaver modes included with both Xlockmore and XScreenSaver. He also was involved in the early development of Lucid Emacs (an ancestor of XEmacs), and has contributed to the development of many Emacs packages including the font-lock highlighting mode. I recently received this message from him:

Date: Sat, 29 Nov 1997 20:20:12 -0800
From: Jamie Zawinski
Organization: Netscape Communications Corporation, Mozilla Division
X-Mailer: Mozilla 3.02 (X11; U; IRIX 6.2 IP22)
To: gazette@ssc.com
CC: layers@marktwain.net
Subject: xlockmore and xscreensaver

Dear Linux Gazette folks,

I saw the article on Xlockmore by Larry Ayers in issue 18 of the Linux Gazette, and I was surprised that it didn't mention, even in passing, my XScreenSaver program! Allow me to engage in a bit of advocacy.

Back in 1991, before Xlockmore existed, there was only Xlock. Xlock was not a screensaver: it was only a locker. There was no way to make it activate itself automatically when the console became idle, nor was there any way to avoid having it lock the screen: that is, there was no way to have it turn off when the mouse moved. So, I wrote XScreenSaver.

XScreenSaver is superior to Xlockmore in a number of ways. The most important way, of course, is that it is actually a *screen saver*. Although Xlockmore can be configured to not require a password, it still doesn't have the ability to turn on when the machine is idle; for that you have to use an external program that launches and kills it.

The second way in which XScreenSaver is better is that it takes a server/client approach: the "xscreensaver" program itself knows how to detect idleness, and to lock the screen. The graphics hacks are not built in: the beauty of XScreenSaver is that any program which can draw on the root window can be launched by XScreenSaver for use as a graphics hack! This has several benefits:

* You don't have to recompile and reinstall xscreensaver to install a new graphics hack: all you have to do is change your X resources, and issue one command.

* Since programs don't have to be written *specifically* to run inside the xscreensaver framework, there are many more potential graphics hacks available. They don't even need to be written in the same language: they just have to draw on the root. Thus, it's easier to write programs to work with XScreenSaver than with Xlock or Xlockmore, because they don't have to follow a complex set of idiosyncratic rules on how to structure the code: the only rule is, "draw on the root."

* By separating the task out into two processes, the whole system becomes more robust: the memory protection provided by the OS serves us well, in that, if one particular graphics hack has a bug (leaks memory, corrupts memory, gets a floating-point exception, etc.) the integrity of the screen saver itself is not compromised. The offending hack may exit, but the screen saver itself is still running, and the screen is still blackened (or locked.) Also, since a screen saver is, by its nature, a very long-running background task, even a small leak would build up over time. By arranging for the graphics hacks themselves to be relatively short-running tasks, this problem goes away.

* On some systems, only programs which are running as root can check passwords. Therefore, on such systems, a screen locker would need to be a setuid-root program. Obviously one needs to be very careful about what programs one allows out of the security sandbox; a conscientious sysadmin would want to examine such a program very carefully before installing it. The XScreenSaver model allows this, by having the privileged part of the program (the saver itself) be small, while letting the larger part (the graphics hacks) remain unprivileged.
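[LA: To make the "draw on the root" rule concrete, here is a minimal sketch of such a hack -- my illustration, not part of Jamie's letter or of the XScreenSaver sources. It assumes nothing beyond Xlib (compile with -lX11) and simply scribbles random rectangles on the root window; any long-running program of this general shape can be listed in xscreensaver's resources as a graphics hack.]

/* scribble.c -- draw random rectangles on the root window */
#include <stdlib.h>
#include <unistd.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy;
    Window root;
    GC gc;
    int w, h;

    dpy = XOpenDisplay(NULL);            /* connect to the X server */
    if (dpy == NULL)
        return 1;
    root = DefaultRootWindow(dpy);       /* the root window */
    gc = XCreateGC(dpy, root, 0, NULL);  /* default graphics context */
    w = DisplayWidth(dpy, DefaultScreen(dpy));
    h = DisplayHeight(dpy, DefaultScreen(dpy));

    for (;;) {
        /* a random pixel value at a random spot */
        XSetForeground(dpy, gc, (unsigned long) rand());
        XFillRectangle(dpy, root, gc, rand() % w, rand() % h, 40, 40);
        XFlush(dpy);                     /* push the request out now */
        usleep(50000);                   /* about twenty per second */
    }
}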
XScreenSaver also includes a nice Demo Mode that lets the user interactively experiment with the currently-configured graphics hacks. (Until recently, most Linux users wouldn't have been able to take advantage of this, since the code had required Motif; but that is now configurable, and demo mode works with Athena widgets too -- since release 1.27 back in January 1997.)

If you have a system with more than one monitor, XScreenSaver can save them both at once: a different graphics hack will run on each, and if the two monitors have different depths (for example, if one is monochrome and the other color) they can be configured to choose their screenhacks from different lists, so that each monitor is running the hacks that look best on it.

Most of the xlockmore hacks (the ones that I liked, anyway, including the GL modes) are included with the XScreenSaver distribution; the only change made was to extract the various display modes from the monolithic XLock executable, and turn them into standalone root-window-oriented programs. However, it's possible to have XScreenSaver run xlock itself, as just another one of its many modes -- the best of both worlds!

Do check it out: the canonical web page is http://people.netscape.com/jwz/xscreensaver, which includes screen shots and descriptions of most of the included graphics hacks. The latest version, as of this writing, is 2.12. At last count, it came with 64 different graphics hacks. Roughly a third of these were written by me; the others were graciously contributed by others. And, of course, it's all free, under the usual X-Consortium-style copyright.

You also might enjoy my philosophical ramblings on the nature of screensavers, at http://people.netscape.com/jwz/gruntle/savers.html.

Jamie Zawinski
http://people.netscape.com/jwz/
about:jwz
_________________________________________________________________

The Gimp

Since I last wrote about the GNU Image Manipulation Program (in LG #18) both the GTK toolkit (the underlying programming framework of the Gimp) and the Gimp itself have undergone several revisions. The long-awaited version 1.0 still hasn't been released, but may have been by the time you read this. There has been talk of a release by Christmas. There's really no reason to wait for a non-beta release, though, as the version available as of mid-December is very useful and impressive.
Some incredible new plug-ins are bundled with version 0.99.16, and general stability has been much improved. A useful feature for new users has been added: at start-up a small window appears which contains a different usage tip at each invocation. This can be disabled at any time. One caveat: be sure to obtain the corresponding version of GTK (in this case gtk+-0.99.0), as unlike some earlier versions it is no longer bundled with the Gimp. Here are brief descriptions of some of the new plug-ins:

Iwarp

Some plug-ins are very useful, but tend to be taken for granted, such as the various blurring or image-format modules. Then there are plug-ins which, though not immediately useful, are fascinating due to the unexpected and interesting changes they can wreak upon an unsuspecting image. The Iwarp plug-in is one example. I admit I've never used Adobe's Photoshop and the many commercial plug-ins available for it. Therefore my initial reaction (composed of equal parts of awe and wonder) to this plug-in may seem a bit naive, as there are several commercial extenders for Photoshop with similar capabilities. Be that as it may, clicking the mouse pointer on an image thumbnail and actually stirring and shifting pixels interactively is quite amazing. A screenshot will give you an idea of the nature of the interface and the options available:

[Screenshot: the Iwarp plug-in]

The plug-in offers a choice of six different methods of changing an area of pixels in an image: clock-wise rotation, counter-clockwise rotation, growing, moving, shrinking, or removing. Playing around with this plug-in will give you a good idea of its capabilities. Try setting it to one of the rotation options, then hold the first mouse button down as you slowly move it across an image. It's as if a miniature hurricane is travelling across the image, leaving spirals of distortion in its wake. I imagine that a drawing tablet would be handy to use with this plug-in, allowing more precise control than that provided by a mouse.

Like icing on a cake, Iwarp has animation capabilities built-in. Once you have warped an image to your satisfaction, select the animation tab in the plug-in's window; this lets you choose how many frames to create, and whether to make them cycle repeatedly (the ping-pong effect) or just play from start to finish. Each frame is saved in a separate layer, and there is a convenient plug-in called Animation Playback which will let you view it. So what's the use of this? You can use it to create animated GIF files for web-pages, or perhaps tweak a small area of an image, but mostly it is just a lot of fun!

Flame and Fuse

Gimp 0.99.16 must be something of a milestone for Scott Draves, creator of the Flame and Fuse plug-ins (as well as the Bomb interactive screen hack). Though successive versions of the Flame plug-in have been available for some months now, the maintainers of the Gimp have been reluctant to include it in the Gimp distribution due to certain licensing restrictions which Scott had placed on the program and its output. Recently Scott relaxed those restrictions and Flame is now a part of the Gimp.

Flame is similar in some ways to the IFS-Explorer plug-in, in that it is based on Iterated Function Systems fractal algorithms, but the approach and interface are different. Rather than manipulating three triangles, which is the traditional IFS interface used by IFS Explorer, Flame does much of its work "behind the scenes", though there are still several options available to the user.
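As an aside, the idea behind an Iterated Function System is easier to grasp from a toy program than from either plug-in's interface. Here is a minimal sketch (my illustration, not the Gimp's actual code) of the "chaos game" that both plug-ins elaborate on: repeatedly apply a randomly-chosen affine map to a point and plot where it lands. The coefficients below are the classic Sierpinski-triangle set.

/* ifs.c -- render a tiny IFS fractal as ASCII art */
#include <stdio.h>
#include <stdlib.h>

#define W 60
#define H 30

int main(void)
{
    static char grid[H][W];          /* zero-initialized */
    double x = 0.0, y = 0.0;
    int i, r, px, py;

    for (i = 0; i < 100000; i++) {
        r = rand() % 3;              /* pick one of the three maps */
        x = x / 2 + (r == 1 ? 0.5 : 0.0) + (r == 2 ? 0.25 : 0.0);
        y = y / 2 + (r == 2 ? 0.5 : 0.0);
        if (i > 20) {                /* skip the transient points */
            px = (int)(x * (W - 1));
            py = (int)(y * (H - 1));
            grid[py][px] = '*';
        }
    }
    for (py = H - 1; py >= 0; py--) {   /* y increases upward */
        for (px = 0; px < W; px++)
            putchar(grid[py][px] ? '*' : ' ');
        putchar('\n');
    }
    return 0;
}

Flame's extra richness comes, roughly speaking, from using many more maps (nonlinear ones among them) and from coloring each point according to which maps it has passed through.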
One nice feature is the built-in support for the Gimp's color gradients. Several of Scott Draves' own gradients are included in the plug-in, but any others can be entered into an entry-field in the plug-in window. Here's a screenshot of a typical window:

[Screenshot: Flame Window]

Flame can generate some really intriguing images; some of Scott Draves' examples can be seen on this web page. If you are using an earlier Gimp version, a binary of the plug-in has been thoughtfully included in the Flame archive file available from the page. Just drop the binary into your Gimp plug-in directory; the next time you start the Gimp, Flame should be accessible from the "Render" sub-menu. It's interesting to compare the results obtainable from the Flame and IFS-Explorer plug-ins, as they both are based on the same mathematical principles but have such different methods of implementing them.

Another plug-in from Scott Draves is called Fuse. This one is used to disassemble a pair of images and merge them into one, using what seems to be some sort of AI trial-and-error process. A preview window shows the various attempts in real-time, which can be interesting to watch. The process is time-consuming, as the plug-in seems to be repeatedly dissatisfied with its results, and will back up to a previous branch in the process and start anew. Scott Draves has achieved some interesting results with Fuse, which can be viewed on this web-page. It would take some practice to gain proficiency with this plug-in; this is one of those applications which may require reading the source code to get a feel for just what is going on behind the scenes. In other words, no on-line help!

Illusion

The Illusion plug-in is another interesting one to experiment with. It was written by Hirotsuna Mizuno, a Japanese programmer. What this one does is difficult to describe; it seems to duplicate an image and tile a new one with "faded-out" copies of the original. This results in a surrealistic sort of pattern, and could be useful in creating watermark-like backgrounds. A before-and-after pair of screenshots should convey an idea of its capabilities:

[Screenshots: Illusion #1, Illusion #2]
_________________________________________________________________

Map-Object

The Map-Object plug-in, though included with 0.99.15, unfortunately wasn't supplied with the 0.99.16 release. It is being developed by Tom Bech, of the University of Bergen in Norway. This plug-in will map or project an image onto the surface of either a sphere or a skewed plane, with full control of many parameters such as the light source's orientation and color. The background can easily be set to be transparent, which makes the output useful in HTML files. If you are still using version 0.99.15 of the Gimp the current version is functional. Perhaps by the time Gimp 1.0 is released Mr. Bech will find time to adapt this ingenious plug-in to the rapidly-changing Gimp/GTK environment.

Just as I was finishing this article Gimp 0.99.17 was released, and Map-Object has been reinstated into the distribution. A new version of GTK was released concurrently; visit ftp.gimp.org if you'd like to obtain the source. Debian and Red Hat packages of these releases are also available there.

This is just a sampling of the more than sixty plug-ins bundled with the 0.99.16 release; more are appearing regularly, and when the 1.0 release is completed (and development of the Gimp moves on to a new beta version) the resulting "fixed target" should make developing new plug-ins easier for programmers.
Last modified: Sun 4 Jan 1998
_________________________________________________________________

Copyright © 1998, Larry Ayers
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

[A reader requested this article by Gary Moore from the April '97 issue of Linux Journal. --Editor]
_________________________________________________________________

Product Review: Applixware

by Gary Moore
_________________________________________________________________

* Product: Applixware 4.2 For Linux
* Publisher: Red Hat Software, Inc.
* Phone: 800 454-5502
* Fax: 203 454-2582
* WWW: http://www.redhat.com/
* Price: USD $495, student price USD $79.95
_________________________________________________________________

Applixware is an excellent "office suite" that may open doors to wider use of Linux. Applixware features a word processor, a spreadsheet, a presentation graphics tool, a drawing tool, an e-mail client, database connectivity and an object-oriented application builder. For some time, this professional set of programs has been available for other Unix platforms, including HP-UX, Solaris, AIX and Digital Unix, and now Applixware is available for Intel-compatible Linux machines and Microsoft Windows; at the time of this writing, the NT version is out and the 95 version is in beta testing.

[Screenshot: An Applixware On-Line Book]

If you install Applixware on your system, you'll notice an impact on system resources. A complete installation with the included Red Hat RPM files requires 210MB--if that's more than you have available, you can make a partial installation from a live, "unpacked" directory on the CD-ROM. In fact, Applixware can be launched and used directly from the CD-ROM, though this makes program operation a little leisurely. I was using Red Hat Linux 4.0 when I reviewed Applixware, but the software should work fine on other distributions, and installation instructions are included. The CD-ROM's speed may not seem bad if you're using a 486DX25, on which Applixware is fast enough to be usable but probably too slow for a production environment; I found my meager CPU power to be a real problem only when I started using the graphics tools.

This is not an application for low-memory systems. As cheap as RAM is today, this shouldn't be too painful a state to rectify. With 16MB of RAM, the word processor was snappy enough with X and the Afterstep window manager running, but having much else loaded caused so much paging of virtual memory I needed something to read while waiting.

[Screenshot: Applix Words]

Not much reading material comes with Applix--at least, not on paper. Back when it was known as Asterix and also in version 3.x of Applix, there was a manual for each module, but either with the Linux version or with the later releases, virtually all documentation is in the "On-Line Books". Use the on-line tutorials if you're new to the system, or the on-line help if you just need a reference.

Applix Words is a full-featured word processor with everything you'd expect to find in a modern product. That is, unless you're looking to do something which really should be done using desktop publishing software.
By the way, one thing you never want to do with it is embed, oh, 80 or so large, 256-color GIFs in a single document--at somewhere around 8MB, application behavior gets a bit wacky. Linking is much, much better. Words gives you tables, borders, shading, embedded equations and calculations, conditional text and cross-referencing, international dictionaries, thesauri and a multi-font, multi-size WYSIWYG display. You can rely on multiple undo and redo, and when you're done, you can save PostScript and PCL printer files or send them directly to a networked printer.

[Screenshot: Applix Graphics]

HTML is easy with the Applix HTML authoring tool. Documents can be imported from Applix or another popular word processor using one of the format filters or created from scratch with the same ease as a word processing document. Clip art, GIFs and linked or embedded Applix Graphics images are converted seamlessly. Applix Spreadsheets documents and queries from the database interface application, Data, can be included, too. Tables, colors, and more than 25 standard HTML styles are all under your control.

Applix Graphics is a terrific drawing and presentation graphics tool. At your disposal are user-definable fill patterns, various brush styles, shearing, drop shadows, incremental zoom, rotating, scaling, color pixel editing and text wrapping, to name a few. Grid snap, guide lines, rulers, and coordinates help create precise and complex drawings quickly. I found graphics as easy to produce with Graphics as with PowerPoint.

The good news continues with Applix Spreadsheets, which has calculation-based attributes, 3D charts, named views and dynamic links to objects in other Applixware applications. When your linked data from elsewhere changes, it is automatically updated in your spreadsheet. There are live links to a relational database through Applix Data, goal seeking, drag-and-drop, projection tables and background recalculation. You can import those old Lotus 1-2-3 and Excel spreadsheets, too.

You might not think you need another mail client, but check out Applix Mail. When you receive mail, a dialog box pops up with the sender name and subject, giving you the options "Read Now", "Read Later", and "Help". You can attach Applix files to your mail messages and upon receipt, launch the appropriate Applix tool for viewing. Mail can be marked "Urgent", marked with a "Reply by" date, and also sent by "certified" mail, giving you a receipt when the recipient has read the mail. Of course you can "Cc" and "Bcc" people. You also get shared mail folders, automatic conversion of messages and documents to your preferences, encryption, and mail filtering based on rules you specify.

Applix Data connects Applixware applications to SQL databases like Informix, Oracle, Ingres, and Sybase, seamlessly querying data from one or more tables, selecting information with query conditions, and performing advanced queries and joins. Rows can be edited, inserted, and deleted. A live link in your document to the database means up-to-date data. Data provides a lot of capability when teamed with ELF and Builder.

[Screenshot: Applix Builder]

The Extension Language Facility (ELF) is an interpreted language with which users can build and deploy applications and front-ends to applications, automate tasks, and connect to databases and other external sources of data. The Applix user interfaces are built with ELF, and ELF macros can be used to automate tasks in any of the Applixware applications.
Some capabilities include: TCP/IP socket interfacing, remote procedure calls, interactive debugging, many built-in macros, string manipulation, and arithmetic and Boolean operators.

Builder is object-oriented and gives you access to external data sources as well as the capabilities of the Applixware application suite for use in your custom applications. You also get full access to ELF macros and functions, external objects, shared classes, and RPC and shared-library support. And the applications you develop in Builder on one platform are portable to Applixware on other platforms without modification.

Applixware is a terrific package. When I heard it was available for Linux, I knew I could let go of Microsoft Office (and MS Windows) forever.
_________________________________________________________________

Copyright © 1998, Gary Moore
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

A Bit About Security

By Marcus Berglund
_________________________________________________________________

If you are a potential web site or permanent connection, the first thing you should know about is security... I know from personal experience what happens when people 'hack' into a machine; in my case it nearly became an international court case. I won't go into details, but it happened through my own ignorance, and it is why I lost my job.

When setting up a machine you should think about who might be able to access it. If you set up a new user account for someone, e.g. in exchange for pirated programs, that person now knows how to get in. Sure, you might get free programs, and people might look at you in a different way, but if someone with more experience than you (and there are always a lot of them, no matter how good you are) sees an obvious security hole, they will exploit it as much as they can -- so that they don't get in trouble and you do.

Linux/Unix is a very flexible, configurable OS, and that's where security holes appear -- and disappear. Just ask a system administrator: most Linux distributions need some work before they are close to Internet-usable, or hack-proof. I personally couldn't list every file you would need to edit, but the startup files (or links, with Red Hat and Debian) for services you don't use will need to be removed, and /etc/inetd.conf is another place to start. If you don't understand these files, immediately disconnect from the network and read the man pages!!! A basic checklist might be: time, echo, nfs*, telnet*, smb (NetBIOS), ftp, login, pop3, nntp, tftp*, netstat, finger, http, etc. (* these are popular protocols, but they can be very insecure). If you are on a network and are unsure, ask your sysadmin; they will most certainly know from experience what you should and shouldn't use, and most (those experienced with Linux) could probably give you some good advice...

At this stage you've gone through and removed unnecessary services; now reload your config files ('shutdown now', log in as root, then 'init 3' -- or just restart; better ideas, send them in). Now you learn how the protocols work, what files they access, and what security holes they leave. For example, if you have people who are only using Windows to share drives, you might set them up in a group that has no telnet and ftp access. (A sketch of the inetd.conf editing involved follows below.)
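To make the /etc/inetd.conf step concrete, here is an illustrative fragment -- a sketch only, since service lists and daemon paths vary between distributions, so your file will look somewhat different. A '#' at the start of a line disables that service; after editing, tell inetd to re-read the file (on most Linux systems, 'killall -HUP inetd').

# /etc/inetd.conf (fragment) -- comment out what you don't offer
#telnet  stream  tcp  nowait  root    /usr/sbin/tcpd  in.telnetd
#ftp     stream  tcp  nowait  root    /usr/sbin/tcpd  in.ftpd -l -a
#finger  stream  tcp  nowait  nobody  /usr/sbin/tcpd  in.fingerd
#tftp    dgram   udp  wait    root    /usr/sbin/tcpd  in.tftpd
pop-3    stream  tcp  nowait  root    /usr/sbin/tcpd  ipop3d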
Adding new users should never be as easy as it seems, unless you can trust the person. For example, I have an 'smb' group on my machine for Samba users, and its members are denied access through telnet and ftp, since those are the only other services I offer on my machine. When working out who has access to what, plan what you are going to do, e.g. restrict certain groups' access to particular services.

At this stage you are probably thinking, "What a lot of stuffing around", but as an 'NT ISP' recently proved to me, even they are susceptible to incorrect user-access attacks. So don't say that this is restricted to the Unices; all OSes suffer. It's just that Unices can be a little harder to configure than NT, and can be attacked more easily by very experienced Unix hackers, as NT can by NT hackers... But probably the biggest advantage of Linux is that 99% of the time you can get the source code, and I ask one question: if you gave away the source code for a program, would you leave obvious security holes in it for your own personal access? I think not...

It all mainly comes down to asking the computer, people on the Internet, and sysadmins what you should and shouldn't do. A little common sense helps a lot too -- and have fun in the meanwhile.
_________________________________________________________________

Copyright © 1998, Marcus Berglund
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

The Standard C Library for Linux, Part one: file functions

By James M. Rogers
_________________________________________________________________

C is a very small language. This is a good thing. C programmers will use nearly the entire C language every time they write a fair-sized program.

The standard C library extends the functionality of C in a way that is predictable on multiple systems. The library gives us tools like scanf() and printf() that make reading and writing formatted output much easier than working with blocks of characters using read() and write(). Also, when you move from one C programming environment to another, the functionality of printf() will be the same. You don't have to relearn how to print formatted output every time you change machines.

In this series of articles I will discuss the tools that are available to programmers in the standard C library. At the end is a bibliography of the books and articles that I used to get this information. I refer to these books and magazines on a daily basis when I program. If you want to work as a C programmer I strongly recommend that you buy and read these books and magazines.

Many times the standard functions are overlooked and reinvented by programmers (including myself!) to do things like seeing if a character is a letter by:

(c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')

instead of using isalpha(c); The second form is much easier to read and figure out. There is another reason to use the second form: the first example only works for ASCII character sets; the second will work on _any_ machine. If you want to write portable code (code that can be compiled and run on any machine with very minor changes) then use the standard C library.
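As a quick illustration of the point (my sketch, not from the original article), here is the ctype.h approach in a complete program; it stays correct even on machines whose character set isn't ASCII:

--------------------------------------------------------
#include <stdio.h>
#include <ctype.h>

int main(void)
{
    int c;

    /* classify each character of standard input */
    while ((c = getchar()) != EOF) {
        if (isalpha(c))         /* portable across character sets */
            printf("'%c' is a letter\n", c);
        else if (isdigit(c))
            printf("'%c' is a digit\n", c);
    }
    return 0;
}
--------------------------------------------------------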
Several times in the past I have written code that took time to write, debug and interface to the rest of my program, only to discover that there was already a function that did what I wanted to do. A few months ago I was writing my own Multi User Dimension (MUD), based on a client/server article in Linux Journal, and I needed to process what the user had entered, one word at a time. So I wrote my own string tokenizer. Turns out I could have used the strtok() function to do almost the exact same thing. And other people will know what the strtok() function does without having to decipher my code. Make your life easier, use the standard C library. It will also help all of us who try to update and maintain your code later.

The GNU compiler, gcc, comes with the GNU standard C library. This compiler is one of the best in the world and the GNU standard C library conforms almost exactly to the standard. In the places where the standard is imprecise, you can expect very reasonable behavior from both the compiler and the library. I am going to discuss the GNU standard C library in these articles. The library handles the standard input and output functions for C programmers. It is also by far the largest library. Because the library is so large I am going to group the commands in these sections: file operations, input and output.

Now before we talk about files we need to agree on the words that we are going to use. In Linux a file or device is considered to be a stream of data. This stream of data that is associated with a file or hardware device is accessed by your program opening the file or device. Once the stream is opened then you can read and/or write to it. Three streams are opened automatically when you execute a program: standard input (stdin), standard output (stdout), and standard error (stderr). These can all be redirected by your shell when you run the program, but normally stdin is your keyboard and stdout and stderr both go to your monitor. After you are done with your streams you need to tell the operating system to clean up buffers and finish saving data to the devices. You do this by closing the stream. If you don't close your stream then it is possible to lose data. stdin, stdout and stderr are all closed automatically the same way they are opened automatically.

One of the most important things to remember when dealing with devices and files is that you are dealing with the real world. Don't assume that the function is going to work. Even something like printf can fail. Disks fill up or occasionally fail, users input the wrong data, processors get too busy, other programs have your files locked. Murphy's Law is in full effect when it comes to computer systems. Every function that deals with the real world returns an error condition if the function failed. Always check every return value and take the appropriate action when there is an error condition. Exceptions are not errors unless they are handled badly. "Exceptions are opportunities for extra computation" (William Kahan, on exception handling.)

The first example basically shows how to open a file for reading. It just dumps a file called test in the current directory to standard out. All exceptions are reported to standard error, and then the program is halted with an error return. It should produce an error if a file called test doesn't exist.
--------------------------------------------------------
#include <stdio.h>  /* this is a compiler directive that tells the
                       compiler that you are going to be using
                       functions that are in the standard
                       input/output library */

main ()
{
    /* declare variables */
    FILE *stream;          /* need a pointer to FILE for the stream */
    int buffer_character;  /* need an int to hold a single character */

    /* open the file called test for reading in the current directory */
    stream = fopen("test", "r");

    /* if the file wasn't opened correctly then the stream will be
       equal to NULL.  It is now customary to represent NULL by
       casting the value of 0 to the correct type yourself rather
       than having the compiler guess at the type of NULL to use. */
    if (stream == (FILE *)0) {
        fprintf(stderr, "Error opening file (printed to standard error)\n");
        exit (1);
    } /* end if */

    /* read and write the file one character at a time until you
       reach end-of-file on either our file or output.  If the EOF
       is on our stream then drop out of the while loop.  If the
       end-of-file is on output, report the write error to standard
       error and exit the program with an error condition */
    while ((buffer_character = getc(stream)) != EOF) {
        /* write the character to standard out and check for errors */
        if ((putc(buffer_character, stdout)) == EOF) {
            fprintf(stderr, "Error writing to standard out. (printed to standard error)\n");
            fclose(stream);
            exit(1);
        } /* end if */
    } /* end while */

    /* close the file after you are done with it; if the file
       doesn't close, then report and exit */
    if ((fclose(stream)) == EOF) {
        fprintf(stderr, "Error closing stream. (printed to standard error)\n");
        exit(1);
    } /* end if */

    /* report success back to environment */
    return 0;
} /* end main */
-------------------------------------------------------------

The above simple program is an example of opening a file, reading the file, and then closing the file, while also using stdout and stderr. I cut and pasted the code to a vi session and then saved, compiled, and ran the program.

What follows is a quick summary of the file operations in the library. These are the operations that work directly with streams.

Opening Streams

Before a stream can be used you must associate the stream with some device or file. This is called opening the stream. Your program is asking for permission from the operating system to read or write to a device. If you have the correct permissions, the file exists (or you can create it), and no one else has the file locked, then the operating system allows you to open the file and gives you back an object that is the stream. Using this object you can read and write to the stream, and when you are done you can close the stream.

Let me describe the format of the descriptions that you will see here and in the man pages. The first entry is the type that is returned by the function call. The second part is the function name itself, and the third part is the list of variable types that the function takes for arguments. Looking at the first line below, we see that the fopen function takes two pointers to strings: one is a path to a file and the other is the open mode of the file. The function will return a pointer to FILE type, which is a complex object that is defined in the library. So in order to accept the return type you must have declared a variable of type pointer to FILE, like the stream variable declared near the top of the example above.
Later in the example you can see where I call the function fopen with the static filename of "test" and a mode of "r" and then accept the return value into the stream object.

A stream can be opened by any of these three functions:

FILE *fopen( char *path, char *mode)
FILE *fdopen( int fildes, char *mode)
FILE *freopen( char *path, char *mode, FILE *stream)

char *path is a pointer to a string with the filename in it.

char *mode is the mode of opening the file (table follows.)

int fildes is a file descriptor which has already been opened and whose mode matches. You can get a file descriptor with the UNIX system function open. Please note that a file descriptor is not a pointer to FILE. You cannot close(stream), you must fclose(stream). This is a very hard error to find if your compiler doesn't warn you about it. If you are interested in Linux system calls, type `man 2 intro` for an introduction to the functions and what they do.

FILE *stream is an already existing stream.

These functions return a pointer to FILE type that represents the data stream, or a NULL of type (FILE *)0 on any error condition.

fopen is used to open the given filename with the respective mode. This is the function that is used the most to open files.

fdopen is used to assign a stream to a currently opened file descriptor. The file descriptor mode and the fdopen mode must match.

freopen is normally used to redirect stdin, stdout and stderr to a file. The stream that is given will be closed and a new stream opened to the given path with the given mode.

This table shows the modes and their results:

                open stream for   truncate   create   starting
        mode    read    write     file       file     position
        ----    ----    -----     --------   ------   -----------
        "r"     y       n         n          n        beginning
        "r+"    y       y         n          n        beginning
        "w"     n       y         y          y        beginning
        "w+"    y       y         y          y        beginning
        "a"     n       y         n          y        end-of-file
        "a+"    y       y         n          y        end-of-file

To read the first line: "r" will open a stream for read; the stream will not be opened for write, will not truncate the file to zero length, will not create the file if it doesn't already exist, and will be positioned at the beginning of the stream.

Stream Flushing

Sometimes you want your program to ensure that what you have written to a file has actually gone to the disk and is not waiting in the buffer. Or you might want to throw out a lot of user input and get fresh input, for a game. The following two functions are useful for emptying the stream's buffers, though one just throws the data away while the other stores it safely on to the stream.

int fflush(FILE *stream)
int fpurge(FILE *stream)

FILE *stream is an already existing stream.

These functions return a 0 on success. On a failure they return an EOF.

fflush is used to write out the buffers of the stream to a device or file.

fpurge is used to clear the buffers of unwritten or unread data that is waiting in a buffer. I think of this as a destructive purge because it clears the read and write buffers by dumping the contents.
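As a quick illustration (my sketch, not part of the original article), the most common use of fflush is forcing a prompt out of a buffered stdout before waiting for the reply:

--------------------------------------------------------
#include <stdio.h>

int main(void)
{
    char name[64];

    printf("Your name: ");      /* no newline -- stdout may buffer it */
    if (fflush(stdout) == EOF)  /* force the prompt to the terminal */
        return 1;
    if (scanf("%63s", name) == 1)
        printf("Hello, %s\n", name);
    return 0;
}
--------------------------------------------------------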
Closing Streams

When you are done with a stream you must clean up after your program. When you close a stream the command ensures that the buffers are successfully written and that the stream is truly closed. If you just exit a program without closing your files, then more than likely the last few bytes that you wrote won't be there -- but you won't know unless you check. Also, there is a limit to how many streams a single process can have open at one time, so if you keep on opening streams without closing the old streams you will use up system resources. Only one command is used to close any stream.

int fclose(FILE *stream)

FILE *stream is an already existing stream.

Returns a 0 on success, or an EOF otherwise.

fclose flushes the given stream's buffers and then disassociates the stream from the pointer to FILE.

Renaming and Removing Files

These two commands work just like rm and mv, but without the options. They are not recursive, but your programs can be, so watch that you don't accidentally build your own version of 'rm -rf /'.

int remove(char *path)
int rename(char *oldpath, const char *newpath)

char *path, oldpath and newpath are all pointers to existing files.

Returns a 0 on success and a non-zero otherwise.

remove works just like rm to remove the file in the string pointed to by path.

rename works just like mv to rename a file from oldpath to newpath, changing directories if need be.

Temporary Files

You can create your own temp files by using the following functions:

FILE *tmpfile(void)

This command returns a pointer to a FILE stream which is a temp file that magically goes away when your program is done running. You never even know the file's name. If the function fails it returns a NULL pointer of type (FILE *)0.

char *tmpnam(char *string)

This function returns a filename in the tmp directory that is unique, or a NULL if there is an error. Each additional call overwrites the previous name, so you must copy the name somewhere else if you need to know it after you open the file.

Stream Buffering

Normally a stream is block buffered, unless it is connected to a terminal like stdin or stdout. In block-buffered mode the stream reads ahead a set amount and then gives you the input that you ask for as you ask for it. Sometimes you want this to be bigger or smaller to improve performance in some program. The following four functions can be used to set the buffering type and the size of the buffers. The defaults are normally pretty good, so you shouldn't have to worry too much about these.

void setbuf( FILE *stream, char *buf);
void setbuffer( FILE *stream, char *buf, size_t size);
void setlinebuf( FILE *stream);
int setvbuf( FILE *stream, char *buf, int mode, size_t size);

Where mode is one of the following:

_IONBF unbuffered, output sent as soon as received.
_IOLBF line buffered, output sent as soon as a newline is received.
_IOFBF fully buffered, output isn't sent until size characters are received.

setbuf is an alias for setvbuf(stream, buf, buf ? _IOFBF : _IONBF, BUFSIZ);

setbuffer is an alias for setvbuf(stream, buf, buf ? _IOFBF : _IONBF, size);

setlinebuf is an alias for setvbuf(stream, (char *)NULL, _IOLBF, 0);

setvbuf sets a buffer for the given stream of size_t size and of buffer mode.

Stream Positioning

Once you open a stream you are located at a certain position, depending on what mode you opened the stream in; as you read or write, your position increases with each character. You can see where you are in the stream and jump to any position in it. If you are writing a database program you don't want to have to read and ignore a million characters to get to the record that you want; you want to be able to jump right to the record and start reading.

Note that terminals cannot have their stream repositioned; only block devices (like hard drives) will allow this. Also note that if you open a file for writing, use fseek to go out 10,000 bytes, write one character and then close the file, you will not have a file of 10,001 bytes. The file will be much smaller. This is called a sparse file. If you move a sparse file using the mv command it will not change size, because a mv is only a change to the directory structure, not the file. If you cp or tar a sparse file then it will expand out to its true size.
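Here is a small sketch of the sparse-file effect just described (my illustration; the file name is hypothetical):

--------------------------------------------------------
#include <stdio.h>

int main(void)
{
    FILE *stream = fopen("sparse_demo", "w");

    if (stream == (FILE *)0)
        return 1;
    /* jump 10,000 bytes past the beginning without writing anything */
    if (fseek(stream, 10000L, SEEK_SET) != 0)
        return 1;
    putc('x', stream);  /* one real byte at offset 10,000 */
    fclose(stream);
    /* on most Linux filesystems the file now occupies far fewer
       disk blocks than its apparent length would suggest */
    return 0;
}
--------------------------------------------------------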
int fseek( FILE *stream, long offset, int whence);
long ftell( FILE *stream);
void rewind( FILE *stream);
int fgetpos( FILE *stream, fpos_t *pos);
int fsetpos( FILE *stream, fpos_t *pos);

FILE *stream is a pointer to an already existing stream.

long offset is added to the position indicated by the whence.

int whence is SEEK_SET, SEEK_CUR, or SEEK_END depending on where you want the offset to be applied: the beginning, the current position or the end.

fpos_t *pos is a complex file position indicator. On some systems you must use this to get and set stream positions.

If these functions are successful, fgetpos, fseek and fsetpos return 0, and ftell returns the current offset. Otherwise, EOF is returned. There is no return value from rewind.

fseek sets the file position in the stream to the value of offset plus the position indicated by whence (the beginning, the current position, or the end of the file). This is useful for reading along, adding something to the end of the stream, and then going back to reading the stream where you left off.

ftell returns the current position of the stream.

rewind sets the current position to the beginning of the stream. Notice that no error code is returned. This function is assumed to always succeed.

fgetpos is used like ftell to return the position of the stream. The position is returned in the pos variable, which is of type fpos_t.

fsetpos is used like fseek in that it will set the current position of the stream to the value in pos.

On some systems you have to use fgetpos and fsetpos in order to reliably position your stream.

Error Codes

When any of the above functions returns an error you can see what the error was and even get a text error message to display for the user. There is a group of functions that deal with error values. It is enough for now to be able to see that you have an error and stop. However, if you write a nice GUI word processor you don't want the program to stop every time it can't open a file. You want it to display the error message to the user and continue. In a future article I will deal with the error-code functions, or someone else can summarize them for us and send in an article and some commented source code to show us how it's done. If anyone is interested, the functions are: clearerr, feof, ferror, and fileno.
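As a small taste of those functions (my sketch, not the article's promised treatment), here is how ferror distinguishes a real read error from an ordinary end-of-file:

--------------------------------------------------------
#include <stdio.h>

int main(void)
{
    FILE *stream = fopen("test", "r");
    int c;

    if (stream == (FILE *)0)
        return 1;
    while ((c = getc(stream)) != EOF)
        putc(c, stdout);
    /* getc returns EOF both at end-of-file and on error; ask which */
    if (ferror(stream)) {
        fprintf(stderr, "read error\n");
        clearerr(stream);  /* reset the stream's error indicator */
    }
    fclose(stream);
    return 0;
}
--------------------------------------------------------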
Error Codes

When any of the above functions returns an error, you can see what the error was and even get a text error message to display for the user. There is a group of functions that deal with error values. It is enough for now to be able to see that you have an error and stop. However, if you write a nice GUI word processor you don't want the program to stop every time it can't open a file; you want it to display the error message to the user and continue. In a future article I will deal with these error code functions, or someone else can summarize them for us and send in an article and some commented source code to show us how it's done. If anyone is interested, the functions are: clearerr, feof, ferror, and fileno.

Conclusion

Well, that's enough for this month. I have learned a lot and I hope you have as well. Most of this information is available through the man page system, but the dates on those pages are four years old. If anyone has updates on any of this information, please send them to me and I will correct myself in future articles.

Next month I want to talk about input and output. I will take the simple program above and add some functionality to it to add a column of numbers and output the results to standard out. Hopefully this example program can grow into something useful.

Bibliography:

The C Programming Language, Second Edition, Brian W. Kernighan and Dennis M. Ritchie, Prentice Hall Software Series, 1988
The Standard C Library, P. J. Plauger, Prentice Hall PTR, 1992
The Standard C Library, Parts 1, 2, and 3, Chuck Allison, C/C++ Users Journal, January, February and March 1995
stdio(3), BSD man page, Linux Programmer's Manual, 29 November 1993
Unidentified File Objects, Lisa Lees, Sys Admin, July/August 1995
A Conversation With William Kahan, Jack Woehr, Dr. Dobb's Journal, November 1997
Java and Client-Server, Joe Novosel, Linux Journal, January 1997
_________________________________________________________________

Copyright © 1998, James Rogers
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

xscreensaver

By Jamie Zawinski
_________________________________________________________________

I saw the article on Xlockmore by Larry Ayers in issue 18 of the Linux Gazette, and I was surprised that it didn't mention, even in passing, my XScreenSaver program! Allow me to engage in a bit of advocacy.

Back in 1991, before Xlockmore existed, there was only Xlock. Xlock was not a screen saver: it was only a locker. There was no way to make it activate itself automatically when the console became idle, nor was there any way to avoid having it lock the screen: that is, there was no way to have it turn off when the mouse moved. So, I wrote XScreenSaver.

XScreenSaver is superior to Xlockmore in a number of ways. The most important way, of course, is that it is actually a *screen saver*. Although Xlockmore can be configured not to require a password, it still doesn't have the ability to turn on when the machine is idle; for that you have to use an external program that launches and kills it.

The second way in which XScreenSaver is better is that it takes a server/client approach: the "xscreensaver" program itself knows how to detect idleness and to lock the screen. The graphics hacks are not built in: the beauty of XScreenSaver is that any program which can draw on the root window can be launched by XScreenSaver for use as a graphics hack! This has several benefits:

* You don't have to recompile and reinstall xscreensaver to install a new graphics hack: all you have to do is change your X resources and issue one command (see the sketch after this list).
* Since programs don't have to be written *specifically* to run inside the xscreensaver framework, there are many more potential graphics hacks available. They don't even need to be written in the same language: they just have to draw on the root. Thus, it's easier to write programs to work with XScreenSaver than with Xlock or Xlockmore, because they don't have to follow a complex set of idiosyncratic rules on how to structure the code: the only rule is, "draw on the root."
* By separating the task out into two processes, the whole system becomes more robust: the memory protection provided by the OS serves us well, in that if one particular graphics hack has a bug (leaks memory, corrupts memory, gets a floating-point exception, etc.) the integrity of the screen saver itself is not compromised. The offending hack may exit, but the screen saver itself is still running, and the screen is still blackened (or locked). Also, since a screen saver is, by its nature, a very long-running background task, even a small leak would build up over time. By arranging for the graphics hacks themselves to be relatively short-running tasks, this problem goes away.
* On some systems, only programs which are running as root can check passwords. Therefore, on such systems, a screen locker would need to be a setuid-root program. Obviously one needs to be very careful about what programs one allows out of the security sandbox; a conscientious sysadmin would want to examine such a program very carefully before installing it. The XScreenSaver model allows this, by having the privileged part of the program (the saver itself) be small, while letting the larger part (the graphics hacks) remain unprivileged.
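For the curious, installing a new hack looks roughly like the sketch below. This is my illustration, not the author's: qix and attraction are hacks that ship with XScreenSaver, mynewhack stands in for a hypothetical program of your own, and the exact resource syntax here is written from memory, so consult the xscreensaver man page before relying on it. The idea is that the programs resource lists ordinary commands, each of which draws on the root window:

! In ~/.Xdefaults -- the list of hacks xscreensaver may run.
xscreensaver.programs: \
        qix -root          \n\
        attraction -root   \n\
        mynewhack -root    \n

Then, from a shell, reload the resource database and restart the daemon:

xrdb -merge ~/.Xdefaults
xscreensaver-command -restart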
XScreenSaver also includes a nice Demo Mode that lets the user interactively experiment with the currently configured graphics hacks. (Until recently, most Linux users wouldn't have been able to take advantage of this, since the code had required Motif; but that is now configurable, and demo mode has worked with Athena widgets too since release 1.27, back in January 1997.)

If you have a system with more than one monitor, XScreenSaver can save them all at once: a different graphics hack will run on each, and if the monitors have different depths (for example, if one is monochrome and another color) they can be configured to choose their hacks from different lists, so that each monitor is running the hacks that look best on it.

Most of the Xlockmore hacks (the ones that I liked, anyway, including the GL modes) are included with the XScreenSaver distribution; the only change made was to extract the various display modes from the monolithic Xlock executable and turn them into standalone root-window-oriented programs. However, it's possible to have XScreenSaver run xlock itself, as just another one of its many modes -- the best of both worlds!

Do check it out: the canonical web page is http://people.netscape.com/jwz/xscreensaver, which includes screen shots and descriptions of most of the included graphics hacks. The latest version, as of this writing, is 2.12. At last count, it came with 64 different graphics hacks. Roughly a third of these were written by me; the rest were graciously contributed by others. And, of course, it's all free, under the usual X-Consortium-style copyright.

You also might enjoy my philosophical ramblings on the nature of screen savers, at http://people.netscape.com/jwz/gruntle/savers.html.
_________________________________________________________________

Copyright © 1998, Jamie Zawinski
Published in Issue 24 of Linux Gazette, January 1998
_________________________________________________________________

Linux Gazette Back Page

Copyright © 1998 Specialized Systems Consultants, Inc. For information regarding copying and distribution of this material see the Copying License.
_________________________________________________________________

Contents:
* About This Month's Authors
* Not Linux
_________________________________________________________________

About This Month's Authors
_________________________________________________________________

Larry Ayers

Larry lives on a small farm in northern Missouri, where he is currently engaged in building a timber-frame house for his family. He operates a portable band-saw mill, does general woodworking, plays the fiddle and searches for rare prairie plants, as well as growing shiitake mushrooms.
He is also struggling with configuring a Usenet news server for his local ISP.

André D. Balsa

André lives in France, 80 miles south of Lyon. He currently runs a small Internet consulting business. When not busy exploring Linux performance issues, André likes to spend his time with his 1-year-old daughter, or else try different French recipes on his friends. He also helped set up the Linux Benchmarking Project pages at http://www.tux.org/bench/ and a web site at http://www.tux.org/~balsa/linux/cyrix about the use of Cyrix 6x86 processors with Linux, which has had more than 9,000 visitors in less than two months online.

Gerd Bavendiek

Gerd has worked as a software engineer with various flavors of Unix since 1988. In 1994 he realized that using Linux could make his everyday work more convenient. Since that time he has used Linux and various GNU software. He lives in Essen, Germany. In his spare time he builds model steam engines using real hardware: a lathe, a milling machine and a lot of hand tools.

Erik Campo

Erik graduated in Computer Science from UQAM (Universite du Quebec A Montreal) in June 1997. He worked as a network programmer (C, C++, Java), teaching assistant and systems administrator in the Teleinformatique laboratory of UQAM for a year and a half. He is now working at UQAM's Registrar's Office as a Webmaster and as system administrator for the Teleinformatique laboratory. Erik is a Unix/Linux specialist, having installed and managed several flavours of Unix such as Coherent, Minix, Linux and Solaris. He likes to write articles about Linux, listens to Heavy Metal music, plays both Spanish and electric guitar, and is fascinated by the Spanish Civil War, the history of General Franco and the political transition period in Spain from 1975 to 1978. For more information, please consult his web page at http://campo.ti.ml.org/.

Jim Dennis

Jim is the proprietor of Starshine Technical Services. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/Peter Norton Group, and McAfee Associates -- as well as positions (field service rep) with smaller VARs. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the second edition of a book on Unix systems administration. Jim is an avid science fiction fan -- and was married at the World Science Fiction Convention in Anaheim.

Joel Jaeggli

Joel is a programmer and consultant with the University of Oregon Computing Center. Formerly he worked for the Network Startup Resource Center, http://www.nsrc.org, helping NGOs in emerging nations connect to the Internet -- and, of course, discover the joy of free Unixes.

Dave Nelson

Dave is a manager who mostly deals with computing as an abstraction. He used to do computational science research, and Linux helps him stay in touch with reality.

Mark Nielsen

Mark works at the Health Sciences Library at The Ohio State University as a systems specialist. One day he hopes to replace NT computers with Linux and KDE.

James M. Rogers

James and Shahla Rogers own a farm in the back woods of Ohio and have 6 dogs and 3 cats. He has served 14 years in the Air Force and Army, both enlisted and officer. His first computer was a TS-1000. He gave up his C128 in 1991 for Linux. He is currently a UNIX and C contract programmer for the University of Washington.
Ylian Saint-Hilaire

Ylian graduated in Computer Science from UQAM (Universite du Quebec A Montreal) last summer and is now completing his master's degree at the same university, specializing in Network Computing. He is a web site publishing and Internet specialist, and has been working with Internet and computer applications development for the last seven years, both as part of his formal education and as a personal passion. From 1981 to 1984 he lived on a sailboat in the Caribbean with his family. In 1991 he was selected by the Montreal Chamber of Commerce as one of the "Leaders of Tomorrow". He is also a commercial pilot. His web address is http://kairos.dsa.uqam.ca/software/.

Michael E. Smith

Michael is the Acting Managing Director of LXNY - New York's Free Computing Organization; editor of the Unigroup of New York Newsletter (editor@unigroup.org); and an enthusiastic advocate for Unix-like systems. He is a philosopher who has made his living as a computer consultant, presently as a Senior Associate of Charles River Computers, Inc., doing Unix systems administration at J. P. Morgan. Career highlights include authoring and deploying an original database system in MUMPS-15 and doing SETL research under Jacob T. Schwartz. Smith is a man of wide interests inside and outside of the computer field, but lately of little time to pursue them.

Jamie Zawinski

Jamie is a card-carrying Unix Hater who doesn't use Linux, but despite that has authored an awful lot of software that runs on Linux, including Lucid Emacs, XScreenSaver, XDaliClock, and the initial Unix versions of Netscape Navigator and the Netscape Mail and News readers.
_________________________________________________________________

Not Linux
_________________________________________________________________

Thanks to all our authors -- not just the ones above, but also those who wrote in with their tips and tricks and made suggestions. Thanks also to our new mirror sites.

Margie and I had a great Christmas break in Houston, Texas, where much of her family lives. Besides visiting with family and numerous friends, we got to tour the Cockrell butterfly habitat, which was very pretty. Mostly sunny days and highs in the 60s made it tough to return to the middle of a Seattle winter, but we managed.

Have fun!
_________________________________________________________________

Riley P. Richardson
Editor, Linux Gazette
gazette@ssc.com
_________________________________________________________________

Linux Gazette Issue 24, January 1998, http://www.linuxgazette.com/
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com