The Whole Damn Thing 1 (text)
The Whole Damn Thing 2 (HTML)
are files containing the entire issue: one in text format, one in HTML.
They are provided
strictly as a way to save the contents as one file for later printing in
the format of your choice;
there is no guarantee of working links in the HTML version.
Got any great ideas for improvements? Send your comments, criticisms, suggestions and ideas.
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
The Mailbag! Write the Gazette at gazette@ssc.com
Contents:
Date: Mon, 20 Jan 97 13:22:54 EST
Subject: Linux on Compaq
From: afarnsworth@S1.DRC.COM
Hi,
I just received a brand spanking new Compaq Deskpro 6000 with an Adaptec
2940U SCSI card and a Compaq Netflex III ethernet card. I think I have the
SCSI card problem whipped, but how do I find drivers for the Netflex III
card? I have checked the usual places; does a driver even exist?
The Compaq Deskpro 6000 is a fairly new system, though Compaq has been building Deskpros for many years. The only problem I have had with them is their proprietary hardware, usually either their network cards or their hard drive controllers (typically RAID controllers). Other than that, it's pretty standard.
Please reply to my email address, afarnsworth@s1.drc.com, as I don't have the ability to check the Gazette often. Thanks.
Andy Farnsworth, Dynamics Research Corporation
Date: Sat, 18 Jan 1997 12:38:45 +0200 (EET)
Subject: Office-tools
From: J.Hernetkoski, jjjj@zenith.yok.utu.fi
Hi! Could you write an article about these two office packages for Linux:
(An article about StarOffice by Dwight Johnson appeared in issue 9 of Linux Gazette. An article about Applixware will be in the April issue of Linux Journal. I can probably get permission to run it in LG also, but not until that issue of LJ is on the stands. Which means it would also be the April issue of LG. Anyone want to do one sooner?--Editor)
Date: Tue, 21 Jan 1997 09:43:58 -0500
Subject: Linux samba win95
From: nravin@cs.fit.edu
-I am a system administrator... I manage 20 PCs, all running Windows 95.
-We were running Windows NT Server (4.0) in the lab for some time. Then we realised we had only 10 client access licenses and so were forced to SWITCH to Linux.
-Linux emulates NT, as you may know.
-I had the CONFIG.POL working perfectly with the NT network. But when I switched to Linux I lost that control. No longer are the clients able to access the CONFIG.POL file even though I have kept it in the NETLOGON share.
-Now whoever uses the PCs (most are novices) plays around with the client settings, which is giving me nightmares, since I cannot lock them out.
-Is there a way out? How can I make the clients read the system policies from CONFIG.POL using a Linux server?
Please help.
Thanks
Neal
Date: Tue, 21 Jan 1997 15:03:07 +0000
Subject: dbms connectivity
From: Mike Lewis, mlewis@burly.com
Hi. I love linux but most of the projects I work on preclude it because of a lack of dbms connectivity. None of the major dbms players (Oracle, Sybase, Informix, etc.) or 3rd party developers (Intersolv, Visigenic, etc.) offer access from a linux client. I've tried a middleware solution from Openlink and I guess you could run SCO drivers with emulation, assuming you can get your hands on the low-level libraries.
This seems to be the only thing standing in the way of Linux getting business worthy respect. Could you put together a piece on this issue and explore the future availability of dbms connectivity from linux?
Thanks.
Mike
Date: Wed, 22 Jan 1997 18:21:57 -0800
Subject: X Windows & Unix
From: Nestor Villalobos, n.villalobos@codetel.net.do
Hi there! I just got Linux from Red Hat and I have been wondering how to do animations in X Windows. I would like a little picture box in the lower right-hand corner of the screen to start an animation on startup. Is this possible? If it is, please email me back with instructions!!! Thanks for the help.
One Unix man to the other,
Nestor Villalobos
Linux                                 DOS
ls /directory/name                    cd \directory\name, then dir
ls /directory | more (or | less)      dir | more
cat /file                             type \file
cat /file | more (or less /file)      type \file | more
cp /file /to                          copy \file \to
cd /directory                         cd \directory
mkdir /directory                      mkdir \directory
rm /file                              del \file
One of the next things you will need to do is find out how to write or change file contents with an editor. I used to think elvis was the easiest editor, until Konrad Rokicki told me about pico, which comes with the pine mail program. If you have used MS Write or Notepad, you'll find it very easy to use. Save Emacs for another day unless you are a good typist; I found the keyboard commands to be confusing for my two-fingered style. If you don't have pico installed, try elvis in input mode by typing: input filename :. It's pretty easy too, except watch out for the difference between command mode and input mode (type: man elvis :and read the page). If you have a CD version of Linux, you either have pico installed or can install it if you choose.
If you're like me, one of your priority projects will be to use an Internet protocol to connect to your Internet Service Provider. My ISP uses PPP, so that's what I used, and the following descriptions are for PPP.
The first thing you will need to confirm is that your kernel supports PPP, either compiled in or as a loadable module. Type: pppd :and hit enter. If your kernel doesn't support PPP, you'll get a negative message; if you get a prompt, you can assume for now that it's supported.
Next you will need to type: ls /usr/sbin | more :and hit enter. Look for files called ppp-on and ppp-off. Next, type: ls /etc | more : and hit enter. Here you will be looking for a file called resolv.conf. Then type: ls /etc/ppp : you can skip the: | more :this time, since it's a small directory,and hit enter. You'll be looking for files called options and ppp-on-dialer.
Edit your /etc/resolv.conf to look something like:
domain net-link.net
nameserver 205.207.6.2
nameserver 205.217.6.3
gateway 205.217.6.10

Naturally you will have to change the names and numbers to match those of your ISP.
Next, edit your /etc/ppp/options file to look something like this:
/dev/modem
38400          # at this line you could substitute 19200, 57600, 115200
defaultroute
noipdefault
debug
crtscts
lock
modem
These two files are necessary for either of the methods I am going to describe.
Now you can use minicom to dial up your ISP. Type: minicom :, and when it loads, type: ATDTYOURISPNUMBER :and hit enter. When the remote modem answers you will be prompted for your username and password. When you have responded with this information, a string of garbage characters will run across the screen. Type: ctrl-a :then: Q :, which will let you out of minicom without hanging up the modem. Then immediately type: pppd :and hit enter. Type: ping YOURISP'SNUMERIC :and you will get a message that will inform you whether you are connected. If you get a message that says in part "network not reached", try again. If no luck after a couple more tries, check to see that the files you edited have the correct information. Try changing your connection speed in /etc/ppp/options to 19200 and try again. If you connect this time, then try the faster speeds one at a time until you can't connect, then drop back to the fastest speed that worked.
There is an easier method using the script /usr/sbin/ppp-on, which involves editing that file to give your ISP dialup number, username, and password, and optionally your connection speed. It is commented to help you figure out which lines you need to change. When that is done correctly, you can dial up by typing: ppp-on : Pretty cool, huh? If these methods don't work for you, start by reading the PPP-HOWTO in your /usr/doc/faq/howto directory, then respond by e-mail to troll@net-link.net, telling me any error messages, and I'll try to square you away.
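For reference, here is roughly what the lines near the top of a stock ppp-on script look like once filled in. The variable names are the ones used in the script shipped with the ppp package, but your copy may differ, and the values shown are obviously made up:

TELEPHONE=555-1212          # your ISP's dialup number
ACCOUNT=myname              # your login name at the ISP
PASSWORD=mypassword         # your password at the ISP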
There is another method using the chat program, but I haven't had much luck there yet. Future installments, if any, will cover that if it seems wanted. Personally, ppp-on is just fine for me so far.
You will want to get an e-mail program and a browser, if you don't already have them. I recommend lynx. It's fast and you don't need X installed to use it. There is probably a lynx binary in your distribution, but if not you can get one from sunsite or another ftp site. Pine is a good mail program, and it includes the pico editor, as noted above.
NOTE TO LINUX EXPERTS: I would be glad to accept reasonable criticism of this article and the information therein. I don't really want to put up with heavy fire; if you can help the new user better than me, write an article yourself. There are plenty of avenues where such information would be of great service.
TTYL, Mike List
The directory navigator and program launcher called "DIRED" in the original incarnations of EMACS has two stand-alone Unix clones. Mike Lijewski's "dired" 2.2 is written in C++ (1996). The original "dired" was written in C by Stuart Cracraft (1980), available as version 3.05 (1997).
Historically, shortly after emacs "dired" appeared in the TECO implementation, a stand-alone version was written by Stuart Cracraft (1980). The emacs version and the C version have not kept up with one another.
Lijewski wrote "dired" in 1990, while at the Cornell University Theory Center, without any knowledge of Cracraft's "dired". The Theory Center ran on IBM VM/CMS, under which there is a utility called "file manager". This program manages the flat VM/CMS file system and represents the main user interface into files. The creation of "dired" eased the transition from VM/CMS to Unix.
Lijewski's "dired" has the advantage of hindsight and C++ program development so it promises to be written in modern syntax and very maintainable. Cracraft's "dired" was rewritten in 1996 in ANSI C. It suffers with flaws in both design and readability, but the features are there.
Curiously, the common features of the two direds also account for the most often used dired commands.
The differences between Lijewski's "dired" and Cracraft's "dired" in 1997 appear below. Many features commonly exist in both versions, so only the superficial differences are discussed. Strengths and weaknesses of each are also listed.
The program tends to be used for browsing and deleting files; users find the other features too obtuse for daily use. Too many commands. It's hard to remember which key does which command.
Find dired305.zip at http://sunsite.unc.edu/pub/Linux/. Or email to gustafson@math.utah.edu for location of recent version.
Find Lijewski's c++ dired by sending email to lijewski@mothra.lbl.gov for location of the recent version. If you want to see it on sunsite, then let Mike hear about it!
Since I frequently post messages to various Unix and Linux newsgroups and mailing lists I often get technical questions mailed to me ``out of the blue.''
I recently received a request for a script to produce the following sort of output:
dir/
    file1
    file2
    file
    dir/
        dir/
            file
(etc)

Here was my quick and dirty solution:
find . | awk -F/ '{for (x=1;x<NF;x++) { printf "\t"}; print $NF}'
... which only does about 80% of the job. The only problem is that the directory entries don't end with the ``/'' to indicate their file type. It was late -- so that's what I sent him.
Here's how that works:
find . just prints a list of full paths (using GNU find). Some non-Linux users may have to use 'find . -print' to accomplish this (or update to the GNU version on their systems).
awk is a text processing language/utility.
The -F (capital ``f'') sets the field separator to '/' (the slash character). Awk defaults to parsing its input into records (lines) of fields (whitespace delimited). Using -F lets me tell awk to treat each record (still just a line) as a group of fields separated by slashes -- allowing me to deal with each directory element as a separate element very easily.
The next parameter to awk is a short program -- a for loop (like the C for() construct). It iterates from 1 to NF.
NF in awk is the ``number of fields'' for each record. This, among many other values, is preset by awk as it parses its input.
Awk defaults to reading its input from a pipe or from each file listed after its script on the command line. We're supplying it with input through the pipe, of course.
In the body of my awk 'for' loop I simply print a tab for each directory named in that line. This has the appearance of "wiping out" all of the leading directory names and indenting my line as desired.
Finally, after the end of the for loop I simply print the last field ($NF). Note how the printf takes a string similar to C's printf -- and it doesn't assume a newline. I could put C-like format specifiers like %s and %f in there -- and I'd have to supply additional parameters to the printf call if I did.
By contrast the awk print command (no trailing ``f'') does add an ORS (output record separator) character to the end of its line and doesn't treat its first argument as a format specification.
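A quick demonstration of the difference (the path here is just an example):

echo usr/local/bin/foo | awk -F/ '{printf "[%s] ", $1; print $NF}'
# prints: [usr] foo    (printf emits no newline of its own; print appends the ORS)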
This evening, while cleaning up my home directory (and procrastinating on doing paying work and cleaning the house), I happened across a copy of this and decided to fix it.
find . | { while read i ; do
    [ -d $i ] \
        && echo $i/ \
        || echo $i
    done
} \
| awk -F/ '
    /\/$/ { for (x = 1; x < NF - 1; x++) { printf "\t" };
        print $(NF-1) "/"; next; }
    { for (x = 1; x < NF; x++) { printf "\t" }
    print $NF }'

Note that the original script: 'find ....| awk -F/ ...' is mostly still there. But the script has gone from one line to eleven -- all to get that silly little slash character on the end of each directory name.
(If anyone has a shorter program, I'd like to see it -- there's probably a fairly quick way to do this using perl and find2perl.)
The main thing I've added is the while loop which works like this:
find's output is piped into a group of commands (that's what the braces are for). That group of commands starts with a bash "while... do" loop. The bash "while...do" loop works like this:
while some command returns no error
do
    some commands
done

Note that, unlike C or Pascal programming, the ``condition'' for the while loop is actually any command (or group of commands -- enclosed in braces or parentheses). The fact that programs return values (called errorlevels in DOS and some mainframe OSes) makes all commands implicitly ``conditions.'' (Actually C allows a variety of function calls within conditionals -- but we won't go into that.)
Note that some commands might not return values that make any sense -- so those would not be suitable for use with any of the conditional contexts in any shell.
The command I'm using is bash' internal ``read'' command which just takes a variable name as an argument. Note that I don't say ``read $i'' -- the shell would then fill the value of $i into the command (i.e it would ``dereference'' it) and the read command would have no arguments. If you give the read command no argument it simply reads a value and throws it away (no error).
When you set values in bash (or Bourne shell, or zsh etc) you also don't ``dereference'' it. $i=foo would be an error unless you actually wanted to set the value of some variable -- whose name was currently stored in $i to be set to foo.
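A tiny example of both points, using throwaway names:

echo "hello world" | { read i ; echo "i is now: $i" ; }
# prints: i is now: hello world
# note: read i (no dollar sign) sets the variable; $i dereferences it afterwards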
Back to our script. When the find command stops printing filenames into the pipe, the 'read i' command will fail to get any value -- so the body of the do loop will be skipped.
The 'do' keyword just marks the end of the list of commands in the conditional section and the beginning of the body of the loop (big surprise -- huh?).
The next three lines of the script are another common shell construct --
I could certainly have put all of this on one line. However, for readability I broke it up and formatted it with leading tabs -- otherwise *I* couldn't read it, much less expect anyone else to do so.
The next line (continuation) starts with the '&&' operator. In bash and related shells you have things like the familiar ``|'' (pipe) and ``;'' semicolon which are called operators. This operator means ``if that last command was O.K. -- returned no error -- then ...''
You can think of the '&&' operator as do this ``and'' that (in the *conditional* sense of the word and).
The next line uses the '||' operator -- which is, as you might expect, similar to the '&&' operator except it means ``if the last command executed returned an error then ...'' This is roughly analogous to the English ``or'' (again, in the conditional sense).
Of course I could have wrapped this in an 'if ....; then ....; else...' construct -- but I'm used to the '&&' and '||' as are most shell programmers.
So far all we've done is add a ``/'' character to the end of each directory.
Now I'm left with a print out of full paths with directories ending in ``/'' (slashes) and other files printed normally -- back to replacing all but the last thing with tabs -- so we pipe the 'while' loop's output into the same awk script we were using before.
Ooops! Well, almost the same script -- it turns out that awk -F is happy to consider the trailing slash as a blank field on the end of a line. Hmm. O.K. we add an extra condition to the awk script.
An awk script consists of condition-action pairs. The most common awk ``conditions'' are patterns. That is to say, they are regular expressions (like the things you use grep to search for). A pattern is usually delimited by slashes (a mnemonic familiar to users of ed, later upgraded to ex, later upgraded to vi), although you can also ``match'' against strings that are enclosed in quotes.
Actions in awk are enclosed in braces.
Awk is an extremely forgiving language. If you leave out the ``condition'' or ``pattern'' it will execute the action on that line for every record (line) that it comes across. That's what my first script did.
If you leave off the action (i.e. if you have a line that consists just of a condition) then awk will simply print the record. In other words the default action is {print}.
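Two throwaway examples of those defaults (somefile is just a placeholder):

awk '/\/$/' somefile            # pattern only: the default action {print} prints every line ending in a slash
awk '{print NF, $0}' somefile   # action only: runs for every line, printing its field count first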
When I was a regular in the comp.lang.awk newsgroup (and the alt.lang.awk that preceded it) I used to enjoy pointing out that the shortest awk programs in the world are:

1

and

.

(The first one just prints every line it sees, since ``1'' is a ``true'' condition; the second program (a dot) prints every line that has at least one character, since that is the regular expression for ``any character''. The second program actually does filter out blank lines, since awk doesn't count the record separator as part of the line.)
So, the modification of my awk script for this purpose is to add a condition that handles any record that *ends* with a slash. In those cases I convert all *but* the next-to-last field to a tab, and print that ``next-to-last'' field. I also have to add the ``/'' character to the end of that since awk doesn't consider the field separator to be part of any field.
Finally I add a 'next' command which tells awk not to look for any more pattern-action pairs with *this* record. If I didn't do that then awk would execute the action for each ``directory'' line -- and also execute the other action for it (i.e. it would print a blank line after printing each directory line).
Is the extra 10 lines of code worth it just to add a slash to the end of the directory names in our outline? Depends on how much your customer is willing to pay -- or how much grief it causes you, your boss or your users.
Mostly I decided to work on this as a training example. I think there are some neat constructs that every budding shell programmer might benefit from learning.
The ``find .... | {while read i .... do ... done}'' construct is well worth remembering for other cases. It allows you to do complex operations on large numbers of files without resorting to writing a temporary file and having to clean up after it.
When you write scripts that explicitly create temporary files you suddenly have a host of new concerns -- what do I name it? where do I put it? don't forget to remove it! do I have enough space for it? what if my script gets interrupted? etc.
To be sure there are answers to each of these. For example I suggest ~/tmp/$0.`date +%Y%m%d`.$$ for a generic temporary filename for any script -- it gives the name of your script, the date in YYYYMMDD format and the process ID of the current instance of your script as the filename. It puts that into the temporary directory under your home (which no one else should have access to). There is virtually no chance of a name collision using this scheme (particularly if you change the date format to +%s which is the total number of seconds since midnight on Jan. 1, 1970). You can use the 'trap' command to ensure that your temp files are cleaned in all but the most extreme cases etc.
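A minimal sketch of that scheme, with a made-up bit of work in the middle (I've run $0 through basename so a path doesn't end up in the filename):

TMP=~/tmp/`basename $0`.`date +%Y%m%d`.$$
trap 'rm -f "$TMP"' 0 1 2 15    # remove the scratch file on exit, hangup, interrupt or terminate
sort "$1" > "$TMP"              # some work that needs a temporary file
cp "$TMP" "$1"                  # put the result back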
However, as I've said, it's worth understanding how to avoid temporary files -- and usually your scripts will execute faster as a result.
The [ ... ] && ... || ... construct is absolutely essential to any Unix sysadmin. Many legacy scripts (particularly those in /etc/rc.d/ -- or its local equivalent) rely on these operators and the test or '[' command.
Finally there is 'awk'. I've heard it argued that awk is a dinosaur and that we should convert all the awk code to perl (and presumably most of the Bourne shell and sed code with it). I won't argue that point here. Suffice it to say that anything you learn how to do in awk will just make learning perl that much easier when you get to it. awk is a much simpler language and is phenomenally easy to integrate into shell scripts (as you can see here).
Jim Dennis, Starshine Technical Services
Graphics Muse

This column is dedicated to the use, creation, distribution, and discussion of
computer graphics tools for Linux systems.
Last month I promised to do a review of Keith Rule's new book on 3D File Formats this month. I also said there would be a section on adding fonts on Linux in last month's column. Ok, I'm a liar. First, I decided that although Keith's book deserves some examination, I felt that another book, Mark Kilgard's OpenGL text, had a more direct bearing on Linux users. I'll consider taking a look at Keith's book some time in the future. Second, I had quite a bit of other material for January's column, so I had decided to move the font discussion to February's column. However, I forgot to update the introduction in January's column to reflect this change. My apologies. Now for the bad news: I had a major system crash on the 16th of January which first of all caused me over a week of grief trying to recover and second caused the loss of a large number of files. No, I wasn't doing backups. So shoot me. I managed to recover an earlier copy of this month's Muse column from a laptop I have, but I lost a good portion of what I'd already done. Now, as I write this, I have 3 days to get the column done and uploaded. The result is that the book review and a number of other items will have to be put off till another time. So, does anyone have a decent tape backup system that can run on ftape drives? In this month's column I'll be covering, along with how to add fonts to your system:
NOTE: I lost all my old email and mail aliases when my system went down. If you have been in touch with me in the past and want to stay in touch please send me some email (mjhammel@csn.net)! I'm particularly interested in hearing from Paul Sargent, who was helping me with my look into BMRT. I lost your email address Paul, along with all the messages we'd exchanged on the BMRT article series! Write me (or if you know Paul, please have him contact me)!
Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.
xfig 3.2.0 Beta available

Xfig is a menu-driven tool that allows the user to draw and manipulate objects interactively in an X window. The resulting pictures can be saved, printed on PostScript printers or converted to a variety of other formats (e.g. to allow inclusion in LaTeX documents). xfig is available on ftp.x.org in /contrib/applications/drawing_tools/xfig. You also need a JPEG library, which can be found in /contrib/libraries, and TransFig version 3.2.0-beta1. TransFig contains the postprocessor needed by xfig to convert fig files to one of several output formats, such as PostScript, pic, LaTeX etc. The TransFig package is in the directory /contrib/applications/drawing_tools/transfig.
Alexander Zimmermann has uploaded another update to his ImageMagick package. ImageMagick (TM), version 3.7.9, is a package for display and interactive manipulation of images for the X Window System. The package has been uploaded to sunsite.unc.edu:/pub/Linux/Incoming as:
ImageMagick also supports the Drag-and-Drop protocol from the OffiX package and many of the more popular image formats including JPEG, MPEG, PNG, TIFF, Photo CD, etc. You will also need the package libIMPlugIn-1.0-elf to get it working. These can be retrieved from ftp.wizards.dupont.com in /pub/ImageMagick/binaries.
World Movers, the first VRML 2.0 Developer Conference

I received the following information via email (unsolicited, but it's probably the first time I got something I found really interesting via a blind email post). Note that I have nothing to do with this conference, other than I wish they'd invite me to go - expenses paid, of course:

World Movers, the first VRML 2.0 Developer Conference, will be held on January 30 and 31 at the ANA Hotel in San Francisco, CA. At World Movers you will:
With a pan-industry advisory board and a wide array of hosts and participants, World Movers will give you a complete picture of VRML 2.0 content and applications from all perspectives. Register by calling (800)488-2883 or (415)578-6900, or go online at http://www.worldmovers.org.
PNG Magick Plug-in 0.8

There is a new plug-in for Unix/Linux versions of Netscape called PNG Magick Plug-in 0.8. This plug-in supports the following file formats: PNG, XPM, TIFF, MIFF, TGA, BMP, PBM, PGM, PPM, PNM, PCX, FITS, XWD, GIF, JPEG, WAV and MPEG-1. It is reported to support Drag and Drop capabilities as well. For MPEG-1 support you need the Xew library, which doesn't seem to work well with the Linux version of this plug-in. PNG Magick Plug-in 0.8 is published under the GNU General Public License and is available at http://home.pages.de/~rasca/pngplugin/.
TkFont v1.1

There is a new tool for viewing fonts on Linux. I haven't tried this yet so I don't know how well it works. It has been uploaded to tsx-11.mit.edu in the /incoming directory. The file name is `tkfont-1.1.tar.gz'.
Version 0.1.8 of Lib3d is now available from Sunsite

Lib3d is a high performance 3d C++ library distributed under the GNU LGPL. Lib3d implements sub-affine texture mapping, Gouraud shading and Z-buffer rasterization, with support for X11, DGA, SvgaLib and DOS. Lib3d is available from ftp://sunsite.unc.edu/pub/Linux/Incoming/lib3d-0.1.8.tar.gz. For more information: http://www.ozemail.com.au/~keithw
CFP: ACM SIGGRAPH 97 Sketches Program
Deadline: April 16, 1997

The following was posted in a number of places. I got it via a friend on the Gimp User mailing list. I have no association with SIGGRAPH (unfortunately) so can offer no other details than the following:
SKETCHES are live, 15 minute presentations that provide a forum for
unique, interesting ideas and techniques in computer graphics.
Sketches allow the presentation of late-breaking results, works in
progress, art, design, and innovative uses and applications of
graphics techniques and technology. Sketch abstracts will be published
in the Visual Proceedings.
Did You Know?

The VRML 2.0 Specification, Moving Worlds from SGI, provides for "spatial audio". This is a definition of how sound is played in relationship to your point in space and distance from an object which has a sound attached to it. The O2 system from SGI has a VRML browser which was demonstrated on Part 2 of PC-TV's series on Unix which covered commercial Unix options. Part 3 of this series started airing at the end of January and is devoted to our favorite OS - Linux!

There is a wonderful description of using color palettes with Web pages at http://www.adobe.com/newsfeatures/palette/main.html. The page is a reprinted article by Lisa Lopuck from Adobe Magazine and is quite detailed. Check it out!

Have you been thinking about using POV-Ray 3.0's new caustics feature? Are you unsure exactly what it does? Want to learn all about it? Then check out The Caustic Tutorial for POV. This is a very detailed explanation of what caustics are and how to use them. Briefly, caustics are formed when light is either focused or dispersed due to passing through media with different indices of refraction. Bright spots in the shadows are where light is focused and dark spots are where the light has been dispersed. Thanks to Paul R. Rotering for this description (taken from the IRTC-L mailing list).

Q and A

Q: What is displacement mapping?

A: Displacement mapping is not only the perturbing of the surface normal of an object, as a bump map does, but in fact a distortion of the object itself. You can think of it as a height field over an arbitrary surface. The latest version of BMRT is reported to support displacement maps. Few other publicly available renderers provide this feature.

Q: I have just downloaded the complete batch of plug-ins from the "Plug-in Registry", and noticed that the "interpolate", "lightest" and "darkest" plug-ins appear to do the same thing as the "blend", "add" and "multiply" channel ops respectively. Is this correct, or is there some difference under certain circumstances?
A:
Not exactly. Blend uses integer values and restricts you to
interpolation. Interpolate/Extrapolate uses floating point values and
does not restrict the range of the blending value --- you can do
extrapolation, too (look at my home page for some examples):
Both of these questions were answered by Federico Mena Quintero, aka Quartic, on the Gimp User mailing list.
GIF animations update: MultiGIF

After my first column (Linux Gazette, issue 12), Greg Roelofs wrote me to tell me about another tool for creating animated GIF images. Andy Wardley's MultiGIF allows the use of sprite images as part of the animation. Sprite images are like small sections of an image. Instead of creating a series of GIF images that are all the same size and simply appending each one to the end of the other (as WhirlGIF does), the user can create an initial image along with a series of smaller images that are positioned at offsets from the upper left corner of the full image.

By using sprites (I'm not completely sure what a sprite really is, but Greg used this term and it appears similar to other uses I've seen - someone correct me if it's not the correct use of the term) the GIF animator can reduce the file size anywhere from a factor of two to a factor of 20. As proof, Greg offered his animated PNG-balls, which went from 577k to 233k in size. Another animation, a small horizontally oscillating "Cylon eyes" (referring to the old Battlestar Galactica metal menace), provided a savings of a factor of 20.

MultiGIF comes with C source code and is shareware. Andy only asks that you provide a donation if you find you are using it frequently. There is also a utility called gifinfo which can be used to identify GIF files, including multiframe GIF animations. Both WhirlGIF and MultiGIF come with fairly decent documentation describing how to use the various command line options. About the only thing that might be missing is why you would use one option over or in conjunction with another, but that's a minor point. I find the use of sprites with MultiGIF and its smaller output files more useful to me. However, new users who are not quite familiar with how to create sprites (including transparency) with tools like the Gimp might prefer the simpler WhirlGIF.
Adding Fonts to your system

Fonts are used extensively for creating graphics images. Many of the graphics on my Web pages and in the Graphics Muse use fonts I've installed from collections of fonts on commercial CDs. Fonts are also used for ordinary text in X applications, from the fonts in your xterm to the title bars provided by your window manager to the pages displayed by xman. The difference is hard to distinguish, but whether used for ordinary text or to create outrageous graphics, adding fonts to your system and letting your X server know about them is the first step.

Just so you know: nearly all X applications accept the "-fn" and/or "-font" command line arguments. This is a feature built into the X Windows API. How this is used depends on the application. For xterms, just use "-fn" followed by a font name or alias.

To know what fonts are available on your system you can look under the font directories for fonts.alias files. There is supposed to be one of these in each directory under /usr/X11R6/lib/X11/fonts, but whether there is or not depends on the distribution you're using. The name to use is the name on the left of each line. For example, under /usr/X11R6/lib/X11/fonts/misc, the fonts.alias file defines an alias named "5x7" (the full font name it maps to is the string on the right of that line), so you can start an xterm with that font by typing:

xterm -fn 5x7

You can actually use the string on the right, but unless you understand how fonts are defined you probably don't want to do this. I don't want this to turn into an X Windows column. There are other places for such discussions, and I'm sure LG could use a regular columnist for X. But this column is about computer graphics so this is all I'm going to say about using fonts in X applications from the X resources standpoint. In any case, since the X server is being used to handle the fonts, adding fonts to your system is the same whether you use them for graphics or as X resources.

Suppose you had a font called westerngoofy that you wanted to use in the Gimp as the start of some neat title graphic for a Web page. By default there isn't an entry in any of the fonts.alias files for westerngoofy, so when you use the text tool in the Gimp it won't show up in the list of available fonts. There are 3 steps to making this font available for use with the Gimp:
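A rough sketch of how this is commonly done under XFree86 (the commands and the westerngoofy.pcf.gz file name are my own illustration of common practice, not a quote from the Gimp or X documentation):

cp westerngoofy.pcf.gz /usr/X11R6/lib/X11/fonts/misc   # 1. copy the font file into a directory on the font path
cd /usr/X11R6/lib/X11/fonts/misc ; mkfontdir           # 2. rebuild fonts.dir (and add a short name to fonts.alias if you like)
xset fp rehash                                         # 3. tell the running X server to re-read its font path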
Linux Graphics mini-Howto
Unix Graphics Utilities
Linux Multimedia Page
Some of the mailing lists and newsgroups I keep an eye on and where I get a lot of the information in this column:
The Gimp User and Gimp Developer Mailing Lists.
The IRTC-L discussion list (I'll get an address next month).
comp.graphics.rendering.raytracing
comp.graphics.rendering.renderman
comp.os.linux.announce
Future Directions
Next month:
Fri, 31 Jan 1997
Linus Torvalds, considered the "Father of the LINUX Operating System",
has been selected by the UniForum Board of Directors to receive The UniForum
Award. The Award will be presented to Torvalds on Thursday, March 13th, as
part of the morning Keynote Session at UniForum '97, being held at the Moscone
Convention Center in San Francisco.
The UniForum Award, presented annually since 1983, goes to individuals or groups whose work has significantly advanced the cause of open systems over time, or has had an immediate and positive impact on the industry with long term ramifications. The UniForum Board of Directors considered a number of nominees for this year's awards, and voted unanimously in their selection of Linus Torvalds for his breakthrough work on the LINUX kernel and for his pioneering efforts in making his work available at little or no cost to anyone wishing to develop on it.
Linus Torvalds is the creator and chief architect of the Linux operating system. At the University of Helsinki in the spring of 1991, frustrated with the price of Unix operating systems, Torvalds began writing some software code to handle certain computing chores on the 386. "I noticed that this was starting to be an operating system," he says. Since then, he has traveled all over the world promoting Linux. Although developing Linux has been almost a full-time job for him, he recently accepted a job at Transmeta in Santa Clara, California. He and Tovi Monni recently celebrated the birth of their baby daughter, Patricia Miranda Torvalds.
The UniForum Board also selected a second Award winner this year: James Gosling of JavaSoft, and his development team, for their work on Java. Gosling will receive his Award at the Wednesday, March 12th Keynote session at UniForum '97.
The Award presentation to Linus Torvalds, at the Thursday Keynote session, is open to all free of charge but requires attendees to register for UniForum '97. Registrants may also visit the exhibits floor which features booths from a number of LINUX vendors including Comtrol, LINUX International; SSC, publishers of Linux Journal; Red Hat Software and Work Group Solutions. To view the entire UniForum '97 Conference and Trade Show brochure, and to register on-line, please go to http://www.uniforum97.com/.
For additional information contact:
Richard Shippee, Director of Communications, UniForum
408-986-8840, ext 17, dick@uniforum.org
You can jump down to the section on tcpd, or take a peek at the other stuff you need to keep an eye on.
Ok. You've got Linux beat. You finally got AfterStep set up the way you want it, you've managed to set up ip masquerading for your home LAN, you've managed to set up a cool issue for people to see when they log in, you managed to convert a couple over to the One True OS, and chicks really dig you because, as we all know, Linux geeks are sexy.
One night as you're peeking at /var/adm/messages, you notice that someone from some place you've never heard of before tried to make 5 ftp connections, 6 telnets, and even an nntp connection. What's up with that?
Well, Linux (and all Unix-type OSes in general) was designed to be a programmer's paradise. The same qualities that make Linux such a wonderful networking and hacking operating system also expose a few security holes. There are a few programs that you probably rely on or use daily that can be used to gain root access (which is a Bad Thing). What's worse, the commercial distributions that many Linux users depend on ship these programs, security holes and all, in packages that are installed as part of the base system.
That's the bad news. The good news is that we can make it tougher for the Bad Guys to do their dirty deeds. By checking the Linux ALERTS page, you can find out which holes we know about, and how to temporarily plug them up or even fix them for good. There is also a nice little tool that is probably on your system that we can use to keep them from even having access to our machine.
And that's what I'm going to focus on. My belief here is that if we can keep the Remote Bad Guys (people who don't have legitimate access to our machines) out, then we only have to worry about the Local Bad Guys (if any). Plus it gives us a chance to fix anything on our machine that is a security hole the RBG's can use.
There's a daemon that's probably been installed on your machine that you don't know about. Or at least, you're not aware of what it can do. It's called tcpd, and it's how we shut off access to some of the basic services that the Bad Guys can use to get on our system.
Since tcpd can be pretty complex, I'm not going to go into all the details and tell you how to do the fancy stuff. The goal here is to keep the mischievous gibbons from knocking down what it took so long for us to set up.
tcpd is called into action from another daemon, inetd, whenever someone tries to access a service like in.telnetd, wu.ftpd, in.fingerd, in.rshd, etc. tcpd's job is to look at two files and determine if the person who is trying to access the service has permission or not.
The files are /etc/hosts.allow and /etc/hosts.deny. Here's how it all works:
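In outline, this is the lookup order documented in the hosts_access(5) man page; tcpdmatch is a test utility shipped with newer tcp_wrappers releases, so your distribution may or may not have it, and the host name below is just a placeholder:

# 1. access is granted if the (daemon, client) pair matches an entry in /etc/hosts.allow
# 2. otherwise access is denied if the pair matches an entry in /etc/hosts.deny
# 3. otherwise access is granted
# You can dry-run your rules without waiting for a real connection:
tcpdmatch in.telnetd some.remote.host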
Now, there are a couple of things to note here. First, if you haven't edited hosts.allow or hosts.deny since you installed Linux, then tcpd assumes that you want to let everyone have access to your machine. The second thing to note is that if tcpd finds a match in hosts.allow, it stops looking. In other words, we can put an entry in hosts.deny and deny access to all services from all machines, and then list ``friendly'' machines in the hosts.allow file.
Let's take a look at the man page. You'll find the info you need by typing man 5 hosts_access (don't forget the 5 and the underscore).
daemon_list : client_list

daemon_list is a list of one or more daemon process names or wildcards.
client_list is a list of one or more host names, host addresses, patterns or wildcards that will be matched against the remote host name or address.
List elements should be separated by blanks and/or commas.
Now, if you go take a look at the man page, you'll notice that I didn't show you everything that was in there. The reason for that is because the extra option (the shell_command) can be used to do some neat stuff, but *most Linux distributions have not enabled the use of this option in their tcpd binaries*. We'll save how to do this for an article on tcpd itself.
If you absolutely have to have this option, get the source from here and recompile.
Back to business. What the above section from the hosts_access man page was trying to say is that the format of hosts.[allow|deny] is made up of a list of services and a list of host name patterns, separated by a ``:''
You'll find the name of the services you can use by looking in your /etc/inetd.conf...they'll be the ones with /usr/sbin/tcpd set as the server path.
The rules for determining host patterns are pretty simple, too:
And finally, there are some wildcards you can use:
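The hosts_access(5) man page documents wildcards such as ALL, LOCAL, KNOWN, UNKNOWN and PARANOID; for example, in hosts.allow you might write something like:

ALL: LOCAL              # any host whose name contains no dot (i.e. your local domain)
in.fingerd: KNOWN       # only hosts whose name and address both resolve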
Ok. Enough technical stuff. Let's get to some examples.
Let's pretend we have a home LAN, and a computer for each member of the family.
Our home network looks like this:
linux.home.net   192.168.1.1
dad.home.net     192.168.1.2
mom.home.net     192.168.1.3
sis.home.net     192.168.1.4
bro.home.net     192.168.1.5
Now, since no one in the family is likely to try and ``hack root,'' we can assume they're all friendly. But....we're not so sure about the rest of the people on the Internet. Here's how we go about setting things up so people on home.net have full access to our machine, but no one else does.
In /etc/hosts.allow:
# /etc/hosts.allow for linux.home.net
ALL: .home.net
And in /etc/hosts.deny
# /etc/hosts.deny for linux.home.net
ALL: ALL
Since tcpd looks at hosts.allow first, we can safely deny access to all services for everybody. If tcpd can't match the machine sending the request to ``*.home.net'', the connection gets refused.
Now, let's pretend that Mom has been reading up on how Unix stuff works, and she's started doing some unfriendly stuff on our machine. In order to deny her access to our machine, we simply change the line in hosts.allow to:
ALL: .home.net except mom.home.net
Now, let's pretend a friend from....uh....friend.com wants to get something off our ftp server. No problem, just edit hosts.allow again:
# /etc/hosts.allow for linux.home.net
ALL: .home.net except mom.home.net
wu.ftpd: .friend.com
Things are looking good. The only problem is that the name server for home.net is sometimes down, and the only way we can identify someone as being on home.net is through their IP address. Not a problem:
# /etc/hosts.allow for linux.home.net
ALL: .home.net except mom.home.net
ALL: 192.168.1. except 192.168.1.3
ALL: .friend.com
And so on....
I have found that it's easier to deny everybody access and list your friends in hosts.allow than it is to allow everybody access and deny only the people who you know are RBGs. If you are running a private machine, this won't really be a problem, and you can rest easy.
However, if you're trying to run a public service (like an ftp archive of Tetris games for different OSes) and you can't afford to be this paranoid, then you shouldn't put anything in hosts.allow, and just put all of the people you don't want touching your machine in hosts.deny.
You might also want to take a look at the next section.
Like I said earlier, a lot of the software that comes standard on CD-ROM distributions has security holes which could let local or even remote users execute commands as root on your system. Keep an eye on Linux ALERTS to find out about the problems we know about and how to fix them.
Check to make sure that the services you have running on your machine are what you really want to offer. For example, most of us don't have a need to run in.nntpd, yet it's got an entry in /etc/inetd.conf. Do you really want everyone on the Internet to have access to in.fingerd? Do you really need to let everyone on the Internet have access to your ftp server?
Find what you don't need (or don't want to offer to any passing stranger who might happen across your machine) and either shut it down or deny outside access to it.
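For example, to shut off the news server you would comment out its line in /etc/inetd.conf and poke inetd; the exact line varies a bit between distributions, so treat this as a sketch:

# before: nntp  stream  tcp  nowait  root  /usr/sbin/tcpd  in.nntpd
# after:  #nntp stream  tcp  nowait  root  /usr/sbin/tcpd  in.nntpd
killall -HUP inetd      # make inetd re-read its configuration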
Yeah, yeah, yeah. Everyone's heard the speech about passwords, but they are pretty important. Especially if you're not restricting access to your machine. Remember, if they can get to your machine, they can get on your machine. And if they can get on your machine, they can get root access.
In case you haven't heard the speech, here's the condensed version:
Also, get and install shadow passwords. You might have to recompile a few services, but it's worth the extra protection.
Finally, it is important to note that only the first 8 characters of the password get used by Linux's login. In other words, if you have a password that looks like abcdefghijklmnopqrstuvwxyz, you will only need to enter abcdefgh in order to gain access to the account. This holds true whether you are using shadowed passwords or not.
[ Thanks to Olav Wölfelschneider for pointing that out. ]
Many of the security holes that exist are because the files are "setuid" root. That means that when a non-root user runs the file, it executes with root's privileges. Remove this permission from any files that don't need it, like mount. It really isn't that much of a hassle to keep one of your virtual consoles logged in as root, and flip over to it when you need to get something done.
Also, if you have stuff sitting somewhere that you don't want anyone else to see, don't give them world rwx permission on the dir.
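Two quick sanity checks along those lines (a sketch only, and ~/private is just a placeholder; think before you chmod anything):

find / -type f -perm -4000 -print    # list every setuid file on the system
chmod u-s /bin/mount                 # drop the setuid bit from a program you've decided doesn't need it
chmod -R o-rwx ~/private             # and keep your own sensitive directory away from other users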
At least once a day, you need to go check the syslog and see what's been happening. You can find it at /var/adm/syslog, and I'd also recommend taking a peek at /var/adm/messages. You'll want to look for multiple connections coming from places you don't know in a short period of time. If they look suspicious, then don't hesitate to slap an entry for the domain into /etc/hosts.deny.
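Something as simple as this (assuming your syslog writes to /var/adm/messages, as above) will pull out the kinds of lines tcpd and login tend to complain on:

egrep -i 'refused|failed' /var/adm/messages | tail -20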
This is just common sense. It's not a wise idea to give out your root password to someone you just met on IRC 5 minutes ago who claims they can get Apache up and running on your system if you just tell them the root password.
Set up a guest account with limited read, write, execute abilities and let them use that.
It's also not wise to let people just log in and fiddle around on your machine. Despite common belief, it is possible to create Unix ``viruses,'' and all you really need is the knowledge, the will, and an opportunity. For more information, see the paper on The Plausibility of Unix Virus Attacks
To be completely honest with you, the only way to be 100% sure your machine can't be compromised is to physically deny access to it. That means get rid of the modem and ethernet card, fill up any hole in the computer's case with cement, and buy a big, mean pit bull to guard it while you are asleep.
Well, maybe that's going a bit far. But the point is, if they can't get to your machine, they can't do anything to it. If you think your machine has been compromised, disconnect it from the network, look through the syslog, try to find out how it was compromised, fix the problem, set all new passwords for your accounts, and then reconnect it.
We might not be able to make the machine 100% secure, but we can make it hard for the Bad Guys to do their thing.
Michael Elkins is a programmer who at one time was involved in the development of the venerable mail-client, Elm. He had some ideas which he would have liked to include in Elm but for whatever reasons the other Elm developers weren't receptive. So he struck out on his own, creating a text-mode mailer which incorporates features from a variety of other programs. These include other mailers such as Elm and Pine, as well as John Davis's Slrn newsreader. As an indication of the program's hybrid nature he has named it Mutt. Although the mailer began as an amalgamation of features from other programs, it has begun to assume an identity of its own.
Mutt has been in beta-testing for several months now and new versions have been released regularly. Lately I've noticed that binary packages have been appearing in the Sunsite incoming directory, which I take as a sign that the program is now deemed ``suitable for a general audience.'' I have found that it compiles cleanly and works dependably.
The composition of messages has always been a thorn in the side of developers of mail clients. After all, a usable mailer is the goal, not a text editor. The typical approach has been to include a simple message composition editor (such as Pico in Pine) and allow the option of starting an external editor of the user's choice. This has certain drawbacks. If in the middle of a message you need an editing function not included by the internal editor, it can be distracting and awkward to switch boats in midstream, so to speak.
This minor dilemma is neatly sidestepped by Mutt; there is no internal editor included. All message composition is done with a familiar editor, preferably a text-mode one so that Mutt can be run at the console as well as under X-windows. As an example, I've set Mutt up to use Vile with a message-specific rc-file (which sets word-wrap, etc.).
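The relevant knob is Mutt's editor variable; a one-line sketch of a ~/.muttrc entry (the editor named here is just whatever you prefer):

set editor="vile"       # or vi, emacs -nw, pico ... any text-mode editor will do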
Mutt can be compiled with a feature unusual in text-mode mail clients: it can fetch mail from a POP server, a duty which is more commonly assigned to an external agent such as Popclient. Compile-time support is also available for PGP-encrypted messages, though theoretically this is only available for US citizens.
A few of Mutt's other features include:
Mutt can be run from the command line, if you just want to mail a quick message without having to load your mail-spool file. Incidentally, Mutt uses the mailx (single-file) message format, so the transition from Pine or Elm is painless.
If you've ever used the Jed editor or Slrn the appearance of Mutt will be familiar. Like these programs Mutt is easy on the eyes, and the amount of coloring used is easily controlled. The documentation supplied with Mutt is very complete, but this isn't one of those programs which takes long to learn.
Binary versions of Mutt are available from the Sunsite archive site, currently in pub/Linux/Incoming. I recommend obtaining the source from the Mutt home site, where the latest versions will first appear. Compiling it yourself allows the program to be tailored to your needs; there are several compile-time options.
The non-export version, which contains PGP/MIME support, is
export-controlled; U.S. citizens can read the file README.US-only and follow
the directions to access the files. The non-export version has been
exported anyway (against the author's wishes), and can be obtained from the
following sites:
Why not give it a try? The source file is small, and compilation and installation just takes a few minutes. I think you'll like it.
Window-managers seem to be unique to unix-derived operating systems. Rather than assuming all windowing/GUI tasks, the X-server confines itself to the basic grunt-work of facilitating communications between the graphics hardware and the kernel. This is typical unix behavior, in which complex tasks are broken up into sub-tasks performed by separate programs. This is beneficial to the end-user. If something goes wrong in such a system it is easier to place blame and isolate the problem; flexibility and configurability are also much greater than in systems in which the graphic interface duties are intertwined inextricably with basic kernel functions.
The end result of this is that if you start the X-server ``bare'' (without a window manager) you will see borderless windows on a gray and black stippled background. Few people want this appearance, so over the years a wide variety of window-managing software has been developed. Some are proprietary, but in the free software world there are several active projects, a few of which I'll discuss in this article.
The F(?) Virtual Window Manager is, for several good reasons, the most commonly used Linux window manager. It was originally an offshoot of an early manager called Twm, but has evolved considerably in recent years.
Rob Nation, who was also partially responsible for top and rxvt, was the maintainer of the 1.xx versions of Fvwm. This series reached a developmental plateau a few years ago and a new group of developers adopted the program and initiated the 2.xx series. The 1.xx versions are stable and reliable and are still being used by many people, though they aren't actively maintained.
I won't go into the basic features of Fvwm, as this topic has been well-covered (by John Fisk and others) in past issues of the Gazette. Since those articles appeared there have been many new features and modules added to Fvwm, a few of which I'll describe.
By the way, don't be put off by the beta status of the 2.xx versions; since about version 2.0.37 the program has been relatively easy to compile and free of any but very minor bugs. Version 2 is asymptotically approaching a major release which will be version 2.1.
I can't help but think that the developers working on Fvwm2 are keeping an eye on the upstart Afterstep window-manager, which is based on Fvwm2 code. The newest Fvwm2 release (as of Jan 24,1997) is 2.0.45; patches have been incorporated which give Fvwm2 some of the nicer decorative features of Afterstep. These include tiled pixmaps for window-borders and title bars, as well as gradient-shading of the title bar from one color to another. Another addition is the ability to use mini-icons for title bar buttons. If you're not interested in such decorative elaborations they can be easily disabled by editing the fvwm.tmpl file before compilation. The new release is worth obtaining even if you don't care about the new visible features, as many bugs have been fixed. The man-page has also been expanded and updated to cover these changes.
It's now possible to write Fvwm modules in either Perl or Python. Several examples of each are included in the distribution, which is available from this Hawaiian site.
If you are fond of the appearance of the NeXTstep operating system, you'll probably like Afterstep. This is an offshoot of Fvwm2 development which has attracted much attention recently in the Linux community, to the point where it is being included (despite its beta status) in some newer distributions.
Afterstep pioneered the use of pixmaps and mini-icons in borders and title-bars, as mentioned in the Fvwm2 section above. But the major difference is the Wharf module, a very configurable tool bar which uses larger-than-normal icons (64x64). The supplied icons are very stylish, and can be configured to have gradient-shaded backgrounds. As with the Fvwm2 Buttons module, the Wharf (NeXT calls it a ``dock'') can ``swallow'' applications and other modules. Lately modules designed to be swallowed by the Wharf have become available from the Afterstep web-sites. Among these are a PPP dialer, a CD-player, and a mixer. Check out the Afterstep Home Page for the latest news and releases.
Rather than include a screen shot of Wm2 in action, here are links to the
Wm2 web page which has links to both a screen shot and the source itself:
Wm2 is still relatively new; I have noticed that it stresses the X-server more than would be expected of such a small application, possibly because of its use of the shaped-window X-extension. Screen refreshes seem to be slow. Nonetheless in this third version it seems to be stable, and it provides a refreshing contrast to the complexity of the other window-managers. The only configuration involved comes before compilation of the source. The various colors and preferred terminal emulator can be set in the Config.h file; after installation the only way to change these settings is to re-edit and recompile.
If you'd like more information on these as well as several other
window-managers, visit
this excellent
site, which has many links and screen shots.
My first shell, though I didn't know it by that name, was command.com in DOS. It couldn't do much more than simply execute commands, but it served my needs at the time. Later on I discovered the commercial DOS command.com replacement 4DOS, by JP Software. This came as something of a revelation to this novice computer user. Suddenly I could do file-name completion, use aliases, and change to a directory on a different drive with simple keystrokes. Wow, I thought, how did those programmers at JP Software think of so many clever command-line functions and options!
I later learned that 4DOS (and its OS/2 sibling, 4OS/2) were influenced and inspired by the various shells used on Unix systems. When I first began using Linux I was able to learn the rudiments of the Bash shell fairly quickly because of past experience with the JP Software products.
New users of Linux are encouraged (in part by distribution defaults) to use the GNU Bash shell. Bash has been polished over the years to the point that any remaining bugs probably affect only the skilled users who make use of its more arcane functions. Bash, and its reduced-function alias sh, work well as agents for executing shell scripts. As a command shell in a console or an xterm Bash provides many labor-saving shortcuts and functions, most of which beginning users don't use. Reading the voluminous Bash documentation I began to realize that using Bash the way most users do, i.e. as the default login and command shell, touches only a small fraction of its capabilities. O'Reilly has published a three-hundred-page book detailing Bash shell programming and usage!
Recently Chet Ramey, the maintainer of Bash, released version 2.00 to the FTP sites. After reading the list of changes and bug-fixes I concluded that advanced users will be more appreciative of the release than will common end-users, like myself. It's an odd feeling to learn of a feature by finding out that bugs have been fixed in it! The documentation for Bash is extensive; the man pages are available now in HTML format (in a separate file called bash-doc-2.0.tar.gz). Bash can be obtained both from Sunsite and its mirrors (in /pub/gnu) and from the main GNU site.
I remember the first time I navigated my way through the Slackware installation menus; being offered the option to install tcsh and zsh made me realize how little I knew. What were these alternative shells? Evidently some users preferred them to bash, but why?
All of the shells discussed in this article are extensively documented, but that very feature, as helpful as it is to advanced users, can make it difficult to get a rough idea of why one shell might be preferable to another. Luckily it isn't hard to install another shell just to try it out. Edit the file /etc/shells (logged in as root) and add a line with the path to the new shell. Then execute the command chsh; a default choice will be offered to you. Ignore it and type in the name (with path) of the new shell. You'll have to log out and log back in to activate the new shell.
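Here's roughly what that looks like in practice -- a minimal sketch, assuming the new shell is Zsh and that your distribution installed it as /bin/zsh (check the actual location first with something like "which zsh"):
# as root, tell the system that zsh is a legitimate login shell:
echo /bin/zsh >> /etc/shells
# then, as your normal user, change your login shell:
chsh
When chsh prompts you, ignore the default it offers, type the full path (/bin/zsh in this sketch), then log out and back in.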
In issue 12 of the Gazette Jesper Pederson wrote a good introductory article about Tcsh; this article also shows how Jesper's program Dotfile Generator can be used to help write Tcsh resource files without spending many hours reading the manual. Since that article appeared a new version of the Dotfile Generator has been released which includes a module to generate Bash resource files. I highly recommend this program, which is available from this site. The Dotfile Generator won't overwrite your existing files; it writes to another filename (such as .bashrc-dotfile). This file can then be edited; I usually transplant sections to my original files to try things out. The Dotfile Generator allows you to try various features of your shell without having to learn the precise rc-file syntax first.
A little resource-file editing will be necessary to change over to Tcsh. The aliases you have defined in ~/.bashrc need only minor changes to work in ~/.cshrc (csh-style aliases don't use an equals sign), and the environment variables are another matter entirely. Bash (and other ``Bourne-compatible'' shells, such as Zsh) uses a different format for these than Tcsh. As an example, export INFODIR=/mt/info in the ~/.bash_profile would have to be changed to setenv INFODIR /mt/info in ~/.tcshrc. I recommend going to the trouble of transferring aliases and environment variables if you want to give Tcsh a try. If you don't, you'll be continually distracted by commands which don't work, and you will tend to blame the shell.
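To make the difference concrete, here is the same setting expressed in each shell's startup file (INFODIR is the example variable from above; the ll alias is made up for illustration):
# ~/.bash_profile or ~/.bashrc (Bourne-style syntax)
export INFODIR=/mt/info
alias ll='ls -l'
# ~/.tcshrc (csh-style syntax)
setenv INFODIR /mt/info
alias ll 'ls -l'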
The one feature which really stands out (if you're accustomed to Bash) is the spelling-correction. When either a filename or command is misspelled the shell pops up a suggested correction. If you tend to type commands quickly and press ``enter'' without rereading what you've typed you'll love this. Sometimes the shell guesses wrong, though; pressing n rather than y will force it to execute what you actually typed.
After using Tcsh for a while, you may find yourself thinking, ``I really don't want to switch completely to Tcsh; if only Bash had that spelling correction built in!'' Zsh might be what you want.
Zsh is a Bourne-compatible shell like Bash but with several csh-like features added. It also resembles the proprietary Korn shell as well as Pdksh, a free Korn-shell clone. It's not at all difficult to adapt Bash configuration files so that Zsh can use them as the syntax is nearly identical. ~/.zshenv is analogous to ~/.bash_profile, while ~/.zshrc corresponds to ~/.bashrc.
The first thing you notice when using Zsh for the first time is the prompt,
which by default looks like this:
<machine-name># /usr/local/src
As you can see, the current directory is on the right hand side of the screen, giving more room for a command before the line breaks. When a typed command reaches the path on the right the path disappears to make room.
The spelling correction behavior seems to be identical to that of Tcsh. As with Bash and Tcsh, completion of paths and filenames is bound to the tab key. Zsh has an elaborate implementation of programmable completion, in which file-type specific behavior for completions can be set in the resource-files.
One helpful aspect of Zsh's completion behavior deserves notice. Often there will be a filename and a subdirectory with the same prefix, say if a file called sample-2.01.tar.gz is unarchived into the directory in which it resides, creating in the process a new subdirectory called sample-2.01. Try the command cd sam<TAB> with some shells and you will be asked if you want to change directory to sample-2.01.tar.gz or to sample-2.01. Zsh is smart enough to realize that directories don't normally have a tar.gz suffix, and changes to the directory without comment or question.
The Zsh distribution contains extensive help-files which are in the Info format, allowing them to be browsed from within Emacs or with a stand-alone Info reader. After reading these documents I came away with the impression that Zsh probably rivals Bash in the number of arcane features and programming abilities. If you would like to see examples of the complexity possible in Zsh configuration, take a look at The Next Level, a package of Linux configuration files with explanation which has become a part of recent Red Hat distributions. The Next Level's author, Greg J. Badros, has included an elaborate set of Zsh resource files. I found them to be quite informative as an example of what's possible with this shell.
Zsh seems to be under active development; version 3.00 was released last year, and there have been minor releases since then. There is a Zsh home-page here which can serve as a good introduction.
These shells certainly aren't hard to find; most distributions I've seen include preconfigured packages for all three of them. One caveat: if you decide to settle on Tcsh or Zsh as your login shell don't remove Bash, or its symlink /bin/sh. Many shell scripts rely on /bin/sh in order to run properly. Some packages, such as the Andrew User Interface System, like to have csh available, so if you have the disk space, Tcsh, along with its symlink /bin/csh, may as well be retained even if it's not your login shell.
The choice of shells reminds me of the eternal debate between vi-users and emacs-users. A decision depends more on working-style and personality than logic; try them all and see which one fits!
If, like me, you come from the world of DOS and WordStar, you'll feel right at home with the joe editor, which uses WordStar keystrokes.
But as soon as you exit joe, you're back in the land of bash, where command-lines are edited "emacs-style." Soon, your fingers are confused. Before you know it, you're pressing <control>-d to move the cursor, only to find your command-line disappearing.
But why use one set of keys to edit text and another to edit commands? The beauty of Linux is that you can customize it to your heart's content. Here's how to make bash act like our old friend, joe:
The bash command-line is handled by the GNU readline library. So it's not surprising that bash uses the same keystrokes as GNU emacs. Luckily, you can change these key-bindings simply by setting new values in the file .inputrc.
The first step is to go to your $HOME directory and open or create a text file named .inputrc. Then add the following lines, which tell bash to use the basic joe keystrokes:
"\C-d": forward-char
"\C-s": backward-char
"\C-f": forward-word
"\C-a": backward-word
"\C-g": delete-char
"\C-t": kill-word
"\C-y": kill-whole-line
You can also add the following lines, which fix the behavior of the <home>, <end>, <delete>, and <backspace> keys:
"\e[1~": beginning-of-line
"\e[3~": delete-char
"\e[4~": end-of-line
DEL: backward-delete-char
Finally, you can use .inputrc to modify any one of the dozens of keystrokes and variables that control bash. (Among other things, you can get bash to stop beeping at you!) Check the READLINE section of the bash man page for details.
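For example, here are two settings I find handy; these are standard readline variables rather than part of the joe emulation above, so treat them as an optional extra:
# stop bash from beeping at you
set bell-style none
# when a completion is ambiguous, list the choices immediately
set show-all-if-ambiguous on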
An experienced Linuxer will see that the changes we made to .inputrc have created a problem. We set <control>-s to its WordStar meaning. But the Linux terminal uses <control>-s to send the "stop" signal. Pressing <control>-s freezes the terminal until you type <control>-q, the "start" signal.
The easiest way to fix this is to tell the terminal to use a different "stop" key. To reassign "stop" to <control>-p, type the following line (and put it in your .bashrc to make it permanent):
stty stop '^p'
You can prove this works by pressing <control>-p then <control>-q. It's also a good idea to check your terminal configuration -- especially if you change other keys with .inputrc. Type:
stty -a
This will display your terminal settings. If you reassigned the "stop" key as shown above, you should see "stop = ^P".
Now you're home free. All you have to do is exit and log in again. And you can edit commands "joe-style."
The first step I took was to try and find the best editor around. I started asking around to see who used what and to try to find out what the important qualities of an editor were. Don't make this mistake. Editors are one of the most religious beliefs a programmer holds. Every programmer is convinced that theirs is the best. My office-mate uses PICO, some of my co-workers use EMACS, VI, SlickEdit, or any one of an unending list. Every person I talked to assured me that their selection was by far the best. When I inquired about the differences, they were mostly insignificant. That was when I learned the horrible truth: most editors are essentially equivalent. No matter how hard people insist, most editors have more features than any user will ever use (except PICO). In the Linux community, the choice basically falls into one of two categories: VI clones or Emacs. My recommendation is that everyone learn one of these well. It doesn't really matter which one; just pick one, stick with it and use it. (Religiously if you must.)
I have gone to great lengths to learn VIM, a VI clone, and certainly, if not THE best, one of the top contenders. Many features are shared among VI clones -- basically the VI subset -- while the additional features vary from clone to clone. VIM comes with most, if not all, Linux distributions. The home page for VIM is http://www.math.fu-berlin.de/~guckes/vim/. VIM is in active development and is getting better by the day. Syntax highlighting should be out, if not by the time you read this, then soon thereafter.
I will assume that most people know the basics of VI and want to change it from a simple tool to a powerful one. I will share some of the handy tips and tricks I use.
Programming, Tabs, and Tags.
ctags is a marvelous utility for C and C++ programmers. ctags comes with the VIM distribution. What this utility does is create a list of all the subroutine names in the files you specify and allow you to jump to a given subroutine with just one keypress while in your editor. The way you run ctags is simple:
ctags *.c or ctags *.cpp
Then crank up your editor, move to wherever it is you call any subroutine from, and press [CTRL]-]. This will take you immediately to wherever the routine is, even if it is in a different file.
File Switching
I frequently work with several files concurrently and I
need to switch between these files continually. The
command to switch to another file in VIM is ":e fn".
The shortcut to switch to the last file edited is ":e #".
This is fine for normal use, but I switch files often, and
4 keystrokes seems like a bit much. This is where VIM's
key mapping comes in. VIM like most editors has an rc
file. It is called .vimrc, what a shock. 8)
In this file I have the following command:
" Save and switch to other buffer.
map N :w[CTRL]-M:e #[CTRL]-M
This command lets me switch buffers with a single keypress. The other nice feature in VIM for switching between files is tab completion for file names. The way tab completion works is to take whatever letters you have typed in so far for the file name and find all of the files that could possibly match. Hitting tab will scroll through the list of files until you find the one you want. If no beginning letters are specified for the file name, it will scroll through them all.
Two other mappings from my .vimrc:
map C 0i/*[CTRL-ESC]A*/[CTRL-ESC]j
map T 0xx$xxj
If you examine the first line, you will see that it does the following: 0 moves to the start of the line, i/* enters insert mode and types /*, the first escape leaves insert mode, A*/ appends */ to the end of the line, the second escape leaves insert mode again, and j moves down to the next line. In other words, C comments out the current line of C code and moves on; T reverses the job by deleting the first two and last two characters of the line (the /* and */) and moving down.
On the surface, changing from a MS DOS/MS Windows user to a Linux user is not such a major change. After all, to change directories in Linux, you use the ``cd'' command and that is the same as DOS. Linux provides X windows as a GUI and there are a number of similarities with MS Windows.
So maybe all that is necessary is to learn a few different commands and you are off and running. Well, right and wrong.
You might find yourself in the situation I was in when I first decided to install Linux. I had never had any experience with Unix or Linux or much of anything else outside of the realm of Microsoft.
The Intel/Microsoft consortium had given me a false sense of command over my PC. I had no idea of the ``behind the scenes'' activity that went on when DOS booted and Windows came up with its attractive colors and cute little icons. I began to learn a bit when I tried to set up some software that wasn't an MS application. At work I learned that it was necessary to occasionally contact an equipment manufacturer to get the appropriate drivers for MS Windows. But all-in-all I was successful in almost every attempt. Little did I know...
As you may have deduced, I work with computers, and, less obviously (but it'll get even less obvious as we go along, I'm sure!), I have some schooling in the computer field. So it won't be too surprising to find that I was beginning to feel somewhat stifled by the MS environment. I knew there were more colors, more sounds, more ways of doing things than I saw on the shelf (at a rather high $$ amount, I might add) in my local computer store and in the pages of my favorite computer magazines.
One day, a friend mentioned Linux to me. She was quite an Internet fan. She spent hours in IRC channels and had heard about some of the Unix applications from the I-net dinosaurs (Unix users). So one day, while browsing through the computer books shelves at my favorite bookseller's, I noticed a copy of ``Linux Unleashed'' published by Sam's Publishing. I bought it thinking I'd just see what all the fuss was about.
I couldn't wait. When I opened the pages and began reading I was intrigued. The complexity and yet the continuous assurances that it *could* be done had me all fired up to try out this ``experimental'' OS.
Lucky me! A CD was glued inside the back cover of the book. My problem was, all I owned was a 386SX with 2 MB of memory and a 65 MB hard-drive. Not enough! So I bought a new PC.
I ordered a Micron with 16 MB of memory and a 1.6 GB hard drive and a CD-ROM drive. A heck of a lot of machine to my way of thinking! When it arrived, it came pre-loaded with MS Win95 (doesn't everything?)
I decided to use FIPS to do a ``non-destructive'' repartition of my new hard-drive. Well, it worked, but the problem was that FIPS took every bit of empty space on the drive; I couldn't write a single file in Win95, and I wasn't ready to completely forsake my old OS. So, having already made a backup (yeah, right!), I did a complete reformat of my C drive. I split the drive into 4 logical partitions and saved one of them for Linux exclusively.
Even for someone with a fair amount of PC experience, there is room for mistakes, doing what I was doing, and I made 'em. One thing I didn't do (I didn't know about this at the time) was to also create a small partition to use as Linux swap space. I did this a couple of months later when I re-installed to upgrade to Slackware 3.1.
So here is a warning...
IF YOU JUST GOT THAT PC FOR CHRISTMAS AND YOU'VE NOT EVER SET ONE UP BEFORE AND YOU ARE JUST LEARNING MS WINDOWS -- DO NOT INSTALL LINUX! DON'T EVEN THINK ABOUT IT!
Take your time and learn about that machine and the wondrous things it is capable of doing for you. If later (and probably MUCH later) you find it is boring doing the Microsoft Word cut-and-paste shuffle, and Doom starts putting you to sleep, and you've invested in a class or two in Computer Science at your local community college, Linux might be just the thing.
While I was typing away on this article, the phone rang. It was my friend Ben and he had just hooked up his brand new P166 last evening. First thing he said was, ``I got this new computer last night and I need help before I throw it out the window.'' I got up and drove over to his place. (Coldest day of the year so far! Brrrrr!) I looked at his machine. Pre-loaded with Win95 (aren't they all?) He didn't know what to do once the system booted and displayed the new GUI. I showed him a couple of things and then told him not to install Linux. He's definitely not ready!
None of us who are migrating from MS dominance are ready. It's that simple. But don't let that discourage you. If you know a bit more about PCs than the occasional, at-work, or gaming user, if you are as fascinated by computing concepts and advances (Java, SMP, graphics rendering, etc.) as I am, if you LIKE to program, or if you want to set up as an ISP, then Linux is for you. And be prepared: Linux is a whole different animal!
Learning takes time and in time you will learn. I started with Linux in March of 1996. In the last ten months, I have installed Linux (Slackware at home and Debian at work) about eight times. I have learned something every day. I will say that while Linux is priced right, I have spent more on books in the last ten months than I had in the last 5 years.
Here are some of the things I have accomplished...
I have setup...
Let me say that ``setup'' is not truly the best word to use. In many instances the setups I mentioned above required only that I tweak a configuration file or adjust a Makefile. In some instances the program refused to work and I had to read and study and yes, I had to ask a couple of questions from the newsgroups too.
Out of the box, my printer didn't function so I had to read the Printing HOWTO. Of course, it might have worked but how would I know since I didn't have any idea about how lpr was used to queue up a print job. Then I needed to get a SLIP or PPP connection functioning so I could ask those questions on the newsgroups. I had been taught some Ada when in school and when I saw GNAT was available, I wanted to have it so I might refresh my skills there. I had to wait for InfoMagic's September ``Linux Developer's Resource'' before I was able to get a GNAT installed that would compile anything.
Just last week I got Pov-Ray up and running and I have been enthusiastic about ray-traced images since I first saw a ringed planet scene created with it. But I had to wait...and tinker...and wait...and read...and make mistakes...and start all over again. There are times when, like my friend Ben, I feel like throwing the PC out the window and I have learned to move on to something else. And whenever I move on, I learn more.
So I am sold! I have not as yet taken the MS partitions off of my machine but 95% of the time I am working within the Linux environment. Although sometimes my frustrations run high, I can honestly say that I have not had as much fun with a computer since I first started my Pascal classes back a few years ago.
So here I am, somewhere between a novice and a guru, lost in the Linux OS Wonderland. I'm having a great time...why don't you join me?
by Jim Dennis, Proprietor, Starshine Technical Services
Converted to HTML by Heather Stern
procmail is the mail processing utility language written by Stephen van den Berg of Germany. This article provides a bit of background for the intermediate Unix user on how to use procmail.
As a "little" language (to use the academic term) procmail lacks many of the features and constructs of traditional, general-purpose languages. It has no "while" or "for" loops. However it "knows" a lot about Unix mail delivery conventions and file/directory permissions -- and in particular about file locking.
Although it is possible to write a custom mail filtering script in any programming language using the facilities installed on most Unix systems -- we'll show that procmail is the tool of choice among sysadmins and advanced Unix users.
Unix mail systems consist of MTA's (mail transport agents like sendmail, smail, qmail, MMDF, etc.), MDA's (delivery agents like sendmail, deliver, and procmail), and MUA's (user agents like elm, pine, /bin/mail, mh, Eudora, and Pegasus).
On most Unix systems on the Internet sendmail is used as an integrated transport and delivery agent. sendmail and compatible MTA's have the ability to dispatch mail *through* a custom filter or program through either of two mechanisms: aliases and .forwards.
The aliases mechanism uses a single file (usually /etc/aliases or /usr/lib/aliases) to redirect mail. This file is owned and maintained by the system administrator. Therefore you (as a user) can't modify it.
The ".forward" mechanism is decentralized. Each user on a system can create a file in their home directory named .forward and consisting of an address, a filename, or a program (filter). Usually the file *must* be owned by the user or root and *must not* be "writeable" by other users (good versions of sendmail check these factors for security reasons).
It's also possible, with some versions of sendmail, for you to specify multiple addresses, programs, or files, separated with commas. However we'll skip the details of that.
You could forward your mail through any arbitrary program with a .forward that consisted of a line like:
"|$HOME/bin/your.program -and some arguments"
Note the quotes and the "pipe" character. They are required.
"Your.program" could be a Bourne shell script, an awk or perl script, a compiled C program or any other sort of filter you wanted to write.
However "your.program" would have to be written to handle a plethora of details about how sendmail would pass the messages (headers and body) to it, how you would return values to sendmail, how you'd handle file locking (in case mail came in while "your.program" was still processing one, etc).
That's what procmail gives us.
What I've discussed so far is the general information that applies to all sendmail compatible MTA/MDA's.
So, to ensure that mail is passed to procmail for processing the first step is to create the .forward file. (This is safe to do before you do any configuration of procmail itself -- assuming that the package's binaries are installed). Here's the canonical example, pasted from the procmail man pages:
"|IFS=' '&&exec /usr/local/bin/procmail -f-||exit
75 #YOUR_USERNAME"
This seems awfully complicated compared to my earlier example. That's because my example was flawed for simplicity's sake.
What this mess means to sendmail (paraphrasing into English) is: set the shell's field separator to a single space, then replace the shell with procmail, telling it (the -f- option) to keep the leading "From " line up to date; if procmail can't be started for some reason, exit with code 75, which sendmail treats as a temporary failure so that it will queue the message and try again later. The trailing #YOUR_USERNAME, as the procmail man page puts it, "is not actually a parameter that is required by procmail, in fact, it will be discarded by sh before procmail ever sees it; it is however a necessary kludge against overoptimising sendmail programs."
This complicated line can be just pasted into most .forward files, minimally edited and forgotten.
If you did this and nothing else your mail would basically be unaffected. procmail would just look for its default recipe file (.procmailrc) and finding none -- it would perform its default action on each message. In other words it would append new messages into your normal spool file.
If your ISP uses procmail as its local delivery agent then you can skip the whole part of about using the .forward file -- or you can use it anyway.
In either event the next step to automating your mail handling is to create a .procmailrc file in your home directory. You could actually call this file anything you wanted -- but then you'd have to slip the name explicitly into the .forward file (right before the "||" operator). Almost everyone just uses the default.
Now we can get to a specific example. So far all we've talked about is how everything gets routed to procmail -- which mostly involves sendmail and the Bourne shell's syntax. Almost all sendmail installations are configured to use /bin/sh (the Bourne shell) to interpret alias and .forward "pipes."
So, here's a very simple .procmailrc file:
:0c:
$HOME/mail.backup
This just appends an extra copy of all incoming mail to a file named "mail.backup" in your home directory.
Note that a bunch of environment variables are preset for you. It's been suggested that you should explicitly set SHELL=/bin/sh (or the closest derivative to Bourne Shell available on your system). I've never had to worry about that since the shells I use on most systems are already Bourne compatible.
However, csh and other shell users should take note that all of the procmail recipe examples that I've ever seen use Bourne syntax.
The :0 line marks the beginning of a "recipe" (procedure, clause, whatever). :0 can be followed by any of a number of "flags." There is a literally dizzying number of ways to combine these flags. The one flag we're using in this example is 'c' for "copy."
You might ask why the recipe starts with a :0. Historically you used to use :x (where x was a number). This was a hint to procmail that the next x lines were conditions for this recipe. Later, the option was added to precede conditions with a leading asterisk -- so they didn't have to be manually counted. :0 then came to mean something like: "count them yourself."
The second colon on this line marks the end of the flags and the beginning of the name for a lockfile. Since no name is given procmail will pick one automatically.
This bit is a little complicated. Mail might arrive in bursts. If a new message arrives while your script is still busy processing the last message -- you'll have multiple sendmail processes. Each will be dealing with one message. This isn't a problem by itself. However -- if the two processes try to write into one file at the same time the contents are likely to get jumbled in unpredictable ways (the result will not be a properly formatted mail folder).
So we hint to procmail that it will need to check for and create a lockfile. In this particular case we don't care what the name of the lock file would be (since we're not going to have *other* programs writing into the backup file). So we leave the last field (after the colon) blank. procmail will then select its own lockfile name.
If we leave the : off of the recipe header line (omitting the last field entirely) then no lockfile is used.
This is appropriate whenever we intend to only read from the files in the recipe -- or in cases where we intend to only write short, single line entries to a file in no particular order (like log file entries).
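To summarize the three possibilities side by side, here is the backup recipe written each way -- purely for illustration, you would use only one of them (the explicit lockfile name is made up):
# lockfile, name chosen by procmail:
:0c:
$HOME/mail.backup

# lockfile with an explicit name:
:0c:backup.lock
$HOME/mail.backup

# no lockfile at all:
:0c
$HOME/mail.backup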
The way procmail works is:
It receives a single message from sendmail (or some sendmail compatible MTA/MDA). There may be several procmail processes running concurrently, since new messages may be coming in faster than they are being processed.
It opens its recipe file (.procmailrc by default or specified on its command line) and parses each recipe from the first to the last until a message has been "delivered" (or "disposed of" as the case may be).
Any recipe can be a "disposition" or "delivery" of the message. As soon as a message is "delivered" then procmail closes its files, removes its locks and exits.
If procmail reaches the end of its rc file (and thus all of the INCLUDE'd files) without "disposing" of the message -- then the message is appended to your spool file (which looks like a normal delivery to you and all of your "mail user agents" like Eudora, elm, etc).
This explains why procmail is so forgiving if you have *no* .procmailrc. It simply delivers your message to the spool because it has reached the end of all its recipes (there were none).
The 'c' flag causes a recipe to work on a "copy" of the message -- meaning that any actions taken by that recipe are not considered to be "dispositions" of the message.
Without the 'c' flag this recipe would catch all incoming messages, and all your mail would end up in mail.backup. None of it would get into your spool file and none of the other recipes would be parsed.
The next line in this sample recipe is simply a filename. Like sendmail's aliases and .forward files -- procmail recognizes three sorts of disposition to any message. You can append it to a file, forward it to some other mail address, or filter it through a program.
Actually there is one special form of "delivery" or "disposition" that procmail handles. If you provide it with a directory name (rather than a filename) it will add the message to that directory as a separate file. The name of that file will be based on several rather complicated factors that you don't have to worry about unless you use the Rand MH system, or some other relatively obscure and "exotic" mail agent.
A procmail recipe generally consists of three parts -- a start line (:0 with some flags) some conditions (lines starting with a '*' -- asterisk -- character) and one "delivery" line which can be file/directory name or a line starting with a '!' -- bang -- character or a '|' -- pipe character.
Here's another example:
:0
* ^From.*someone.i.dont.like@somewhere.org
/dev/null
This is a simple one consisting of no flags, one condition and a simple file delivery. It simply throws away any mail from "someone I don't like." (/dev/null under Unix is a "bit bucket" -- a bottomless well for tossing unwanted output. DOS has a similar concept, but it's not nearly as handy.)
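The same pattern does something more constructive if you point it at a folder instead of /dev/null -- filing a mailing list into its own mailbox, for instance. This is just a sketch; the Subject: pattern and folder name are made up for the example, and the folder is created relative to $MAILDIR (your home directory by default):
:0:
* ^Subject:.*linux-gazette
IN-gazette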
Here's a more complex one:
:0
* !^FROM_DAEMON
* !^FROM_MAILER
* !^X-Loop: myaddress@myhost.mydomain.org
| $HOME/bin/my.script
This consists of a set of negative conditions (notice that the conditions all start with the '!' character). This means: for any mail that didn't come from a "daemon" (some automated process) and didn't come from a "mailer" (some other automated process) and which doesn't contain any header line of the form: "X-Loop: myadd..." send it through the script in my bin directory.
I can put the script directly in the rc file (which is what most procmail users do most of the time). This script might do anything to the mail. In this case -- whatever it does had better be good, because procmail will consider any such mail to be delivered, and any recipes after this one will only be reached by mail from DAEMONs, MAILERs and any mail with that particular X-Loop: line in the header.
These two particular FROM_ conditions are actually "special." They are preset by procmail and actually refer to a couple of rather complicated regular expressions that are tailored to match the sorts of things that are found in the headers of most mail from daemons and mailers.
The X-Loop: line is a normal procmail condition. In the RFC822 document (which defines what e-mail headers should look like on the Internet) any line started with X- is a "custom" header. This means that any mail program that wants to can add pretty much any X- line it wants.
A common procmail idiom is to add an X-Loop: line to the header of any message that we send out -- and to check for our own X-Loop: line before sending out anything. This is to protect against "mail loops" -- situations where our mail gets forwarded or "bounced" back to us and we endlessly respond to it.
So, here's a detailed example of how to use procmail to automatically respond to mail from a particular person. We start with the recipe header.
:0
... then we add our one condition (that the mail appears to be from the person in question):
* ^FROMharasser@spamhome.com
FROM is a "magic" value for procmail -- it checks from, resent-by, and similar header lines. You could also use ^From: -- which would only match the header line(s) that start with the string "From:"
The ^ (hiccup or, more technically, "caret") is a "regular expression anchor" (a techie phrase that means "it specifies *where* the pattern must be found in order to match"). There is a whole book on regular expressions (from O'Reilly & Associates). "Regexes" permeate many Unix utilities, scripting languages, and other programs. There are slight differences in "regex" syntax for each application. However the man page for 'grep' or 'egrep' is an excellent place to learn more.
In this case the hiccup means that the pattern must occur at the beginning of a line (which is its usual meaning in grep, ed/sed, awk, and other contexts).
... and we add a couple of conditions to avoid looping and to avoid responding to automated systems
* !^FROM_DAEMON
* !^FROM_MAILER
(These are a couple more "magic" values. The man pages show the exact regexes that are assigned to these keywords -- if you're curious or need to tweak a special condition that is similar to one or the other of these).
... and one more to prevent some tricky loop:
* !^X-Loop: myaddress@myhost.mydomain.org
(All of these patterns start with "bangs" (exclamation points) because the condition is that *no* line of the header starts with any of these patterns. The 'bang' in this case (and most other regex contexts) "negates" or "reverses" the meaning of the pattern.)
... now we add a "disposition" -- the autoresponse.
| (formail -rk \
-A "X-Loop: yourname@youraddress.com" \
-A "Precendence: junk"; \
echo "Please don't send me any more mail";\
echo "This is an automated response";\
echo "I'll never see your message";\
echo "So, GO AWAY" ) | $SENDMAIL -t -oi
This is pretty complicated -- but here's how it works: the incoming message is piped into the parenthesized group of commands. formail reads it and generates a reply header (-r), keeping the body as well (-k), and the two -A options append our X-Loop: and Precedence: header lines to that reply. The echo commands then tack our canned text onto the end, and the whole thing is piped into sendmail, where -t tells it to take the recipient addresses from the message headers and -oi keeps a lone dot on a line from ending the message prematurely.
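For reference, here is the whole autoresponder assembled from the pieces above. The only change is that the X-Loop: address being added now matches the one being tested for, which it must for the loop protection to work; substitute your own addresses throughout:
:0
* ^FROMharasser@spamhome.com
* !^FROM_DAEMON
* !^FROM_MAILER
* !^X-Loop: myaddress@myhost.mydomain.org
| (formail -rk \
   -A "X-Loop: myaddress@myhost.mydomain.org" \
   -A "Precedence: junk"; \
   echo "Please don't send me any more mail";\
   echo "This is an automated response";\
   echo "I'll never see your message";\
   echo "So, GO AWAY" ) | $SENDMAIL -t -oi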
Most of the difficulty in understanding procmail has nothing to do with procmail itself. The intricacies of regular expressions (those weird things on the '*' -- conditional lines) and shell quoting and command syntax, and how to format a reply header that will be acceptable to sendmail (the 'formail' and 'sendmail' stuff) are the parts that require so much explanation.
The best info on mailbots that I've found used to be maintained by Nancy McGough (sp??) at the Infinite Ink web pages:
http://www.jazzie.com/ii/
More information about procmail can be found in Era Eriksson's "Mini-FAQ." at http://www.iki.fi/~era/procmail/mini-faq.html
I also have a few procmail and SmartList links off of my own web pages.
by James McDuffie, mcduffie@scsn.net
There are a couple of reasons why I purchased the Pilot. For one thing the Pilot is not very expensive. The Pilot comes in two different versions, called the Pilot 1000 and the Pilot 5000. These are exactly the same except for the amount of memory they have loaded. The Pilot 1000 has 128k of memory while the Pilot 5000 has 512k of memory. What I did was purchase a Pilot 1000 and a 1 MB upgrade chip at the same time. This way I saved more money in the long run than if I had purchased a Pilot 5000 and then later upgraded to 1 MB of memory. The Pilot is considerably cheaper than other PDAs, such as the Newton, which is priced at under $800. The Pilot 1000 can be found for as low as $224 and the Pilot 5000 for as low as $269. The 1 MB upgrade chip can be found for as little as $89. Prices such as this make the Pilot a cost effective solution.
Another issue was how portable the Pilot is. Carrying around a heavy PDA all day is not very comfortable, but the Pilot is very portable. It measures 4.7 x 3.2 x .7 inches, small enough to fit comfortably in your hand. The Pilot only weighs 5.7 ounces, with batteries. Because of this the Pilot can fit comfortably in your shirt pocket or your pants pocket. The Pilot's power supply is two AAA batteries, which can last you up to a month if you use the Pilot moderately. After all a PDA is supposed to help you, not burden you by being bulky and heavy.
The Pilot is very expandable too, as with the 1 MB upgrade chip that can be purchased from various places. I find that 1 MB of memory is more than enough for my needs. The Pilot is also expandable in that you can upload any of numerous shareware or commercial applications for the Pilot. There is even a program that allows you to hook your Pilot up to a modem, dial into your ISP and then check your POP mail! These applications are very small. The average application made for the Pilot runs about 10k. With a 1 MB chip you could theoretically have 100 10k apps on the Pilot. The Pilot features an RS-232 serial connector on the bottom. The connector is used for syncing the Pilot with your desktop computer or for other uses, such as hooking up a modem or a soon-to-be-released wireless modem and pager. The Pilot can grow as your need for it grows.
The software is simple enough to use. You simply supply the program name, the serial port and other information such as a filename. The pilot-xfer program allows you to install programs or data files that programs use into the Pilot. To install a program all you have to do is use the command pilot-xfer /dev/cua?? -i [program name]. After entering this you press the hot-sync button on the Pilot cradle and the Pilot installs the program. The program is then available for immediate use. Or if you wanted to install a text file into the memo pad you would simply enter install-memo /dev/cua?? [file name]. There are plenty of other programs that help you transfer information with other applications such as the date book, the address book and the to do list.
For me, the name of these programs are pretty long and with typing the
serial device name it gets tedious fast. So I set up a couple of aliases
to speed up things. Some of my aliases are:
alias pxi='pilot-xfer /dev/cua2 -i'
alias im='install-memo /dev/cua2'
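With those aliases in place, installing an application or a memo is as short as this (the filenames are made up for illustration):
pxi todolist.prc
im shopping-list.txt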
These are the functions I use the most, because I hardly ever download
applications from my Pilot since I already have them on my hard drive.
The same goes for memos I install. But for the information that I create
in the Pilot I use the sync-memodir program. It puts every memo in
a separate file. But the down side is that it does not put the files in
categories as they are on your Pilot. The up side is that the Windows
software is not required.
The card is useful because if COM1 and COM2 are in use then COM3 and COM4
are not available. A COM port is simply a label that identifies a
specific IRQ and address. COM1 and COM3 share the same IRQ, as do COM2
and COM4. But this card allows you to add another serial port at any
combination of IRQ and address that you desire. I have mine set on IRQ 12
and address 238. To get this to work with Linux all I had to do was tell
Linux to map this specific address and IRQ combination to the device
/dev/cua2. The following command does this:
setserial /dev/cua2 port 0x238 irq 12 autoconfig
It tells Linux where the serial port is available and to what device to
map it. With this working I was able to play around with my Pilot while
using my modem. Also I now have an extra serial port should I need it for
other tasks.
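Since settings made with setserial are lost at reboot, you will probably want to run the command automatically at boot time. A sketch of one way to do it -- the right startup file varies by distribution (Slackware has /etc/rc.d/rc.local, and many systems also include an rc.serial script for exactly this purpose):
# in /etc/rc.d/rc.local (or your distribution's equivalent):
setserial /dev/cua2 port 0x238 irq 12 autoconfig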
These links should be enough to learn about the US Robotics Pilot and how to use it under Linux. I hope this information will be helpful. If you have any questions whatsoever, please contact me.
All postings to the list should be sent to the address
pilot-unix@lists.best.com
Commands, such as subscribe or unsubscribe requests should be sent to the address
pilot-unix-request@lists.best.com
Note that there are two list modes - normal (you receive each message as it is sent) and digest. The default mode is digest mode. To subscribe to the digest, send an email message with the single word "subscribe" in the message body to "pilot-unix-request@lists.best.com". To subscribe to the normal list, use the word "subsingle" in the message body. You can also get a list of commands which the list server understands by sending mail with the single word "help" in the body to the -request address.
If you have administrative questions or requests which require the intervention of a person, please send those to
pilot-unix-owner@lists.best.com.
Disclaimer: Secure Socket Layer technology is a pretty touchy
legal matter. There's lots of money riding on it for a relatively small
number of companies. Therefore keep in mind that what I say in this article
may not be correct. If you plan to use Stronghold/Netscape (or any other SSL
server/client pair) for inter-office communication get legal advice, or make
sure you know what you're doing.
Also I won't go into some of the knowledge that I think you already
have, like the basics of public key cryptography or the fact that SSL URLs
are https:// instead of http://.
If you've looked for affordable ways to incorporate Secure Socket technology
into your intranet you've probably run into Stronghold. Although Stronghold
runs on platforms other than Linux, it's a great, resource-light way to
use a spare Linux box for providing encrypted/authenticated document
transfers over the Internet. This is perfect if you need to
"network" separate offices over the Internet without worrying about
prying eyes looking in on your document transfers.
The main problem you face when trying to use Stronghold for inter-office
communication is the lack of good documentation. Stronghold is mainly
intended for companies who want to receive credit card orders on-line. As
such, the installation scripts and documentation don't go into much detail
about setting up Certificate Authorities (more on this later) and the
features that allow you to not only have server authentication, but also
client authentication as well. To clarify things a bit I'll give you a short
"tutorial" on Secure Socket features. Since Netscape is the only
browser that currently has a decent Secure Socket Layer (or SSL from here on
out) implementation, I'll use that.
Start up Netscape (3.0) and select Options -> Security
Preferences. Click on the tab that says Site Certificates. This
dialog box contains information about what Certificate Authorities your
browser currently recognizes and what level of trust you have assigned to
each. To illustrate this, select United States Postal Service CA and
click the button that says "Edit Certificate..."
Now you should see another dialog box pop up which contains various
information on that particular certificate. Notice the two fields:
"This certificate belongs to:" and "This certificate was
issued by:". In both cases it contains the same information. This means
that the certificate has been "self-signed" by the certificate
owner.
A little further down in the dialog box you'll see a pair of "radio
buttons" that allow you to either accept or deny connections from
secured Web sites that have been certified with this key. In other words, if
you allow connections from sites whose keys have been signed by the USPS
you're telling Netscape that you trust the USPS enough to certify
SSL-enabled Web servers and that no further proof of a server's identity is
needed. In reality, the USPS doesn't publicly certify keys (at least that I
know of), we're just using that as an example. The final check-box tells
Netscape to warn you before a secure connection is established to a Web
server that has been certified by this key. Click "Cancel" to exit
this dialog box.
If you connect to a site that has not been certified by one of the CAs
listed, all is not lost; you can still accept the individual site's key as
an individual "Site Certificate."We won't worry about this method
too much, but if you want to see which, if any, site certificates are
installed in Netscape then select Site Certificates from the drop-down
list above the "Site Certificate" list box. Note that, for some
reason, Certificate Authority certificates are considered
"Site Certificates."
What you've looked at here is enough for basic electronic commerce. In other words, if you want to send sensitive information to a Web site, all you really need to know is that the site is who it claims to be. The Certificate Authorities listed provide this level of security. If you want to use your Web server to distribute sensitive information to select individuals, Server Authentication doesn't do you much good. Client Authentication gives you the ability to authenticate the clients who connect to your SSL Web server.
Client Authentication is one of the neatest features of Netscape. In the previous screen, select the tab that says Personal Certificates. If you have installed any Client Certificates (doubtful) they'll be listed here. If a server requests Client Authentication, Netscape can perform one of three actions:
You can tell which action you want Netscape to perform by selecting the
appropriate option from the drop-down list in the "Personal
Certificates" dialog box.
Client certificates can be purchased from
various Certificate Authorities. This can get to be expensive if you want to
certify multiple client browsers, not to mention a hassle. Luckily
Stronghold comes with the basic tools that will allow you to create your own
small-time certificate authority that you can use to certify clients who
connect to your server and even other servers on your intranet.
There are lots of relevant files that Stronghold works with. I'll list the main, non-HTTP-specific ones. I'll also assume you have installed the program in the default directory (preferred).
/usr/local/ssl/private/YOUR-SERVER.key This is your server's *private* key and should not be world-accessible at all. Stronghold installs the directory "private" mode 700 and owned by root.
/usr/local/ssl/certs/YOUR-SERVER.cert This is where your server's *public* key is located. This should be world-readable, and in fact your server won't work in secure mode if it is not.
/usr/local/ssl/CA/rootcerts.pem This file contains the public keys from the various CAs who issue Client Certificates. When your server wants to check that a Client Certificate is actually issued by a valid CA it looks in this file. This can be changed, but more on that later.
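In practice this means that if you want your server to accept Client Certificates signed by your own CA (created below), your CA's public key has to end up in this file too. A minimal sketch, assuming the default paths and that simply appending the PEM-encoded certificate is all your version requires:
cat /usr/local/ssl/CA/cacert.pem >> /usr/local/ssl/CA/rootcerts.pem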
/usr/local/ssl/CA/cacert.pem When you start your own CA this file will contain your public key. Note: This is not your server's public key.
/usr/local/ssl/CA/private/cakey.pem The private key for your CA is stored here. As with all private keys, only root (or whatever username you administer your CA under) should be able to see or change it.
/usr/local/ssl/CA/ssleay.conf AND /usr/local/ssl/lib/ssleay.conf For one reason or another, Stronghold has two separate configuration files. There is only a slight difference between them and Stronghold seems to want to use them both so I'll describe the files as if they were one and point out the differences as we come to them.
ssleay.conf is the main configuration file for Stronghold's key processing tools. It's relatively complex but fairly well commented, so I won't go into the whole thing, just a general overview and extra explanation where I think it's necessary.
The thing that makes this configuration file different from what we've come to expect from Linux (and UN*X in general) is the way it's subdivided. If you've done much MS Windows programming you'll notice that it is divided into key=value pairs and most sections also have an "application name," for instance:
[ policy_match ]
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional
In this section policy_match is the "application name" and the rest are standard key=value pairs. Here the whole section can be referenced by the label "policy_match".
default_crl_days: This "CRL" stumped me for a while. Apparently it has to do with Certificate Revocation Lists, a feature that is not really implemented in the SSLeay toolkit (the package that was used to give Stronghold its SSL capabilities). Actually that's not completely true; the CRL capability is there but the CRL handling utilities aren't.
policy: The "policy" field lets you select which policy you want to sign keys under. You probably won't need to mess with this since, in most cases, you will check and sign keys by hand. If you want to use a specific policy (check the Stronghold docs, what there is of them ;) ) change this field to "policy_match" and edit the policy_match section below to reflect your chosen policy. The two possible values: policy_match and policy_anything are "application names" of the sections of the configuration file that define who you will and will not sign keys for, or your "policy."
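In other words, depending on which behavior you want, the line in question ends up reading one of these two ways (only this single key=value line changes; its place in the file is already set up by the Stronghold install):
policy = policy_anything
policy = policy_match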
distinguished_name: There is only one difference between the two different configuration files that Stronghold's key management tools use, and this is it. This key=value pair will point to one of two different "application names": req_distinguished_name or makeca_distinguished_name. The only time it will point to makeca_distinguished_name is when you are creating your own Certificate Authority, the rest of the time it will point to req_distinguished_name.
[makeca_distinguished_name]: This and the next entry are not
key=value pairs, but rather "application names" that define
particular groups of information.
The makeca_distinguished_name section of the file is only really referenced
when you first create your CA. Also you do
not need all of the fields that are included under this heading. For
instance, when I made my CA key pair I removed both
"organizationalUnitName" and "commonName." Because we
aren't dealing with slick commercial software, it may object if you start
altering this configuration file heavily.
[req_distinguished_name]: This section of the config file is where information on machines to certify is kept. When you create a key-pair/signing request for your SSL server with genkey, default information is looked up here. Feel free to change some of the fields if you don't want this much info in your keyfile. Beware, some commercial key signers (i.e. RSA or whoever) may object to altered request formats. As before, your CA may choke if it gets a request that has been highly altered. One field to especially watch out for is "commonName"; this is where Netscape looks to see if a Web server is using an appropriate keyfile for its domain name. For example, if Netscape tries to make a secure connection to www.insecure.org and the keyfile that the server sends says it belongs to www.secure.org, you'll get a little dialog box warning you about a possible security problem. If no "commonName" is supplied, Netscape fails to connect and gives an error message.
genkey: Genkey is the program that is used to generate an initial key-pair for your secure server and send out a signing request certificate to your chosen CA. Before you run genkey, make sure to create backups of both your private and public keys for your Web server. After you make the backups, delete the original keys, as genkey won't operate if it finds that a key-pair already exists. Run the program like this:
genkey YOUR_SERVER_NAME
This will create a key-pair for your server and send out a Certificate
Signing Request (or CSR). Since we are going to create our own CA and sign
the key for the Web server with that, make sure that the CSR is sent to your
own e-mail address and not Verisign. Now you have generated an initial
key-pair and CSR. Get the CSR from your e-mail and save it for later.
Also note that the defaults for genkey are taken from the
req_distinguished_name section of /usr/local/ssl/lib/ssleay.conf; if
there are fields you don't want included in your keyfile remove them from
this section.
makeca: Makeca is the program that is used to actually create your
Certificate Authority. This program gets its default information from the
file /usr/local/ssl/CA/ssleay.conf in the makeca_distinguished_name
section (assuming you have installed everything in the default locations).
Makeca is executed without any arguments and is actually pretty intuitive.
As before, if there are entries that you don't want in your CA's keyfile
just remove their entries from the makeca_distinguished_name section
of the relevant configuration file.
ca: Ca is the actual program that you will use to perform Certificate Authority functions. This includes signing other Web server keys and Netscape's client keys. Assuming that you have been following along up till now, you have already used genkey to create a key for your Web server and have mailed the CSR to yourself. To sign your Web server's CSR save it as /tmp/csr and type the following:
ca -config /usr/local/ssl/lib/ssleay.conf -in /tmp/csr
ca will check the indicated configuration file to see what, if any, policy
has been defined for signing keys and ask you for your CA password. After
the key is signed it is stored in /usr/local/ssl/CA/new_cert/. New
certificates are not stored by name but by serial number, with the newest
cert having the highest number.
The cert is stored in PEM (Privacy Enhanced Mail) format and as such, can be
included in e-mail as is.
getca: Once you have a signed certificate for your Web server you are ready to install it. Getca is the program for this and is called with:
getca YOUR_SERVER_NAME < /tmp/cert
We are assuming that /tmp/cert is your signed keyfile in PEM format.
One of the odd things about getca is that the input file must be
"piped" into the program.
If this went correctly your Web server should now have a public key signed
by your CA. Now for the tricky part...
Even though you now have a signed key certificate for your Web server you still can't use it. This is because Netscape isn't aware of your CA, this is to say that your CA isn't in the list of Site Certificates that we looked at earlier. To add your CA to that list follow these steps:
application/x-x509-ca-cert   cacert
There are other ways to add MIME types that don't involve messing with config files, but I like the direct approach. Adding this MIME type tells Stronghold that every file that ends with a .cacert extension should be sent as a Certificate Authority's public key.
x509 -outform DER < cacert.pem > cert.cacert
Like getca, x509 requires input and output to be "piped." In any event your key is now in proper format and can be moved into one of your Web server's document directories.
Creating Client Certificates for Netscape is a pretty complex task, and one of the least documented features of SSL. All of Netscape's Client Certificate functions work through a WWW interface, and as such you'll need two special files: a HTML and a CGI, here are both:
key_req.cgi---------------------------------------------------------
#!/usr/bin/perl
read(STDIN,$input,$ENV{'CONTENT_LENGTH'});
open(TEST, ">/tmp/client_csr");
$input =~ s/\+/ /g;
$input =~ s/&/\n/g;
$input =~ s/%2B/\+/g;
$input =~ s/%2F/\//g;
$input =~ s/%3D/=/g;
$input =~ s/%0A//g;
print TEST ("$input");
print("Content-type: text/html\n\n$input\n");
--------------------------------------------------------------------
keygen.html---------------------------------------------------------
<HTML><HEAD>
<TITLE>Make a key</TITLE></HEAD><BODY>
<FORM ACTION="/cgi-bin/key_req.cgi" METHOD=POST>
E-mail: <br>
<INPUT TYPE="TEXT" NAME="Email" MAXLENGTH=40 SIZE=40><br>
Common Name: <br>
<INPUT TYPE="TEXT" NAME="CN" MAXLENGTH=64 SIZE=64><br>
Organization Name: <br><INPUT TYPE="TEXT" NAME="O"><br>
Organization Unit: <br><INPUT TYPE="TEXT" NAME="OU"><br>
Locality: <br><INPUT TYPE="TEXT" NAME="L"><br>
State or Province: <br><INPUT TYPE="TEXT" NAME="SP"><br>
Country (2 letter): <br>
<INPUT TYPE="TEXT" NAME="C" MAXLENGTH="2" SIZE="2"><br>
<KEYGEN NAME="SPKAC" CHALLENGE="testkeygen"><br>
<INPUT TYPE="submit" VALUE="Generate Key"></FORM>
</BODY></HTML>
--------------------------------------------------------------------
These files may need a little modification to work on your system, but they should work like this:
keygen.html This is the actual HTML that Netscape needs to process a key request. Like many things in Stronghold's SSL key-management utilities, you can omit just about any fields you want. For instance, if you only want to create keys that have an e-mail address and a name, just remove everything except those two fields. This HTML was snagged from the SSL user mailing list archive at http://remus.prakinf.tu-ilmenau.de/ssl-users/
key_req.cgi This is the CGI program that will take Netscape's key request and format it into something that your CA can understand and sign. The script outputs two copies of the key request, the first goes to /tmp/client_csr and the second is echoed back to Netscape as text.
To create a Client Certificate signed by your CA follow these steps:
ca -spkac /tmp/client_csr -out /tmp/clientcert.der

You'll be asked for your CA password and, if all goes well, a signed Client Cert will be output into /tmp/clientcert.der.
application/x-x509-user-cert der

This will tell Stronghold to use this MIME type for every file that ends in the .der extension.
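To hand the certificate to the person who requested it, move the .der file somewhere under your server's document tree (the path below is only an illustration) and have them load it over the Web:

$ cp /tmp/clientcert.der /usr/local/apache/htdocs/clientcert.der

When the requester points Netscape at that URL, the MIME type above should trigger Netscape's certificate-installation dialog.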
After the certificate is installed select Options -> Security
Preferences and click the Personal Certificates tab. Your new
Client Certificate should appear in the listbox.
If you're not the only person who has access to your machine (or even if you just *think* you are), it's a good idea to password protect your Client Certificate; this way someone won't be able to masquerade as you simply by having access to your computer. In Options -> Security Preferences, selecting the Passwords tab will bring up a dialog box that will allow you to password protect your copy of Netscape. If you set a password here it will be used to actually encrypt your Client Certificate(s). Lose this password and you're out of luck.
Unfortunately client authentication isn't very advanced with any SSL Web
server package as of yet. In the future this will change so we might as well
get comfortable with SSL technology now, even though parts can get pretty
bumpy.
First we'll go through the steps to enable reliable Client Authentication
with Stronghold:
/CN=James Shelburne/Email=brammal@iamerica.net
If you will look at the source for the HTML form above you'll notice that the "keys" are the same (i.e. CN for CommonName, Email for e-mail address etc.). If I had included other fields in my certificate, Stronghold would identify me by a larger list of "keys and values."
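For example, if every field in the form had been filled in, the identifying string would come out looking something like the line below (the values are invented purely for illustration, and the exact set and order of keys depends on what was in the certificate request):

/C=US/SP=Texas/L=Waco/O=Some Organization/OU=Web/CN=James Shelburne/Email=brammal@iamerica.net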
To test out SSLFakeBasicAuth, insert a section like this in Stronghold's SSL configuration file. (Note: this only works in the SSL config file; SSLFakeBasicAuth doesn't work with unencrypted HTTP transfers.)
<Location /TEST_DIR>
AuthType Basic
AuthName Secret_Stuff
AuthUserFile /usr/local/apache/conf/ssl_user_file
<Limit GET POST>
require valid-user
</Limit>
</Location>
The file /usr/local/apache/conf/ssl_user_file (or whatever file you choose to use) should contain the SSL identifier strings for each person that you want to be able to access your SSL server. If I wanted to set up my server so that I was the only one who would be able to access it, then the only line in my ssl_user_file would be:
/CN=James Shelburne/Email=brammal@iamerica.net
When I try to make a secure connection to the server, Netscape will send the Client Certificate made earlier. Stronghold will see that SSLFakeBasicAuth is enabled, and if I try to access /TEST_DIR, it will check the users in the AuthUserFile to see if I'm there. If I'm in the file I'll be granted access; if not, access will be refused.
If you want to control access for a number of different user groups, feel free to have multiple ssl_user_files, each containing the identifying strings for the people in that group. You might have ssl_accounting, ssl_sales and so on.
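As a sketch, protecting two different directories for two different groups might look like this (the directory and file names are only examples):

<Location /accounting>
AuthType Basic
AuthName Accounting
AuthUserFile /usr/local/apache/conf/ssl_accounting
<Limit GET POST>
require valid-user
</Limit>
</Location>

<Location /sales>
AuthType Basic
AuthName Sales
AuthUserFile /usr/local/apache/conf/ssl_sales
<Limit GET POST>
require valid-user
</Limit>
</Location>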
How do you find the string that each user is identified by? When SSLVerifyClient is set to 2 and a person tries to access a directory on the server that is protected by SSLFakeBasicAuth, the user string shows up in the file /usr/local/apache/logs/ssl/access_log. However, a better way to get the same information is through CGI environment variables, in particular SSL_CLIENT_DN. Here's a short CGI script that, when accessed through SSL, will display the user's identifying string:
CLIENT_DN displayer--------------------------------------------------
#!/usr/bin/perl
print <<EOF
Content-type: text/html

<html><head>
<title>Your SSL_CLIENT_DN string</title></head>
<h3>Your SSL_CLIENT_DN string is:<br></h3>
<h4>$ENV{'SSL_CLIENT_DN'}</h4>
</html>
EOF
---------------------------------------------------------------------
There are other SSL-related CGI environment variables, but SSL_CLIENT_DN is the most useful. If you know your way around CGI programming you can automate your site on the basis of the SSL_CLIENT_DN variable.
Here I am at Usenix at the Marriott Hotel in Anaheim. Actually, it is pleasant to be in nice weather after almost drowning in Seattle. It had rained here the day before, so the air was actually clean. But let me talk about the show instead of the weather.
Usenix is a five-day show that, this year, has a heavy Linux presence. For those not familiar with Usenix, it has been the "wear a tie and get laughed at" Unix show for years. It is technical and tends to draw a very seriously technical crowd.
It is broken up into tutorials, a trade show and a technical conference. Well, plus the informal beer drinking sessions and such.
The first two days are tutorials and I elected to attend an all-day tutorial on the Linux 2.0 kernel presented by Stephen Tweedie. I found it to be excellent and that seemed to be the general opinion of the approximately 125 people who attended.
In eight hours and 170 overheads, Stephen addressed four specific areas of the kernel: memory management, the scheduler, filesystems and I/O, and networking. I feel the goal of the talk, "to be familiar with the design and algorithms behind the Linux kernel and to be able to read the Linux source code with some understanding", was met. While Stephen did not necessarily expect attendees to be familiar with Unix systems programming, the more you knew about Unix the easier it was to understand the presentation. After all, learning all about a new operating system in eight hours is quite a challenge.
On Tuesday, Ted Ts'o taught a tutorial on writing device drivers under Linux. This talk was attended by about 60 students. I elected to take Tuesday as a day to catch up on LJ work and make a run to Fry's Electronics to see if they carry Linux Journal. They don't--which makes no sense, as Fry's is exactly the kind of place a Linux geek would want to go.
Tuesday evening started with free food and drink. This is one of the best ways to get geeks talking. The Marriott did a great job with an array of food carts with various choices including fruit, veggies, potato patties, nachos, hamburgers and hot dogs. There were also drink and dessert carts. They even had my drug of choice, Dr. Pepper.
There were Birds-of-a-Feather sessions scheduled from 6PM to 10PM. The two Linux ones were scheduled at the same time, both at 7PM. As I already know a lot about Caldera Linux I elected to go to the talk on Electronic Design Automation (EDA). Peter Collins, manager of software services for Exemplar Logic, headed the BoF and talked about how his company had done an NT port but now had a Linux port. He pointed out that EDA grew up on Unix-based systems like Suns and the capabilities of Linux were a better fit for current EDA users.
The trade show started on Wednesday. While this was not a Linux-specific trade show, Linux had a large presence. Linux vendors included Caldera, EST (makers of the bru backup utility), InfoMagic, Linux International, Red Hat, Walnut Creek CDROM, Workgroup Solutions and Yggdrasil. Plus, of course, our booth where we were giving away sample copies of Linux Journal. Lots of other vendors came by to talk about Linux and the Linux products they sell.
Linux interest was very high. While Usenix is a geek conference, these are mostly professional geeks who are making serious technical decisions for real companies. I answered many "It seems like Linux could do this" inquiries.
Within the trade show I think SSC offered the biggest hit. We just finished our new "fences" t-shirt. We sold out of the shirts in about four hours on the first day. This gave me the feeling that I was at the right show--not one where Microsoft was being honored.
On Wednesday afternoon we proved how significant the Linux interest/presence was. Linus was scheduled to talk on the future of Linux in a fairly large room, which soon filled up, with standees everywhere--including the hall outside. Usenix quickly offered to move the crowd into a much larger hall.
The talk went well as Linus explained new features and new ideas. I won't bore you with details. The important thing is that the goal is world domination. To some this sounded like humor. Maybe it was. Only time will tell. In the meantime, building a superior product can't hurt.
Wednesday evening was a time for more Linux sessions. I attended one called The Classroom of the Future that showed how an experimental program brought the Internet to K-12 schools in Ireland. I also attended another called The Future of the Linux Desktop, missing Greg Wettstein's talk on perceptions. [see Greg's article "Linux in the Trenches" in LJ #5, September 1994--Ed.]
Thursday was another day of talks and trade show. Peter Struijk, SSC's "head nerd", managed to make it to Victor Yodaiken's presentation on real-time Linux [see LJ #34, February 1997] and a talk on the /proc file system by Stephen Tweedie. In the evening, I hosted a session on embedded, turnkey and real-time systems and intended to make it to Developing Linux-based electronic markets for Internet Trading Experiments but ended up talking with some of the attendees of my session instead.
The evening ended with a short talk about Linux and reality with Stephen Tweedie and then a trip back to the hotel room to finish up this column. Then, if I run out of things to do I may actually get some sleep.
Friday offers a day of Uselinux business talks. However, the combination of editorial deadlines and exhaustion means that you won't get to read about it here.
It was a great show. Usenix has always offered high-quality sessions and a really nice mix of "non-suits". Having Usenix/Uselinux made it all the better. I am sure there will be serious cooperation between Usenix and Linux International to continue to make Linux a big part of Usenix.
If I have one complaint it was that there was too much to do. Add a Linux International board meeting to a schedule that included sessions, talks and BoFs from 9AM to 11PM with parallel Linux tracks plus the normal Usenix tracks and there just wasn't time to breathe or, more importantly, sit down to a beer and talk to fellow kernel hackers, systems administrators or vendors.
Anyone who wants to get copies of the Proceedings of this conference or find out what the future holds with regard to Usenix, should contact USENIX Association at office@usenix.org or check out their web site at http://www.usenix.org/ or, if all else fails, call 510-528-8649. Oh, and if you don't know what 8649 spells you must be new to the Unix community.
You've made it to the weekend and things have finally slowed down. You crawl outa bed, bag the shave 'n shower 'cause it's Saturday, grab that much needed cup of caffeine (your favorite alkaloid), and shuffle down the hall to the den. It's time to fire up the Linux box, break out the trusty 'ol Snap-On's, pop the hood, jack 'er up, and do a bit of overhauling!
Phew! It's good to be back!
So how's everyone doing? How are things going? I had a great semester this past Fall -- got my 4.0 and everything :-) Still, things got rather hectic toward the end of classes and I'm still trying to get myself shoveled out from beneath a pile of backlogged email. I managed to survive six finals, the usual glut of "end-of-the-semester projects", a flight to Washington D.C. and a drive from there to N.Y. with my brother, his wife, and three small boys to visit our parents for Christmas, a new HD installation and complete system re-installation (the story of my life...), AND I actually managed to show my face at work once or twice before classes started again. If you're wondering why you haven't heard back from me, hang in there, I'm coming... :-)
And is it only me, or does it seem that the 'ol Linux Gazette has really taken on quite a nice face lift since Marjorie Richardson took the helm...? I have to admit, the LG looks GREAT -- new graphics, better organization, a search engine, and so forth. Having worked on the LG in the past I know how much time and effort goes into each issue and I know that Marjorie has worked hard on this. I know that a lot of folks have taken the time to drop a note (the Mail section is as busy as it always was... :-) but if you haven't, you really need to! Here, let me make it easy for all of you with mail-capable browsers...
See! that wasn't so bad, and the reality of it is that demonstrated interest and ongoing support are what keeps this 'ol ezine going in the first place! Remember: "The masses may vote with their feet, but hackers vote with both hands! (...unless you're able to type with your toes or are gifted with a prehensile tail or something... :-)"
Anyway, drop Marjorie a note, she'll really appreciate it.
I don't know about you, but one of the things that I really missed after doing the kernel 2.0 upgrade was being able to use supermount. For those of you who are unfamiliar with it, supermount is a program (in the form of a kernel patch) written by Stephen Tweedie that, in effect, allows you to insert, take out, and re-insert removable media such as floppies and CDs without going through all the rigmarole of using mount and umount. For those of us who are converts from the DOS era who are perpetually forgetting to umount a floppy before popping it out of the drive, this comes as blessed succor.
And the good news is: IT'S BACK!
Actually, it probably wasn't gone all that long, truth be known. I've been periodically checking in at the favorite sunsite.unc.edu mirror site and peeking around the /pub/linux/kernel/patches/ subdir for a newer version of supermount. No luck. Then recently, I saw a note posted by Stephen in response to someone's query that the program was available for the 2.0 kernels. To break the suspense, here's the URL:
There's a patch for kernel versions 2.0.0 and 2.0.23 and a README file that outlines the fairly simple steps for applying the patch, recompiling the kernel (and speaking of forgetting to do things, if you don't do a 'make zlilo' then DON'T FORGET TO RERUN LILO after you install the new kernel), and setting up the needed /etc/fstab entry to start using it. For those of you who've used supermount in the past, you'll be pleased to know that the installation and setup haven't changed since the kernel 1.2.13 version -- you should be able to use your old /etc/fstab (if it's still lying around somewhere) and have things come up working like they did in the Good Old Days!
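Just to give you a feel for it, the /etc/fstab entries end up looking roughly like the ones below. I'm writing these from memory, so treat them strictly as a sketch and copy the exact option names from the example in the README:

/dev/fd0     /mnt/floppy   supermount   fs=msdos        0  0
/dev/cdrom   /cdrom        supermount   fs=iso9660,ro   0  0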
Also, I wrote a short article on supermount several months ago for the LG and mentioned that I'd had a lot of trouble getting it to work correctly with the SoundBlaster 2X CD-ROM that I was using at the time. I was able to change CDs but the directory listing simply wasn't being updated correctly. Well, after the system upgrade this past Fall, I've switched to a Toshiba 8X CD and it works fine with this. Which reminds me...
If you want to use supermount with a CD-ROM, at least with the ATAPI type drive that I've got, then you'll likely want to make a small change to one of the kernel files to allow the CD-ROM drive door to be opened when the drive has been mounted. As most of you probably have noticed, once you mount the CD drive, the door is locked -- you have to umount the drive in order to open it and change CDs. Obviously, this doesn't work well if the point of using supermount is NOT having to do this type of thing. So, to disable door locking, and PRESUMING YOU'RE USING AN ATAPI TYPE CD-ROM, edit the file:
/usr/src/linux/drivers/block/ide-cd.c
Look for the following section which is near the beginning of the file:
/* Turning this on will disable the door-locking functionality.
   This is apparently needed for supermount. */
#ifndef NO_DOOR_LOCKING
#define NO_DOOR_LOCKING 0
#endif
Change that '0' to a '1' after the NO_DOOR_LOCKING and you'll be all set. This, as the quick-witted will have already surmised, does what it implies: it disables door locking so you'll be able to change CDs. How about that for easy, eh?
So, to summarize what you'll need to do, here's the brief rundown:
$ cd /usr/src/linux
$ cp "path-to-patch"/supermount-0.4c-for-2.0.diff .
$ patch -s -p1 < supermount-0.4c-for-2.0.diff
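From there it's the usual 2.0-era rebuild routine. Something along these lines should do it (substitute 'make menuconfig' or 'make zlilo' if that's how you normally work):

$ make config          # answer the questions as usual; check whether the patch adds a supermount option
$ make dep; make clean
$ make zImage
$ cp arch/i386/boot/zImage /zImage.supermount
$ vi /etc/lilo.conf    # point a boot entry at the new image
$ /sbin/lilo           # and, as promised above, DON'T FORGET THIS STEP

Then reboot into the new kernel and set up the /etc/fstab entry as the README describes.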
Now, for the trusting (or merely lazy like myself... :-), here's a copy of the patches and the README file:
If you're the suspicious or just plain cautious type then go ahead and get the files from the URL above. Also, you might want to check there for updates or newer releases.
One thing that I've not really tried yet is seeing what happens if the CD-ROM drive is mounted via supermount and you attempt to play an audio CD. I've not had the nerve to try this. In this case, it's probably safe to go ahead and umount the drive, play the CD, and then mount the drive once again -- since there's an entry for the CD-ROM drive in /etc/fstab, all you should have to do is something like:
mount /cdrom

presuming that /cdrom is where you normally mount your CD.
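So the whole dance might look something like this (workbone is only an example -- use whatever CD player you normally reach for):

$ umount /cdrom        # let go of the drive so the audio player can have it
$ workbone             # play the audio CD
$ mount /cdrom         # hand the drive back to supermount when you're done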
The other thing that I've not tried is using supermount with BOTH ext2 and MS-DOS type floppies. I suspect that it would cause a bit of trouble but, again, I've not been daring (or foolish...?) enough to try this little maneuver.
Anyway, I hope that you'll give supermount a try! The README file is pretty helpful in terms of answering basic setup and usage questions, and Stephen includes a copy of his /etc/fstab file as an example. Hope you enjoy!
John
Nashville, TN
Mon Jan 20 10:26:51 CST 1997
I hesitate to even bring this up... :-)
One of the more common USENET postings in almost any of the linux groups these days is some newbie who innocently ventures a question such as "Is there a word processor for Linux like Word for Windows...". After the poor bloke gets flamed to a crisp with ardent admonitions to eschew such lollipop-ware and use a real text-processing system such as LaTeX or GROFF, there usually ensues a heated debate over the virtues of one's favorite system for getting something into print...
I think I'd like to avoid such debate... :-)
I would, however, like to humbly offer one possible solution to the need for a word processor under Linux -- especially if you're either unfamiliar with LaTeX or find that it doesn't completely meet your text-processing needs. And that is, using DOSEMU and one of the common word processors available for DOS. Now, if you already have a system working for you then by all means stick with it! However, if you still find yourself rebooting to DOS, OS/2, or Windows to do a bit of word processing then this might be one possible alternative.
But before I go on...
Let me quickly mention that I'm well aware that the usual business apps which have long been available for the other OS's -- the word processors, spreadsheets, desktop publishing packages, PIMs, and so forth -- are starting to appear as Linux-native applications! This is great news and I certainly welcome and support such efforts to bring these much-needed tools to the Linux OS! Thing is, what I've tried so far really hasn't been helpful for me. To wit:
My other fairly minor complaint with it is the look of the output -- I've not been terribly impressed with the set of default fonts that come with it. Again, this is strictly a matter of taste, but the output hasn't been exactly what I'd hoped for.
Sorry, call me a heretic... :-)
I really don't want to get mired down in a review of all the possible word processing tools out there -- I mention these in order to say, "I've given them a try..." Two other applications that really deserve to be mentioned are Caldera's WordPerfect for Linux and the Applixware Suite available through Red Hat Software. I've not had a chance to try either of these out, although I've read a good deal of pros and cons about each of them in the linux USENET hierarchy. A buddy at school just got a copy of the academic version of Applixware and I'm pretty interested in seeing this in action. So far, he's been pretty pleased with it, so I definitely need to stop by and give this a try!
Anyway, what I've found is working quite well for me is a combination of DOSEMU and WordPerfect 6.1 for DOS. If you happen to have an old (or new) copy of WP for DOS available to you, and you're willing to give DOSEMU a whirl, let me urge you to give this a try.
At this point, I'm going to do something I swore to myself I'd never do -- I'm going to weenie out on you and NOT go through the entire process of setting up DOSEMU. The reason for this is that, although I've gotten it up and running on my own box at home, I really don't feel terribly comfortable with being able to walk anyone else through the process. I ended up tinkering around with it and, through an admittedly haphazard process of trial & (mostly) error, got the thing to work. There are still several things about it that I don't understand, and so I won't inflict my ignorance upon you.
Still with me... :-)
Thing is, there's a very helpful little file that comes with DOSEMU called "QuickStart" that goes through the setup process step-by-step. If a Neanderthal like me can get this working, I'm confident that you can too!
What I would like to do is present a brief synopsis of my experiences with this in the hopes that it might be helpful to someone trying the same things. Again, let me emphasize that this represents strictly my own experiences. As the old saying goes, "your mileage may vary..."
After upgrading to kernel 2.0 I found it necessary to upgrade a number of packages, including DOSEMU. At the time, I picked up the most recent version which was dosemu-0.63.1.36. The configuration, compilation, and installation were as simple as:
$ ./configure
$ make
$ make install

This defaulted to including DPMI support, requiring the emumodule and syscallmgr modules to be loaded before being able to use DOSEMU. DPMI support allows you to try your hand at booting up Windows under DOSEMU. Over the past few months I've had mixed success at best in doing this. Also, since this is not currently supported by the DOSEMU folks, you're completely on your own if you want to venture into this! :-)
After compiling and installing the binaries, I used the QuickStart file as a guide and created the needed /etc/dosemu.conf and /etc/dosemu.users files. DOSEMU comes with a heavily commented configuration file -- dosemu.conf -- that lets you customize things in a rational manner. For the curious, here's my current working version of dosemu.conf:
Let me make a couple comments about this before going on:
Be aware that if you do this, there's a chance that if DOSEMU crashes it will not correctly reset the keyboard and could potentially require a cold boot (or a remote 'kbd_mode -a') to reset it. See the comments in the dosemu.conf file about this.
After doing all of this (in an incremental fashion) I found myself with a working version of DOSEMU and a functional WP program!
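Once the pieces are in place, actually running WP is nothing fancy. Assuming WP landed in C:\WP61 on the drive DOSEMU sees, a session looks something like:

$ dos                  # fire up DOSEMU on the current virtual terminal
C:\> cd \wp61
C:\WP61> wp            # and away you go

(Under X you'd start xdos instead, with the caveats I mention further down.)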
I also decided to load the emumodule and syscallmgr modules at boot time so that I could use DOSEMU more easily. To do so, you'll probably want to use the 'insmod' program that gets compiled with the rest of the DOSEMU files. The easy way to do this is to use the 'load_module.sh' script in the root DOSEMU directory. I found that by editing the first couple lines of the script I was able to call it from any directory: just add the correct path names at the top:
#!/bin/bash
MODULESDIR=/usr/local/lib/dosemu-0.63.1.36/0.63.1.36/modules
BINDIR=/usr/local/lib/dosemu-0.63.1.36/bin
[...]

and then add a stanza to /etc/rc.d/rc.local such as:
if [ -x /usr/local/lib/dosemu-0.63.1.36/load_module.sh ]; then
    echo "Loading DOSEMU 0.63 modules..."
    . /usr/local/lib/dosemu-0.63.1.36/load_module.sh
fi

The modules use up very little memory and the convenience of not having to remember to load them is probably worthwhile.
There really isn't an awful lot of startling news here -- if you're used to installing DOS programs then this is pretty much a "no-brainer". The one important point to make, however, has to do with video driver installation. I discovered something quite valuable recently when I re-installed my system over Christmas Break.
The first time I set up WP 6.1 I installed only the S3 drivers (since I'm using an S3-based Diamond card). I found that doing so provided graphics mode support under DOS in resolutions up to 1280x1024. However, I was keenly disappointed to find that the best graphics-mode resolution I could get under DOSEMU was an abysmal 320x200. No matter how I poked, prodded, wheedled, cajoled, threatened, and messed with it, that's all I got.
Serious Bummer... :-(
Over Christmas, when I reinstalled the system, I noticed that one of the video drivers was labeled simply "VESA" and so, on a whim, installed that as well as the S3 drivers. This turned out to be quite fortuitous: although the S3 drivers still did not give better than 320x200 resolution, the VESA driver actually allowed me to get 1024x768 in 8-bit color. On a 17" monitor, this is a very comfortable resolution and provides pretty good WYSIWYG previewing.
So, the moral of the story is -- if you're in doubt, give the VESA video drivers a whirl.
Once I got DOSEMU installed and properly configured (BTW, I also created the /etc/dosemu.users file, which simply has the word "all" as the sole word on the first line -- this lets anyone (i.e., me) execute the program) and WP 6.1 installed, I was quite pleased to discover that nearly all the features available from running it under DOS were also available under DOSEMU:
A feature of running WP under DOSEMU that is completely UNAVAILABLE under plain DOS is that I can be editing a file in WP and, using Ctrl-Alt-Fn, switch to another virtual terminal and continue to work under Linux. Running X Window concurrently has also shown no signs of causing problems.
Let me say this again since I get a chill just thinking about it...
I can run DOSEMU + WP 6.1 in a virtual terminal and have full editing and printing capabilities while at the same time freely switch to another VT or even to X Window and have all these processes running concurrently!!
This is what makes Linux such a seriously cool OS!!
This is way too cool... ;-)
The one caveat I'd mention is that of using WP in graphics mode. I don't know about WP 5.1, but version 6.1 supports a fairly respectable graphics mode that provides WYSIWYG editing and print preview. On my system, the performance is quite acceptable, although not quite as responsive as under DOS (but then who'd want to run anything under DOS if they didn't really need to... :-) However, switching to a VT or to an X Window session while in graphics mode renders the system completely unusable -- the keyboard AND the console both go into an impenetrable lockup that only a cold boot fixes. This has, at least, been my experience. That said, I found that if I simply exited back to text mode before switching to another VT then everything worked fine.
Finally, let me make one last comment about using WP under DOSEMU. One of my ongoing complaints about many (though certainly not all) of the current "word processors" available for Linux is the quality of the printed output. The features that drew me to using WP were my familiarity with the program and the quality of the final output. WP 6.1 supports, among other things, TrueType fonts, and having invested in a copy of Corel Draw some time back (and its 750+ TT fonts) I was pretty keen on being able to continue to use these. I've been quite pleased that under Linux I can still do basic word processing in a known environment with predictable output. That was the clincher for me.
Again, let me quickly add that this might not be at all what you want, or you might simply dislike the WP system itself. The thing about Linux is that it gives you a choice once again!
And, for the skeptics out there, those who said, "it can't be done...", here's a screen shot of WP 6.1 running under X...
Give this a try! If you like it, keep it. If not, delete it and have a look at something else. Also, if you're looking for something to run under X then you might be well served to give either the Applixware suite or the Linux WordPerfect port a try. DOSEMU will run under X (as xdos) but WP loses some of its functionality -- mouse support and keystroke support can be a bit flaky and graphics-mode support is completely lost. So, if X is where you spend most of your time, you might consider investing in or investigating one of the native X programs.
Most of all, though...
Have Fun & Happy Linux'ing!
John
Nashville, TN
Mon Jan 20 13:18:01 CST 1997
Well, here's a little nothingburger that comes pretty close to being a bona fide FAQ -- the question arises from time to time as to how to (automatically) wallpaper one's X Window session after starting X. For the impatient, the short answer is:
xv -quit -root image.gif

Presuming, of course, that the image that you wanted to use was in fact called "image.gif", the above would use the ubiquitous xv program by John Bradley to tile your root window with the specified image. The "-quit" option causes xv to do its work and then quietly terminate.
If you're using one of the 1.x versions of FVWM then just add a stanza such as the following:
Function "InitFunction" Exec "I" exec /usr/X11/bin/xv -quit -root /usr/gx/image.gif & [...] EndFunctionThat is, you simply add a stanza for xv to the "InitFunction" and this is done automatically!
Since I've not upgraded to the newer FVWM 2.x version (nor FVWM-95, or any of the other myriad new window managers) you're rather on your own with this one. However, I suspect that a quick perusal of the manual page or the configuration file should quickly point the way.
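If memory serves, the FVWM 2.x equivalent is an AddToFunc block along these lines; this is a sketch I haven't tested myself, so check it against your own fvwm2rc before trusting it:

AddToFunc InitFunction
+ "I" Exec exec /usr/X11/bin/xv -quit -root /usr/gx/image.gif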
At the moment, I'm using olvwm 4 (with the 3.2 libraries) and added the following to the /var/openwin/lib/Xinitrc file:
#!/bin/sh
# Xinitrc executed by openwin script to display startup logo
# and restore desktop setup (saved using owplaces)
# Hereby placed into public domain by Kenneth Osterberg 1993.
[...]
# Start programs
exec /usr/X11/bin/xv -quit -root /var/openwin/lib/marbleFlowers.gif &
exec /usr/local/X11/bin/xcalendar -geometry 240x240+0+160 &
exec /usr/X11/bin/xclock -geometry 134x127+252+0 &
exec /usr/local/X11/bin/rxvt -ls -font 9x15 -geometry 80x32+500+195 &
exec /usr/local/X11/bin/rxvt -ls -font 9x15 -geometry 79x31+252+268 &
exec /home/fiskjm/bin/syslogtk -geometry +398+0 &
# Startup the OpenLook window manager
if [ ! -z "$WINDOWMANAGER" ]; then
    exec $WINDOWMANAGER
else
    exec $OPENWINHOME/bin/olwm
fi

This has the identical effect of tiling the root window before olvwm is launched.
If you're interested in this, there are actually all sorts of nifty things that you can play with along this line. Keep in mind that xv has a plethora of options for setting the root window image interactively. To do so, simply find an image that you'd like to play with, launch xv with the image filename as the argument, and then select the "Root" button. I won't list all the possible options -- try them out and amuse yourself!
Thing is, to really have a good time you need a few images to play with, and the question is, where do you get these little rascals...?
Well...
Here's a couple ideas to get you going:
There are all KINDS of great images out there that you can play with. FWIW, the GIMP home page has a fantastic marble tile image on its front page. It's wallpapering my desktop at this moment.
You might also do a quick Yahoo, Alta Vista, or WebCrawler search for any of the numerous Online Art Museums and Art Galleries. Or, for all you 60's Baby Boomers who grew up watching the Apollo flights and dreamt of being an astronaut, check out NASA's huge collection of space related images. If you're a Netscape user, simply click the right mouse button over the image and save it to disk. Keep in mind that some images do have copyright protection.
One of the other fun programs to play with is xfractint which generates fractal images. It will also SAVE those images in GIF format.
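As a quick illustration (fract001.gif is just the usual default save name -- yours may differ):

$ xfractint            # pick a fractal type, then press 's' to save it as a GIF
$ xv -quit -root fract001.gif

Instant home-grown wallpaper.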
So let's do a quick walk through on this.
After I somewhat reluctantly installed Win95 this past Fall (I was taking a Visual Programming class and you can guess as to which Visual language we had to use...) I discovered a few new wallpaper images including one that I really liked -- the Forest.bmp image. I happen to enjoy hiking around in the nearby Great Smoky Mountains and grew up in the pine forests of upstate New York. Anyway, I decided that I'd gotten a bit tired of the 'ol SteelBlue background and was ready for a change. Here's what I did...
After mounting my Win95 partition and copying the c:\win95\Forest.bmp file to my home directory, I used xv to have a look at this rascal and convert it to a GIF image. XV allows you to save an image in any number of different formats, and I chose GIF, Full Color. That done, I had a suspicion that this might be a bit of a color-resource hog -- a suspicion that was confirmed by another handy little program, xli.
Xli is a graphics manipulation program that is easily found at any of the sunsite mirrors in the X11 directory under the graphics viewers subdirectory. One of its handy features is the "identification" mode that it can run in. To get information about an image (from the command line) simply type in:
xli -ident image.gif

and assuming that the image you were interested in was, in fact, named "image.gif", it would print out a useful one-liner. Doing this to the Forest.gif image that we just created using xv, we find:
$ xli -ident ~/Forest.gif
/home/fiskjm/Forest.gif is a 256x256 GIF87a image with 256 colors

Hmm... the size is OK, but with 256 colors this will definitely burn out my color map quicker than you can say Netscape! Now, enter the next useful program to our arsenal of image tools -- ImageMagick.
ImageMagick is one of those seriously cool, Must-Have programs if you're playing around with images very often. I recently found the latest version (nicely pre-compiled, thank you...) at the GA Tech sunsite mirror (ImageMagick-3.7.9-tgz) along with the needed libraries (libIMPlugin-1.0-tgz). Installing the precompiled bin's was a no-brainer and I was up and running in no time flat.
One of the programs that is included with ImageMagick (it's actually a suite of programs) is convert. convert allows you to quickly and easily convert images from one format to another and to optionally set its various attributes. You need to have a look at the manual page (which is included with the binaries) to really appreciate all the things this is capable of doing. For what I was trying to do, all I needed was to set the size of the color map to something a bit more sane.
Using the "-colors" option I was able to set the "preferred number of colors" to something that was a bit more X friendly:
convert -colors 32 ~/Forest.gif ~/forest.gif

Doing this and running xli on it once again, we find that it has, in fact, been stripped down to a more lean 32 colors:
xli -ident ~/forest.gif
~/forest.gif is a 256x256 interlaced GIF89a image with 32 colors

That's a bit better. Now I suppose that I could have used an even smaller number but 32 colors gave an image that differed visually from the original image very little.
Anyway, that was it! I now had a 256x256 image with 32 colors that no longer threatened to burn out my entire color map! I added a stanza to the Xinitrc file and voila!, instant wallpaper!
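Incidentally, convert understands Windows BMP files directly, so you could probably collapse the whole exercise into a couple of commands and skip the xv conversion step entirely; I haven't bothered to redo it this way, but it should amount to:

$ convert -colors 32 Forest.bmp forest.gif
$ xli -ident forest.gif
$ xv -quit -root forest.gif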
Keep in mind that this is hardly the only way to do this. There are several other nifty programs out there that provide similar functionality. Try scrounging around in the X11/graphics/ subdirectory of any of the sunsite mirrors or at ftp.x.org in its contrib subdirectory.
Another program that I'll mention before closing this up is the truly awesome xearth program. If you're looking for a truly impressive, animated X wallpaper program, look no further. This is one way too cool program! I don't have a screen shot of it to show you but believe me, it's worth setting it up and giving it a whirl! At the moment, you should be able to find it in the /X11/xapps/graphics/ subdirectory of any of the sunsite Linux mirrors. The file to look for is xearth-1.0.tgz.
Anyway, hope this gets you going! I admit that it's been a bit of a smorgasbord of suggestions, but you might be able to find something useful here! :-)
As usual, hope you enjoy!
John
Nashville, TN
Mon Jan 20 21:08:25 CST 1997
Well, as usual, things around here have been busier than I'd hoped and I just don't have the time to do all of the writing that I'd like to. Also, I'm trying to keep this page to a reasonable size :-) (those of you who've been hanging around here for a while might remember those 160K+ size pages...).
So what has everyone been up to? Found any new toys... :-)
Over Christmas, I finally started working on something that I'd been promising to do for ages: I've started to learn emacs! I have to say that this has been a bit of a paradigm shift after having used VIM for such a long time. However, I can see why the loyalties run so deep -- Emacs is a seriously cool and indisputably powerful editor. Truth is, however, that I've not taken the purist approach: I have to admit that I'm really using XEmacs. I also got my hands on an xemacs-derivative called infodock which is another way-too-cool and VERY powerful editor.
I'd hoped to write a bit on my initial experiences and impressions but I guess that will have to wait for another month or so. Thing is, there are actually quite a number of GREAT editors out there to mess around with. And the more that I try out different ones (I've decided that I really am an "editor junkie...") the more I'm convinced that the essence of the editor flame wars that periodically erupt can be summed up in one word: preference.
Not to throw a wet blanket on anyone's jihad, but...
Although feature sets, user interfaces, resource utilization, performance issues, and so forth are very valid issues when discussing the various merits and liabilities of one's favorite editor, the bottom line is: you probably use it because you like it! I have to admit that after using VIM for the past couple years, emacs is something of an acquired taste. However, if you've been using emacs for a while, then vi looks a bit stark and your fingers feel bewildered.
Anyway, there's no accounting for taste and no apologies for it either. The great thing about Linux is that "it restores the choice once again!" Try everything out, use what is useful, keep what you like. And FWIW, those of you using a VI clone like myself might be interested in giving the latest iteration of VIM a test drive. As of a little bit ago, VIM 4.5 source was in the sunsite Incoming directory. It can now be compiled, using the Motif widget set, to have both a console-based and an honest-to-goodness X-based interface. The X version is called gvim (for "Graphical VIM") and I'm using it right now. It has all of the usual keystrokes (for all you ten-fingered typers...) but has nice mouse support for cursor positioning and cut-and-paste operations. It also sports a handsome scrollbar, handles multiple windows with aplomb, and even touts a rudimentary but useful menu bar. It has a very extensive online help system that is vaguely hypertext-like: you can navigate from one "node" to another using a keystroke similar to that with tags: Ctrl-] selects a node and Ctrl-t returns you to the original location.
With any luck, I'll have some time this next month and will try to put something together -- mostly just chat, nothing terribly profound. I've got a few screen dumps for the visually-oriented. Those of you who are considering taking the leap and learning emacs might well be served to have a look at this rascal. AND, keep in mind that it is NOT just an X Window app -- it'll run in console mode just as easily as under X. Have a look at the XEmacs home page for more info:
Well, I've got a bit of work to get done tonight and so I'll wrap this up for the month. I'm still trying to get out from underneath a pile of email. Hang in there... I'm coming!
Best Wishes and Happy Linux'ing!!
John M. Fisk
Nashville, TN
Monday, January 27, 1997
If you'd like, drop me a note at:
John M. Fisk <fiskjm@ctrvax.vanderbilt.edu>
Version Information:
$Id: issue14.html,v 1.1.1.1 1997/09/14 15:01:40 schwarz Exp $
James McDuffie is a 17-year-old high school student who is looking forward to graduating. In college he plans to major in Computer Science and minor in English. He would like to be a writer while still working with computers. James wrote the article Connecting Computers via PLIP, which appeared in issue #6 of the Linux Gazette. He has been an avid reader of the Linux Gazette ever since it was just starting out and wishes that it continues helping the Linux community for some time to come.
James Shelburne currently lives in Waco, Texas where he spends most of his free time working on various Linux networking projects. Some of his interests include Perl + CGI, Russian, herbal medicine and the Ramones (yes, you heard right, the Ramones). He is also a staunch Linux advocate and tries to convert every MacOS/MS Windows/AMIGA user he comes into contact with. Needless to say, only other Linux users can stand him.
Jens Wessling is a 26 year old Research Scientist working for the Environmental Research Institute of Michigan. He has been playing with Linux since Kernel 1.0.99. He is married and has 2 cats. He is currently working on his Masters Degree in Computer and Information Science at the University of Michigan. Life frequently gets in his way.
Thanks to all our authors, not just the ones above, but also those who wrote giving us their tips and tricks and making suggestions. Thanks also to our new mirror sites. We get new ones every month. I was very excited to have both one in Russia and the new Italian translation site go up this month. (See the Mirror Page.)
My two favorite holidays are Valentine's Day and Halloween. Not sure I want to know what that little fact may have to say about my psyche. At any rate I hope the animated heart wasn't too annoying. I thought it was quite cute. Thanks to Michael, our web guy, for finding it and the roses to present to our authors.
Two days after Valentine's on February 16, Riley and I will be celebrating our 5th wedding anniversary. In fact, we're celebrating all weekend -- a long one with the holiday -- by leaving town and telling no one where we are going. Riley is a very special guy, and we've had a great 5 years. I look forward to many more with him.
February 16 is also the birthday of my nephew Alex Carter. He's 14 and working on his Black Belt in Tae Kwon Do. He's a smart kid and loves playing on his computer. I need to find the time to introduce him to Linux.
On a professional note, I am now Managing Editor of Linux Journal as well as Linux Gazette. Gary Moore and I have switched jobs--keeps things from getting boring. However, I refused to give up custody of Linux Gazette; it's just too much fun.
Have fun!
Marjorie L. Richardson
Editor, Linux Gazette gazette@ssc.com
Linux Gazette, http://www.ssc.com/lg/