From: Aaron M. Lee aaron@shifty.adosea.com
Howdy Jim, My name's Aaron and I'm a sysadmin at Cybercom Corp., an ISP in College Station, TX. We run nothing but Linux, and have been involved w/ a lot of hacking and development on a number of projects. I have an unusual problem and have exhausted my resources for finding an answer -- so I thought you might be able to help me out, if you've got the time. Anyway, here goes...
I've got a SCSI disk I was running under Sparclinux that has 3 partitions: 1 Sun wholedisk label, 2 ext2. That machine had a heart attack, and we don't have any spare Hypersparcs around -- but I _really_ need to be able to mount that drive to get some stuff off of it. I compiled UFS fs support w/ Sun disklabel support into the kernel of an i386 Linux box, but when I try to mount it, it complains that /dev/sd** isn't a valid block device, w/ either the '-t ufs' or '-t ext2' options. Also, fdisk thinks the fs is toast, and complains that the blocks don't end on physical boundaries (which is probably the case for an fdisk that doesn't know about Sun disklabels), and can't even tell that the partitions are ext2 (it thinks one of them is AIX!). Any ideas?
Considering the nascent state of Sparc support for Linux, I'm not terribly surprised that you're having problems. You seem to be asking: "How do I get Linux/Intel to see the fs on this disk?"
However I'm going to step back from that question and ask the broader question: "How do you recover the (important) data off of that disk in a usable form?"
Then I'll step back even further and ask: "How important is that data? (what is its recovery worth to you)?"
... and
"What were the disaster plans, and why
are those plans inadequate for this
situation?"
If you are like most ISP's out there -- you have no disaster or recovery plans, and little or no backup strategy. Your boss essentially asks you to run back and forth on the high wire at top speed -- without a net.
As a professional sysadmin you must resist the pressure to perform in this manner -- or at least you owe it to yourself to carefully spell out the risks.
In this case you had a piece of equipment that was unique (the Sparc system) -- so that any failure of any of its components would result in the lack of access to all data on that system.
Your question makes it clear that you didn't have sufficiently recent backups of the data on that system (otherwise the obvious solution would be to restore the data to some other system and reformat the drive in question).
My advice would be to rent (or even borrow) a SPARC system for a couple of days (a week is a common minimum rental period) -- and install the disk into that.
Before going to the expense of renting a system (or buying a used one) you might want to ensure that the drive is readable at the lowest physical level. Try the dd command on that device. Something like:
dd if=/dev/sda | od | less
... should let you know if the hardware is operational. If that doesn't work -- double- and triple-check all of the cabling, SCSI ID settings, termination and other hardware compatibility issues. (You may be having some weird problem with a SCSI II differential drive connecting to an incompatible controller. If this is an Adaptec 1542B -- be sure to break it in half before throwing it away, to save someone else the temptation; the 1542C series is fine, but the B series is *BAD*.)
Once you are reasonably confident that the hardware is talking to your system I'd suggest doing a direct, bitwise, dump of the disk to a tape drive. Just use a command like:
dd if=/dev/sda of=/dev/st0
... if you don't have a sufficiently large tape drive (or at least a sufficiently large spare hard disk) *and can't get one* then consider looking for a better employer.
Once you have a tape backup you can always get back to where you are now. This might not seem so great (since you're clearly not where you'd like to be) but it might be infinitely preferable to where you'll be if you have a catastrophic failure on mounting/fsck'ing that disk.
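(And if disaster does strike during the recovery attempts, getting back is the same command with the devices swapped -- a sketch, assuming a single tape on /dev/st0 and that the disk still appears as /dev/sda:

dd if=/dev/st0 of=/dev/sda

... naturally you'd verify that the tape reads back cleanly before trusting it.)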
For the broader problem (the organizational one rather than the technical one) -- you need to review the requirements and expectations of your employer -- and match those against the resources that are being provided.
If they require/expect reliable access to their data -- they must provide resources towards that end. The most often overlooked resource (in this case) is sysadmin time and training. You need the time to develop disaster/recovery plans -- and the resources to test them. (You'd be truly horrified at the number of sites that religiously "do backups" but have an entire staff that has never restored a single file from those).
Many organizations can't (or won't) afford a full spare system -- particularly of their expensive Sparc stations. They consider any system that's sitting on a shelf to be a "waste." This is a perfectly valid point of view. However -- if the production servers and systems are contributing anything to the company's bottom line -- there should be a calculable cost for down time. If that's the case then there is a basis for comparison to the costs of rentals, and the costs of "spare" systems.
Organizations that have been informed of these risks and costs (by their IS staff) and continue to be unwilling or unable to provide the necessary resources will probably fail.
Thanks in advance for any possible help, --Aaron
It's often the case that I respond with things that I suspect my customers don't want to hear. The loss of this data (or the time lost to recovering it) is an opportunity to learn and plan -- you may prevent the loss of much more important information down the road if you now start planning for the inevitable hardware and system failures.
From: Steven W., steven@gator.net
Can you help me? Do you know of a Unix (preferably Linux) emulator that runs under Windows95?
-- Steven.
Short Answer: I don't know of one.
Longer Answer:
This is a tough question because it really doesn't *mean* anything. An emulator is a piece of software that provides equivalent functionality to other software or hardware. Hopefully this software is indistinguishable from the "real" thing in all ways that count.
(Usually this isn't the case -- most VT100 terminal emulation packages have bugs in them -- and that is one of the least complicated and most widespread cases of emulation in the world).
A Unix "emulator" that ran under Win '95 would probably not be of much use. However I have to ask what set of features you want emulated?
Do you want a Unix-like command shell (like Korn or Bash)? This would give you some of the "feel" of Unix.
Do you want a program that emulates one of the GUI's that's common on Unix? There are X Windows "display servers" (sort of like "emulators") that run under NT and '95. Quarterdeck's eXpertise would be the first I would try.
Do you want a program that allows you to run some Unix programs under Win '95? There are DOS, OS/2, and Windows (16 and 32 bit) ports of many popular Unix programs -- including most of the GNU utilities. Thus bash, perl, awk, sed, vi, emacs, tar, and hundreds of other utilities can be had -- most of them for free.
Do you want to run pre-compiled Unix binaries under Win '95? This would be a very odd request since there are dozens of implementations of Unix for the PC platform and hundreds for other architectures (ranging from Unicos on Cray supercomputers to Minix and Coherent on XT's and 286's). Binary compatibility has played only a tiny role in the overall Unix picture. I suspect that supporting iBCS (a standard for Unix binaries on Intel processors -- PC's) under Win '95 would be a major technical challenge (and probably never provide truly satisfying results).
*note*: One of the papers presented at Usenix in Anaheim a couple of months ago discussed the feasibility of implementing an improved Unix subsystem under NT -- whose claim of POSIX support has proven to be almost completely useless in the real world. Please feel free to get a copy of the Usenix proceedings if you want the gory details on that. It might be construed as a "Unix emulation" for Windows NT -- and it might even be applicable to Win '95 -- with enough work.
If you're willing to run your Windows programs under Unix there's hope. WABI currently supports a variety of 16-bit Windows programs under Linux (and a different version supports them under Solaris). Also work is continuing on the WINE project -- and some people have reported some success in running Windows 3.1 in "standard mode" under dosemu (the Linux PC BIOS emulator). The next version of WABI is expected to support (at least some) 32-bit Windows programs.
My suggestion -- if this is of any real importance to you -- is that you either boot between Unix and DOS/Windows or that you configure a separate machine as a Unix host -- put it in a corner -- and use your Win '95 system as a terminal, telnet/k95 client and/or an X Windows "terminal" (display server).
By running any combination of these programs on your Windows box and connecting to your Linux/Unix system you won't have to settle for "emulation." You'll have the real thing -- from both sides. In fact one Linux system can serve as the "Unix emulation adapter" for about as many DOS and Windows systems as you care to connect to it.
(I have one system at a client site that has about 32Mb of RAM and 3Gb of disk -- it's shared by about 300 shell and POP mail users. Granted, only about 20 or 30 of them are ever in shell sessions at any given time, but it's nowhere near its capacity.)
I hope this gives you some idea why your question is a little nonsensical. Operating systems can be viewed from three sides -- user interface (UI), applications programming interface (API), and supported hardware (architecture).
Emulating one OS under another might refer to emulating the UI, or the API or both. Usually emulation of the hardware support is not feasible (i.e. we can't run DOS device drivers to provide Linux hardware support).
If one implemented the full set of Unix system calls in a Win '95 program -- providing a set of "drivers" to translate Unix-like hardware abstractions into calls to the Windows device drivers -- and one ported a reasonable selection of software to run under this "WinUnix kernel" -- one could call that "Unix emulation."
However it would be more accurate to say that you had implemented a new version of Unix on a virtual machine which you hosted under Windows.
Oddly enough this is quite similar to what the Lucent (Formerly Bell Labs?) Inferno package does. Inferno seems to have evolved out of the Plan 9 research project -- which apparently was Dennis Ritchie's pet project for a number of years. I really don't know enough about the background of this package -- but I have a CD (distributed to attendees of the aforementioned Usenix conference) which has demo copies of Inferno for several "virtual machine" platforms (including Windows and Linux).
Inferno is also available as a "native" OS for a couple of platforms (where it includes its own device drivers and is compiled as direct machine code for a machine's platform).
One reason I mention Inferno is that I've heard that it offers features and semantics that are very similar to those that are common in Unix. I've heard it described as a logical outgrowth of Unix that eschews some of the accumulation of idiosyncrasies that has plagued Unix.
One of these days I'll have to learn more about that.
I have Windows95 and Linux on my system, on separate partitions; I can't afford special equipment for having them on separate machines. I really like Linux, and Xwindows, mostly because of their great security features. (I could let anybody use my computer without worrying about them getting into my personal files.) Windows95's pseudo-multi-user system sucks really bad. So, mainly, this is why I like Linux. I also like the way it looks. Anyways, I would just run Linux but my problem is that Xwindows doesn't have advanced support for my video card, so the best I can get is 640x480x16colors and I just can't deal with that. Maybe I'm spoiled. The guy I wrote to on the Xwin development team told me that they were working on better support for my card, though. (Alliance Pro-Motion). But, meanwhile, I can't deal with that LOW resolution. The big top-it-off problem is that I don't know of any way to have Linux running _while_ Win95 is running, if there even is a way. If there was, it would be great, but as it is I have to constantly reboot and I don't like it. So this is how I came to the point of asking for an emulator. Maybe that's not what I need after all. So what can I do? Or does the means for what I want not exist yet?
-- Steven.
If you prefer the existing Linux/X applications and user interface -- and the crux of the problem is support for your video hardware -- focus on that. It's a simpler problem -- and probably offers a simpler solution.
There are basically three ways to deal with a lack of XFree86 support for your video card: try one of the commercial X servers, replace the card with one that's already supported, or get a driver written. On that last front:
Be sure to contact the manufacturer to ask for a driver. Point out that they may be able to make small changes to an existing XFree86 driver. You can even offer to help them find a volunteer (where you post to the comp.os.linux.development.system newsgroup and one or two of the developers' mailing lists -- and offer some support). Just offering to do some of the "legwork" may be a significant contribution.
This is an opportunity to be a "Linux-Activist."
-- Jim
From: Charles A. Barrasso, charles@blitz.com
I was wondering how I would go about using X with 2 monitors and 2 video cards? I am currently using XFree86 window manager. I know you can do this with the MetroX window manager but that costs money :(.
I'm sure I gave a lengthy answer to this fairly recently. Maybe it will appear in this month's issue (or maybe I answered it on a newsgroup somewhere).
In any event, the short answer is: You don't.
The PC architecture doesn't support using multiple VGA/EGA cards concurrently. I don't think XFree86 can work with CGA cards (and who'd want to!). You might be able to get a Hercules compatible Monochrome Graphics Adapter (MGA) to work concurrently with a VGA card (since they don't use overlapping address spaces). I don't know if this is the method that Metro-X supports.
There are specialized video adapters (typically very expensive -- formerly in the $3000+ range) that can co-exist with VGA cards. Two sets of initials that I vaguely recall are TIGA and DGIS. Considering that you seem unwilling to pay $100 (tops) for a copy of Metro-X I think these -- even if you can still find any of them -- are way out of your price league.
Another, reasonable, alternative is to connect a whole X terminal -- or another whole system -- and run X on that. You can then remotely display your windows there about as easily as you could set them to display on the local server.
(I know -- you might not get some cool window manager to let you drag windows from one display server to another -- a trick which I've seen done with Macs under MacOS and with Suns and SGI's. But I've never set one of those up anyway -- so I couldn't begin to help you there).
You might double-check with the Metro-X people to see what specific hardware is required/supported by their multiple display feature and then check with XFree86.org to see if anyone has any drivers for one of those supported configurations.
As a snide note, I find your phrase "that costs money :(" to be mildly offensive. First, the cost of an additional monitor has got to be at least 3 times the price of a copy of Metro-X. Second, "free" software is not about "not having to pay money."
I'm not trying to sell you a copy of Metro-X here. I don't use it -- and I specifically choose video cards that are supported by XFree86 when I buy my equipment.
Likewise I don't recommend Linux to my customers because it "doesn't cost them anything." In fact it does cost them the time it takes me to install, configure and maintain it -- which goes for about $95/hr currently. I recommend Linux because it is a better tool for many jobs -- and because the benefits of its being "free" -- in the GNU sense of the term -- are an assurance that no one can "have them over a barrel" for upgrades or additional "licensing" fees. They are always *free* to deploy Linux on as many systems as they want, have as many users and/or processes as they want on any system, make their own modifications to the vast majority of tools on the system or hire any consultants they want to make the customizations they need.
I'm sorry to be so "political" here -- but complaining that Metro-X "costs money" and asking me for a way to get around that just cost me about $50 worth of my time. Heck -- I'll go double or nothing -- send me your postal address and I'll buy you a copy of RedHat 4.1. That comes with a license for one installation of Metro-X and only costs about $50. I'll even cover the shipping and handling.
(Please call them first to make sure that it really does support your intended hardware configuration).
Thanks for the time,
No problem. (I did say "mildly," didn't I?)
-- Jim
From: Wietse Venema, wietse@wzv.win.tue.nl

tcpd has supported virtual hosting for more than two years. Below is a fragment from the hosts_access(5) manual page.

SERVER ENDPOINT PATTERNS

In order to distinguish clients by the network address that they connect to, use patterns of the form:

process_name@host_pattern : client_list ...

Wietse
Thanks for the quick response. I'll have to play with that. I suppose a custom "virtual fingerd" would be a good experiment.

Do you know where there are any working examples of this and the twist option posted to the 'net? I fight with some of these and don't seem to get the right results.

What I'd like is an example that drops someone into a chroot'd jail as "nobody" or "guest" running a copy of lynx if they are from one address -- but lets them log in as a normal user if they are from an internal address. (We'll assume a good anti-spoofing packet-filter on the router(s).)
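The closest I've come is an untested guess along these lines -- the jail path and the internal network here are hypothetical, and I haven't verified the option syntax:

# /etc/hosts.allow -- first match wins
in.telnetd : 192.168.1. : ALLOW
in.telnetd : ALL : twist /usr/local/bin/chrootuid /home/jail nobody /usr/bin/lynx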
Did you ever add the chrootuid functionality to tcpd?
How would you feel about an option to combine the hosts.allow and hosts.deny into just tcpd.conf?
(I know I can already put all the ALLOW and DENY directives in a single file -- and I'm not much of a programmer but even *I* could patch my own copy to change the filename -- I'm just talking about the general case).
-- Jim
From: Wietse Venema, wietse@wzv.win.tue.nl

Use "twist" to run a service that depends on the destination address:

fingerd@host1: ALL: twist /some/where/fingerd-for-host1

What I'd like is an example that drops someone into a chroot'd jail as "nobody" or "guest" running a copy of lynx if they are from one address -- but lets them log in as a normal user if they are from an internal address. (We'll assume a good anti-spoofing packet-filter on the router(s).)
I have a little program called chrootuid that you could use.
Did you ever add the chrootuid functionality to tcpd?
I would do that if there were a performance problem. Two small programs really are more secure than one bigger one.
How would you feel about an option to combine the hosts.allow and hosts.deny into just tcpd.conf?
What about compatibility with 1 million installations world-wide?
(I know I can already put all the ALLOW and DENY directives in a single file -- and I'm not much of a programmer but even *I* could patch my own copy to change the filename -- I'm just talking about the general case).
This is because the language evolved over time. Compatibility can become a pain in the rear.
--
Wietse
Automatic File Transfer

From: Kenneth Ng, kenng@kpmg.com

In Linux Gazette, there is a mention of how to transfer files automatically using ftp. Here is how:

#!/bin/csh
ftp -n remote.site << !
user joe blow
binary
put newfile
quit
!

And that's it. Granted, ssh is better. But sometimes you have to go somewhere that only supports ftp.
That's one of several ways. Another is to use ncftp -- which supports things like a "redial" option to keep trying a busy server until it gets through. ncftp also has a more advanced macro facility than the standard .netrc (FTP).

You can also use various Perl and Python libraries (or classes) to open ftp sessions and control them. You could use 'expect' to spawn and control the ftp program.

All of these methods are more flexible and much more robust than using the standard ftp client with redirection ("here" document or otherwise).
--
Jim
Installing wu-ftpd on a Linux Box

From: Stephen P. Smith, ischis@evergreen.com

I just installed wu-ftpd on my Linux box. I have version 2.4. I can log in under one of my accounts on the system and everything works just fine. If I try an anonymous ftp session, the email password is rejected.

What are the possible sources of failure? Where should I be going for more help? :-)
Do you have a user named 'ftp' in the /etc/passwd file?
done.
wu-ftpd takes that as a hint to allow *anonymous* FTP.
If you do have one -- or need to create one -- be sure that the password for it is "starred out." wu-ftpd will not authenticate against the system password that's defined for a user named "ftp."
done.
You should also set the shell to something like /bin/false or /bin/sync. (Make sure that /bin/false is really a binary and *not* a shell script -- there are security problems -- involving IFS (inter-field separators) -- if you use a shell script in the /etc/passwd shell field.)
done.
There is an FAQ for anonymous FTP (that's not Linux specific). There is also a How-To for FTP -- that is more Linux oriented. If you search Yahoo! on "wu-ftp" you'll find the web pages at Washington University (where it was created) and at academ.com -- a consulting service that's taken over development of the current betas.
Guess I will just have to do it the hard way. Will tell you what I find (just in case you want to know).
What does your /etc/ftpaccess file look like?
Did you compile a different path for the ftpaccess file (like /usr/local/etc/)?
What authentication libraries are you using (old fashioned DES hashes in the /etc/passwd, shadow, shadow with MD5 hashes -- like FreeBSD's default, or the new PAM stuff)?
Is this invoked through inetd.conf with tcpd (the TCP Wrappers)? If so, what does your /var/log/messages say after a login failure? (Hint: use the command 'tail -f /var/log/messages > /dev/tty7 &' to leave a continuously updated copy of the messages file sitting on one of your -- normally unused -- virtual consoles.)
One trick I've used to debug inetd launched programs (like ftpd and telnetd) is to wedge a copy of strace into the loop. Change the reference to wu.ftpd to trace.ftpd -- and create a shell or perl script named trace.ftpd that consists of something like:

#! /bin/sh
exec strace -o /tmp/ftpd.strace /usr/sbin/wu.ftpd

... and then inspect the strace file for clues about what failed. (This is handy for finding out that the program couldn't find a particular library or configuration file -- or some weird permissions problems, etc.)
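For example -- a quick way to spot a missing file or a permissions problem in that trace (assuming the output went to /tmp/ftpd.strace as in the script above):

grep -E 'ENOENT|EACCES' /tmp/ftpd.strace | less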
--
Jim
Trying to Boot a Laptop

From: Yash Khemani, khemani@plexstar.com

I've got a Toshiba Satellite Pro 415CS notebook computer on which I've installed RedHat 4.1. RedHat 4.1 was installed on a Jaz disk connected via an Adaptec SlimSCSI PCMCIA adapter. The installation went successfully, I believe, up until the LILO boot disk creation. I specified that I wanted LILO on a floppy -- so that nothing would be written to the internal IDE drive and also so that I could take the installation and run it on another such laptop. After rebooting, I tried booting from the LILO floppy that was created, but I get nothing but continuous streams of 0 1 0 1 0 1...

I am guessing that the LILO floppy does not have the PCMCIA drivers on it. What is the solution at this point to run RedHat on this machine?
You've got the right idea.
The 1010101010101... from LILO is a dead giveaway that your kernel is located on some device that cannot be accessed via the BIOS.
There are a couple of ways to solve the problem.
I'd suggest LOADLIN.EXE.
LOADLIN.EXE is a DOS program (which you might have guessed by the name) -- which can load a Linux kernel (stored as a DOS file) and pass it parameters (like LILO does). Basically LOADLIN loads a kernel (Linux or FreeBSD -- possibly others) which then "kicks" DOS "out from under it." In other words -- it's a one-way trip. The only way back to DOS is to reboot (or run dosemu ;-) .
LOADLIN is VCPI compatible -- meaning that it can run from a DOS command prompt even when you have a memory manager (like QEMM) loaded. You can also set LOADLIN as your "shell" in the CONFIG.SYS. That's particularly handy if you're using any of the later versions of DOS that support a multi-boot CONFIG.SYS (or you're using the MBOOT.SYS driver that provided multi-boot features in older versions of DOS).
To use LOADLIN you may have to create a REALBIOS.INT file (a map of the interrupt vectors that are set by your hardware -- before any drivers are loaded). To do this you use a program (REALBIOS.EXE) to create a special boot floppy, then you boot off that floppy (which records the interrupt vector table in a file) -- then reboot back off your DOS system and run the second stage of REALBIOS.EXE.
This little song and dance may be necessary for each hardware configuration. (However you can save and copy each of the REALBIOS.INT files if you have a couple of configurations that you switch between -- say, with a docking station and without.)
With LOADLIN you could create a DOS bootable floppy, with a copy of LOADLIN.EXE and a kernel (and the REALBIOS.INT -- if it exists). All of that will just barely fit on a 1.44M floppy.
Another way to do this would be to create a normal DOS directory on your laptop's IDE drive -- let's call it C:\LINUX (just to be creative). Then you'd put your LOADLIN.EXE and as many different kernels as you liked in that directory -- and maybe a batch file (maybe it could be called LINUX.BAT) to call LOADLIN with your preferred parameters. Here's a typical LINUX.BAT:

@ECHO OFF
ECHO "About to load Linux -- this is a one-way trip!"
PAUSE
LOADLIN lnx2029.krn root=/dev/sda1 ro

(where LNX2029.KRN might be a copy of the Linux-2.0.29 kernel -- with a suitable DOS name).
I'd also recommend another batch file (SINGLE.BAT) that loads Linux in single-user mode (for fixing things when they are broken). That would replace the LOADLIN line in LINUX.BAT with a line like:

LOADLIN lnx2029.krn single root=/dev/sda1 ro
Another way to do all of this is to simply dd a properly configured kernel to a floppy. You use the rdev command to patch the root device flags in the kernel and dump it to a floppy. This works because a Linux kernel is designed to work as a boot image.
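Here's a sketch of that sequence -- assuming your kernel image is named zImage and your root filesystem is on /dev/sda1:

rdev zImage /dev/sda1      # patch the root device into the kernel image
rdev -R zImage 1           # set the root filesystem to mount read-only
dd if=zImage of=/dev/fd0   # dump the kernel image straight to the floppy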
The only problem with this approach is that it doesn't allow you to pass any parameters to your kernel (to force single user mode, to select an alternate root device/filesystem, or whatever).
For other people who have a DOS system and want to try Linux -- but don't want to "commit" to it with a "whole" hard drive -- I recommend DOSLINUX.
A while back there was a small distribution called MiniLinux (and another called XDenu) which could install entirely within a normal DOS partition -- using the UMSDOS filesystem. Unfortunately MiniLinux has not been maintained -- so it's stuck with a 1.2 kernel and libraries.
There were several iterations of a distribution called DILINUX (DI = "Drop In") -- which appears to have eventually evolved into DOSLINUX. The most recent DOSLINUX seems to have been uploaded to the Incoming at Sunsite within the last two weeks -- it includes a 2.0.29 kernel.
The point of MiniLinux and DOSLINUX is to allow one to install a copy of Linux on a DOS system as though it were a DOS program. DOSLINUX comes as about 10Mb of compressed files -- and installs in about 20-30Mb of DOS file space. It includes Lynx, Minicom, and a suite of other utilities and applications.
All in all this is a quick and painless way to try Linux. So, if you have a DOS-using friend who's sitting on the fence, give them a copy of DOSLINUX and show them how easy it is.
thanks!
yash
You're welcome.
(Oh -- you might want to get those shift keys fixed -- e.e. cummings might sue for "look and feel".)
--
Jim
zmodem Reply
From: Donald Harter Jr., harter@mufn.org
I saw your post about zmodem in the Linux Gazette. I can't answer the reader's question, but maybe this will help. My access to the internet is a dial-in account (no SLIP, no PPP). I access the freenets. I can't use zmodem to transfer files from the internet and freenets to my PC. I can use kermit though. It seems that there are some control characters involved in zmodem that prevent it from being used with my type of connection. I saw some information about this on one of the freenets. They suggested using Telix and another related protocol. I tried that, but it didn't work either. Kermit is set up to run slow. You can get kermit to go faster in certain circumstances by executing its "FAST" macro. I can download data at about 700cps with the "FAST" macro of kermit. Unfortunately kermit hangs up the line for me so I have to "kill -9 kermitpid" to exit it. That problem can probably be eliminated with the right compile options. In certain cases I can't use the "FAST" macro when uploading.
I'm familiar with C-Kermit. In fact I may have an article in the June issue of SysAdmin magazine on that very topic.
The main points of my article are that C-Kermit is a telnet and rlogin client as well as a serial communications program -- and that it is a scripting language that's available on just about every platform around.
I know about Telix' support for the kermit transfer protocol. It sucks. On my main system I get about 1900 cps for ZMODEM transfers -- about 2200 for kermit FAST (between a copy of C-Kermit 5A(188) and 6.0.192) -- and about 70 cps (yes -- seventy!) between a copy of C-Kermit and Telix' internal kermit.
Other than that I've always liked Telix. Minicom has nice ncurses and color -- but is not nearly as featureful or stable as either Telix for DOS or any version of C-Kermit.
Your line hangups probably have to do with your settings for carrier-watch. Try SET CARRIER-WATCH OFF or ON and see if it still "hangs" your line. I suspect that it's actually just doing read() or write() calls in "blocking" mode. You might have to SET FLOW-CONTROL NONE, too. There are lots of C-Kermit settings. If you continue to have trouble -- post a message to the comp.protocols.kermit.misc newsgroup (preferred) or send a message to kermit-support@columbia.edu.
When I first started using C-Kermit (all of about two months ago) my initial questions were answered by Frank da Cruz himself (he's the creator of the Kermit protocol and the technical lead of the Kermit project at Columbia University). (That was before he knew that I'm a "journalist" -- O.K., quit laughing!) Frank is also quite active in the newsgroup. I think he provides about 70 or 80 percent of the technical support for the project.
Oh yeah! If you're using C-Kermit you should get the _Using_C-Kermit_ book. It was written by Frank da Cruz and Christine Gianone -- and is the principal source of funding for the Kermit project. From what I gather a copy of the book is your license to use the software.
-- Jim
From: Robert Rambo, robert.rambo@yale.edu
Hi, I was wondering if you can help me out. When I use the command 'startx -- -bpp16' to change the color depth, the windows in X are much bigger than the monitor display. So, nothing fits properly and everything has become larger. But the color depth has changed correctly. I use FVWM as my display manager. Is there some way to fix this problem?
You're using the 16 bits-per-pixel (16bpp) mode to increase your color depth -- and your description suggests that selecting this mode is causing the server to fall back to a lower resolution.
That is completely reasonable. If you have a 2Mb video card and you run it in 1024x768x256 or 1024x768x16 -- and then you try to run it with twice as many colors -- the video RAM has to come from somewhere. So it bumps you down to 800x600 or 640x480. These are just examples -- I don't deal with graphics much -- but the back-of-the-envelope arithmetic goes like this (ignoring any overhead the card or server imposes):
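video RAM needed = width x height x bytes per pixel

1024 x 768 x 1 byte (8bpp)    =  768Kb
1024 x 768 x 2 bytes (16bpp)  =  1.5Mb  -- fits on a 2Mb card
1280 x 1024 x 2 bytes (16bpp) =  2.5Mb  -- calls for a 4Mb card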
There are a lot of settings in the XConfig file. You may be able to tweak them to do much more with your existing video card. As I've said before -- XConfig files are still magic to me. They've shifted from blackest night to a sort of charcoal gray -- but I can't do them justice in a little article here. Pretty much I'd have to lay hands on it -- and mess with it for a couple of hours (and I'm definitely not the best one for that job).
If you haven't upgraded to a newer XFree86 (3.2?) then this would be a good time to try that. The newer one is much easier to configure and supports a better selection of hardware -- to a better degree than the older versions. I haven't heard of any serious bugs or problems with the upgrades.
You may also want to consider one of the commercial servers. Definitely check with them in advance to be absolutely certain that your hardware is supported before you buy. Ask around in the newsgroups for opinions about your combination of hardware. It may be that XFree86 supports your particular card better than Metro-X or whatever.
You may also want to look at beefing up your video hardware. By the arithmetic above, a 2Mb card should be the minimum for 16bpp at 1024x768 -- and a 4Mb card buys you headroom for higher resolutions. You should be able to look up the supported modes in your card's documentation or on the manufacturer's web site or BBS.
Also, is there some way to change the color depth setting to start X with a depth of 16 every time? I do not use the XDM manager to initiate an X session.
Yes -- it's somewhere in that XConfig file. I don't remember the exact line. I really wish a bona fide GUI X wiz would sign up for some of this "Answer Guy" service.
It doesn't matter whether you use xdm or not, if you put the desired mode in the XConfig file. However -- since you don't -- you could just write your own wrapper script, alias or shell function to call 'startx' with the -- -bpp16 options. You could even re-write 'startx' (it is just a shell script). That may seem like cheating -- but it may be easier than fighting your way through the XConfig file. (Do you get the impression that I just don't like that thing? It is better than a WIN.INI or a SYSTEM.INI -- but not by much.)
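A minimal wrapper might look like this -- a sketch; name it whatever you like and put it in your PATH (or make it an alias):

#! /bin/sh
# always start X with 16 bits per pixel
exec startx -- -bpp 16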
-- Jim Dennis
From: Brian Moore, bem@thorin.cmc.net
Being a big IMAP fan (and glad to see it finally getting recognition: Netscrape 4 and IE4 will both support it), your answer left a lot out.
Will these support the real features (storing and organizing folders on the server side)?
I heard that NS "Communicator" (the next release Netscape's Navigator series is apparently going to come with a name change) supports IMAP -- but it's possible to implement this support as just a variant of POP -- get all the message and immediately expunge all of them from the server.
It seems that this is how Eric S. Raymond's 'fetchmail' treats IMAP mailboxes -- as of about 2.5 (it seems that he's up to 3.x now).
The easiest IMAP server to install is certainly the University of Washington server. It works, handles nearly every mailbox format around and is very stable. It's also written by the guy in charge of the IMAP spec itself, Mark Crispin. As for clients, there is always Pine, which knows how to do IMAP quite well. This is part of most Linux distributions as well.
I did mention pine. However it's not my personal favorite. Do you know of a way to integrate IMAP with emacs mh-e/Gnus (or any mh compatible folder management system)?
For GUI clients there is ML, which is a nice client, but requires Motif and can be slow as sin over a modem when you have a large mailbox. That's available in source at http://www-CAMIS.Stanford.EDU/projects/imap/ml
I thought I mentioned that one as well -- but it's a blur to me.
I personally avoid GUI's like the plague. I'm typing this from my laptop, through a null modem link to my machine in the other room.
I run emacs under screen -- so I can use mh-e for most mail, Gnus for netnews and for some of my mailing lists (it can show news folders as though they were threaded news groups). screen allows me to detach my session from my terminal so I can log out, take off with the laptop, and re-attach to the same session later (via modem or when I get back home).
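(For anyone who hasn't tried it, the cycle looks something like this -- the details vary a bit with your version of screen:

screen emacs    # start emacs inside a screen session
                # ... C-a d detaches, leaving everything running ...
screen -r       # later, from any login: re-attach to the session
)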
Asking on the mailing list about static linked linux versions will get you one (and enough nagging may get them to actually put one of the current version up). ML is really the nicest mail client I have ever used. As for pop daemons with UIDL support, go for qpopper from qualcomm. ftp.qualcomm.com somewhere. Has UIDL and works fine.
O.K. I'll add that to my list.
Does that one also support APOP's authentication mechanism (which I gather prevents disclosing your password over an untrusted network by using something like an MD5 hash of your password concatenated with a date and time string -- or something like that)?
Does qpopper allow you to maintain a POP user account file that's separate from your /etc/passwd file?
Do you know of an IMAP server that supports these sorts of features (secure authentication and separate user base)?
(I know this probably seems like a switch -- the so called "Answer Guy" asking all the questions -- but hey -- I've got to get my answers from *somewhere*)
-- Jim
From: Graham Todd, gtodd@yorku.ca
PINE - one of the easiest to use mail clients around - does IMAP just fine. You can read mail from multiple servers and mailboxes and save it locally or in remote folders on the servers - which is what IMAP is all about: Internet Message Access Protocol = flexible and configurable *access* to mail servers without having to pop and fetch messages all over the place (but still having the ability to save locally if you want).
Netscape's Communicator 4.0b2 thing does too, but there are so many other ugly bits that I'm not gonna bite.
Jeez, pretty soon with this fancy new IMAP stuff you'll be able to do almost as much as you can right now with emacs and ange-ftp (which I use regularly to access remote mail folders and boxes without having to log in - it's all set up in .netrc). Of course the answer is almost always "emacs" .... BTW Linux makes a GREAT program loader for emacs ;-)
Seems kind of kludgey. Besides -- does that give you the main feature that's driving the creation of the IMAP/ACAP standards? Does it let you store your mail on a server and replicate that to a couple of different machines (say your desktop and your laptop) so you can read and respond to mail "offline" and from *either* system?
Yeah, more or less -- if you save the mail on your server to local folders or make a local folder be /me@other.mail.host:/usr/spool/me. Using ange-ftp seems to me exactly like IMAP in Pine or Netscape Communicator 4.0b2. Though apparently IMAP will update folders across hosts so that only that mail deleted locally (while offline) will get deleted on the remote host on the next login etc. etc. I don't know much about IMAP's technical standard either but find I get equal mail management capability from ange-ftp/VM (equal to Pine and Communicator so far).
WARNING: In a week or so when I get time I'm gonna ask you a tricky question about emacs and xemacs.
Feel free. Of course I do know a bit more about emacs than I do about X -- so you may not like my answer much.
Heh heh OK... (comp.emacs.xemacs is silent on this). Emacs running as emacs -nw in a tty (i.e. console or an xterm) runs fine and lets me use all the job control commands (suspend/fg etc.), but with XEmacs job control won't work unless I'm running as root. That is, if I'm running "xemacs" or "xemacs -nw" in an xterm or at the console and do C-z, and then once I'm done in the shell I do "fg", xemacs comes back but the keyboard seems to be bound to the tty/console settings (Ctrl-z Ctrl-s Ctrl-q etc. all respond as if I were in a dumb terminal). The only recourse is to Ctrl-z back out and kill xemacs. This does not happen if I run xemacs setuid root (impractical/scary) or as root (scary). Something somewhere that requires root permission or suid to reset the tty characteristics doesn't have it in xemacs -- but does in emacs... My only response so far has been that "you'll have to rebuild/recompile your xemacs" -- but surely this is wrong. Does anything more obvious occur to you? I feel it must be something simple in my setup (RH Linux 2.0.29). Of course if I could get this fixed I'd start feeling more comfortable not having GNU-Emacs on my machine ;-) which may not be an outcome you would favour.
I once had a problem similar to this one -- suspending minicom would suspend the task and lock me out of it. It seemed that the ownership of the tty was being changed.
So -- the question comes up -- what permissions are set on your /dev/tty* nodes? It seems that most Linux distributions are set up to have the login process chown these to the current user (and something seems to restore them during or after logout).
I don't know enough about the internals of this process. I did do a couple of experiments with the 'script' command and 'strace' using commands like:
strace -o /tmp/strace.script /usr/bin/script
... and eyeballing the trace file. This shows how the script command (which uses a pseudo-tty -- or pty) searches for an available device.
I then did a simple 'chmod 600 /dev/ttyp*' as root (this leaves a bunch of /dev/ttyq* and /dev/ttyr* nodes available). The 'script' command then reports that the system is "out of pty's."
Obviously the script command on my system doesn't do a very thorough search for pty's. It effectively only looks at the first page of them.
The next test I ran was to add a new line to my /etc/services file (which I called stracetel) -- and a new line to my /etc/inetd.conf that referred to it.
This line looks like this:
stracetel stream tcp nowait root /usr/sbin/tcpd \ /usr/bin/strace -o /root/tmp/t.strace /usr/sbin/in.telnetd
... all on one line, of course.
Then I connected to that with the command:
telnet localhost stracetel
This gives me an strace of how telnetd handles the allocation and preparation of a pty. Here, as I suspected, I saw chown() and chmod() calls after telnetd did its search through the list of pty's to find the first free one.
Basically both programs (and probably most other pty clients) attempt to open each pty until one returns a valid file descriptor or handle. (It might be nice if there were a system call or a daemon that would allow programs to just say "give me a pty" -- rather than forcing a flurry of failed open attempts -- but that's probably too much to ask for.)
The result of these experiments suggests that there are many ways of handling pty's -- and some of them may have to be set as compile-time options for your system.
It may be that you just need to make all the pty's mode 666 (which they are on my system) or you might chgrp them to a group like tty or pty, make them mode 660 and make all the pty-using programs on your system SGID.
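A sketch of that second approach -- the group name and the sample program here are arbitrary choices:

chgrp tty /dev/ttyp* /dev/ttyq* /dev/ttyr*
chmod 660 /dev/ttyp* /dev/ttyq* /dev/ttyr*
chgrp tty /usr/bin/xemacs && chmod g+s /usr/bin/xemacs    # SGID tty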
I've noticed that all of my pty's are 666 root.root (my tty's are root.tty and my ttyS*'s are root.uucp -- all mode 660 -- and all programs that need to open them either run as root (getty) or are SGID as appropriate).
Some of the policies for ownership and permissions are set by your distribution. Red Hat 2.x is *old* and some of these policies may have changed in the 3.03 and 4.1 releases. Mine is a 3.03 with *lots* of patches, updated RPM's and manually installed tarballs.
Frankly I don't know *all* of the security implications of having your /dev/tty* set to mode 666. Obviously normal attempts to open any of these while they're in use return errors (due to the kernel locking mechanisms). Other attempts to access them (through shell redirection, for example) seem to block on I/O. I suspect that a program that improperly opened its tty (failed to set the "exclusive" flag on the open call) would be vulnerable.
Since you're an emacs fan -- maybe you can tell me -- is there an mh-e/Gnus IMAP client?
No. Kyle Jones (VM maintainer/author) has said maybe IMAP4 for VM version 7. I think his idea is to make VM do what it does well and rely on outside packages to get the mail to it ...
Also -- isn't there a new release of ange-ftp? I forget the name -- but I'm sure it changed names too.
Yes it's called EFS - it preserves all the functionality but is more tightly meshed with dired - supposedly it will be easier to use EFS in other elisp packages (I don't know why or how this would be so).
I'll have to play with those a bit. Can VM handle mh style folders?
-- Jim
From: David J. Weis, weisd3458@uni.edu
I had a couple minor questions on UUCP. If you have a few minutes, I'd appreciate the help immensely. I'll tell you a little bit about what we're doing.
Glancing ahead -- I'd guess that this would take quite a bit more than a few minutes.
My company has a domain name registered (plconline.com) and two offices. One is the branch office which is located in the city with the ISP. The head office is kind of in the sticks in western Iowa. I've been commissioned to find out how difficult it would be to set up the uucp so the machine in Des Moines (the big city ;-) would grab all the domain mail and then possibly make a subdomain like logan.plconline.com for all the people in the main office to use email.
This would all be running on RedHat 4 over dialup uucp. The system in Des Moines uses uucp over tcp because it has to share the line with masquerading, etc.
Thanks for any advice or pointers you have.
Unfortunately this question is too broad to answer via e-mail. O'Reilly has a whole book on UUCP and there are several HOW-TO's for Taylor UUCP and sendmail under Linux.
My uucp mostly works but I haven't configured it to run over TCP yet. I also haven't configured my system to route to any uucp hosts within my domain.
You can address mail to a uucp host through a DNS by using the '%' operator. For example I can get my main mail system (antares.starshine.org) to forward mail to my laptop using an address like:
jim%mercury@starshine.org
... the DNS MX record for starshine.org routes mail to my ISP. My ISP then spools it up in UUCP until my machine (antares) picks it up. The name antares is basically transparent to most of this process.
When antares gets the mail it converts the percent sign into a "bang" (!) and spools it for mercury (which happens to be my laptop).
Obviously requiring all of your customers and correspondents to use percent signs in their addressing to your users is not going to work very well. It will probably result in a lot of lost mail, a lot of complaints and a constant barrage of support calls.
There are two ways to make your internal mail routing transparent to the rest of world. You can create a master aliases list on your mail hub (the easy way) or you can create DNS and MX entries for each of the hosts.
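The aliases approach might look something like this on the hub -- the user and host names here are hypothetical, and it assumes your sendmail has a working UUCP mailer:

# /etc/aliases on the mail hub
dave: logan!dave        # forward dave's mail over UUCP to the host "logan"
carol: carol            # carol reads her mail on the hub itself
# ... and run 'newaliases' after editing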
If you'd like more help we could arrange to talk on the phone. UUCP is difficult to set up for the first time (nearly vertical initial learning curve). Once it's set up it seems to be pretty low maintenance. However my metacarpals can't handle explaining the whole process via e-mail (and I don't understand it well enough to be brief).
-- Jim
From: Barry, remenyi@hotmail.com
Hi, I have a problem that I can't find the solution to:
I run Redhat 4.1 with mtools already installed; with it, I can copy a file to or from a DOS disk in A: with mcopy etc. But if I change the disk and do mdir, it gives me the listing of what was in the last disk. The only solution is to wait hours for the cache to expire before I can look at another disk.
The problem occurs no matter how I access the floppy. I also tried using dosemu, and mount, but I have the same problem. I can read and write from the first disk that I put in with no problems, but if I change the disk, the computer acts as if the first disk is still in the drive. It also doesn't matter who I am logged in as -- e.g. root has the same problem. I also upgraded mtools to 3.3 but no change.
Is there some way to disable the disk cache (I assume that's the problem) for the floppy drive?
You probably have a problem with the "change disk" detection circuitry on your floppy.
There's a pretty good chance that you'd see the same thing under DOS too.
Unfortunately I don't know of an easy way to solve this problem. You could try replacing the floppy drive ($30 or so), the controller ($20 to ???) and/or the cable.
If that's not feasible in your case you could try something like a mount/sync/umount (on a temporary mount point). This might force the system to detect the new floppy. It's very important not to try to write anything to a floppy when the system is confused about which floppy is in there.
DOS systems that I have used -- while they were afflicted with this problem -- sometimes severely trashed the directories on a diskette in that situation.
It probably doesn't even matter if the mount, sync, umount that I describe fails -- just so the system is forced to "rethink" what's there. I'd consider writing a short script to do this -- using a temporary mount point that's "user" accessible to avoid having to be root (and especially to avoid having to create any SUID root perl scripts or write a C wrapper or any of that jazz).
Here's a sample line for your /etc/fstab:

# /etc/fstab
/dev/fd0  /mnt/tmp  umsdos  noauto,rw,user  0 0
(according to my man pages the "user" option should imply the nosuid, nodev etc. options -- which prevent certain other security problems).
So your chdisk script might look something like:

#! /bin/sh
/bin/mount /mnt/tmp
/bin/sync
/bin/umount /mnt/tmp
... you could also just do a 'mount /mnt/tmp' or a 'mount /mnt/a' or whatever you like for your system -- and just use normal Linux commands to work with those files. The mtools are handy sometimes -- but far from indispensable on a Linux system with a good fstab file.
As a security note: mount must be SUID in order to allow non-root users to mount filesystems. Since there have been security exploits posted on mount specifically and various other SUID files chronically, I suggest configuring mount and umount such that they can only be executed by members of a specific group (like a group called "disk" or "floppy"). Then you can add yourself and any other users who have a valid reason to work at your console to that group. Finally change the permissions on mount and umount to something like:
-r-sr-x--- 1 root disk .... /bin/mount
.... i.e. don't allow "other" to execute it.
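In other words, something like:

chgrp disk /bin/mount /bin/umount
chmod 4550 /bin/mount /bin/umount    # -r-sr-x--- : SUID root, runnable only by group "disk"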
This also applies to all your SVGALib programs (which should not be executed except from the console) and as many of your other SUID programs as you can.
(... it would be nice to do that to sendmail -- and I've heard it's possible. However it's a bit trickier than I've had time to mess with on this system).
As PAM (pluggable authentication module) technology matures you'll be able to configure your system to dynamically assign group memberships based on time of day and source of login (value of `tty`).
This will be nice -- but it doesn't appear to be quite ready yet.
-- Jim
I just wanted to write to thank you for you response to my mail. I did as you suggested and the problem is solved!
Actually, you were also right about the problem occurring in DOS as I used to have a lot of floppies go bad before I went all the way to linux, but I didn't make the connection.
Anyway, thanks again, you've made my day!
Barry
You're welcome. I'm glad it wasn't something complicated. BTW: which suggestion worked for you? Replacing one component or another? Or did you just use the "mount, sync, umount" trick?
Under DOS I used to use Ctrl-C, from the COMMAND.COM A: prompt to force disk change detection. You can use that if you still boot this machine under DOS for some work.
-- Jim
From: Benjamin Peikes, benp@npsa.com
Answer guy,
I have two questions for you.
1) I'm using one machine with IPAliasing and was wondering if there is a version of inetd built so that you can have different servers spawned depending on the ip number connected to.
That's an excellent question. There is apparently no such feature or enhanced version of inetd or xinetd.
It also doesn't appear to be possible to use TCP Wrapper rules (tcpd, and the /etc/hosts.allow and /etc/hosts.deny) to implement this sort of virtual hosting.
So far it appears that all of the support for virtual hosting is being done by specific applications. Apache and some other web servers have support for it. The wu-ftpd's most recent versions support it.
I suspect that you could create a special version of inetd.conf to open sockets on specific local IP addresses and listen on those. I would implement that as a command line option -- passing it a regex and/or list of IP addresses to listen on after the existing command line option to specify which configuration file to use. Then you'd load different copies of this inetd with commands like:
/usr/sbin/inetd /etc/inetd.fred 192.168.14.0 17.18.0.0
/usr/sbin/inetd /etc/inetd.barney barneyweb
/usr/sbin/inetd /etc/inetd.wilma 192.168.2.3
(This would be something like: all of the 192.168.14.* addresses and all of the 17.18.*.* addresses are handled by the first inetd; all of the accesses to a host named barneyweb (presumably looked up through the /etc/hosts file) would be handled by the next inetd; and all of the accesses to the IP alias 192.168.2.3 would be handled by the last one.)
This would allow one to retain the exact format of the existing inetd files.
However I don't know enough about sockets programming to know how much code this would entail. The output of 'netstat -a' on my machine here shows the system listening on *:smtp and *:telnet (among others). I suspect that those stars would show up differently if I had a socket open for a specific service on a specific address.
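(Illustrative only -- the exact columns vary between versions of netstat:

tcp  0  0  *:telnet          *:*  LISTEN    # listening on all local addresses
tcp  0  0  barneyweb:telnet  *:*  LISTEN    # bound to one aliased address

... the second form is what the modified inetd would produce.)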
This scheme might use up too many file descriptors. Another approach would be to have a modified tcpd. This would have to have some option whereby the destination *as well as* the source was matched in the /etc/tcpd.conf file(s).
(Personally I think that tcpd should be compiled with a change -- so that the single tcpd.conf file is used in preference to the /etc/hosts.allow and /etc/hosts.deny files. Current versions do support the single conf file -- but the naming is still screwy).
I'm not sure quite how Wietse would respond to this -- possibly by repeating the question:
"If you want me to add that -- what should I take OUT?"
(which is what he said once to me when I suggested merging his chrootuid code with tcpd).
I've blind copied Wietse on this (Hi!). I doubt he has time to read the Linux Gazette.
2) A related problem: I have one machine running as a mail server for several domains where the users are using pop to get their mail. The problem is that the From: line always has the name of the server on it. Is there a way to use IPaliasing to fix this? Or do I have to muck around with the sendmail.conf file?
This is becoming a common question.
Here's a couple of pointers to web sites and FAQ or HOWTO documents that deal specifically with "Virtual Mail Hosting"
(look for references to "virtualdomains")
... and here's one guide to Virtual Web Hosting:
I guess the best way to do this would be to change inetd to figure out which interface the connection has been made on and then pick the correct inetd.conf to reference, like:

inetd.conf.207.122.3.8
inetd.conf.207.122.3.90
I would recommend that as a default behavior. I suggested adding additional parameters to the command line specifically because it could be done without breaking any backward compatibility. The default would be to simply work as it does now.
I still suspect that this has some scalability problems -- it might not be able to handle several hundred or several thousand aliased addresses.
It might still be useful to implement it as a variation of -- or enhancement to -- tcpd (TCP_Wrappers).
I think that inetd reads in the configuration file when it starts because it needs a SIGHUP to force it to reread the conf file. All you would have to do is make it reference the right table.
This is also documented in the inetd man page.
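That is, after editing the configuration file, something like:

kill -HUP `cat /var/run/inetd.pid`

... assuming your inetd records its process ID in /var/run/inetd.pid (most Linux versions do).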
Do you know where I could find the code? I would be interested in looking at it.
The source code from inetd should be in the bundle of sources that comes with the "NetKit"
Look to:
ftp:..ftp.inka.de/pub/comp/Linux/networking/NetTools/
and mirrored at:
ftp://sunsite.unc.edu/pub/Linux/system/network/NET-3-HOWTO/
... this includes the history of its development and the names of people who were active in it at various stages.
If you're going to try to hack this together -- I'd suggest a friendly posting to the comp.os.linux.development.system newsgroup -- and possibly some e-mail to a couple of carefully chosen people named in the NET-3-HOWTO.
-- Jim
From: John Doe
The next time you answer a modem question, you'd do well
to recommend reading of the very good Navas Modem FAQ at
http://www.aimnet.com/~jnavas/modem/faq.html/
Well, here's someone who wants to make an anonymous tip to "The Answer Guy."
At "John Doe's" request I looked over this site. It does have extensive information about modems -- including lots of press releases about which companies are acquiring each other (3Com over US Robotics, Quarterdeck gets DataStorm).
However there didn't appear to be any references to Linux, Unix or FreeBSD.
So -- if one needs information about modems in general this looks like an excellent site to visit. However, if the question pertains specifically to using your modem with Linux -- I'd suggest: http://sunsite.unc.edu/LDP/HOWTO/Serial-HOWTO.html
-- Jim
From: Yang, lftian@ms.fudan.edu.cn
I have an AT 3300 card (from Aztech) which integrates the functions of a sound card and a 28.8K modem. It seems that it needs a special driver for its modem function to work. In MSDOS, there is an aztpnp.exe for that purpose. Do you know of any way I can get the card to work (at least its modem function) in Linux?
Tianming Yang
I'm not familiar with that device. The name of the driver suggests that this is a Plug 'n Play (pnp) device (sometimes we use the phrase "plug and *pray*" -- as it can be a toss of the dice to see whether they'll work as intended).
My guess would be that this is a PCMCIA card for a laptop system (which I personally pronounce "piecemeal").
Did you look in the "Hardware HOWTO" (start at www.ssc.com, online mirror of FAQ's and HOWTO's)?
Did you go to Yahoo! and do a keyword search on the string:
linux +aztech
... (the plus sign is important there)?
Since all of the real details about the configuration of the card are determined by the manufacturer (Aztech in this case) I would start by contacting them.
If they've never heard of Linux -- or express no interest in supporting it -- please consider letting them know that Linux support affects your purchasing decisions. Also let them know that getting support for Linux is likely to cost them very little.
How to get a Linux driver for your hardware:
If you are a hardware company that would like to provide support for Linux and FreeBSD and other operating systems -- but you don't have the development budget -- just ask.
That's right. Go to the comp.os.linux.development.system newsgroups and explain that you'd like to provide full documentation and a couple of units of your hardware to a team of Linux programmers in exchange for a freely distributable driver. Be sure to make the sources for one of your other drivers (preferably any UNIX, DOS, or OS/2 driver) available to them.
If you don't like that approach, consider publishing the sources to your existing drivers. If you are really in the hardware business then the benefits of diverse OS support should far outweigh any marginal "edge" you might get from not letting anyone see "how you do it."
(Just a suggestion for all those hardware vendors out there).
-- Jim
From: Dani Fricker, 101550.3160@CompuServe.COM
I need your help. For some reason I have to identify a user on my webserver by his/her IP address. The fact is that a user's logons come from different physical machines. That means that I have to assign something like a virtual IP address to a user's login name -- something like a reverse masquerading.
The IP Address of any connecting client is provided to any CGI scripts you run, and is stored in the server's access log (or a reverse DNS lookup of it is stored therein -- depending on your httpd and configuration).
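For example, here's a minimal CGI sketch (REMOTE_ADDR is part of the standard CGI environment -- the script name is made up):

#!/bin/sh
# whoami.cgi -- report the connecting client's address
echo "Content-type: text/plain"
echo ""
echo "Your address appears to be: $REMOTE_ADDR"

Behind a masquerading host this will, of course, report the masquerade host's address -- which is precisely the problem at hand.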
* Note: I suggest disabling reverse DNS lookups on web servers wherever possible. It generates a lot of unnecessary traffic, and you can isolate, sort, and look up the IP addresses in batches when you want to generate statistics involving domain names.
(I also tend to think that most of the reports done on web traffic logs have about as much rigor and resemblance to statistical analysis as reading chicken entrails).
My IP gateway connects my inner LAN over two token ring network cards (sorry, not my idea!) with the internet (lan <-> tr0 <-> tr1 <-> internet). The masquerading forward rule of ipfwadm gives me the possibility to indicate a source and a destination address.
Oh. So all of the clients that you're interested in are on a private LAN and going through a masquerading/NAT server (network address translation).
I would try using ident for starters. Run identd on your masquerading host and make calls to the ident service from your CGI scripts. I don't think it will work -- but trying it should yield a little info.
From there you might be able to configure all the clients on the inner LAN to use an *applications* level proxy (squid -- formerly cached -- the CERN httpd, or the Apache cache/proxy server). Masquerading can be thought of as a network layer proxying service, while SOCKS and similar services -- which work with the co-operation of the client software -- are applications layer proxies.
I don't know if the private net IP address or other info will propagate through any of these HTTP proxies.
If this is *really* important to you, you could consider writing your own "NAT Ident" service and client. I don't know how difficult that would be -- but it seems like the code for the identd (and the RFC 931? spec) might give you a starting point for defining a protocol (you might want to secure that service under TCP_Wrappers). You might want to consider making this a TCP "Multiplexed" service -- look for info on tcpmux for details about that.
The gist of tcpmux is that it allows your custom client to talk to a daemon on TCP port 1 of the server host and ask for a service by name (rather than relying on "Well-Known Port Addresses"). So, if you're going to create a new service -- it makes sense to put it under tcpmux so you don't pick your own port number for it -- and then have the IANA assign that port to something else that you might want later.
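You can poke at a tcpmux server by hand -- a rough sketch (the hostname and service name here are placeholders):

telnet servername 1
# type the service name followed by a carriage return:
#     myservice
# the multiplexor answers with a "+" line (accepted -- you are
# now talking to the service) or a "-" line (refused).  The name
# HELP is reserved -- it should get you a list of the services
# the multiplexor knows about (see RFC 1078 for the details).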
do you see a possibility for an 'address assignment' between the two interfaces? if you do please let me know.
I don't know of any existing way to determine the IP address of a client on the other side of any NAT/masquerading host -- I'm not even sure if there's any existing way to do it for a client behind a SOCKS or TIS FWTK or other applications level proxy.
I'll be honest. With most "Answer Guy" questions I do some Yahoo!, Alta-vista and SavvySearch queries -- and ask around a bit (unless I already know the answer pretty well -- which doesn't happen all that often these days). I skipped that this time -- since I'm pretty sure that there's nothing out there that does this.
I welcome any corrections on this point. I'll be happy to forward any refutations and corrections to Dani.
All of this begs the greater question:
What are you really trying to do?
If you are trying to provide some form of transparent access control to your webserver (so local users can see special stuff without using a "name and password") -- there are better ways available.
Netscape and Internet Explorer both support a form of client-certificate SSL -- which is supported at the server side by the Stronghold (commercial Apache) server.
As an alternative -- I'd look at the possibility of finding or writing a Kerberos "auth" module for Apache (and deploying Kerberos to the clients). This might be more involved than your management is willing to go for -- but writing new variations of the identd service might also fall into that category.
IP addresses are a notoriously bad form of access control. If you have a properly configured set of anti-spoofing rules in the packet filters on your router -- and you can show that no other routes exist into your LAN -- then you can base access controls to services (TCP_Wrappers) at about the granularity of "from here" and "not from here." Attempting to read more into them than that is foolhardy.
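For example (a sketch -- the network number is made up), that "from here/not from here" policy looks like this under TCP_Wrappers:

# /etc/hosts.deny -- default: deny everyone
ALL: ALL
# /etc/hosts.allow -- ... except our own LAN
ALL: 192.168.14. LOCAL

(The trailing dot matches everything in 192.168.14.*; LOCAL matches hostnames with no dot in them.)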
Ethernet and Token Ring MAC (media access control) addresses (sometimes erroneously called "BIA's" -- burned in addresses) are just about as bad (most cards these days have options to over-ride the BIA with another MAC -- usually a feature of operating the card in "promiscuous" mode).
Yet another approach to the problem might be to simply put a web server on the internal LAN (no routing through the NAT/masquerading host) -- and use something like rdist to replicate/mirror the content between the appropriate document trees on the internal and external web servers.
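A minimal rdist Distfile for that might look like (the hostname and paths here are hypothetical):

HOSTS = ( wwwext )
FILES = ( /home/httpd/html )
${FILES} -> ${HOSTS}
	install ;

... run periodically out of cron on the internal machine. Keep in mind that rdist traditionally works over rsh -- so think about the trust relationship that implies before pointing it across any network you don't control.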
Basically we'd need to know much more about your requirements in order to give relevant recommendations.
-- Jim
From: Mohammad A. Rezaei, rezaei@tristan.TN.CORNELL.EDU
I just read your response to duplicating a hard drive using dd.
I think using dd limits the uses of this technique too much.
I absolutely agree. I wonder where I suggested 'dd' without expressing my misgivings.
Please consider quoting little portions of my postings when making references to them -- I write a lot and can't remember past postings without some context.
I have more than once installed/transferred entire hard drives using tar. Simply put both drives in the same machine, mount the new drive in /mnt and do something like
tar -c -X /tmp/excludes -f - / | (cd /mnt; tar xvf -)
The file /tmp/excludes should contain:
/mnt
/proc
and any other non-local, mounted drives, such as nfs mount points.
There are better ways to do this. One way is to use a command like:
find ... -xdev -type f | tar cTf - - | \
     (cd ... && tar xpf - )

Another is to use:

find ... | cpio pvum /new/directory

... which I only learned after years of using the tar | (cd ... && tar) construct.
In both of these cases you can use find parameters to include just the files that you want. (Note: with tar you *must* prevent find from printing any directory names by using the -type f (or more precisely a \! -type d clause) -- since tar will default to tar'ing any directories named in a recursive fashion).
The -T (capital "tee") option to GNU tar means to "Take" a list of files as an "include" list. It is the complement to the -X option that you list.
You can also pipe the output of your find through grep -v (or egrep -v) to filter out a list of files that you want to exclude.
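For instance (the excluded directories here are just illustrative):

find / -xdev | egrep -v '^(/tmp|/usr/local/archive)' | cpio -pdvum /mnt/newroot

... where the 'd' tells cpio to create leading directories as needed.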
Finally, one has to install the drive into the new machine, boot from floppy and run lilo. The disks don't have to be identical. The only disadvantage is having to run lilo, but that takes just a few minutes.
The only message I can remember posting about 'dd' had an extensive discussion of using tar and cpio for copying trees. Am I forgetting one -- or did you only get part of my message?
Hope this helps.
Hopefully it will help some readers. The issues of copying file trees and doing differential and incremental backups are not well covered in current books on system administration.
When I do a full backup I like to verify that it was successful by extracting a table of contents or file listing from the backup media. I then keep a compressed copy of this. Here I use tar:
tar tf /dev/st0 | gzip > /root/tapes.contents/.....
.... where the contents list is named something like:
antares-X.19970408
.... which is a hostname, a volume (tape) number and a date in YYYYMMDD format (for proper collation -- sorting).
To do a differential I use something like:
find / -newer /root/tape.contents/.... \
     | egrep -v "^(/tmp|/proc|/var/spool/news)" \
     | tar czTf - /mnt/mo/diff.`date +%Y%m%d`.tar
... (actually it's more complicated than that since I build the list and compute the size -- and do some stuff to make sure that the right volume is on the Magneto Optical drive -- and mail nastygrams to myself if the differential won't fit on that volume -- if the volume is the most recent one (I don't overwrite the most recent -- I rotate through about three generations) -- etc).
However this is the core of a differential backup. If you wanted an incremental -- you'd supply a different file to the -newer switch on your find command.
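Concretely -- these filenames follow the naming scheme I described above, but the particular names are invented:

# differential: everything changed since the last FULL backup:
find / -newer /root/tape.contents/antares-1.19970408 ...
# incremental: everything changed since the most recent backup of
# *any* kind -- e.g. the previous differential's archive file:
find / -newer /mnt/mo/diff.19970415.tar ...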
The difference between differential and incremental is difficult to explain briefly (I spent about a year explaining it to customers of the Norton Backup). Think of it this way:
If you have a full -- you can just restore that.
If you have a full, and a series of differentials, you can restore the most recent full, and then the most recent differential (any older fulls or differentials are unneeded).
If you have a full and a series of incrementals, you need to restore the most recent full, and then each subsequent incremental -- in order -- up to the most recent.
It's possible (even sensible in some cases) to use a hybrid of all three methods. Let's say you have a large server that takes all day and a rack full of tapes to do a full backup. You might be able to do differentials for a week or two on a single tape per night. When that fills up you might do an incremental, and then go back to differentials. Doing this to a maximum of three incrementals might keep your all day backup marathons down to once a month. The restore must go through the "hierarchy" of media in the correct order -- most recent full, each subsequent incremental in order, and finally the most recent differential that was done after that.
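So a restore under that hybrid scheme would run something like this (a sketch -- the devices and dates are made up):

# 1. the most recent full:
tar xpf /dev/st0
# 2. each subsequent incremental, oldest first:
tar xpzf /mnt/mo/incr.19970501.tar
tar xpzf /mnt/mo/incr.19970515.tar
# 3. finally, the most recent differential done after those:
tar xpzf /mnt/mo/diff.19970528.tar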
(Personally, I avoid such complicated arrangements like the plague. However they are necessary in some sites.)
-- Jim