
Application Directories

Most software packages need to install a large number of files to work -- binaries, images, documentation, etc. Until now, this has been done by providing an install script (possibly in a Makefile or an RPM spec file) which puts each file in its correct location. If you're lucky, there may also be an uninstaller to get rid of them again. Both must be run as root, which is awkward and has security issues. In this article, I present an alternative system. This system, which has already been used by a number of applications, is faster, easier (both for the user and the distributor), and doesn't require root access. It's also simpler, safer, and less error prone.

What directories are for

Directories are used to group files and allow them to be found and manipulated easily. A good directory structure will make common operations easier. I'd like you to look at the next two operations carefully, and decide which you think is more common:

  1. I want some documentation in "man" format, but I don't know which package I need help with.
  2. I want the documentation for the GIMP, but I don't know what format it's in.

Now try these two:

  1. I want to delete all binaries from my system, but leave the info pages.
  2. I want to delete all files belonging to Netscape, but leave other programs alone.

I'm guessing you answered (2) to both of those. So why do we have "bin/gimp" and not "gimp/bin"?

Introducing self-contained applications

People seem to be aware that installing and uninstalling software is a problem, but the solution isn't ever-more-complicated package managers!

An application directory contains all the files needed by an application -- its help files (in a subdirectory called "Help"), an executable to run when the application is invoked ("AppRun") and, optionally, a graphical icon to represent it. (See the full details on this page).
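For illustration, a release of an application such as the System program described below might be laid out like this (AppRun and Help are the convention; the other names here are just examples, not fixed rules):

  System/              the application directory itself
    AppRun             executable run when the application is invoked
    Help/              documentation
    src/               source code (present in the source release)
    Linux-ix86/        compiled binary for one platform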

The elegance of this scheme really came home to me when I tried to package System, a little program that shows a bar graph of processes and their memory usage. I'd created it as an application directory, and was running it myself by clicking on it in a filer window. I wanted to make source code and binaries available for other people.

I'd already had some experience of packaging with RPM -- you have to create a "spec" file containing instructions to unpack, compile, install, etc. -- a major headache! And then the Debian users complain that they want debs, so you need to make source RPMs and debs, plus both binary RPMs and debs for every platform. And then users without packaging systems want binary tarballs to "make install" -- a real nightmare!

Here's how I made the source release of System:

  1. I moved the binary out of the System directory, and did a "make clean" in the "src" directory (to save space).
  2. I dragged System to the archiver to create the source archive.

Here's how I made the Linux-x86 binary release:

  1. I moved the "src" directory out of System and stripped the binary (both just to make the archive smaller).
  2. I dragged System to the archiver to create the binary archive.

Wow. That was quite a lot easier!

When someone wants to use System, they extract the archive and click on the System icon which appears. The binary version will simply run; the source version will bring up an xterm and compile itself (the first time), and then run.

The ease of this scheme from the user's side didn't really strike me until I tried using an application provided by someone else. It was a little load monitor for the panel. Here's how I installed it:

  1. I clicked on the Load.tgz file to extract it. The "Load" application appeared, complete with its own little icon.
  2. I dragged it to the panel and it ran (there was a brief delay while it automatically compiled itself in an xterm).

Compare those instructions to "Open a terminal, cd to the right directory, run configure, run make, su to root, then do make install." If that still sounds pretty easy, imagine you're explaining this to your parents. If it still seems too easy, imagine they don't know the root password.

Conclusion

To summarize the advantages of application directories:

  1. You can use the same actions to run an application, whether it's a binary or a source archive.
  2. There's no need to run any scripts as root. This is much safer and allows users to install software themselves. It also means that the software admin doesn't need to be root to install software.
  3. You can install software wherever you want (just move the directory) and install a new version of some software simply by putting it somewhere else or giving it a different name; there are no conflicts with shared files.
  4. You can uninstall by deleting the directory. You know where it is because it's the thing you click on to run it. This is very useful if you don't use a package manager.

You may be worried about supporting multiple architectures. That's why we separate "bin" from "share", right? So that each machine can remotely mount the correct binary directory for its platform?

With application directories, this is even easier! The AppRun file is usually a shell script which loads a binary for the current platform, compiling a new one if it's missing. So, just share the same application to every client machine, and it will pick the right binary when you run it!
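As an illustration (this is my sketch, not the actual System script; the platform-directory naming and the xterm fallback are assumptions based on the description above), such an AppRun might look like:

  #!/bin/sh
  # find the directory this script lives in (see the dirname discussion
  # in the comments below for the cd-and-pwd idiom)
  APP_DIR=`cd \`dirname "$0"\` && pwd`
  PLATFORM=`uname -s`-`uname -m`          # e.g. Linux-i686

  BIN="$APP_DIR/$PLATFORM/myapp"          # "myapp" is a placeholder name
  if [ ! -x "$BIN" ]; then
      # no binary for this platform yet: compile one, showing the
      # output in an xterm
      xterm -e sh -c "cd '$APP_DIR/src' && make &&
          mkdir -p '$APP_DIR/$PLATFORM' && cp myapp '$BIN'"
      [ -x "$BIN" ] || exit 1             # compilation failed
  fi
  exec "$BIN" "$@"

The same directory can then be shared between platforms; each machine that runs it adds its own binary alongside the others.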

I hope I've convinced you that application directories are an easy way to distribute software, and I eagerly await support from the various file managers and shells out there!

If you want to see application directories in action, take a look at ROX-Filer.

Recent comments

22 Apr 2001 00:31 thesquid

Program Files
Windows 3.1 etc. used to have the UNIX-like scheme of throwing all the binaries and all the libraries into directories together for the most part (except for an application's main .EXE and .HLP files) and this was a problem. The "Program Files" technique of standardizing a place for each installed package to have its own separate folder was a huge step in the right direction. This is definitely something Micro$oft has done right, and this type of keep-applications-separated-for-easier-management definitely needs to be looked at for UNIX.

22 Apr 2001 00:59 ejr

Depot, encap, etc.
There's a long history here. The most up-to-date system that merges the package directory and Unix schemes is encap. Others include the old CMU Depot and GNU Stow. NeXT placed apps in their own directories, and I imagine MacOS X does, too.

The main problem is that packages aren't always cleanly divisible, leading to Solaris-like packages. (Solaris is just one example.) And now you start getting nasty inter-package dependencies. So then you start heading towards being over-specific and inflexible, or towards modules. But removing a module from a current environment is difficult. Also, analyzing your needs sufficiently well to use modules at all just takes too much time.

In short, there's a lot of previous work, and no satisfactory answer for all cases.

Part of the problem lies with how programs look for their data. Some, like gcc and just about everything else, get horribly confused when compiled-in paths fail. Others, like teTeX (www.tug.org/tetex/) and other web2c-based TeXs, do a great job in 99% of the cases. But they cheat by being all-in-one systems. Mixing, say, a newer version of ConTeXt with teTeX without fully replacing the old one can lead to insanity. This is true even though all the packages are nicely placed into their own subdirectories (modulo some config files).

It's a hard problem. Everyone has different needs. Using encap / epkg with semi-large, unremovable modules seems a fair compromise. You might want to look into those.

Jason

22 Apr 2001 01:25 moshez

UNIX FAQ, Anyone?
When I once read the UNIX FAQ, I distinctly remember the question "how do I find out which directory my executable is in?", and the answer was "you can't".

I was intrigued to see how this solves the problem -- predictably, the script in ROX-Filer uses something like "APP_DIR=`dirname $0`". This breaks in so many ways: what if I run "./AppRun"? dirname is "." and if the application CDs anywhere... well, it's screwed. Not to mention that it becomes dangerous to have symbolic links to the script runner, etc. etc.

I must say that this turned me off quite a bit.

22 Apr 2001 01:32 bergo

Bad Approach
Having software packages spread across the filesystem is not a behaviour we should encourage. In fact, some packages do spawn new hierarchies, like teTeX, but that's quite a special package. We don't want the average johnny to establish his project like a weed on the system. In fact, one of the differences between Windows and Unix-like systems that I like most is that the disk hierarchy of Unix doesn't become the hell a Windows hierarchy is. Win 95 tried to fix that with the Program Files thing, but the plethora of deep hierarchies below that got as unmanageable as the previous situation.

Do you want to install a particular piece of software into a particular hierarchy? configure --prefix=/usr/Program\ Files/gimp.
And to get a working configure script and a working uninstall target, learn to use GNU autoconf, automake and associated tools.

One of the advantages of the Unix way of life is that your PATH variable does not need to be 4 KBytes long (which speeds up execution too -- on some Windows systems, running FOO.EXE can take the equivalent of a find / -name "FOO.EXE", often over SMB/NFS-mounted directories, sending the user to la-la-land). Not to mention the simplification of volume management (i.e.: Great! Power came back, we can go back to work. Oh, wait, 1 of the 36 NFS servers is halted on boot waiting for the admins to fix it; since the software we use is spread across the hierarchy on 9 different servers, any of them that goes down brings the whole software down. Open mouth, insert foot.)

And about "I want the documentation but I don't know what package the help is in": what you need then is broader help about the kind of task you're trying to perform. Example: if you need help setting up a firewall and don't know what man page to read, then what you need is not a man page, but something like an LDP HOWTO on firewall setup. Another amazing thing about Unix is that its simplicity usually keeps people from messing with what they shouldn't (e.g.: "look at the ipchains man page! I can set up a firewall! -- but what's a firewall?")

22 Apr 2001 01:35 lahvak

Re: Program Files

> The "Program Files" technique
> of standardizing a place for each
> installed package to have its own
> separate folder was a huge step in the
> right direction. This is definitely
> something Micro$oft has done right, and
> this type of
> keep-applications-separated-for-easier-manageme
nt
> definitely needs to be looked at for
> UNIX.

I cannot disagree more. This is simply an administration nightmare. At work, I have two machines on my desk. One runs Debian, the other NT. I have much more software installed on the Linux machine, and I use it much more often. Still, during the year since the last reinstall of the NT, the disk is in total chaos. I have no idea where things are, each installer puts things in a different place, and I am not even sure what is there. Trying to set PATH so that I can run things from the command line is simply impossible. Several administrators told me that when they need to give an NT machine to a new user, they always reformat the disks and reinstall the system from scratch, because trying to clean it by hand is impossible. Handing over a unix desktop machine means deleting the old user and adding a new one.

As for users installing their own software, I did it for years on Solaris. It is extremely easy if the software uses autoconf: you just run ./configure --prefix=${HOME}, and everything installs into appropriate directories in your home. Otherwise you just edit a couple of lines in the makefile.

The only advantage I can see is with uninstalling software. On NT, you can always find where each package lives, delete the directory, and then wonder why your registry is full of old useless crap. On my Linux machine at work, almost everything is installed from packages. Debs automatically uninstall themselves; I don't have to care where the software is, or where the blasted uninstall program for that particular package is, I just run dpkg or dselect and let it do the job. Of course that makes it harder for packagers, but it is really easy for users. At home I have a lot of software in /usr/local/, and it was hard to manage for a while. Then I discovered stow. Now every package has its directory under /usr/local/stow, and all files from there are symlinked to the appropriate directories. One thing I wish is for autoconf to cooperate nicely with stow. The way I imagine it is configure checking for a $STOW_ROOT environment variable, automatically setting the prefix to $STOW_ROOT/packagename, and adding a stow command to the makefile install section.
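A minimal sketch of that wish (my assumptions: $STOW_ROOT defaults to /usr/local/stow, "myapp-1.0" is a hypothetical package name, and GNU Stow provides the stow command):

  #!/bin/sh
  # configure, install into a per-package stow directory, then symlink
  : ${STOW_ROOT:=/usr/local/stow}
  PKG=myapp-1.0
  ./configure --prefix="$STOW_ROOT/$PKG" &&
      make &&
      make install &&
      (cd "$STOW_ROOT" && stow "$PKG")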

As a user on a unix machine I very rarely have to even know where something is installed. And if I need to know it, there are only a couple of places where it can be. Every time I use Windows, either my NT box or a 98 machine my wife has at her office, I end up spending a lot of time in that horrible "find files" dialog window. And what good is it to have bash on Windows if you have to change your $PATH every time you install new software, and your $PATH is longer than a week before payday?

22 Apr 2001 01:59 bergo

Re: Bad Approach
Never mind the last paragraph. (Too much coding, too little sleep -- I noticed too late that he wasn't defending option #1.)

22 Apr 2001 04:00 xant

Re: UNIX FAQ, Anyone?

> I was intrigued to see how this solves
> the problem --
> predictably, the script in ROX-filer
> uses something'
> like "APP_DIR = `dirname
> $0`". This breaks in so
> many ways: what if I run
> "./AppRun"? dirname is
> "."
> and if the application CDs
> anywhere...well, it's
> screwed.

APP_DIR=`cd \`dirname $0\` && pwd`

22 Apr 2001 04:27 xant

Grumble.. Unix dir structure is easier for admins AND users, just learn a LITTLE sh
Ok, gotta change the hostname on this box. Hmm, I'll bet that's set in 18 million places. How on earth will I find them all? Oh yes,

# find /etc ! -type d -exec grep -i $HOSTNAME {} \;

(I actually did this yesterday.)

Shoot, I think I need to upgrade libgtk+ for this software. How on earth will I find out what packages are linked against it? Oh yes, I'm on Unix, so I can

# for i in `echo $PATH | tr : '\n'`; do ldd $i/* | grep libgtk; done

(I've used this one for a number of things; greatly simplifies boot disk creation, too.)

Shall I go on? Damn, I really can't remember the name of the package that manages my audio system.

# find /usr/doc ! -type d -exec grep -iH audio {} 2> /dev/null \;

A huge number of tasks are made easier by the Unix directory structure that are unrealistic even to try with app directories. The only thing that is made harder is uninstalling software. But this is Unix; we don't need no stinking registry. Want an uninstaller? Here's your Makefile.

# you DO have variables for these already, right?
FILELIST = $(BINARIES) $(DOCS) $(LIBRARIES) . . .
UNINSTALLER = /var/lib/uninstall/myapp.uninstall

install:
        # do the actual installing
        # . . . done installing, now make the uninstaller
        echo '#!/bin/sh' > $(UNINSTALLER)
        for i in $(FILELIST); do echo 'rm -f' $$i >> $(UNINSTALLER); done
        # this one removes all empty directories, which is what you want
        for i in $(DIRLIST); do echo 'rmdir -p' $$i >> $(UNINSTALLER); done
        chmod +x $(UNINSTALLER)
        @echo ==== Run $(UNINSTALLER) to get rid of myapp ====

uninstall:
        $(UNINSTALLER)

22 Apr 2001 04:45 karellen

umm
I'm comfortable with the /bin, /sbin, /usr/bin, /usr/sbin scheme. Users who can't cope with that should go use Windows or BeOS or something. There is GNU Stow, which does what you're saying. I'm using instmon && installwatch to install/remove source packages. We shouldn't complicate the compilation/install procedure any more, as it's already overcomplicated. I can imagine a GUI popup saying "The program has failed to compile, do you want to run the compilation troubleshooter?". No thanx, /bin rules.

22 Apr 2001 04:53 siks

Re: Bad Approach

> I like most is that the disk hierarchy of
> Unix doesn't become the hell a Windows
> hierarchy is. Win 95 tried to fix that
> with the Program Files thing, but the
> plethora of deep hierarchies below that
> got as unmanageable as the previous
> situation.

You're very right! An application directory is not what I'd like most; it's not clear how to handle dependencies and shared components. M$ has a solution for this: applications put their own shared code into the /windows/system32 dir. Sounds terrible! Can't they recognize the difference between system progs and application progs?

So, my partial solution is to change /bin/install to new code which records what was done, where, and how; then uninstall will be easy.

Imre Veres

22 Apr 2001 05:14 gurubonz

Re: Program Files

> I cannot disagree more. This is simply an administration nightmare.
> At work, I have two machines on my desk. One runs Debian, the other
> NT. [...]
> And what good is it to have bash on Windows if you have to change
> your $PATH every time you install new software, and your $PATH is
> longer than a week before payday?

I agree wholeheartedly.

I use debian & slackware. /usr/local/src/whatever works really well for me in installing source & looking for docs.

The file system layout is only difficult for clueless individuals who probably should be relying purely on apt-get or some other package tool, or sticking with messydos if they are really that desperate to have a messydos style layout.

There are a number of very good reasons why unix continues to be more stable & secure against worms & other virii.

A logical file structure that can be secured very quickly & easily is but one reason :)

22 Apr 2001 05:25 gurubonz

Re: UNIX FAQ, Anyone?

> When I once read the UNIX FAQ, I
> distinctly remember
> the question "how do I find out
> which directory my
> executable is in?", and the
> answer was "you can't".
>

This is what locate & find are for.

Most unix for beginners books give you a rundown on the various file structures anyway. find & locate just enable you to get there quicker.

Of course locate probably won't work on most linux systems that are treated like messydos windows boxes, but an updatedb soon rectifies that.

This all seems like a storm in a tea cup because unix doesn't do things the same way as messydos.

RTFM can enlighten you :)

22 Apr 2001 06:04 ParkerPine

(Carefully) agreeing
Several times I have asked myself whether a different directory structure might not be better, but I considered the question too blasphemous and so kept it to myself. ;-)

The problem has been talked about so often, namely uninstalling apps. Assuming there were a 'make uninstall' available, it would still mean keeping the Makefile for each application around, which requires a lot of administrative overhead from the sysadmin. As for now, uninstalling is a painful and often incomplete affair, which roughly starts with an 'rm $(which app)'. After that one starts to reflect on whether there are more files this application has put somewhere. Checking /etc and the users' home dirs is just the start. After that one has to scan all the man directories, possibly /usr/share, /usr/local/share, /usr/local/lib etc. That's annoying, isn't it?

A cool thing would be a certain convention on a new directory (perhaps calling it /var/apps) where each installed program (whether from a source or binary tarball) would put a logfile readable by an imaginary program called 'remove' residing in /usr/sbin or so... in fact the equivalent of /bin/install.

I wouldn't go as far as to consider the MS Windows'ish way better. Actually it is probably much worse, as there is no clean handling of shared libraries etc. (DLLs really are a nasty invention!), but the general idea of having one central directory, where either the binaries themselves or at least some uninstall information are gathered, would perhaps be worth considering.

22 Apr 2001 07:22 hjones

Re: (Carefully) agreeing

&[..]
> A cool thing would be a certain
> convention on a new directory (perhaps
> calling it /var/apps)
& [..]

Isn't this exactly for the SVR4ish /opt is for?

My SGI puts apps into a fairly reasonable heirarchy in /opt. As long as there are tools to deal with the resultant gigantic PATH in sensible ways, I'd prefer that to having to figure out which of the 300 binaries in /usr/local/bin are actually related to imagemagick or netpbm (at least netpbm follows a vague naming convention).

My personal objection to the original description is the (apparent) reliance on a particular type of GUI/desktop to do the dirty work. Of the machines I run, exactly none have X consoles.

22 Apr 2001 07:36 dcturner

"If it works, don't fix it"
I could believe that either way works. I migrated from RISC OS (which rigidly uses application directories) to Windows (which vaguely uses application directories) to Linux (which doesn't). The largest problem I had was with Windows, as there were no rules about where apps went, so it was impossible to be sure that you had fully weeded out any given app. MS apps were possibly the worst for this - ever tried to de-install IE?

Also, what about a hybrid system? When dealing with a particular app you first have to ask "is it old-style or new-style?" before the next level of searching, which _increases_ the overhead associated with a task.

22 Apr 2001 07:50 remco2

Auto completion
Ok, I want to start Gimp. So I enter gi<tab>, wait half an hour until bash has searched through my harddrive until it finds all matching binaries, and I can pick the right one.

22 Apr 2001 08:23 Avatar tal197

Author's clarifications
OK, I see that a few points weren't clear enough. To answer most of the problems raised so far:

> One of the advantages of the Unix way of life is that your PATH
> variable does not need to be 4 KBytes long

You do NOT need an enormous PATH - in fact, you can just keep your current one. Some people seem to think (maybe this is how Windows does it) that you'd set PATH="/usr/bin/gimp/bin:/usr/bin/system/bin:/usr/bin/netscape/bin" etc. You don't. You set PATH="/usr/bin". When you type 'gimp' at the shell prompt it looks in '/usr/bin' and finds 'gimp' (as usual).

Since 'gimp' is a directory, it runs the AppRun file inside. Easy.
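No shell supports this lookup natively yet; as a rough sketch (my illustration, not code from the article), a wrapper doing what the author describes might be:

  #!/bin/sh
  # app-run NAME [ARGS...]: search $PATH for NAME; if the match is a
  # directory, exec the AppRun inside it, otherwise exec it directly.
  name=$1; shift
  IFS=:
  for dir in $PATH; do
      if [ -d "$dir/$name" ] && [ -x "$dir/$name/AppRun" ]; then
          exec "$dir/$name/AppRun" "$@"
      elif [ -f "$dir/$name" ] && [ -x "$dir/$name" ]; then
          exec "$dir/$name" "$@"
      fi
  done
  echo "app-run: $name not found in PATH" >&2
  exit 127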

> This breaks in so many ways: what if I run "./AppRun"? dirname is
> "." and if the application CDs anywhere... well, it's screwed.

Nope - this works fine. If you run ./AppRun then the CWD must be inside the application directory! I often use this for debugging (so messages get logged in the xterm).

UNIX does not require $0 to be the pathname of the executable, true. However, it does *strongly* recommend it, and all shells and filers do this in practice. For app dirs, we extend this suggestion to be an absolute requirement. Problem solved! (For symlinks: symlink to the app dir, not the AppRun file inside, then it works fine - see the example below.)
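For example (my illustration of the symlink rule; the paths are made up):

  # link the application directory itself onto the PATH:
  ln -s /usr/local/apps/gimp /usr/bin/gimp

  # NOT the AppRun inside it - that would give AppRun a $0 of
  # /usr/bin/gimp, so `dirname $0` would point outside the app dir:
  #   ln -s /usr/local/apps/gimp/AppRun /usr/bin/gimp      (wrong)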

> As for users installing their own software, I did it for years on
> solaris. It is extremely easy if the software uses autoconf, you just
> run ./configure --prefix=${HOME}, and everything installs into
> appropriate directories in your home. Otherwise you just edit couple
> of lines in makefile.

That's not "extremely easy" for most people! (Think 'parents' here.)

It's also much slower waiting for it to compile than using a binary.

You also need to edit your profile to set up your PATH, MANPATH and so on.

Finally, you need the compiler and all the library header files installed - normal users often don't have that.

That simply doesn't compare to "Click on it"!

Most people have looked at this from a shell-user's perspective. That's fine (as described above), but it works even better from the GUI, since you don't even need to make sure it's in your PATH - just click on it!

> I can imagine a GUI popup saying "The program has failed to
> compile, do you want to run the compilation troubleshooter?". No
> thanx, /bin rules.

If an application directory program doesn't compile then you're left with an xterm showing all the output from the compile process. Just like normal.

22 Apr 2001 08:33 Ilan

NeXT and OSX application bundles
NeXT (and OSX, which borrows heavily from NeXT) has a system called Application Bundles, where an application is represented as a folder with a .app extension. If we're talking about Gimp, then you would see the gimp icon in the file manager, but it would be representing a folder called gimp.app. Inside this folder are all the resources for the application (internationalization strings, icons, pixmaps, help files, etc) plus the binary. If you want to read more about this, check out the articles on MacOS X at www.arstechnica.com. GNUstep, the open source NeXT clone, also makes use of bundles, so you can "liberate" code from that project (if you don't mind getting your hands dirty with Objective-C).

Application bundles are a far more robust system than the current packaging systems. It is the technically superior solution. Unfortunately for the linux community, instead of working on making application installation/management more robust and unbreakable, many of the big guys (RedHat, Ximian, etc) are throwing the internet at the problem and avoiding the issue; they can make money selling internet application management services that appear to solve the problem. As Bill Gates once discovered, you can make far more money from selling software that is unreliable than you can from selling software that is robust.

22 Apr 2001 09:33 steveb

Been using this for 5 years..
I've been using a scheme like this for about 5 years. With a few trivial additions it can keep both the 'I want each app in a directory' folk and the 'I only want 3 things in my path' folk happy.

When I build an app, I use a prefix of:

  /usr/local/packages/NAME-VER

For most packages, this is as simple as:

  configure --prefix=/usr/local/packages/apache-1.3.14

Build and install as normal, then create a symlink /usr/local/packages/NAME, eg:

  ln -s apache-1.3.14 /usr/local/packages/apache

For things that need to be on the path (apache was a poor choice...) you can then create symlinks in /usr/local/{bin,lib,sbin,man} as appropriate. Up till very recently I did this by hand, but I've started using a small shell script, since I've had several tens of new machines to install.
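That script isn't shown, but the idea is easy to sketch (my assumptions: the /usr/local/packages layout above, and that only bin, sbin, lib and man need linking):

  #!/bin/sh
  # pkglink NAME-VER: expose a package's files under /usr/local
  pkg=${1:?usage: pkglink NAME-VER}
  root=/usr/local/packages/$pkg
  for sub in bin sbin lib man; do
      [ -d "$root/$sub" ] || continue
      mkdir -p "/usr/local/$sub"
      for f in "$root/$sub"/*; do
          ln -sf "$f" "/usr/local/$sub/"
      done
  done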

I'm not sure that this is necessarily better than
the control that package managers like RPM give,
but for locally-compiled packages I find it
considerably less work than building packages for
one-off installation (especially on Solaris, which
is what I use on my day-job).

The side-benefit of this scheme is (hopefully) obvious, but I'll save someone else having to point it out - this way of arranging things makes it straightforward for multiple versions of the same package to coexist, and allows a quick retreat if an application upgrade goes horribly wrong.

22 Apr 2001 10:48 msaarna

Best of both worlds
I guess a lot of people haven't heard of encap: www.encap.org/faq.html

A lot of the philosophy behind it applies to this debate, and can be adapted to RPMs, DEBs, or whatever. Basically, apps are installed to application directories (say in /usr/local) and get symlinks in the official areas pointing to the binaries, libs, mans, ...etc in the application directory.

That way, if you wish to manually delete something you can just delete the application directory and run a program that cleans up broken symlinks.

The only thing the author brings up that this doesn't address
is the non-root install thing, since other users won't have
rights to create the symlinks.

...Just my 50th of a dollar.

22 Apr 2001 11:38 jsquyres

Agree and disagree
I both agree and disagree.

This is from the perspective of a software developer -- someone who has to *write* the installers.

-----

Disagree: I don't think that most users should build/install software. This is one of the Big Problems with Windows and Mac on the desktop, right? That users would install software just about anywhere, making it a complete nightmare for the sysadmin (and the users themselves!). And what about the additional disk resources? What if every user compiled their own version of package X? Why have a copy for every user? (Try convincing an enterprise-wide IT manager that the file server needs another 60GB RAID array because users need their own copies of software.)

I claim that it's the sysadmin's job to install software in a central location that everyone can use. Sure, users can install their own software for testing (under their $HOME or something), and perhaps some esoteric packages that only they will use, but most software packages that are worth installing are suitable for general consumption (at least for purposes of this discussion :-). Indeed, as a sysadmin myself, I will rarely try to fix a software package that a user has self-installed; I will likely install the package myself and have them use that one; that frequently solves most problems (in my experience).

Installing software is a complex task. There are frequently many different variables other than just compiling and deciding where to place the binary/man/config files. Hence, even a "just click here!" solution is not sufficient (e.g., what if there are configuration options that I need to specify before it compiles? And what if those options are *complicated*? Even if the user "just clicks here", they still have to answer those questions, which they may or may not know how to do properly). That's why there are sysadmins -- to do these kinds of things.

And I would expect that sysadmins know how to read the INSTALL file, "./configure; make all install", or whatever else it takes to configure and install the app (granted, some packages make it easier than others -- see my "Agree" points below).

There is the argument, however, "what about home computers? There is no sysadmin there." Yes, with my parents, I have to painfully walk them through windows installers (do you think I would have my parents run Linux? It's not *that* ready for prime-time yet...); even the "simplest" (in my computer-geek view) questions in the install wizards will confuse my parents because they don't know or care what it means -- as they shouldn't. They're users -- they want to *use* the computer. They don't need to know how it works. After all, a computer is a labor-saving device, right? ;-)

If someone is installing their own Linux (BSD, whatever) system, then they're signing up for some pain. The current state of things is that Linux/etc. is *not* "click-click-click, it's installed" (like Windoze). I still stand by my claim -- if you install Linux/etc. at home, you're signing yourself up to be a sysadmin, and are therefore responsible for the learning curve that comes with it.

-----

Agree: the current generation of generally accepted configure/build/install systems suck.

Before you flame me with cries of "Use autoconf/automake/libtool!", let me say that they are fully functional tools, but they still suck. I use them heavily in my software projects, but if you've ever tried using autoconf/automake for anything more than a trivial project, you know what a nightmare they can be.

First off, all three are way out of date. The only part that keeps getting updated is config.[sub|guess].

Autoconf: Debugging autoconf scripts, and writing *truly* portable autoconf tests, is just as hard as writing the package that you're trying to distribute. That is, you effectively have to write *two* packages: your cool software itself, and a whole separate set of autoconf tests. Yes, autoconf has a bunch of built-in tests, but they typically don't cover all the things that a large, complex software package needs. For example:

- does the system have a prototype for gethostbyname()?

- is sa_len in struct sockaddr?

- does the C++ compiler use template repositories, and if so, what is the name of the directory that it uses?

The list is endless. This is why every decent-sized software package has a large, complex configure.in (possibly with a bunch of extra .m4 files). This sucks. I just want to release my software -- I don't want to have to debug configure scripts ad nauseam. Additionally (IMHO), most programmers don't know how to write good configure scripts. Look at many packages here on Freshmeat -- they work great on Linux, but not on any other flavor of unix.

These are a few reasons why I think autoconf is a framework that somewhat helps, but it's not enough.

Automake: automake is nice -- it gives you a whole boatload of automatic targets, including my personal favorite: uninstall. It does a good job on most things like install, uninstall, etc. But I liken automake to most microsoft products: it's very easy to get simple projects going. To do anything more interesting (e.g., large, complex software packages), you have to dive deep into its [lack of] documentation, do oodles of empirical testing to figure out how it *really* works, and then work around its bugs. automake has some *serious* drawbacks. Here are a few:

- the "depend" target is gcc-specific

- if there's no PROGRAMS target in a given directory, you can't make a convenience library (e.g., make a library with the .o's from a list of subdirectories)

- if there's no source files in a given directory, the "tags" target will fail

- the delicate timestamp dependency between all of automake's generated files is very easy to break and cause unexepected side-effects (try importing a released 3rd-party automake-ized package into your CVS tree, for example)

libtool: again, on the surface, libtool is very nice. It allows you to make shared and static libraries with ease. Particularly when paired with automake. But it also has some serious drawbacks:

- no clean support for making a single library from source files from multiple directories

- no support for C++ libraries (try with any modern C++ compiler other than g++)

- way out of date; making shared libraries on AIX 4.3.3 doesn't work (for example)

-----

All this being said, autoconf/automake/libtool are currently the best tools out there. They suck, but they suck the least among the alternatives, and are generally widely accepted. Hence, we programmers have to use them. The fact of the matter is that when used properly, they give a fully functional (and clean) install and uninstall package. And we sysadmins like that -- trust me. Contrary to what someone said above, installing is not just a matter of "mkdir foo; cp ... foo", and uninstalling is not just a matter of "rm -rf foo". Installing and uninstalling properly is a complex task.

I'd *love* for there to be something better. The Software Carpentry project tried to make something better, but I think the end results -- while they are a few steps in the right direction -- won't be a comprehensive solution (IMHO).

So, sorry, I digressed a bit off-topic here, but my main point of disagreement still stands: normal users shouldn't install most software. The sysadmin should do that.

22 Apr 2001 11:49 badforgood

Re: Author's clarifications

> That simply doesn't compare to "Click on it"!

People who want "Just click on it" probably should go to Windows. Unix/Linux is so powerful because so much more can be done by *not* clicking. I mean, there's nothing wrong with X11/KDE or such things, but I really wouldn't want to see my Linux machine become a "Just click" box!

In fact, this Microsoft-created phenomenon causes mental trouble for ordinary people. I mean: "Click open coke". "Drag coke to mouth and drop it there". A Unix user'd probably simply write "coke | mouth".

See ya,

Nils

(no, I'm not insane!)

22 Apr 2001 11:54 richardww

Re: Been using this for 5 years..
If you make a symlink in /usr/bin for example and try to run it, doesn't the program usually moan that it can't find its files?

22 Apr 2001 12:37 lubricated

Re: UNIX FAQ, Anyone?

> > When I once read the UNIX FAQ, I distinctly remember
> > the question "how do I find out which directory my
> > executable is in?", and the answer was "you can't".
>
> This is what locate & find are for.
> Most unix for beginners books give you a rundown on the various
> file structures anyway. find & locate just enable you to get there
> quicker.
> Of course locate probably won't work on most linux systems that are
> treated like messydos windows boxes, but an updatedb soon rectifies
> that.
> This all seems like a storm in a tea cup because unix doesn't do
> things the same way as messydos.
> RTFM can enlighten you :)

Don't be a tard.

They are trying to figure out where the executable is from inside the executable.

22 Apr 2001 12:47 lubricated

Re: Author's clarifications

> You do NOT need an enormous PATH - in fact, you can just keep your
> current one. Some people seem to think (maybe this is how Windows
> does it) that you'd set
> PATH="/usr/bin/gimp/bin:/usr/bin/system/bin:/usr/bin/netscape/bin"
> etc. You don't. You set PATH="/usr/bin". When you type 'gimp' at the
> shell prompt it looks in '/usr/bin' and finds 'gimp' (as usual).
> Since 'gimp' is a directory, it runs the AppRun file inside. Easy.

And when you want to run convert, the shell automagically figures out that it should run convert from inside the ImageMagick directory? Don't tell me that convert should have its own subdirectory, because that would require more files than just convert. Furthermore, if each of the imagemagick programs had its own subdirectory then there would be a lot of stuff that is duplicated.

22 Apr 2001 13:02 adamnealis

Re: UNIX FAQ, Anyone?

> When I once read the UNIX FAQ, I
> distinctly remember
> the question "how do I find out
> which directory my
> executable is in?", and the
> answer was "you can't".

There is always which or whereis these days.

Also, locate is popular.

22 Apr 2001 14:29 quotemstr

Re: Been using this for 5 years..
I do nearly the same thing, but gnu stow makes it
easier. Just install to /usr/local/stow/foo, cd to
/usr/local/stow, stow foo, and voila, everything
under foo is symlinked appropriately.

22 Apr 2001 14:30 quotemstr

Re: Best of both worlds
Encap sounds just like stow.

22 Apr 2001 17:12 foolishboy

I agree. This is what I do.
Actually, this is quite a sane way of doing things.

Rather than have software packages all intermingled, putting them each in their own carefully versioned directory makes a lot of sense.

Here's how I work things:

/pkgs/packagename/packageversion (ex: /pkgs/apache/1.3.19/) - This is the --prefix of any installed package. As far as the package is concerned, this is where it lives. All binaries and static files go here. This directory can be copied between hosts of the same OS and platform without disruption.

/pkgs/packagename/current - A symlink to what the sysadmin deems is the "current" version of a package. ex: /pkgs/apache/current -> /pkgs/apache/1.3.19/ .

/data/packagename (ex: /data/apache/) - This is where any host-specific or changeable data goes. Configuration files, log files, state files... they all go here.

/u - This is a symlink soup (a la Stow, Depot, or -- what I use -- pkglink). /u/bin/gcc -> /pkgs/gcc/current/bin/gcc.

Advantages: I can change the current version of a package available to my users by moving the current symlink. If things break, I move the symlink back. Using cfengine to manage the current symlinks gives me revision control over all package versions. If I bless a package as current, yet one user still wants to use an older version of the package, it's still there, they just need to put /pkgs/perl/4.036/bin in their path instead of using the /u/bin/perl symlink.
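For example, blessing a new version as current is then a one-liner (my illustration, using the paths above; -n stops ln from descending into the old symlink's target):

  ln -sfn /pkgs/apache/1.3.19 /pkgs/apache/current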

Disadvantages: Commercial and prebuilt software doesn't always allow a person to specify install location. This pollutes the /usr directories (which I want to be positively static) and can be quite difficult to shoe-horn into a /pkgs format.

If anyone would like to talk about this, my email addr is fm@timestudies.skylab.org.

22 Apr 2001 17:25 icastle

Re: Author's clarifications
Thomas,

- How do you see your scheme coping with heterogeneous networked environments? e.g. a mixed set of x86 and alpha (AXP) based machines wanting their own executables but access to common data files etc.

At the moment it is "mount -t nfs server:/usr/bin /usr/bin; mount -t nfs server:/usr/share /usr/share"

- A separation between read-only and writeable file systems?

/usr/bin/appdir/readonly, /usr/bin/appdir/readwrite

vs.

/readonly/usr/bin/appdir /readwrite/usr/bin/appdir

?

- A separation between system wide and per user data?

/usr/bin/appdir
~/appdir

?

Have you any views on "per process name spaces" a la plan9 and recent proposals for linux and how that might impact on this issue?

22 Apr 2001 17:46 bk12

A folder hierarchy suggestion
IMHO, final users (=parents) shouldn't have to build applications, but they should be able to install binary packages. These packages should be easy to install and uninstall without getting the system filled with unused files from previous installs. Furthermore, the "curious-but-not-geek" user should be able to find everything (doc, binaries, data) about an application easily.

I like the idea of application folders but don't want to bloat the PATH variable. Therefore I suggest a folder hierarchy like this:

/app_files
  /myapp
    /bin
    /data
    /doc
  /gimp
    /bin
    /data
    /doc
  /kmail
    /bin
    /data
    /doc
/apps
/lib_files
  /mylib
    /bin
    /data
    /doc
  /someotherlib
    /bin
    /doc
/libs

"/apps" would be filled with symlinks to "/app_files/<app_folder>/bin" for easy access from bash. This would avoid PATH variable bloat, as "/apps" would be the only folder in it.

"/libs" would do quite the same, but for libraries. Libraries can be seen as "apps' apps": the user executes code in apps to perform something, apps in turn execute code from libraries, which might execute code from other libraries...

With such a structure, you would easily know where everything about an app is, and manual uninstallation would be as simple as "rm -r /app_files/<app_folder>", then running a script to remove broken symlinks from "/apps".

Note that I don't address devel packages in this hierarchy, but it must be possible to find a similar solution.
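On one reading of this suggestion (my sketch, not bk12's; it links the executables themselves so that PATH="/apps" works directly):

  #!/bin/sh
  # populate /apps with links to every application's binaries
  for f in /app_files/*/bin/*; do
      [ -x "$f" ] && ln -sf "$f" /apps/
  done
  # prune links whose application folder has been removed
  for link in /apps/*; do
      [ -e "$link" ] || rm -f "$link"
  done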

22 Apr 2001 18:10 Blaa

intra-system communication
Without going into specifics here, organising files by type as opposed to by association is wonderful for inter-program communication. Something Windows could do with learning about.

Blaa

22 Apr 2001 18:52 danelst

I disagree, too.
Use RPM for binary installations and checkinstall (registered at freshmeat) for source code installations, and everything is fine. You will never get away without file sharing, and the Windows approach only makes this messy.

22 Apr 2001 19:14 mal0rd

Why use static directories at all?
Clearly, not everybody is going to agree on the "best" way to organize the directory tree. In fact, I'm sure that most people can agree that there is no "best" way. The solution: have just one big directory, with meta tags associated with each file.

There could be properties like what partition/disk the file is on, what package it belongs to, whether it is a library, its mime-type, natural language, whether it is a game, whether it is in the effective path (as there wouldn't really be one), etcetera, and anything else that the user/admin would like to associate with the file.

Then a shell could dynamically create a directory structure based on the user's preferences, organized any way they liked; either of the two layouts being described here could be created.

This probably won't happen, but it would make things a lot simpler AND more powerful.

22 Apr 2001 21:10 kemikal

Re: Auto completion

> Ok, I want to start Gimp. So I enter gi<tab>, wait half an hour
> until bash has searched through my harddrive until it finds all
> matching binaries, and I can pick the right one.

I hate to sound Redmondian here, but if you're running GIMP, chances are good that you are in X... I love a terminal window just as much as the next guy, but for image manipulation, you're gonna have to grab the mouse sooner or later. Just click the desktop icon.

If you really can't stand a desktop shortcut, put symlinks to all your executables in a /shortcut directory and do your tab completing from there.
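Building such a /shortcut directory might look like this (my sketch; kemikal doesn't spell out the details):

  #!/bin/sh
  # fill /shortcut with links to everything currently on the PATH
  mkdir -p /shortcut
  for dir in `echo $PATH | tr : ' '`; do
      for f in "$dir"/*; do
          [ -x "$f" ] && [ ! -d "$f" ] && ln -sf "$f" /shortcut/
      done
  done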

I'm neither for nor against this gimp/bin idea... I am for any good idea whether or not it rocks the boat. "Just because this is the way we've always done it" means nothing to me on its own merit. If there's a good reason, by all means, say so. If we just like to argue with people who have ideas, we should examine our methods. No progress will be made this way.

23 Apr 2001 06:32 Avatar tal197

Re: A folder hierarchy suggestion

> IMHO, final users (=parents) shouldn't have to build
> applications, but they should be able to install
> binary packages. These packages should be easy
> to install and uninstall without getting the system
> filled with unused files from previous installs.
> Furthermore, the "curious-but-not-geek" user
> should be able to find everything (doc, binaries, data)
> about an application easily.

Absolutely :-)

> I like the idea of application folders but don't want to
> bloat the PATH variable.

Don't worry - they don't. bash will find the app directory in PATH just as if it was an executable. Then, discovering that it's a directory, it runs the AppRun file inside it. Assuming bash supported them...

23 Apr 2001 06:47 Avatar tal197

Re: Author's clarifications

Thomas,

> - How do you see your scheme coping with heterogeneous networked
> environments? e.g. a mixed set of x86 and alpha (AXP) based machines
> wanting their own executables but access to common data files etc

No problem - I actually do this! Put the app in your home directory and run it from x86. Then, ssh/walk to an alpha box and run it there. The application directory now contains both binaries. Put the application in /usr/local/apps or wherever and you're done :-)

> At the moment it is "mount -t nfs server:/usr/bin /usr/bin;
> mount -t nfs server:/usr/share /usr/share"

Sounds rather complicated ;-)

> - A separation between read-only and writeable file systems?

App dirs are assumed to be read-only when run (but writeable while compiling, obviously).

I consider user preferences to be data created by the application, not a part of it. Uninstalling an app shouldn't remove my settings for it (maybe I'm just upgrading!). Of course, non-app dirs are the same (removing Netscape won't kill my bookmarks file, etc).

> - A separation between system wide and per user data?

I've written another paper about just such a scheme (but didn't want to put too many controversial ideas in this piece). It allows program-specified defaults which are overridden by distribution defaults which, in turn, are overridden by sys-admin settings which are, finally, overridden by user settings. It's easier than it sounds! [ details ]

> Have you any views on "per process name spaces" a la plan9 and
> recent proposals for linux, and how that might impact on this issue?

The system has to be created to work without it. However, using a union fs mount or similar could be useful to make /home/thomas/apps include everything in /usr/local/apps (ie, my apps and others').

23 Apr 2001 07:05 Avatar tal197

Re: Author's clarifications

> > You do NOT need an enormous PATH - in fact, you can
> > just keep your current one.

> And when you want to run convert the shell automagically
> figures out that it should run convert from inside the
> ImageMagick directory.
> Don't tell me that convert should have its own
> subdirectory because that would require more files than
> just convert.

It depends. If convert can be considered a package in its own right (ie, can be upgraded independently of ImageMagick) then it should have its own dir, changelog, and so on.

If not, then I'd prefer it as a single command. For example, RCS used to provide many commands (ci, co, rcsdiff, etc). The newer CVS has one command - 'cvs' - and uses subcommands, eg 'cvs co', 'cvs ci' and 'cvs diff'.

Thus, convert might become 'im convert' or 'im con' (which is actually shorter than the original!). That also avoids naming conflicts (surely someone else wants 'convert' too!).

If you're talking about backwards compatibility, then you would of course need to create a symlink (or script) to run the real command.

> furthermore if each of the imagemagick
> programs had its own subdirectory then
> there would be a lot of stuff that is duplicated.

If convert requires the use of other programs in the package then those other programs must also be in a known location (probably in PATH). That's still true without app dirs, though.

23 Apr 2001 09:40 Avatar tal197

Re: Grumble.. Unix dir structure is easier for admins AND users, just learn a LITTLE sh

> Ok, gotta change the hostname on this box. Hmm,
> I'll bet that's set in 18 million places.

It is? Why?

> How on earth will I find them all? Oh yes,
> # find /etc ! -type d -exec grep -i $HOSTNAME {} \;
> (I actually did this yesterday.)

Great - but that has nothing to do with application directories. Your system config is still in /etc. That's because your computer's hostname isn't part of any package or program.

> Shoot, I think I need to upgrade libgtk+ for this
> software. How on earth will I find out what
> packages are linked against it? Oh yes I'm on
> Unix, so I can
> # for i in `echo $PATH | tr : '\n'`; do ldd $i/* | grep libgtk; done
> (I've used this one for a number of things; greatly simplifies
> boot disk creation, too.)

Doesn't work. Just to give a random example, you missed /usr/local/lib/python2.0/site-packages/_gtkmodule.so (and thus a few dozen python applications).

If you want to find everything, you have to search the whole disk. What if users have installed their own stuff in ~/bin (even without app dirs)?

> Shall I go on? Damn, I really can't remember the
> name of the package that manages my audio system.

That's because you've put everything in /usr/bin flat. If you had relocatable directories, you'd probably have your audio system somewhere inside /usr/apps/sound/, which is much easier to search.

> # find /usr/doc ! -type d -exec grep -iH audio {} 2> /dev/null \;

Which would become:

# find /usr/apps/sound/*/Help -type f | xargs grep -i audio

Shorter and faster to run.

> A huge number of tasks are made easier by the Unix
> directory structure that are unrealistic even to
> try with app directories. The only thing that is
> made harder is uninstalling software.
> But this is Unix; we don't need no stinking
> registry. Want an uninstaller? Here's your Makefile.

I don't want to write a Makefile every time I install something! And I don't want to rely on the author-supplied one being correct (how many authors really test their uninstallers?).

I don't see any of the tasks you've listed being any easier without application directories, but the install/uninstall thing is *much* easier.

23 Apr 2001 13:24 lahvak

Re: Program Files
I am replying to my own message. I just want to clarify that I didn't disagree with the concept of application directories, only with the previous posting that claimed that Windows does application directories well. I think application directories are not a bad idea, but the way they are done in Windows is an absolute disaster.

I kind of like the way stow does it, except that it is a little bit of a pain, and sometimes it is not possible to use, if a package during some late phase of installation assumes that things are already in place and are not going to be moved again (byte-compiling with Python comes to mind).

I think a nice solution would be to integrate stow with the system. Say you have a directory, maybe /usr/apps, that would contain application directories. There would be an easy way to specify, for each directory, which files should be exported. So you would export binaries to /usr/bin, libraries to /usr/lib, configuration files to ... (ehm, can somebody explain to me why we don't have /usr/etc?), etc. :) The exported files would get automatically symlinked to the desired locations. The files that are needed only by the application itself would not get symlinked, and the application would have to know where to find them; the system would have to provide some mechanism for that. Creating a package would mean putting all the files in a hierarchy under one directory, specifying which files get exported and where, and packing it up (tgz?). The user would just unpack the directory under /usr/apps or /usr/local/apps and the OS would take care of the rest.

24 Apr 2001 06:15 Aredhead

Re: umm
I agree. How often have you used a system where $PATH was set wrong? Well, then you just use /bin/program or /sbin/program, because you know they're always placed in those directories on every *nix system. Instead of trying to guess whether it was /usr/share/programs/programname or /usr/share/programname or whatever the user on that system decided to use as the install location.

25 Apr 2001 17:05 Avatar LionKimbro

Re: Why use static directories at all?

> Clearly, not everybody is going to agree
> on the "best" way to organize the
> directory tree. In fact, I'm sure that
> most people can agree that there is no
> "best" way. The solution: have just one
> big directory, with meta tags associated
> with each file.

That's exactly it.
I've done that with my own note taking system, and it is exactly the right way.

> There could be properties like what
> partition/disk the file is on, what
> package it belongs to, whether it is a
> library, its MIME type, natural
> language, whether it is a game, whether
> it is in the effective path (as there
> wouldn't really be one), et cetera, and
> anything else that the user/admin would
> like to associate with the file.
> Then a shell could dynamically create
> a directory structure based on the user's
> preferences that could be organized any
> way they liked; either of the two that
> are being described here could be
> created.
> This probably won't happen, but it
> would make things a lot simpler AND more
> powerful.

We also need to pay attention to security and privileges, and to how files are copied from one filesystem to another. How would this work? Gatekeepers may be necessary to tag and translate. I will be thinking about this for a while, and pushing for this structure, provided I can't find any flaws in it.

Brilliant. It's exactly what we need. Any complex system avoids rigid categories like the plague. The files on an average person's UNIX desktop are definitely *very* complex ("...The mainframe sits like an ancient sage meditating in the midst of the data center. Its disk drives lie end-to-end like a giant ocean of machinery. The software is as multifaceted as a diamond, and as convoluted as a primeval jungle. The programs, each unique, move through the system like a swift-flowing river. That is why I am happy where I am..."), and they definitely resist any one arbitrary categorization, such as the one the file system imposes. Tagging the tree and being able to view it in different hierarchical schemes would be great. File resources would be so much easier to manage, and searching would be a piece of cake.
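
For concreteness, a toy sketch of materializing one such view as a directory of symlinks. Everything here is invented for illustration: the flat /etc/filetags index (lines of "path tag tag ...") and the tagview script itself; no such tool exists.

---tagview
#!/bin/sh
# Usage: tagview TAG VIEWDIR
# Build a directory of symlinks to every file carrying TAG,
# according to the (hypothetical) /etc/filetags index.
tag=$1; view=$2
mkdir -p "$view"
while read path tags; do
    case " $tags " in
        *" $tag "*) ln -sf "$path" "$view/$(basename "$path")" ;;
    esac
done < /etc/filetags
---snip-snap--->8

Running "tagview game ~/Games" would then give a browsable directory of everything tagged "game", without moving a single file.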

26 Apr 2001 13:29 tarzeau

do your apps directory
a little bash script

---makeappsdirs
#!/bin/sh

# generate the list of installed packages
if [ ! -e applist ]; then
    COLUMNS=200 dpkg -l | awk '{ if ($1=="ii") print $2 }' > applist
fi

# make one application directory per package
mkdir $(cat applist)

# symlink each package's files into its directory
cat applist | while read a; do
    echo "$a"
    dpkg -L "$a" | while read b; do
        if [ ! -d "$b" ]; then
            ln -s "$b" "$a/$(basename "$b")"
        fi
    done
done
---snip-snap--->8

you don't have Debian? oh well, that's your problem... :)

lesbian gnü/linuks

linuks.mine.nu

and the best game is for linux: jumpbump.mine.nu

kind regards

tarzeau

26 Apr 2001 13:53 suitti

Install schemes I use
I use the /usr/local tree as a place to install new system stuff that I'll restore after a system upgrade. I've changed distributions more than once. The biggest headache with upgrades is locating all the shared libraries for the apps that went forward. Next is that some of my old software no longer compiles, requiring a 'port' effort. Not difficult, but often more work than finding the shared libraries and using the old binaries.

Some apps are installed using rpm. I retain the original package somewhere. Some are installed from source. If I don't install one exactly as per instructions using defaults, I write a small script that configures and installs it where I want it.

If I'm installing an RPM from a CD I own, I often don't retain the RPM in any other form. I should have a file listing which CDs these came from; this is a weak link. I don't want these archives on disk, as then they have to be backed up.

Some apps are installed from source. I often retain only a source tarball, and perhaps a custom install script.

But some packages are more complicated. When I download a new Netscape browser, it installs in a directory tree of its own. Since it uses shared libraries, and since those aren't installed in a system lib directory, a script is used to set LD_LIBRARY_PATH before launching the binary. I use a symlink in /usr/local/bin for the binary. I may have multiple versions, so I have symlinks for each version and a current default: "netscape" is the default, and "netscape3", "netscape472", and "netscape6" all exist.
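
Such a launcher is only a few lines; a minimal sketch, assuming the tree was unpacked into /usr/local/netscape472 (the path and file names are assumptions):

---netscape-wrapper
#!/bin/sh
# Point the dynamic linker at the app's private libraries,
# then hand off to the real binary.
NSDIR=/usr/local/netscape472
LD_LIBRARY_PATH=$NSDIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
exec "$NSDIR/netscape" "$@"
---snip-snap--->8

The /usr/local/bin symlinks then just point at one wrapper per version.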

I run three databases - mSQL, MySQL, and Postgres. Each is given a home directory (account), and the binaries, docs, libraries, etc. are all there. Binaries that users might use get symlinks in /usr/local/bin. The home directories are placed on the disk drive I believe that database needs to be on. Home directories on my system get more aggressive backup than system directories: basically, I perform monthly full backups of system directories with weekly incrementals, while home directories get full backups regularly. This resembles Doug Comer's tilde-trees proposal from the mid '80s.

I used to run web servers from /usr/local. Now they each get a user home directory too. Unlike a typical Netscape SuiteSpot install, where things are spread all over everywhere, my servers get a home directory, and every attempt is made to put everything there. And, for upgrade reasons, I try to have all the content live in htdocs, so the directory tree can be moved into a brand-new server easily.

Another installation exception is pbmplus. I run the original pbmplus package and netpbm. I install the original from source, and netpbm from RPM. The file names conflict, so they can't share the same directory. I installed the original in /usr/local/bin, and netpbm in /usr/local/bin/netpbm/. There's only one user that needs both, and that's me, so the PATH, MANPATH, and library path changes are limited. Still, it's a pain to upgrade the regular pbmplus package. I usually just compile everything and install over the old copies. If there's stuff left over (there usually isn't), I either know it (because I just removed something from the source install) or I live with it.

I'm still not fond of RPMs. The dependencies drive me nuts. Much of this is shared-library based. I'd much prefer that everything was compiled statically, or statically with just a couple of exceptions - the C and X11 libraries. That would limit the complexity of system upgrades. I run a lot of old binaries, but at least they're all ELF based - the a.out binaries have finally been recompiled.

I don't know whether there is a webalizer RPM, but it serves as an example of dependencies. It uses libgd, which uses libjpeg, zlib, libpng, and optionally Berkeley DB. Great if you don't mind installing everything system wide. But I needed to install without root recently. A day, and a brand-new 51-line shell script later, I was able to compile it. The script has to hack one of the source files to get AIX's native C compiler to work.
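
For packages that use autoconf, the usual shape of such a no-root install is just a private prefix (the $HOME/local prefix here is arbitrary):

# build and install each library under $HOME instead of system wide
CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib \
    ./configure --prefix=$HOME/local
make && make install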

Not mentioned are web applications. Much of my recent development uses the web as a user interface. I try to have each web app completely confined to a single directory tree. That means I need to allow CGI programs anywhere. IMO, cgi-bin was a bad idea. I used to have apps that shared files, like read-only gdbm 'databases'. I mostly use SQL databases now, where shared data is managed. If an app has disk space issues, the app dir gets a symlink to where the data is, or the whole server tree is moved to some larger disk.

Web app installs may require config file changes. Automating this for binary installs would be nice, but messy in the general case.

PATH issues are short term. Deinstall issues are slightly longer term. System upgrade issues are longer still. I try not to upgrade the OS more often than every five years, other than patches, per machine. I still seem to be doing it all the time. One of my machines is 1987 vintage. It's the hardest to replace or upgrade, as it is the most functional. 16 MHz - about 1 MIPS.

27 Apr 2001 13:54 tylke

A newbie's perspective
I like this idea.

First I must say that I am a Windows user by default (sorry), and I only write this from the perspective of a newbie to any *nix environment.

For the past few months, I have been using Linux, running it at home and on a few boxes at my day job.

(OK, now to my point)

My boss wishes to upgrade from MS Office 97 to 2000, and the same for Win9x to Win2K, but he doesn't like the price, or the fact that I have to call Micro$oft every time I install to validate that I truly do own the software & a license. Not to mention the fact that in 6 months I will have to redo the process, because any MS Windows box will come to a creeping halt at that time. (Repeat above process.)

I have tried to explain to him that migrating to a Linux environment would save us vast amounts of trouble/reinstalls/money and we should look to it for all of our future needs.

Now enter 100+ employees into this equation, each of whom will need to be able to use their apps as easily as they did in the old MS Windows environment.

I understand that, in my position as sysadmin, I should be able to set these boxes up for them in this way and expect them only to learn the new apps available to them. But for a sysadmin who has only MS Windows experience, this would be a daunting task.

You retort: Of course it is, but if they do not like it, they should stay in their happy MS Windows environment and suffer. (I have seen this sentiment above numerous times.)

Is the goal of Linux (BSD, etc.) not to offer users a more robust/reliable operating system? One that is not only free but also dependable? (I may have missed a few bullet points here.)

(Note: My boss would gladly contribute monetarily (or via hardware, etc.) to open source development projects, but he needs reassurance that his employees will be able to use the new software without a major training period. They need not be sysadmins.)

Yes, I understand there will always be libs & other dependency issues, but for individual applications, this would make my life (and others') a lot easier. Moreover, it would open Linux, etc. to a much larger group of individuals, not just the elite few.

My final point being that if Linux is to become welcome on the boxes of home & office users alike, those users should not be required to become sysadmins first.

01 May 2001 11:31 dlc

Re: (Carefully) agreeing

> A cool thing would be a certain
> convention on a new directory (perhaps
> calling it /var/apps) where each
> installed program (either by source- or
> bin-tarball) would put a logfile that
> would be readable by an imaginary
> program called 'remove' residing in
> /usr/sbin or so...in fact the equivalent
> of /bin/install.

Take a look at /var/adm/packages on any Slackware box.

08 May 2001 23:08 atavus

Re: UNIX FAQ, Anyone?
Well, under Linux you can look at /proc/<pid>/exe, which is a symlink to the executable.
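
For example, to see which executable the current shell was started from:

# ls -l /proc/$$/exe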

17 May 2001 01:35 AbInitio

Re: Auto completion

> Ok, I want to start Gimp. So I enter gi<tab>, wait
> half an hour until bash has searched through my
> hard drive until it finds all matching binaries and
> I can pick the right one.

Dunno what takes you so long. I got 47 entries in just short of half a second (my guess). Sounds like it's time for you to add updatedb to your cron file, eh?
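
Something like this crontab entry does it (the time of day is arbitrary, and the updatedb path varies by system):

# rebuild the locate database every night at 4:30
30 4 * * * /usr/bin/updatedb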

19 May 2001 05:57 dalen

Re: umm

> I'm comfortable with the /bin, /sbin,
> /usr/bin, /usr/sbin scheme. Users who
> can't cope with that should go use
> Windows and BeOS or something. There is
> GNU Stow which does what you're saying.
> I'm using instmon && installwatch to
> install/remove source packages. We
> shouldn't complicate the
> compilation/install procedure more, as
> it's already overcomplicated.

Well, this wouldn't complicate the install procedure; it would simplify it. You'd just have to move the app directory into the directory containing all your apps. That's why a newbie Mac user can use Mac OS X, which also uses application directories.
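
As a sketch, the whole install/uninstall cycle under such a scheme (the archive name and the ~/Apps directory are illustrative):

# "install": unpack the app directory and move it into place
tar xzf Gimp.tgz && mv Gimp ~/Apps/
# "uninstall": remove it again
rm -rf ~/Apps/Gimp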

19 Jun 2001 12:21 Meatpopsicle

Re: A newbie's perspective

> I like this idea.
>
> First I must say that I am a Windows
> user by default (sorry), and I only

I know this probably sounds really dumb at first, but why doesn't *nix have a registry? How hard would it be to use Postgres or MySQL for a database of all application entries -- then you could have an actual path and a "common path" mapping.

-- this would solve the problem with RH always putting things wherever it dang well pleases instead of in the path that every other version of *nix uses.

-- this would also end the age-old debate of /bin/gimp vs. gimp/bin, because it wouldn't matter - the system would look for gimp in the database and find the real path, and if I typed /bin/gimp in a telnet session, the system would be smart enough to say "oh, on this system that's really at /gimp/bin", or vice versa.

-- this would also make things much easier for large software developers that need to write install/uninstall scripts. Instead of searching in n! places for a lib or mod, simply run a query against the database: "does this system have c++ headers?" becomes select _location from _headers where _application = 'c++'.

The really nice part about this scheme would be that for _every_ task, you simply run a query - whether it's finding _where_ something is, or whether it exists, or what version it is, or whether its source code is installed, etc., etc. -- no matter what, you're just running queries. This could potentially be a good thing.

-- would this really be that hard to do? Can we afford NOT to do it?
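
As a sketch of what such a lookup might look like from the shell, using the sqlite3 command-line client (the database file, table, and column names are pure invention):

# "where does this system really keep gimp's binary?"
sqlite3 /var/lib/registry.db \
    "SELECT real_path FROM files WHERE package='gimp' AND common_path='/bin/gimp';"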

04 Jul 2004 12:39 jcast

Re: A folder hierarchy suggestion

> % IMHO, final users (=parents) shouldn't have to build
> % applications, but they should be able to install
> % binary packages. These packages should be easy
> % to install and uninstall without getting the system
> % filled with unused files from previous installs.
> % Furthermore, the "curious-but-not-geek" user
> % should be able to find everything (doc, binaries, data)
> % about an application easily.
>
> Absolutely :-)
>
> > I like the idea of application folders but don't want to
> > bloat the PATH variable.
>
> Don't worry - they don't.
> bash will find the app directory in PATH just as if it was
> an executable. Then, discovering that it's a directory, it runs the
> AppRun file inside it.

Assuming bash supported them... And you think bash is the only place PATH variables (note the plural!) are used? You would have to modify not just bash, tcsh, ksh, python, and perl (perl actually has a couple of PATH-likes of its own), plus the exec*p() functions, but also things like ld.so, gcc, cpp, man, info, whatever other documentation systems we come up with, autoconf (yeah, I know, databases will make autoconf obsolete; I'll believe that when every Windows program in existence cleans the registry on uninstall), emacs, and web2c, which has an entire library, kpathsea, just to search for texmf files (lucky you: imagine having to modify TeX, Metafont, and MetaPost separately). Also, the locale library supports (according to man 5 environ) a couple of environment variables which may be PATH-likes but aren't documented in man 5 locale, and I'm sure there are other examples I can't find. Changing bash alone is not going to solve your problems with PATH.
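
To make the list concrete, here are a few of the real PATH-like variables a single app directory would have to be spliced into (the /usr/apps/gimp path is purely illustrative):

export PATH=/usr/apps/gimp:$PATH                             # shells, exec*p()
export MANPATH=/usr/apps/gimp/man:$MANPATH                   # man
export INFOPATH=/usr/apps/gimp/info:$INFOPATH                # info
export LD_LIBRARY_PATH=/usr/apps/gimp/lib:$LD_LIBRARY_PATH   # ld.so
export PERL5LIB=/usr/apps/gimp/perl:$PERL5LIB                # perl
export PYTHONPATH=/usr/apps/gimp/python:$PYTHONPATH          # python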
