Packaging Best Practices

=Making packages=

In a huge repository with tens, maybe hundreds, of packages, the need for rules of conduct increases dramatically. Below are some guidelines, each with a justification, for how to do things. None of them is written in stone, so objections are welcome.

Package development

 * Rule: All applications should be in CVS. Follow make/template.mk closely and make your own make/FOO.mk, where FOO is your application's lower case base name (that is: without version numbers). The rest of the files needed to build the package should go in the directory sources/FOO/.
 * Justification: Having files placed all over makes the repository messy and makes it hard to change the system as it evolves. What belongs to some package should be easily locatable.


 * Rule: Test your packages before committing them to CVS.
 * Justification: If it doesn't work for you, don't bother others with it.


 * Rule: Run scripts/optware-check-package.pl on the package before committing it to CVS.
 * Justification: If "scripts/optware-check-package.pl" gives errors the package will be rejected anyway.


 * Rule: Avoid placing binaries in the CVS.
 * Justification: We are Open Source, binaries are inflexible and may, without our knowledge, contain things we don't want on our slugs.


 * Rule: Take your time to complete the package information at the top of the makefile. Try to make all dependencies and conflicts explicit. (Remember that multiple dependencies are separated by commas - if you use spaces, ipkg will silently ignore all but the first dependency).
 * Justification: If you don't, your application may break other user installed applications or just plain not work.
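As a sketch, the control information at the top of a hypothetical foo.mk might look like this (all names and values here are illustrative, not from a real package):

```make
# Control information for a hypothetical package "foo".
FOO_DESCRIPTION=A short one-line description of foo.
FOO_SECTION=net
FOO_PRIORITY=optional
# Multiple dependencies MUST be comma-separated; with spaces alone,
# ipkg would silently ignore everything after the first entry.
FOO_DEPENDS=openssl, zlib
FOO_CONFLICTS=foo-classic
```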


 * Rule: If your source code is being pulled from sourceforge.net, you should be using http://dl.sourceforge.net/sourceforge/ for sourceforge download locations, not regional ones.
 * Justification: Region-specific source code locations have a higher chance of going offline.


 * Rule: Make an effort to get your package to cross-compile. If you cannot get it to do so, ask other developers for advice before "going native".
 * Justification: It can be hard to get a package to cross-compile for the first time, but it will be a lot less work to maintain in the long run.


 * Rule: If you change something in a package, add it for testing only if you think the change may break something. If you are sure nothing breaks, don't. If you are unsure, add it for testing.
 * Justification: Adding a package for testing again (and removing it from the list of other packages) only makes sense if you really changed something, not for minor modifications like correcting spelling errors or removing some files. It would just cause needless work for testers.

Package design

 * Rule: All files not shared with Linksys applications shall be under /opt
 * 1) Admin binaries in /opt/sbin
 * 2) User binaries in /opt/bin
 * 3) Man pages in /opt/man/manX, where X is the man page section number. Note that we now have a man program, so those who left the man pages out before should consider putting them back in.
 * 4) Application notes, sample configuration files and misc. files not accessed by the application FOO should be in /opt/doc/FOO.
 * 5) Libraries in /opt/lib
 * 6) Include files in /opt/include
 * Justification:
 * If your drive is unslung, disconnecting the drives will still support normal Linksys boot. If newer apps are placed in the flash, compatibility might break.
 * If you upgrade your unslung firmware, you will have to back up prior to, and restore them after the process for applications not to break.
 * There is little room in the root file system - a valuable commodity.
 * Having things organized makes it easier to administer.
 * If the drive is not unslung, things will still work, as the files will still be in the flash file system.
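As an illustration, the install step of a hypothetical package "foo" might lay files out like this (a sketch in the style of template.mk; the variable names and file names are placeholders):

```make
# Inside the rule that populates the ipk directory tree:
	install -d $(FOO_IPK_DIR)/opt/bin $(FOO_IPK_DIR)/opt/sbin
	install -d $(FOO_IPK_DIR)/opt/man/man1 $(FOO_IPK_DIR)/opt/doc/foo
	install -m 755 $(FOO_BUILD_DIR)/foo $(FOO_IPK_DIR)/opt/bin/
	install -m 755 $(FOO_BUILD_DIR)/food $(FOO_IPK_DIR)/opt/sbin/
	install -m 644 $(FOO_BUILD_DIR)/doc/foo.1 $(FOO_IPK_DIR)/opt/man/man1/
	install -m 644 $(FOO_SOURCE_DIR)/foo.conf.example $(FOO_IPK_DIR)/opt/doc/foo/
```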


 * Rule: Do not place files under /opt/usr and /opt/local. The reasons for having /bin and /usr/bin separate on normal Unix systems don't seem to apply here.
 * Justification: We don't want too many directories. It makes locating files more troublesome and bloats the PATH environment (which may also result in slightly slower startup times).


 * Rule: Files shared with Linksys applications shall remain in the root file system
 * 1) If files that are not currently shared by Linksys become part of later Linksys firmware revisions, a link should, for those who upgrade, be created from their previous location in the /opt tree to the actual location in the Linksys firmware.
 * Justification: When we boot without drives, Linksys firmware should be able to access their files.
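A postinst fragment implementing this might look like the following sketch (the file name foorc is hypothetical, and the prefix argument exists only so the logic can be tried outside a slug):

```shell
#!/bin/sh
# If a later firmware revision ships its own copy of a file in the root
# file system, point the old /opt location at it instead of duplicating it.
link_to_firmware() {
    root="$1"    # "" on a real slug; a sandbox directory when testing
    if [ -e "$root/etc/foorc" ] && [ ! -e "$root/opt/etc/foorc" ]; then
        # Link from the previous /opt location to the firmware's copy.
        ln -s /etc/foorc "$root/opt/etc/foorc"
    fi
}
```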


 * Rule: Temporary files created by applications should in general be placed in /opt/tmp. If files are very small, /tmp (which is a ramdisk) can be used.
 * Justification: Filling up a ramdisk with large temporary files is not a good idea.


 * Rule: All directories that the application may assume to exist (that is: that it will not create itself if not present) must be created by the package. When creating a package, assume that no directories exist under /opt.
 * Justification: They may not exist and your application will fail.
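In a postinst script this usually boils down to an unconditional mkdir -p for every directory the application expects; the directory names below are illustrative:

```shell
#!/bin/sh
# Create all directories the hypothetical application "foo" assumes to
# exist; mkdir -p is harmless if they are already there.
create_foo_dirs() {
    prefix="$1"    # normally /opt; a parameter here for easy testing
    mkdir -p "$prefix/etc/foo" "$prefix/var/foo/log" "$prefix/tmp"
}
# A real postinst would simply run: create_foo_dirs /opt
```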


 * Rule: Do not store files directly where they may already be placed by another package (e.g. /opt/etc/ftpusers). Place them safely in /opt/doc/FOO and let the user perform the migration manually. Always provide a small howto on this in /opt/doc/FOO
 * Justification: It might require -force-overwrite to install and may delete valuable data, such as configuration files that the user has worked hard to set up.


 * Rule: If you mess with public configuration files in postinst, make a backup file and report to the user where that backup file can be found.
 * Justification: Messing with configuration files is risky business. Users may invest a lot of time in them. Respect that.
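A postinst that edits a public configuration file could follow this sketch (the file name and the appended line are illustrative stand-ins):

```shell
#!/bin/sh
# Back up a configuration file before touching it, and tell the user
# where the backup went.
backup_and_edit() {
    conf="$1"
    cp "$conf" "$conf.bak"
    echo "foo: saved a backup of $conf as $conf.bak before modifying it"
    # The actual modification goes here; appending a line is just a stand-in.
    echo "# line added by the foo package" >> "$conf"
}
```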

Diversion scripts

 * Rule: In general, all diversion scripts should be placed on the hard drive, that is, e.g. /share/hdd/conf/unslung, and NOT in /unslung, as the latter tends to be a flash directory starting with Unslung 3.x.
 * Justification: Three bad things can happen:
 * 1) You may prevent the slug from booting properly without disks attached and may have to reflash.
 * 2) There will be yet another file that will be lost if you upgrade without backing up your flash.
 * 3) You may prevent the slug from booting at all (by a bug in your script), regardless of whether disks are attached and will have to reflash to recover.


 * Rule: If you want a diversion script that is supposed to execute BEFORE the drives are mounted, you obviously cannot place it on a hard drive, but must place it in the flash. If so, I suggest sleeping a few seconds to get the drives recognized, then checking for /proc/hd_conn, /proc/hd2_conn etc., and returning to the original script without any modifications unless one of these is present. These /proc entries signify that the kernel/USB storage has detected the corresponding drives.
 * Justification: As above.
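The suggested flash-resident check might be sketched like this (the /proc location is a parameter only so the logic can be exercised off the slug; on the device it is simply /proc):

```shell
#!/bin/sh
# Return success if the kernel/USB storage has detected at least one drive.
drives_present() {
    proc="${1:-/proc}"
    sleep 2    # give USB storage a moment to recognize the drives
    for f in "$proc/hd_conn" "$proc/hd2_conn"; do
        [ -e "$f" ] && return 0
    done
    return 1
}
# In the diversion script itself:
#   drives_present || return 0   # no drives yet: run the original script
```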

Writing network daemons

 * Rule: Network daemons should as a general rule be run from xinetd. That means that a package containing a network daemon should install a corresponding xinetd configuration file for each of its services in /opt/etc/xinetd.d.
 * Justification: There is one valuable thing in the Unslung environment - memory. The fewer applications running at all times, the better - at least as a general rule - so daemon memory use should be considered as a factor in this tradeoff. There are also at least two exceptions:
 * 1) Some daemons are almost always running, like Samba. These have little benefit from running from xinetd and should instead run as a standalone server.
 * 2) Some daemons require excessive time to start up, but once running answer connect attempts pretty fast. dropbear is like that; it would be impractical to run it from xinetd, and therefore it should run standalone.
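For illustration, an xinetd configuration file for a hypothetical service "foo", installed as /opt/etc/xinetd.d/foo, could look like this (port, user, and paths are made up; "type = UNLISTED" is needed when the service name is not in /etc/services):

```
service foo
{
    type        = UNLISTED
    socket_type = stream
    protocol    = tcp
    port        = 10000
    wait        = no
    user        = root
    server      = /opt/sbin/food
    disable     = no
}
```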


 * Rule: The package name should always be used as the xinetd config file name. If more than one xinetd config file is needed, name it FOO-servicename.
 * Justification: We don't want config file name collisions.


 * Rule: To prevent multiple packages from placing competing config files in /opt/etc/xinetd.d, care should be taken to utilize the Conflicts: setting in the control file.
 * Justification: We don't want to break xinetd.


 * Rule: All xinetd configuration files should be included as package configuration files in the FOO.mk file.
 * Justification: We want the user to be able to prevent a package upgrade from overwriting changes she might have made to the xinetd configuration file for the package.


 * Rule: A package that includes an xinetd configuration file should "killall -HUP xinetd" in its postinst script (see the atftp postinst script for an example). Just call killall, not /bin/killall, so the postinst script can work on nslu2 and wl500g targets.
 * Justification: Allows for automatic enabling of new packages without having to reboot.
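A sketch of the corresponding postinst fragment (the guard keeps the script from failing where killall is absent or xinetd is not yet running):

```shell
#!/bin/sh
# Ask a running xinetd to re-read its configuration; tolerate its absence.
reload_xinetd() {
    if command -v killall >/dev/null 2>&1; then
        # Unqualified "killall" on purpose, so this works on both the
        # nslu2 and wl500g targets.
        killall -HUP xinetd 2>/dev/null || true
    fi
}
# The postinst would end with: reload_xinetd
```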

Using crontab entries

 * Rule: Packages that require crontab support should install a cron configuration script in /opt/etc/cron.d.
 * Justification: This allows packages to take advantage of crontab services, without requiring manual intervention by the package installer.


 * Rule: The package name should always be used as the cron config file name.
 * Justification: We don't want config file name collisions.
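For illustration, /opt/etc/cron.d/foo for a hypothetical package "foo" might contain a single crontab line (whether a user column is expected depends on the cron implementation in use, so check before copying this):

```
# Run foo's cleanup job every night at 03:15.
15 3 * * * root /opt/sbin/foo-cleanup
```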

Cross compilation
There are a number of pitfalls that prevent packages from being cross-compiled. Here are some:


 * Rule: There should be no include or library paths referring to any directory outside $(STAGING_DIR)/opt. In some cases /opt will be referred to by some programs, so beware that your host system should not have anything under /opt.


 * Rule: If cross-compiling, you may sometimes have to compile using the host compiler (small, intermediate programs that generate something for later use). If so, pass the ${HOSTCC} setting defined in your FOO.mk on to the application make process and patch the application Makefile to use it.
 * Justification: Hardcoding host compiler name may break cross compilation on other host systems.


 * Rule: Avoid building large support programs for use on the host.
 * Justification: Developers are sometimes tempted, on discovering that some feature or tool is missing on the official build host, to set up their package to build a copy with HOSTCC. This is unnecessary in almost all cases - talk to the core developers and they will probably be willing to upgrade the official build host, or suggest some other workaround.

Staging

 * Rule: Package makefiles should never delete files from the staging directory (unless they have just installed that file to the staging directory). Package makefiles should never ever delete directories from the staging directory (even if they have just installed that directory).
 * Justification: There's no mechanism for figuring out who owns what in the staging directory, and you don't know what other packages are using staged files for. Deleting package FOO's files from the staging area will cause packages that depend on FOO to fail unexpectedly.


 * Rule: Avoid staging libtool archives (.la files), or if you do, make sure they don't contain compile-time link paths.
 * Justification: These files are unnecessary on a linux system, and it is hard to use them correctly when cross-compiling. Typically they contain compile-time paths (e.g. to the staging directory) that can get hard-coded into binaries when libtool links a binary against the .la archive. In some cases it is necessary to stage these files (for example libexpat.la is used by other packages to detect expat during configure) - then you must patch them with sed (see expat.mk for an example).
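When a .la file must be staged, the sed patching can be sketched like this (an illustration of the idea, not the exact commands from expat.mk):

```shell
#!/bin/sh
# Remove compile-time staging paths from a staged libtool archive so
# they cannot leak into binaries linked against it later.
patch_la() {
    staging="$1"    # e.g. the value of $(STAGING_DIR)
    la="$2"         # path to the staged .la file
    sed -i -e "s|$staging||g" "$la"
}
```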


 * Rule: When staging a library that uses -config scripts, place a copy of these scripts into $(STAGING_DIR)/bin; do not ever place slug binaries in $(STAGING_DIR)/bin.
 * Justification: Applications that use your library will add $(STAGING_DIR)/bin to the PATH so that they run your staged copy of the script, instead of any copy installed on the host system. Putting $(STAGING_DIR)/opt/bin (where your script might otherwise end up) in the PATH could be dangerous, as there may be binaries in there that cannot be executed on a host system.


 * Note: curl-config is used during the make process to determine which libraries need to be linked with the application(s) in the package and which directories they should come from. This can easily fail if the local (host) version of curl-config is run. (The same applies to freetype-config, xml2-config, and so on.)
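In practice this means prepending the staged scripts to the PATH when running configure, along these lines (a sketch using template.mk-style variable names):

```make
# Make sure the staged foo-config is found before any host copy.
	(cd $(FOO_BUILD_DIR); \
		PATH="$(STAGING_DIR)/bin:$$PATH" \
		./configure --host=$(GNU_TARGET_NAME) --prefix=/opt \
	)
```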

Using the Unslung build makefiles

 * Rule: Prefix all makefile variables with the name of the package you are building (e.g. GTK_PATH_TO_GLIB_GENMARSHAL, not PATH_TO_GLIB_GENMARSHAL). Never set the value of any other makefile variable.
 * Justification: This way there can be no clashes. Bad and hard to trace bugs would result from two packages using the same variable name to mean different things. Remember that makefile variables have global scope; they affect every package in which they are used, not just the one in whose .mk file they are defined.


 * Rule: Avoid referring to makefile variables defined by another package, unless you are aware of the circumstances under which this is safe. It is always safe to refer to any variable at all in the command part of a rule. Never refer to a variable defined in a different package in the target or prerequisites part of a rule. Never define a variable in terms of a variable defined in a different package.
 * Justification: Makefile variables in the head of a rule are evaluated when the makefile is parsed. At this time, other package makefiles may not yet have been read. Variables that occur in commands, by contrast, are evaluated when the command is executed.


 * If you want to refer to another package's variable in a rule's prerequisite, use a recursive make invocation in the command part of the rule instead. Example: adduser.mk does this to depend on busybox's sources.


 * If you want to refer to another package's variable in a fully general way, you may use sed to extract the variable definition. Example: php-apache.mk does this to get access to the version of php from php.mk.
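The sed extraction can be sketched as follows (the variable name FOO_VERSION and the file layout are illustrative, not the exact commands php-apache.mk uses):

```shell
#!/bin/sh
# Pull the value of FOO_VERSION out of another package's .mk file.
extract_version() {
    sed -n -e 's/^FOO_VERSION[ ]*=[ ]*//p' "$1"
}
```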

GNU autoconf, automake, and libtool
Many packages are based on the GNU autotools suite. There is a variety of gotchas attending their use.


 * Rule: Don't patch generated files. Don't patch configure if configure.in or configure.ac exists; don't patch Makefile if Makefile.in exists; don't patch Makefile.in if Makefile.am exists. Do patch Makefile.am if there is one, otherwise Makefile.in; do patch configure.ac or configure.in where they exist. If you patched configure.in, configure.ac, or Makefile.am, run autoreconf (see below).
 * Justification: Generated files change unpredictably from version to version. Upstream changes will probably break your patch. Also, patches to generated files are hard for other developers to understand, if they later need to reproduce or fix your work.


 * Rule: If you need to re-run GNU autotools (e.g. because you have patched a configure.in or Makefile.am), invoke the "versioned" automake and aclocal binaries: e.g. automake-1.9, not plain "automake". If you are invoking autoreconf, or an autogen.sh script, set the AUTOMAKE and ACLOCAL environment variables first. Don't run any autogen.sh script that ignores these variables - use autoreconf instead.
 * Justification: Different automakes generate different configure scripts and makefiles, especially when it comes to cross-compilation. A developer running a different version of automake might generate a broken configure or makefile and not know what the problem was.


 * Rule: If your package uses libtool, invoke $(PATCH_LIBTOOL) $(_BUILD_DIR)/libtool as part of the -unpack target, after running configure. (see glib.mk for an example).
 * Justification: Otherwise, under certain circumstances, your package will have build-time paths compiled into it. Also, your package will fail to build on certain linux distributions. libtool has hard-coded assumptions about how arm-linux works that are false on unslung, so it needs to be patched.

Bob_tm