CONTRIB.9FRONT.ORG NO REFUNDS

Advanced Namespace Tools blog

13 December 2016

Testing

Most of what I’ve been doing since completing the basic update to current 9front is testing established functionality, to verify that things work - and that I still understand how to use them! Thankfully, my own documentation in the “them” subsection of the doc server has been helpful in reminding me.

Basic boot-up and normal use

The primary test is just booting and making sure that the system behaves normally. An ANTS-9front system should behave just like standard 9front when you cpu in to the standard user namespace. Everything that doesn’t specifically relate to ANTS boot/namespaces/utilities should be unaffected. It is hard to know exactly how to “test everything”, so I just use the system normally and poke at anything I’m concerned about.
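As a rough illustration, a quick manual smoke test after cpu’ing in might look like the following. These are ordinary commands with nothing ANTS-specific about them, and the exact checks are a matter of taste:

	ns			# inspect the namespace for the expected mounts and binds
	ps | grep 'venti|fossil'	# are the expected file server processes running?
	cat /dev/sysname	# did we land on the machine we think we did?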

ANTS administrative namespace, rerooting, and /proc/pid/ns modification

The next thing to test is that the ANTS boot/control/admin namespace is set up correctly. This is the no-root-fileserver namespace that exists on the cpu server console, or if you cpu in to the defined port (currently 17060 by default). This should be a workable minimal environment with access to everything compiled into the kernel or added to the tools.tgz ramdisk. From here, I should be able to use rerootwin -f boot to reroot into the main userspace. Additionally, rewriting other processes’ namespaces by writing ns commands to their ns files in /proc should work.
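In concrete terms, the round trip looks roughly like this. The dial string, pid, and paths are only examples, and as I understand it the commands written to a ns file follow the same form as the lines ns(1) prints:

	# from another machine: connect to the ANTS admin namespace on its own port
	cpu -h tcp!fs1!17060

	# inside that namespace (or on the console): reroot into the main userspace
	rerootwin -f boot

	# rewrite another process's namespace by writing commands to its ns file
	echo 'bind -a /n/newroot/bin /bin' > /proc/1234/ns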

Venti/Fossil management, snapshotting, and progressive backup

It isn’t necessary to use fossil+venti with ANTS - cwfs64 and hjfs are both supported in the plan9rc bootup options. I still like the fossil+venti system, though, and a lot of the ANTS design was motivated by wanting them to be easier and more reliable to use and manage. To test this functionality, I set up two nodes running venti+fossil, have one node take a fossil snapshot, use ventiprog to transfer the venti data to the other server’s venti, and then use ramfossil on the second node to start a new copy of the first machine’s fossil, verifying the data has been backed up successfully.
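Roughly, one cycle of that test looks like the sketch below. The fossil console write and fossil/last are standard fossil tools; the disk path is only an example, and the ventiprog and ramfossil invocations are from memory, so check the ANTS docs for the exact arguments:

	# on the primary node: take an archival snapshot of the main fossil
	echo 'fsys main snap -a' >>/srv/fscons

	# note the vac score of the latest archive (disk path is an example)
	fossil/last /dev/sdC0/fossil

	# push the new venti data across to the backup server's venti
	ventiprog	# arguments omitted; see the ANTS docs

	# on the backup node: start a temporary fossil from that score and
	# mount it to check the snapshot came across intact
	ramfossil $score	# $score is the value printed by fossil/last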

Hub-controlled gridboot with auto-service registration

This is a newer, still experimental feature of ANTS that I was working on last year. Instead of using a local cpurc or termrc to control boot, a node can dial a grid server and connect to a central hubfs. That hubfs will supply the commands to control bootup. Often this is combined with having services automatically announced to a central inferno registry. The most recent round of tests I have been doing has focused on this functionality.
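The moving parts, roughly: the booting node dials the grid server, mounts its hubfs, and attaches an rc to hub files, so that whatever the server side writes into the input hub drives the node’s bootup. A sketch, with the dial string, srv name, and hub file names all illustrative:

	# dial the grid server and mount its exported hubfs
	srv tcp!gridserver!9999 gridhub /n/gridhub

	# run a shell with its i/o attached to hub files; commands written to
	# the input hub on the server side control this node's boot
	rc -i </n/gridhub/io0 >/n/gridhub/io1 >[2]/n/gridhub/io2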

Goal: Make some actual automated tests

Given both the system-wide nature of ANTS and the numerous possible options and configurations, figuring out an automated testing regime is difficult. The plan9rc bootup script alone has enough options that it is combinatorially infeasible to test every possible combination of boot methods. Booting is difficult to put within an automated test harness anyway, even with qemu.

Probably the best goal is to have automated tests for userspace functionality, then select about 3 different plausible bootup scenarios and run the tests from each of them. Even this is challenging because a lot of functionality requires orchestrating multiple systems. Still, the effort required to design and create tests will probably be beneficial in ways not directly related to finding particular bugs.
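A first cut at those userspace tests might be nothing fancier than an rc script that runs a list of checks and reports what fails - a hypothetical sketch, not something that exists in ANTS yet:

	#!/bin/rc
	# hypothetical regression script: run each named check, report failures
	rfork en
	fail=()
	fn check {
		if(eval $2)
			echo ok: $1
		if not {
			echo FAIL: $1
			fail=($fail $1)
		}
	}
	check 'rerootwin is installed' 'test -x /bin/rerootwin'
	check 'own ns file is readable' 'test -r /proc/'$pid'/ns'
	check 'hubfs is installed' 'test -x /bin/hubfs'
	# exit with a non-empty (failing) status if any check failed
	exit $"fail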