Advanced Namespace Tools blog

21 November 2018

Setting up an ANTS spawngrid

This post describes work-in-progress. If you are interested in the current status, stop in to gridchat on the 9gridchan public grid.

Install and update ANTS/9front on two vultr nodes

I start by spinning up two 2GB vultr instances and installing from the latest ants/9front isos. I choose cpu server setup for one and start with terminal mode for the other, which I will change later. I enable private networking and name them eusto1 and eucpu1. After installing, removing the cd image, and rebooting, I can cpu in. I do so and enter a hub called uphub to work in - this gives me remote persistence and also improves latency by indirecting the shell i/o.
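
A minimal sketch of that step, assuming 9front's rcpu and the hubfs hub wrapper are available on the new node (the exact invocation may differ):

   term% rcpu -h eusto1   # dial in to the new cpu server
   eusto1% hub uphub      # create or attach to a persistent shell hub named uphub

From inside that hub I run the update and rebuild: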

   sysupdate
   cd /sys/src/ants
   hg pull
   hg update
   rm frontmods/include/libc.rebuild.complete
   build 9front
   build fronthost
   cp grid/* /rc/bin
   bind -b '#S' /dev
   9fs 9fat
   cp 9ants64 /n/9fat
   cp tools.tgz /n/9fat
   fshalt -r

I rebooted to test the new kernel and all was good. Time for a few more config changes. Editing the /lib/namespace file, I added

   bind -c #z /zrv

to make use of the ANTS non-forkable zrv device. I also add

   bind -a #l1 /net.alt
   bind -a #I1 /net.alt

to make use of the private vultr network. More on that: I want the venti to listen on that private network so it can communicate with any other of my servers in the same datacenter, so I will make some edits to the boot process to set this up.

   bootfile=9ants64
   bootcmd=plan9rc
   nobootprompt=local!/dev/sdF0/fossil 
   service=cpu
   venti=#S/sdF0/arenas /net.alt/tcp!*!17034
   mouseport=ps2intellimouse
   monitor=vesa
   vgasize=800x600x16
   sysname=eusto1
   afterfact=/x/altnet
   altaddr=10.6.16.31
   user=glenda

The altnet script:

   #!/bin/rc
   # attach the second ethernet and ip stack as /net.alt,
   # give it the private address, and lower the mtu
   bind -b '#l1' /net.alt
   bind -b '#I1' /net.alt
   ip/ipconfig -x /net.alt ether /net.alt/ether1 $altaddr 255.255.0.0
   ip/ipconfig -x /net.alt loopback /dev/null 127.1
   echo mtu 1450 >/net.alt/ipifc/0/ctl

I rebooted again and all was good. I repeated this process to set up a similar node, eucpu1.

Grid setup and config

   eusto1% aux/listen1 -tv tcp!*!17034 tlssrv -A /bin/aux/trampoline /net.alt/tcp!127.1!17034

   contest4% aux/listen1 -tv tcp!127.1!27034 /bin/tlsclient -a tcp!eusto1!17034

   contest4% 9fs 9fat
   contest4% cd /n/9fat
   contest4% webfs
   contest4% echo 'venti=tcp!127.1!27034' >eusto1.info
   contest4% ventiprog -f eusto1.info

   eusto1% mkdir /usr/grid
   eusto1% echo 111 >/usr/grid/stononce

Compile safehubfs and touch644 and edit scorefns.makectlhub on eusto1.

Create /n/9fat/scorenames from the older version's seed, in the new ndb format.

Update /lib/ndb/local on both eu machines and my local client.
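
I haven't shown the exact entries; a minimal sketch of what they might look like, with placeholder public addresses (the private 10.6.16.x addresses live on /net.alt and are handled by the altnet script above):

   # placeholder addresses - substitute the real public IPs
   sys=eusto1 ip=203.0.113.10
   sys=eucpu1 ip=203.0.113.11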

Start up the score/spawn service:

   eusto1% . /rc/bin/scorefns
   eusto1% storageinit
   eusto1% hub authcheck
   creating hubfs /srv/authcheck
   io% aux/listen1 -v tcp!*!16999 /bin/tlssrv -a /bin/userreq

   eucpu1% . /rc/bin/scorefns
   eucpu1% srvscores
   eucpu1% rimport eusto1 /srv /n/sto
   eucpu1% startcpusvc

Venti block and score db synchronization

Run ventiprog -f otherserver.info on both sides, possibly as while(sleep 3600){ventiprog -f infofile &} in a hub.

   rimport remote /srv /n/remote
   mount /n/remote/fatsrv /n/remote9fat
   9fs 9fat
   scorecopy

Run while(sleep whatever)scorecopy in a hub, of course, or use cron jobs.
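
Put together, a sketch of what such a sync loop might look like on each server, run inside a hub (the info file name and interval are whatever suits you, and this assumes the rimport/mount setup above is already in place):

   #!/bin/rc
   # periodically sync venti blocks and score names with the other server
   while(sleep 3600){
      ventiprog -f otherserver.info
      scorecopy
   }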

Client usage

Add the server information to ndb, then:

   term% rimport eusto1 /srv /n/eusrv
   term% mount /n/eusrv/gridsvc /n/g
   term% gridorc -u rabbit -d tcp!eusto1!16999

Inside gridorc:

   scores
   spawn scorename        # use final printed rcpu command in a separate window to access
   save oldname newname   # names are immutable, no changes are saved unless you save a new name
   invite user name       # invite a user to be able to spawn a copy of a currently running environment
   status                 # list currently active environments
   boom name              # destroy environment

Functionality

Once a user has been given an account on the system, they run a program called ‘gridorc’ targeting a server of their choice, probably whichever is located closest to them:

   gridorc -u username -h server

Once connected, the user may enter

   scores

to see a list of filesystem snapshots available for spawning. To make an environment available for rcpu access, the user enters

   spawn scorename

which triggers the creation of a fossil file server serving a copy of the filesystem referred to by that name, the creation of a standard plan9 namespace environment rooted in that snapshot, and the provision of a cpu listener to allow access. The user can rcpu in to what behaves as a full self-sufficient plan9 environment. At any point, the user may

   save oldname newname

to create a new snapshot of the current state of the disk filesystem. Once a named snapshot is created, that particular name is immutable - you always get the same fs state when it is spawned. The save command must be issued to preserve changes in state, and it always creates a new immutable snapshot name. The data for this snapshot will be replicated to all other colony storage servers along with the name information.

Snapshots are private to users by default. A user may allow another user to spawn a copy of the current state of their fs with the command

   invite username scorename 

which will add the user to the filesystem, place them in the sys group, and then save a snapshot. To free the resources associated with a spawned environment, the command

   boom scorename

is issued. The final command available,

   status

lists the namespace environments owned by the user currently being served by the storage-cpu pair.
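
Putting the commands together, a session might look something like this (the scorenames plan9base and glenda1 are hypothetical):

   term% gridorc -u glenda -h eusto1
   scores                   # list the available snapshots
   spawn plan9base          # hypothetical scorename; rcpu in from another window using the printed command
   save plan9base glenda1   # snapshot the current state under a new immutable name
   status                   # list this user's running environments
   boom plan9base           # destroy the spawned environment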