Acquisition and management of reindeer herd data

Obtaining and maintaining accurate records of reindeer (Rangifer tarandus) herd data has become a necessary tool for efficient herd management. A computerized record keeping and reporting system was developed because of the speed at which animals were seen at the seasonal handlings. Custom software was written using the dBASE III+ data management package to handle the special needs of herd record keeping. The software was then compiled using the Clipper compiler. The resulting program and data were implemented in ramdisk on a Toshiba 3100 microcomputer. Data structures were carefully chosen to provide for recording of tag identification, sex, age, body weight, abnormalities, disease testing, and treatments for each deer. Additionally, fields were provided to maintain records of ongoing biologic experiments. A report generation program was written to provide a current herd status report to the herders.


Introduction
Reindeer herding in western Alaska has long been identified with the traditional culture and subsistence lifestyle of the Eskimos in that region. The introduction of current technology and advanced animal husbandry practices to this native industry has provided a benefit to the Eskimo herders while not detracting from their lifestyle. The use of helicopters has increased roundup efficiency, and the administration of parasite treatments has greatly enhanced overall herd health. Also, as will be seen in this paper, the computer is playing an important new role in the area of herd record keeping and management strategies.
Computers have been used as a means of data storage and retrieval for many years in such diverse activities as criminal identification and library book circulation (Cotterman, 1974).
The need for accurate record keeping on a large scale in the Alaskan reindeer industry became apparent when herds became so large that handwritten records became unmanageable.
The first use of computers in the Alaskan reindeer industry as a record keeping tool was in 1982, when the University of Alaska's Reindeer Research Program implemented a portable, computer-based system for field use as part of an applied research project (Kokjer, Ray-Landis, and Dieterich, 1985). Generators or a 12 volt battery/transformer combination were used for a power source. This system was designed around commercially available computer hardware and software that suited the needs of this application. Portability and processing speed were the major criteria to be fulfilled. Tag identification numbers, sex, age, weight, vaccinations, treatments, physical abnormalities, and serologic results were among the data that were entered into the computer for each deer.
By 1986 a hardware and software upgrade was needed to keep pace with increasing demands on the system. A new generation of IBM compatible, portable microcomputers with extremely fast processors had become available at the time and, in addition, the Ashton-Tate company had just released its latest data management software package, dBASE III+.
Three Toshiba portable microcomputers, dBASE III+, and a compiler were purchased.
These upgrades, together with rewriting the system programs, provided the speed and memory capabilities to handle large amounts of data while maintaining the portability necessary for field operations.

Data acquisition and data entry
The first step in the processing of any data for analysis is the acquisition of the data itself and its entry into the computer. During a reindeer corralling, as many as 100 reindeer can be handled in an hour, making the job of acquiring and recording data for each reindeer very time critical. Often only a few seconds are allowed for observing the animal and recording the data.
As well as recording new data for each reindeer, old data records for the animal are displayed so that a history for an animal may be viewed. Thus, full advantage must be taken of the speed potential of the given computer hardware and software in order to match the speed of incoming data and to search for and display the old data. By choosing how the data is stored in the computer and by using a process known as program compilation, the response time of the system can be greatly improved. In any computer system, access to a database (an organized collection of data) that is stored on disk is typically 3000 times more time consuming than access to a database stored in computer memory (Ciminier and Adrianio, 1987).
While a program that manipulates a database is running, there may be hundreds of such accesses per minute. Therefore, if the database is stored in computer memory instead of on disk while the program is running, a substantial decrease in access time can be achieved. An area of memory set aside for this purpose is known as a RAM (Random Access Memory) disk or ramdisk. A two megabyte extended memory package was added to the microcomputers, which provided a large memory area to be used as a ramdisk.
Before running the program, the database for the selected herd is copied from disk to ramdisk. Then, while the program is running, the database is accessed in the ramdisk and new data is added. When the program is finished, the updated database is copied over the old database on the disk. This new copy is now the current version of the database.
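The copy-in, update, copy-back cycle described above can be sketched in modern terms. The Python fragment below is an illustration only: the original system did this with DOS file copies to a ramdisk drive, not Python, and the file name and directory here are hypothetical.

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical locations: the slow, durable disk copy of the herd
# database, and a fast working area standing in for the ramdisk.
DISK_DB = Path("herd") / "ANIMALS.DBF"
RAMDISK = Path(tempfile.mkdtemp())

def run_handling_session(update_fn):
    """Copy the database to fast storage, apply updates there,
    then copy the result back over the disk copy."""
    work_copy = RAMDISK / DISK_DB.name
    shutil.copy2(DISK_DB, work_copy)       # disk -> ramdisk
    try:
        update_fn(work_copy)               # all accesses hit fast storage
    finally:
        shutil.copy2(work_copy, DISK_DB)   # ramdisk -> disk (save updates)
```

The `finally` clause mirrors the field procedure: whatever happens during the session, the program tries to write the updated database back to disk before it exits, since only the disk copy survives a power loss.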
If the program itself is also copied from disk into ramdisk, the processing speed can be further increased. The inherent risk in this process, however, is that if power is lost before the database can be copied from ramdisk back to disk, the updates are lost and only the database that was originally on the disk before the program ran will remain. The upgraded system uses an R&D 12 volt to 110 volt power transformer with a low power alarm. When the source (a 12 volt airplane battery) drops below 12 volts, the alarm is triggered, allowing several minutes to exit the program normally and save updates to disk.
In addition to using ramdisk as a storage area, the use of a compiler to convert the programming code into machine language greatly increases the overall response of the system. dBASE III+ is a powerful database management software package that contains a proprietary programming language capable of being compiled. In its initial form, this programming language is "interpretive", meaning that each line of programming code must be interpreted and converted separately to machine language while the program is running. Interpreting each line of code while the program is running is time consuming and delays the response of the system.
The process of compilation alleviates this problem in two ways. First, a compiler converts each line of code to machine language before the program is run, so that the time-consuming conversion process is eliminated. Secondly, compiled code takes up less memory space than interpretive code, leaving more room in memory for the database (Coats, 1982). The Clipper compiler was chosen because it is specifically designed to work with dBASE III+. By combining the use of memory as a storage area and running compiled programs, the response time of the system was greatly reduced.
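The interpreted-versus-compiled distinction can be demonstrated in miniature with Python's own compile step. This is an analogy, not the Clipper toolchain: evaluating source text repeatedly forces a re-parse on every run, while compiling once up front removes that per-run cost.

```python
import timeit

expr = "weight * 2 + 1"          # a line of "program code" kept as text

# Interpretive execution: the text is parsed and translated on every run.
t_interp = timeit.timeit(lambda: eval(expr, {"weight": 7}), number=50_000)

# Compiled execution: translate once, up front, then just run the result.
code = compile(expr, "<expr>", "eval")
t_compiled = timeit.timeit(lambda: eval(code, {"weight": 7}), number=50_000)

print(t_compiled < t_interp)     # compiling ahead of time is faster
```

Both forms compute the same result; only the per-run translation cost differs, which is the first of the two benefits described above.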

Database structure
Data entry programs that were written for the original system were modified to take advantage of the use of ramdisk and compilation in the upgraded system. The original database structure was not changed, in order to simplify the modification process. The data for each herd was maintained in a series of files which together comprised the database structure. The primary file in the database structure for a herd is called ANIMALS.DBF. Each record in ANIMALS.DBF represents a reindeer and contains the ear tag identification number (tag id), sex, and birth year for that deer. This record is referred to as the header record. Each header record in ANIMALS.DBF contains a "pointer" called EXPTR which points to a record in another file, called EXAMS.DBF, containing the handling records for all the deer. The pointer is simply a number that indicates which handling record in EXAMS.DBF is the first in a series of handling records for a particular deer. Each record in EXAMS.DBF contains the date of a handling and the data observed and recorded for a deer for that date. The header record in ANIMALS.DBF also contains a number, NUMB, indicating how many handling records are recorded in EXAMS.DBF for a deer. Another pointer in the header record is called NEWPTR and points to a record in a file called NEWRECS.DBF which contains the most recently entered data for a deer. Each record in NEWRECS.DBF contains the current handling date and the data observed and recorded for a deer for the current date. A database which maintains a series of files (EXAMS.DBF, NEWRECS.DBF) that are pointed to from another file (ANIMALS.DBF) is known as a hierarchical database (McFadden and Hoffer, 1985). In addition to ANIMALS.DBF, EXAMS.DBF, and NEWRECS.DBF, two more files comprise the rest of the database structure. NEWANIMS.DBF is used to store handling records for animals that appear at a handling for the first time and thus are not in ANIMALS.DBF. FAWNS.DBF is used to store handling records for fawns, only if they are handled in a separate area from the adults. Each record in NEWANIMS.DBF and FAWNS.DBF has a format that is a combination of the header record format and the handling record format, so that tag id, sex, birth year, and the handling data may be recorded in one record. NEWANIMS.DBF and FAWNS.DBF are part of the database structure but are not pointed to by any of the other files and are therefore independent of the hierarchical structure.

Data entry programs
Data entry programs that are run during a reindeer handling utilize the structures described above. As a deer is handled, its tag id is called out and entered into the computer. ANIMALS.DBF is searched until the record containing the tag id is found. EXAMS.DBF is then searched for the handling records for that deer, which are displayed on a formatted screen.
If a tag id is searched for in ANIMALS.DBF and not found, the animal is considered to be a new animal (or maverick) and the new data is placed in NEWANIMS.DBF. There is no search process for fawns, as the data for a fawn is by default placed in FAWNS.DBF. This process in the upgraded system takes about 1 second, even for the largest herd databases (24,000 records or 6000 animals). This compares to 12 seconds in the original system. While a difference of 11 seconds per animal may not appear to have much impact on system performance, when compounded over a typical handling time of 12 hours, the difference is substantial.
When the handling is complete, ANIMALS.DBF contains pointers to the new records in NEWRECS.DBF, and NEWANIMS.DBF and FAWNS.DBF contain data for any new animals and fawns that were seen. These files are ready to be merged into a single indexed master file called a relational database. A relational database is one in which sets of records have a common relationship to one another (McFadden and Hoffer, 1985). In the case of the indexed master file, the common relationship is the tag id. This indexed master file will be used for report generation and analysis.

Report generation
It is possible with the upgraded system to produce a report of the handling and a summary of the herd data within minutes after the last deer has been handled. This report contains information about the handling that just occurred and produces a herd summary based on new data and data previously entered. In addition, a tally is generated which displays the number of animals seen at the handling. The tally is categorized into sex and age classes.
Before a report can be generated, the five database files must be merged into the single indexed master file. This is done using the merge program. The first step in the merging process is to step through ANIMALS.DBF and obtain all of the old records for each deer. EXPTR is used to locate the first handling record in EXAMS.DBF for a particular deer, and NUMB indicates how many records to obtain.
The header record from ANIMALS.DBF is then combined with each of the handling records from EXAMS.DBF and appended to the master file. As a result, each record in the master file contains the tag id, sex, and birth year for a deer in addition to the handling data for a given handling date.
If a deer was seen at the current handling, the new record for that deer is obtained from NEWRECS.DBF and is appended to the master file as the last record in the series for that deer. Finally, the new animal records from NEWANIMS.DBF and the fawn records from FAWNS.DBF are appended to the master file.

Conclusion
It is beyond the scope of this paper to discuss all of the variables and questions involved in creating a herd management strategy. Many factors, such as market conditions for meat products and range utilization, make developing a comprehensive management strategy a complex process. Maintaining a database for a herd over several years and producing relevant information on a regular basis provides a valuable tool for analysis of herd dynamics. This is a job for which the computer is particularly well suited.
It is important to note that consistency in data collection and reliability of software and hardware are the two most important criteria in using this system to make decisions based upon the data. Handlings must be regularly attended by data collection personnel in order to maintain a continuous record, and the programs written must be proven to work correctly and produce reliable results. The hardware must not fail at critical moments (such as a disk unit "crashing" before the ramdisk can be copied back to it!). If these criteria are met, a computer based data acquisition and management system can be a valuable asset to research and herd management.

Rangifer, Special Issue No. 3, 1990.
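To make the header record pointer scheme and the merge step concrete, the following Python sketch mimics the layout with in-memory lists. It is an illustration only: the field names (tag, sex, born, exptr, numb) and the sample values are invented for the example, and the actual system stored these as dBASE III+ files rather than Python structures.

```python
# Stand-in for ANIMALS.DBF: one header record per deer, where
# "exptr" points at the first handling record in the exams list
# and "numb" says how many consecutive records belong to that deer.
animals = [
    {"tag": "R101", "sex": "F", "born": 1984, "exptr": 0, "numb": 2},
    {"tag": "R205", "sex": "M", "born": 1986, "exptr": 2, "numb": 1},
]

# Stand-in for EXAMS.DBF: handling records grouped by deer.
exams = [
    {"date": "1987-06-12", "weight_kg": 62},
    {"date": "1988-06-20", "weight_kg": 68},
    {"date": "1988-06-20", "weight_kg": 55},
]

def history(tag):
    """Follow EXPTR/NUMB from a header record to its handling records."""
    for hdr in animals:
        if hdr["tag"] == tag:
            return exams[hdr["exptr"]: hdr["exptr"] + hdr["numb"]]
    return None  # tag not found: treat the animal as new (a maverick)

def merge_master():
    """Combine each header with its handling records into one flat
    master list, as the merge program does before report generation."""
    master = []
    for hdr in animals:
        for rec in exams[hdr["exptr"]: hdr["exptr"] + hdr["numb"]]:
            master.append({"tag": hdr["tag"], "sex": hdr["sex"],
                           "born": hdr["born"], **rec})
    return master
```

The slice `exams[exptr : exptr + numb]` is the direct analogue of following EXPTR to the first handling record and counting off NUMB records, and each merged record carries the tag id, sex, and birth year alongside one handling date, just as in the master file.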