#
# Resource bundle description
#
Language.code:en
Language.name:English
OutputEncoding.unix:iso_8859_1
OutputEncoding.windows:iso_8859_1

#
# Common output messages and strings
#
common.cannot_create_file:ERROR: Can't create file %s
common.cannot_find_cfg_file:ERROR: Can't find the configuration file %s
common.cannot_open:ERROR: Can't open %s
common.cannot_open_fail_log:ERROR: Can't open fail log %s
common.cannot_open_output_file:ERROR: Can't open output file %s
common.cannot_read:ERROR: Can't read %s
common.cannot_read_file:ERROR: Can't read file %s
common.general_options:general options (for %s)
common.must_be_implemented:function must be implemented in sub-class
common.options:options
common.processing:processing
common.specific_options:specific options
common.usage:Usage
common.info:info
common.invalid_options:Invalid arguments: %s
common.true:true
common.false:false
common.deprecated:DEPRECATED

#
# Script option descriptions and output messages
#
scripts.language:Language to display option descriptions in (e.g. 'en_US' specifies American English). Requires translations of the option descriptions to exist in the perllib/strings_language-code.rb file.
scripts.xml:Produces the information in an XML form, without 'pretty' comments but with much more detail.
scripts.listall:Lists all items known about.
scripts.describeall:Displays options for all items known about.
scripts.both_old_options:WARNING: -removeold was specified with -keepold or -incremental, defaulting to -removeold. Current contents of %s directory will be deleted.
scripts.inc_remove_conflict:WARNING: -incremental and -removeold were specified. Defaulting to -removeold. Current contents of %s directory will be deleted.
scripts.only_one_old_option:Error: conflicting 'old' options: can only specify one of -removeold, -keepold, -replaceold. Exiting.
scripts.no_old_options:WARNING: None of -removeold, -keepold or -incremental were specified, defaulting to -removeold. Current contents of %s directory will be deleted.
scripts.gli:A flag set when running this script from GLI; enables output specific to GLI.
scripts.gai:A flag set when running this script from GAI (the Greenstone administration tool); enables output specific to GAI.
scripts.verbosity:Controls the quantity of output. 0=none, 3=lots.
scripts.out:Filename or handle to print output status to.

# -- buildcol.pl --
buildcol.activate:Run activate.pl after buildcol has finished, which will move building to index.
buildcol.archivedir:Where the archives live.
buildcol.builddir:Where to put the built indexes.
buildcol.cachedir:Collection will be temporarily built here before being copied to the build directory.
buildcol.cannot_open_cfg_file:WARNING: Can't open config file for updating: %s
buildcol.collectdir:The path of the "collect" directory.
buildcol.copying_back_cached_build:Copying back the cached build
buildcol.copying_rss_items_rdf:Copying rss-items.rdf file from %s to %s
buildcol.create_images:Attempt to create default images for the new collection. This relies on the Gimp being installed along with the relevant Perl modules to allow scripting from Perl.
buildcol.debug:Print output to STDOUT.
buildcol.desc:PERL script used to build a Greenstone collection from archive documents.
buildcol.faillog:Fail log filename. This log receives the filenames of any files which fail to be processed.
buildcol.incremental_default_builddir:WARNING: The building directory has defaulted to 'building'. If you want to incrementally add to the index directory, please use the "-builddir index" option to buildcol.pl.
buildcol.index:Index to build (will build all in config file if not set).
buildcol.indexname:Name of index to build (will build all in config file if not set).
buildcol.indexlevel:Level of indexes to build (will build all in config file if not set).
buildcol.incremental:Only index documents which have not been previously indexed. Implies -keepold. Relies on the lucene indexer.
buildcol.keepold:Will not destroy the current contents of the building directory.
buildcol.library_url:Provide the full URL of the Greenstone digital library to be (de)activated.
buildcol.library_name:For GS3, provide the library name (servlet name) of the library to be (de)activated in the current running Greenstone.
buildcol.maxdocs:Maximum number of documents to build.
buildcol.maxnumeric:The maximum number of digits a 'word' can have in the index dictionary. Large numbers are split into several words for indexing. For example, if maxnumeric is 4, "1342663" will be split into "1342" and "663".
buildcol.mode:The parts of the building process to carry out.
buildcol.mode.all:Do everything.
buildcol.mode.build_index:Just index the text.
buildcol.mode.compress_text:Just compress the text.
buildcol.mode.infodb:Just build the metadata database.
buildcol.mode.extra:Skip the main indexing stages, and just build the extra (orthogonal) passes.
buildcol.no_default_images:Default images will not be generated.
buildcol.no_image_script:WARNING: Image making script could not be found: %s
buildcol.no_strip_html:Do not strip the HTML tags from the indexed text (only used for mgpp collections).
buildcol.store_metadata_coverage:Include statistics about which metadata sets are used in a collection, including which actual metadata terms are used. This is useful in the built collection if you want to list the metadata values that are used in a particular collection.
buildcol.no_text:Don't store compressed text. This option is useful for minimizing the size of the built indexes if you intend always to display the original documents at run time (i.e. you won't be able to retrieve the compressed text version).
buildcol.sections_index_document_metadata:Add document level metadata at section level for indexing.
buildcol.sections_sort_on_document_metadata:(Lucene only) Add document level metadata at section level for sorting.
buildcol.sections_index_document_metadata.never:Don't add any document metadata at section level.
#'
buildcol.sections_index_document_metadata.always:Add all specified document level metadata even if section level metadata of that name exists.
buildcol.sections_index_document_metadata.unless_section_metadata_exists:Only add document level metadata if no section level metadata of that name exists.
buildcol.out:Filename or handle to print output status to.
buildcol.params:[options] collection-name
buildcol.remove_empty_classifications:Hide empty classifiers and classification nodes (those that contain no documents).
buildcol.removeold:Will remove the old contents of the building directory.
buildcol.skipactivation:Pass this along with the activate flag to run every task of the activate script except any actual (de)activating steps.
buildcol.unlinked_col_images:Collection images may not be linked correctly.
buildcol.unknown_mode:Unknown mode: %s
buildcol.updating_archive_cache:Updating archive cache
buildcol.verbosity:Controls the quantity of output. 0=none, 3=lots.

# -- classinfo.pl --
classinfo.collection:Giving a collection name will make classinfo.pl look in collect/collection-name/perllib/classify first. If the classifier is not found there it will look in the general perllib/classify directory.
classinfo.desc:Prints information about a classifier.
classinfo.general_options:General options are inherited from parent classes of the classifier.
classinfo.info:info
classinfo.no_classifier_name:ERROR: You must provide a classifier name.
classinfo.option_types:Classifiers may take two types of options
classinfo.params:[options] classifier-name
classinfo.passing_options:Options may be passed to any classifier by including them in your collect.cfg configuration file.
classinfo.specific_options:Specific options are defined within the classifier itself, and are available only to this particular classifier.

# -- downloadfrom.pl --
downloadfrom.cache_dir:The location of the cache directory
downloadfrom.desc:Downloads files from an external server
downloadfrom.download_mode:The type of server to download from
downloadfrom.download_mode.Web:HTTP
downloadfrom.download_mode.MediaWiki:MediaWiki website
downloadfrom.download_mode.OAI:Open Archives Initiative
downloadfrom.download_mode.z3950:z3950 server
downloadfrom.download_mode.SRW:SearchRetrieve Webservice
downloadfrom.incorrect_mode:download_mode parameter was incorrect.
downloadfrom.info:Print information about the server, rather than downloading
downloadfrom.params:[general options] [specific download options]

# -- downloadinfo.pl --
downloadinfo.desc:Prints information about a download module
downloadinfo.collection:Giving a collection name will make downloadinfo.pl look in collect/collection-name/perllib/downloaders first. If the module is not found there it will look in the general perllib/downloaders directory.
downloadinfo.no_download_name:Error: Please specify a download module name.
downloadinfo.params:[options] [download-module]
downloadinfo.general_options:General options are inherited from parent classes of the download modules.
downloadinfo.specific_options:Specific options are defined within the download module itself, and are available only to this particular downloader.
downloadinfo.option_types:Download modules may take two types of options

# -- explode_metadata_database.pl --
explode.desc:Explode a metadata database
explode.collection:The collection name. Some plugins look for auxiliary files in the collection folder.
explode.document_field:The metadata element specifying the file name of documents to obtain and include in the collection.
explode.document_prefix:A prefix for the document locations (for use with the document_field option).
explode.document_suffix:A suffix for the document locations (for use with the document_field option).
explode.encoding:Encoding to use when reading in the database file
explode.metadata_set:Metadata set (namespace) to export all metadata as
explode.plugin:Plugin to use for exploding
explode.plugin_options:Options to pass to the plugin before exploding. Option names must start with -. Separate option names and values with a space. Cannot be used with -use_collection_plugin_options.
explode.use_collection_plugin_options:Read the collection configuration file and use the options for the specified plugin. Requires the -collection option. Cannot be used with -plugin_options.
explode.params:[options] filename
explode.records_per_folder:The number of records to put in each subfolder.

# -- replace_srcdoc_with_html.pl --
srcreplace.desc:Replace source document with the generated HTML file when rebuilding
srcreplace.params:[options] filename
srcreplace.plugin:Plugin to use for converting the source document

# -- exportcol.pl --
exportcol.out:Filename or handle to print output status to.
exportcol.cddir:The name of the directory that the CD contents are exported to.
exportcol.cdname:The name of the CD-ROM -- this is what will appear in the start menu once the CD-ROM is installed.
exportcol.collectdir:The path of the "collect" directory.
exportcol.desc:PERL script used to export one or more collections to a Windows CD-ROM.
exportcol.noinstall:Create a CD-ROM where the library runs directly off the CD-ROM and nothing is installed on the host computer.
exportcol.params:[options] collection-name1 collection-name2 ...
exportcol.coll_not_found:Ignoring invalid collection %s: collection not found at %s.
exportcol.coll_dirs_not_found:Ignoring invalid collection %s: one of the following directories not found:
exportcol.fail:exportcol.pl failed:
exportcol.no_valid_colls:No valid collections specified to export.
exportcol.couldnt_create_dir:Could not create directory %s.
exportcol.couldnt_create_file:Could not create %s.
exportcol.instructions:To create a self-installing Windows CD-ROM, write the contents of this folder out to a CD-ROM.
exportcol.non_exist_files:One or more of the following necessary files and directories does not exist:
exportcol.success:exportcol.pl succeeded:
exportcol.output_dir:The exported collections (%s) are in %s.
exportcol.export_coll_not_installed:The Export to CD-ROM functionality has not been installed.

# -- import.pl --
import.archivedir:Where the converted material ends up.
import.manifest:An XML file that details what files are to be imported. Used instead of recursively descending the import folder, typically for incremental building.
import.cannot_open_stats_file:WARNING: Couldn't open stats file %s.
import.cannot_open_fail_log:ERROR: Couldn't open fail log %s
import.cannot_read_OIDcount:Warning: unable to read document OID count from %s.\nSetting value to 0.
#
import.cannot_read_earliestdatestamp:Warning: unable to read collection's earliestDatestamp from %s.\nSetting value to 0.
import.cannot_sort:WARNING: import.pl cannot sort documents when groupsize > 1. sortmeta option will be ignored.
import.cannot_write_earliestdatestamp:Warning: unable to store collection's earliestDatestamp in %s.
import.cannot_write_OIDcount:Warning: unable to store document OID count in %s.
import.collectdir:The path of the "collect" directory.
import.complete:Import complete
import.debug:Print imported text to STDOUT (for GreenstoneXML importing)
import.desc:PERL script used to import files into a format (GreenstoneXML or GreenstoneMETS) ready for building.
import.faillog:Fail log filename. This log receives the filenames of any files which fail to be processed.
import.groupsize:Number of import documents to group into one XML file.
import.gzip:Use gzip to compress resulting XML documents (don't forget to include ZIPPlugin in your plugin list when building from compressed documents).
import.importdir:Where the original material lives.
import.incremental:Only import documents which are newer (by timestamp) than the current archives files. Implies -keepold.
import.keepold:Will not destroy the current contents of the archives directory.
import.maxdocs:Maximum number of documents to import.
import.NO_IMPORT:Prevents import.pl from running. (Note, for Greenstone 3 collections with document editing via the web enabled, running import.pl will delete any edits. Set this option in the collection's configuration file to prevent import.pl from being run accidentally.)
import.NO_IMPORT_set:Not continuing with import as -NO_IMPORT is set.
import.no_import_dir:Error: Import dir (%s) not found.
import.no_plugins_loaded:ERROR: No plugins loaded.
import.OIDregex:The regular expression to use with the filename_regex OIDtype. Use capturing brackets to select parts of the filename, e.g. "([a-zA-Z]+\d+)" will select one or more letters followed by one or more digits.
import.OIDtype:The method to use when generating unique identifiers for each document.
import.OIDtype.hash:Hash the contents of the file. Document identifiers will be the same every time the collection is imported.
import.OIDtype.hash_on_ga_xml:Hash the contents of the Greenstone Archive XML file. Document identifiers will be the same every time the collection is imported as long as the metadata does not change.
import.OIDtype.hash_on_full_filename:Hash on the full filename of the document within the 'import' folder (and not its contents). Helps make document identifiers more stable across upgrades of the software, although it means that duplicate documents contained in the collection are no longer detected automatically.
import.OIDtype.incremental:Use a simple document count. Significantly faster than "hash", but does not necessarily assign the same identifier to the same document content if the collection is reimported.
import.OIDtype.assigned:Use the metadata value given by the OIDmetadata option; if unspecified, for a particular document a hash is used instead. These identifiers should be unique. Numeric identifiers will be preceded by 'D'.
import.OIDtype.dirname:Use the immediate parent directory name. There should only be one document per directory, and directory names should be unique. E.g. import/b13as/h15ef/page.html will get an identifier of h15ef. Numeric identifiers will be preceded by 'D'.
import.OIDtype.filename:Use the tail file name (without the file extension). Requires every filename across all the folders within 'import' to be unique. Numeric identifiers will be preceded by 'D'.
import.OIDtype.filename_regex:Run a regular expression (provided by -OIDregex) on the filename to extract the document identifier.
import.OIDtype.full_filename:Use the full file name within the 'import' folder as the identifier for the document (with _ and - substitutions made for symbols such as directory separators and the full stop in a filename extension).
import.OIDmetadata:Specifies the metadata element that holds the document's unique identifier, for use with -OIDtype=assigned.
import.saveas:Format that the archive files should be saved as.
import.out:Filename or handle to print output status to.
import.params:[options] collection-name
import.removeold:Will remove the old contents of the archives directory.
import.removing_archives:Removing current contents of the archives directory...
import.removing_tmpdir:Removing contents of the collection "tmp" directory...
import.site:Site to find collect directory in (for Greenstone 3 installation).
import.sortmeta:Sort documents alphabetically by metadata for building (specifying -sortmeta as 'OID' is a special case, and instructs Greenstone to use the document identifier for ordering). Search results for boolean queries will be displayed in the order determined by sortmeta. This will be disabled if groupsize > 1. May be a comma separated list to sort by more than one metadata value.
import.sortmeta_paired_with_ArchivesInfPlugin:Detected -sortmeta. To effect the stipulated sorting by metadata (or OID), remember this option should be paired with either the '-reversesort' or '-sort' option to ArchivesInfPlugin.
import.statsfile:Filename or handle to print import statistics to.
import.stats_backup:Will print stats to STDERR instead.
import.verbosity:Controls the quantity of output. 0=none, 3=lots.
import.assocfile_copymode:Controls how files associated with a document (aka associated files) are formed in the 'archives' directory. If you are unsure which option to use, set this to 'copy' as it is guaranteed to work with file-level document-version history in all cases. In contrast, setting this to 'hardlink' will help reduce overall disk usage; however, if you are also using the file-level document-version history feature of Greenstone then more care needs to be taken over how files in the 'import' folder are updated. If using 'hardlink', then when you come to add a new version of an existing file into 'import' it must be *moved* in rather than copied. This is so the hardlinked version in 'archives' stays separate from the new version, thus enabling this older archives version of the files to be correctly stored as part of the file-level document-version history when the collection is next built.
import.assocfile_copymode_copy:Make a fresh copy on the filesystem in 'archives'.
import.assocfile_copymode_hardlink:The 'copy' made in archives is actually a hardlink back to the version in the 'import' folder.

# -- csv-usernames-to-db.pl --
cu2db.desc:A simple script to batch add users to the greenstone users database. Takes a comma-separated value (CSV) file as input. Each line represents one user, and consists of username,password,groups,comment. If the user belongs to more than one group, then groups will be a comma-separated list, and you'll need to use a different field separator for the file (along with the field-separator option).
cu2db.params:[options] csv-filename
cu2db.field-separator:Controls which character is used to separate the fields in the CSV file
cu2db.already-encrypted:Use this if the passwords in the CSV file are already encrypted

# -- schedule.pl --
schedule.deleted:Scheduled execution deleted for collection
schedule.scheduled:Execution script created for collection
schedule.cron:Scheduled execution set up for collection
schedule.params:[options]
schedule.error.email:-email requires -smtp, -toaddr and -fromaddr to be specified.
schedule.error.importbuild:-import and -build must be specified.
schedule.error.colname:Collection using -colname must be specified.
schedule.gli:Running from the GLI
schedule.frequency:How often to automatically re-build the collection
schedule.frequency.hourly:Re-build every hour
schedule.frequency.daily:Re-build every day
schedule.frequency.weekly:Re-build every week
schedule.filepath_warning:**** Warning: schedule.pl may not work when Greenstone is installed in a path containing brackets and/or spaces: %s.
schedule.action:How to set up automatic re-building
schedule.action.add:Schedule automatic re-building
schedule.action.update:Update existing scheduling
schedule.action.delete:Delete existing scheduling
schedule.email:Send email notification
schedule.schedule:Select to set up scheduled automatic collection re-building
schedule.colname:The collection name for which scheduling will be set up
schedule.import:The import command to be scheduled
schedule.build:The buildcol command to be scheduled
schedule.toaddr:The email address to send scheduled build notifications to
schedule.toaddr.default:Specify User's Email in File->Preferences
schedule.fromaddr:The sender email address
schedule.fromaddr.default:Specify maintainer in main.cfg
schedule.smtp:The mail server that sendmail must contact to send email
schedule.smtp.default:Specify MailServer in main.cfg
schedule.out:Filename or handle to print output status to.

# -- export.pl --
export.exportdir:Where the export material ends up.
export.cannot_open_stats_file:WARNING: Couldn't open stats file %s.
export.cannot_open_fail_log:ERROR: Couldn't open fail log %s
export.cannot_sort:WARNING: export.pl cannot sort documents when groupsize > 1. sortmeta option will be ignored.
export.collectdir:The path of the "collect" directory.
export.complete:Export complete
export.debug:Print exported text to STDOUT (for GreenstoneXML exporting)
export.desc:PERL script used to export files in a Greenstone collection to another format.
export.faillog:Fail log filename. This log receives the filenames of any files which fail to be processed. (Default: collectdir/collname/etc/fail.log)
export.groupsize:Number of documents to group into one XML file.
export.gzip:Use gzip to compress resulting XML documents (don't forget to include ZIPPlugin in your plugin list when building from compressed documents).
export.importdir:Where the original material lives.
export.keepold:Will not destroy the current contents of the export directory.
export.maxdocs:Maximum number of documents to export.
export.listall:List all the saveas formats
export.saveas:Format to export documents as.
export.saveas.DSpace:DSpace Archive format.
export.saveas.GreenstoneMETS:METS format using the Greenstone profile.
export.saveas.FedoraMETS:METS format using the Fedora profile.
export.saveas.GreenstoneXML:Greenstone XML Archive format
export.saveas.GreenstoneSQL:MySQL Database storage. The -process_mode option specifies which of metadata/text/both is to be stored in a MySQL database; the remainder (if any) will be exported to the GreenstoneXML Archive format as usual.
export.saveas.MARCXML:MARC XML format (an XML version of MARC 21)
export.out:Filename or handle to print output status to.
export.params:[options] collection-name
export.removeold:Will remove the old contents of the export directory.
export.removing_export:Removing current contents of the export directory...
export.sortmeta:Sort documents alphabetically by metadata for building. This will be disabled if groupsize > 1.
export.statsfile:Filename or handle to print export statistics to.
export.stats_backup:Will print stats to STDERR instead.
export.verbosity:Controls the quantity of output. 0=none, 3=lots.

# -- mkcol.pl --
mkcol.about:The about text for the collection.
mkcol.buildtype:The 'buildtype' for the collection (e.g. mg, mgpp, lucene)
mkcol.infodbtype:The 'infodbtype' for the collection (e.g. gdbm, jdbm, sqlite)
mkcol.bad_name_cvs:ERROR: No collection can be named CVS as this may interfere with directories created by the CVS versioning system.
mkcol.bad_name_svn:ERROR: No collection can be named .svn as this may interfere with directories created by the SVN versioning system.
mkcol.bad_name_modelcol:ERROR: No collection can be named modelcol as this is the name of the model collection.
mkcol.cannot_find_modelcol:ERROR: Cannot find the model collection %s
mkcol.col_already_exists:ERROR: This collection already exists.
mkcol.collectdir:Directory where the new collection will be created.
mkcol.group_not_valid_in_gs3:The group option is not valid in Greenstone 3 mode (-gs3mode).
mkcol.creating_col:Creating the collection %s
mkcol.creator:The collection creator's e-mail address.
mkcol.creator_undefined:ERROR: The creator was not defined. This variable is needed to recognise duplicate collection names.
mkcol.desc:PERL script used to create the directory structure for a new Greenstone collection.
mkcol.doing_replacements:doing replacements for %s
mkcol.group:Create a new collection group instead of a standard collection.
mkcol.gs3mode:Mode for Greenstone 3 collections.
mkcol.long_colname:ERROR: The collection name must be less than 8 characters so compatibility with earlier filesystems can be maintained.
mkcol.maintainer:The collection maintainer's email address (if different from the creator).
mkcol.no_collectdir:ERROR: The collect dir doesn't exist: %s
mkcol.no_collectdir_specified:ERROR: No collect dir was specified. In gs3mode, either the -site or -collectdir option must be specified.
mkcol.no_colname:ERROR: No collection name was specified.
mkcol.optionfile:Get options from file, useful on systems where long command lines may cause problems.
mkcol.params:[options] collection-name
mkcol.plugin:Perl plugin module to use (there may be multiple plugin entries).
mkcol.public:If this collection has anonymous access.
mkcol.public.true:Collection is public
mkcol.public.false:Collection is private
mkcol.quiet:Operate quietly.
mkcol.site:In gs3mode, uses this site name with the GSDL3HOME environment variable to determine collectdir, unless -collectdir is specified.
mkcol.success:The new collection was created successfully at %s
mkcol.title:The title of the collection.
mkcol.win31compat:Whether or not the named collection directory must conform to Windows 3.1 file conventions (i.e. 8 characters long).
mkcol.win31compat.true:Directory name 8 characters or less
mkcol.win31compat.false:Directory name any length

# -- pluginfo.pl --
pluginfo.collection:Giving a collection name will make pluginfo.pl look in collect/collection-name/perllib/plugins first. If the plugin is not found there it will look in the general perllib/plugins directory.
pluginfo.desc:Prints information about a plugin.
pluginfo.general_options:General options are inherited from parent classes of the plugin.
pluginfo.info:info
pluginfo.no_plugin_name:ERROR: You must provide a plugin name.
pluginfo.option_types:Plugins may take two types of options
pluginfo.params:[options] plugin-name
pluginfo.passing_options:Options may be passed to any plugin by including them in your collect.cfg configuration file.
pluginfo.specific_options:Specific options are defined within the plugin itself, and are available only to this particular plugin.

# -- plugoutinfo.pl --
plugoutinfo.collection:Giving a collection name will make plugoutinfo.pl look in collect/collection-name/perllib/plugouts first. If the plugout is not found there it will look in the general perllib/plugouts directory.
plugoutinfo.desc:Prints information about a plugout.
plugoutinfo.general_options:General options are inherited from parent classes of the plugout.
plugoutinfo.info:info
plugoutinfo.no_plugout_name:ERROR: You must provide a plugout name.
plugoutinfo.option_types:Plugouts may take two types of options
plugoutinfo.params:[options] plugout-name
plugoutinfo.passing_options:Options may be passed to any plugout by including them in your collect.cfg configuration file.
plugoutinfo.specific_options:Specific options are defined within the plugout itself, and are available only to this particular plugout.

#
# Classifier option descriptions
#
AllList.desc:Creates a single list of all documents. Used by the oaiserver.
AZCompactList.allvalues:Use all metadata values found.
AZCompactList.desc:Classifier plugin for sorting alphabetically (on a-z, A-Z, 0-9). Produces a horizontal A-Z list, then a vertical list containing documents, or bookshelves for documents with common metadata.
AZCompactList.doclevel:Level to process documents at.
AZCompactList.doclevel.top:Whole document.
AZCompactList.doclevel.firstlevel:The first level of sections only.
AZCompactList.doclevel.section:All sections.
AZCompactList.firstvalueonly:Use only the first metadata value found.
AZCompactList.freqsort:Sort by node frequency rather than alpha-numeric.
AZCompactList.maxcompact:Maximum number of documents to be displayed per page.
AZCompactList.metadata:A single metadata field, or a comma separated list of metadata fields, used for classification. If a list is specified, the first metadata type that has values will be used. May be used in conjunction with the -firstvalueonly and -allvalues flags, to select only the first value, or all metadata values, from the list.
AZCompactList.mincompact:Minimum number of documents to be displayed per page.
AZCompactList.mingroup:The smallest value that will cause a group in the hierarchy to form.
AZCompactList.minnesting:The smallest value that will cause a list to be converted into a nested list.
AZCompactList.recopt:Used in nested metadata such as -metadata Year/Organisation.
AZCompactList.sort:Metadata field to sort the leaf nodes by.
AZCompactSectionList.desc:Variation on AZCompactList that classifies sections rather than documents. Entries are sorted by section-level metadata.
AZList.desc:Classifier plugin for sorting alphabetically (on a-z, A-Z, 0-9). Produces a horizontal A-Z list, with documents listed underneath.
AZList.metadata:A single metadata field or a comma separated list of metadata fields used for classification. Following the order indicated by the list, the first field that contains a metadata value will be used. The list will be sorted by this element.
AZSectionList.desc:Variation on AZList that classifies sections rather than documents. Entries are sorted by section-level metadata.
BasClas.accentfold:Remove all accents (diacritics) before sorting metadata.
BasClas.casefold:Lowercase metadata for sorting.
BasClas.bad_general_option:The %s classifier uses an incorrect option. Check your collect.cfg configuration file.
BasClas.builddir:Where to put the built indexes.
BasClas.buttonname:The label for the classifier screen and button in the navigation bar. The default is the metadata element specified with -metadata.
BasClas.desc:Base class for all the classifiers.
BasClas.no_metadata_formatting:Don't do any automatic metadata formatting (for sorting).
BasClas.outhandle:The file handle to write output to.
BasClas.removeprefix:A prefix to ignore in metadata values when sorting.
BasClas.removesuffix:A suffix to ignore in metadata values when sorting.
BasClas.verbosity:Controls the quantity of classifier processing output during building. 0=none, 3=lots.
Browse.desc:A fake classifier that provides a link in the navigation bar to a prototype combined browsing and searching page. Only works for mgpp collections, and is only practical for small collections.
DateList.bymonth:Classify by year and month instead of only year.
DateList.desc:Classifier plugin for sorting by date. By default, sorts by 'Date' metadata. Dates are assumed to be in the form yyyymmdd or yyyy-mm-dd.
DateList.metadata:The metadata that contains the dates to classify by. The format is expected to be yyyymmdd or yyyy-mm-dd. Can be a comma separated list, in which case the first date found will be used.
DateList.reverse_sort:Sort the documents in reverse chronological order (newest first).
DateList.nogroup:Make each year an individual entry in the horizontal list, instead of spanning years with few entries. (This can also be used with the -bymonth option to make each month a separate entry instead of merging.)
DateList.no_special_formatting:Don't display Year and Month information in the document list. DateList.sort:An extra metadata field to sort by in the case where two documents have the same date. HFileHierarchy.desc:Classifier plugin for generating hierarchical classifications based on a supplementary structure file. Hierarchy.desc:Classifier plugin for generating a hierarchical classification. This may be based on structured metadata, or may use a supplementary structure file (use the -hfile option). Hierarchy.documents_last:Display document nodes after classifier nodes. Hierarchy.hfile:Use the specified classification structure file. Hierarchy.hlist_at_top:Display the first level of the classification horizontally. Hierarchy.reverse_sort:Sort leaf nodes in reverse order (use with -sort). Hierarchy.separator:Regular expression used for the separator, if using structured metadata. Hierarchy.sort:Metadata field to sort leaf nodes by. Leaves will not be sorted if not specified. Hierarchy.suppressfirstlevel:Ignore the first part of the metadata value. This is useful for metadata where the first element is common, such as the import directory in gsdlsourcefilename. Hierarchy.suppresslastlevel:Ignore the final part of the metadata value. This is useful for metadata where each value is unique, such as file paths. HTML.desc:Creates an empty classification that's simply a link to a web page. HTML.url:The url of the web page to link to. List.desc:A general and flexible list classifier with most of the abilities of AZCompactList, but with better Unicode, metadata and sorting capabilities. List.metadata:Metadata fields used for classification. Use '/' to separate the levels in the hierarchy and ';' or ',' to separate a list of metadata fields within each level. List.metadata_selection_mode_within_level:Determines how many metadata values the document is classified by, within each level. Use '/' to separate the levels. 
List.metadata_selection.firstvalue:Only classify by a single metadata value, the first one encountered. List.metadata_selection.firstvalidmetadata:Classify by all the metadata values of the first element in the list that has values. List.metadata_selection.allvalues:Classify by all metadata values found, from all elements in the list. List.metadata_sort_mode_within_level:How to sort the values of metadata within each partition. Use '/' to separate the levels. List.metadata_sort.unicode:Sort using the Unicode Collation Algorithm. Requires http://www.unicode.org/Public/UCA/latest/allkeys.txt file to be downloaded into perl's lib/Unicode/Collate folder. List.metadata_sort.alphabetic:Sort using alphabetical ordering, including for digits. E.g. 10 would sort before 9. List.metadata_sort.alphanumeric:Sort using a more natural sort, where digits are treated as numbers and sorted numerically. E.g. 10 would sort after 9. List.metadata_sort.structured:Customisable sort - uses a single level list of characters to define sort order. By default this contains latin characters, where accented characters come in the list directly after their non-accented counterpart (see perllib/structured_sort_definition.pm). List.metadata_sort.structured_grouped:Customisable sort - uses a two level structure of characters by default, where groups of characters are sorted equivalently, unless there is a tie-break needed. By default, uses a structure of latin accented characters (see perllib/structured_sort_definition.pm). List.bookshelf_type:Controls when to create bookshelves. This only applies to the last level. Other levels will get bookshelf_type = always. List.bookshelf_type.always:Create a bookshelf icon even if there is only one item in each group at the leaf nodes. List.bookshelf_type.never:Never create a bookshelf icon even if there is more than one item in each group at the leaf nodes. 
List.bookshelf_type.duplicate_only:Create a bookshelf icon only when there is more than one item in each group at the leaf nodes. List.classify_sections:Classify sections instead of documents. List.partition_type_within_level:The type of partitioning done at each level, for those values that start with word characters (not digits). Separate levels by '/'. List.numeric_partition_type_within_level:The type of partitioning done at each level, for those values that start with digits 0-9. Separate levels by '/'. List.level_partition.none:None. Will apply to the entire level, both numeric and non-numeric values; i.e. setting none in either partition_type_within_level or numeric_partition_type_within_level will result in both options being set to none. List.level_partition.per_letter:Create a partition for each letter (word character). List.level_partition.per_digit:Create a partition for each digit 0-9. List.level_partition.per_number:Create a partition for each number. Control how many digits are used to create numbers using the -numeric_partition_name_length_within_level option. List.level_partition.single:Create a single partition '0-9' for all values that start with digits. List.level_partition.constant_size:Create partitions with constant size. List.level_partition.approximate_size:Create a partition per letter, then group or split the letters to get approximately the same sized partitions. List.level_partition.approximate_size_numeric:Create a partition per number (using -numeric_partition_name_length_within_level to determine how many digits to include in the number), then group or split the partitions to get approximately the same sized partitions. List.level_partition.all_values:Create a partition for each metadata value. List.partition_size_within_level:The number of items in each partition (only applies when partition_type_within_level is set to 'constant_size' or 'approximate_size'). Can be specified for each level. Separate levels by '/'. 
List.numeric_partition_size_within_level:The number of items in each numeric partition (only applies when -numeric_partition_type_within_level is set to 'constant_size' or 'approximate_size'). Can be specified for each level. Separate levels by '/'. List.numeric_partition_name_length_within_level:Control how many consecutive digits are grouped to make the number for the numeric partition name. -1 implies all the digits. List.partition_name_length:The length of the partition name; defaults to a variable length from 1 up to max_partition_name_length characters, depending on how many are required to distinguish the partition start from its end. This option only applies when -partition_type_within_level is set to 'constant_size' or 'approximate_size'. List.max_partition_name_length:If partition_name_length is not set, then this is the maximum number of characters to use in generating partition start and end values. List.partition_sort_mode_within_level:How to sort the values of metadata to create the partitions. List.numeric_partition_sort_mode_within_level:How to sort the values of numeric metadata to create the numeric partitions. List.numbers_first:Sort the numbers to the start of the list. (By default, metadata values starting with numbers are sorted at the end). List.sort_leaf_nodes_using:Metadata fields used for sorting the leaf nodes (i.e. those documents in a bookshelf). Use '|' to separate the metadata groups to stable sort by, and ';' or ',' to separate metadata fields within each group. For example, "dc.Title,Title|Date" will result in a list sorted by Titles (coming from either dc.Title or Title), with those documents having the same Title sorted by Date. List.sort_leaf_nodes_sort_mode:How to sort the leaf node metadata fields. List.reverse_sort_leaf_nodes:Sort the leaf documents in reverse order. List.sort_using_unicode_collation:This will override all sort mode arguments: they will all be set to 'unicode'. 
List.filter_metadata:Metadata element to test against for a document's inclusion into the classifier. Documents will be included if they define this metadata. List.filter_regex:Regular expression to use in the filter_metadata test. If a regex is specified, only documents with filter_metadata that match this regex will be included. List.use_formatted_metadata_for_bookshelf_display:Metadata values are formatted for sorting (unless -no_metadata_formatting is specified). This might include lower-casing, tidying up whitespace, and removing articles. Set this option to use these formatted values for bookshelf names. Otherwise the original value variant that occurs most frequently will be used. SimpleList.metadata:A single metadata field or a comma separated list of metadata fields used for classification. Following the order indicated by the list, the first field that contains a metadata value will be used. The list will be sorted by this element, unless -sort is used. If no metadata is specified, all documents will be included in the list; otherwise only documents that contain a metadata value will be included. SimpleList.desc:Simple list classifier plugin. SimpleList.sort:Metadata field to sort by. Use '-sort nosort' for no sorting. Phind.desc:Produces a hierarchy of phrases found in the text, which is browsable via an applet. Phind.language:Language or languages to use when building the hierarchy. Languages are identified by two-letter language codes like en (English), es (Spanish), and fr (French). Language is a regular expression, so 'en|fr' (English or French) and '..' (match any language) are valid. Phind.min_occurs:The minimum number of times a phrase must appear in the text to be included in the phrase hierarchy. Phind.savephrases:If set, the phrase information will be stored in the given file as text. It is probably a good idea to use an absolute path. Phind.suffixmode:The smode parameter to the phrase extraction program. 
A value of 0 means that stopwords are ignored, and a value of 1 means that stopwords are used. Phind.text:The text used to build the phrase hierarchy. Phind.thesaurus:Name of a thesaurus stored in Phind format in the collection's etc directory. Phind.title:The metadata field used to describe each document. Phind.untidy:Don't remove working files. RecentDocumentsList.desc:Classifier that gives a list of newly added or modified documents. RecentDocumentsList.include_docs_added_since:Include only documents modified or added after the specified date (in yyyymmdd or yyyy-mm-dd format). RecentDocumentsList.include_most_recently_added:Include only the specified number of most recently added documents. Only used if include_docs_added_since is not specified. RecentDocumentsList.sort:Metadata to sort the list by. If not specified, the list will be sorted by date of modification/addition. SectionList.desc:Same as List classifier but includes all sections of a document (excluding the top level) rather than just the top-level document itself. Collage.desc:An applet is used to display a collage of images found in the collection. Collage.geometry:The dimensions of the collage canvas. For a canvas 600 pixels wide by 400 pixels high, for example, specify geometry as 600x400. Collage.maxDepth:Images for collaging are drawn from mirroring the underlying browse classifier. This controls the maximum depth of the mirroring process. Collage.maxDisplay:The maximum number of images to show in the collage at any one time. Collage.imageType:Used to control, by expressing file name extensions, which file types are used in the collage. A list of file name extensions is separated by the percent (%%) symbol. Collage.bgcolor:The background color of the collage canvas, specified in hexadecimal form (for example #008000 results in a forest green background). Collage.buttonname:The label for the classifier screen and button in the navigation bar. Collage.refreshDelay:Rate, in milliseconds, at which the collage canvas is refreshed. 
Collage.isJava2:Used to control which run-time classes of Java are used. More advanced versions of Java (i.e. Java 1.2 onwards) include more sophisticated support for controlling transparency in images, and this flag helps control what happens; however, the built-in Java runtime for some browsers is version 1.1. The applet is designed to, by default, auto-detect which version of Java the browser is running and act accordingly. Collage.imageMustNotHave:Used to suppress images that should not appear in the collage, such as image buttons that make up the navigation bar. Collage.caption:Optional captions to display below the collage canvas. # # Plugin option descriptions # AcronymExtractor.adding:adding AcronymExtractor.already_seen:already seen AcronymExtractor.desc:Helper extractor plugin for locating and marking up acronyms in text. AcronymExtractor.done_acronym_extract:done extracting acronyms. AcronymExtractor.done_acronym_markup:done acronym markup. AcronymExtractor.extract_acronyms:Extract acronyms from within text and set as metadata. AcronymExtractor.extracting_acronyms:extracting acronyms AcronymExtractor.marking_up_acronyms:marking up acronyms AcronymExtractor.markup_acronyms:Add acronym metadata into document text. ArchivesInfPlugin.desc:Plugin which processes the archive info database (archiveinf-doc) which is generated by the import process. It passes each archive file listed in the database to the plugin pipeline to be processed by GreenstoneXMLPlugin. ArchivesInfPlugin.reversesort:Sort in reverse alphabetical order. Useful if the -sortmeta option was used with import.pl. ArchivesInfPlugin.sort:Sort in ascending alphabetical order. Useful if the -sortmeta option was used with import.pl. AutoExtractMetadata.desc:Base plugin that brings together all the extractor functionality from the extractor plugins. 
AutoExtractMetadata.extracting:extracting AutoExtractMetadata.first:Comma separated list of numbers of characters to extract from the start of the text into a set of metadata fields called 'FirstN', where N is the size. For example, the values "3,5,7" will extract the first 3, 5 and 7 characters into metadata fields called "First3", "First5" and "First7". AutoLoadConverters.desc:Helper plugin that dynamically loads up extension converter plugins. AutoloadConverter.noconversionavailable:Conversion not available BaseMediaConverter.desc:Helper plugin that provides base functionality for media converter plugins such as ImageConverter and video converters. BaseImporter.associate_ext:Causes files with the same root filename as the document being processed by the plugin AND a filename extension from the comma separated list provided by this argument to be associated with the document being processed rather than handled separately. BaseImporter.associate_tail_re:A regular expression to match filenames against to find associated files. Used as a more powerful alternative to associate_ext. BaseImporter.desc:Base class for all the import plugins. BaseImporter.dummy_text:This document has no text. BaseImporter.no_cover_image:Do not look for a prefix.jpg file (where prefix is the same prefix as the file being processed) to associate as a cover image. BaseImporter.metadata_mapping_file:Use the specified metadata mapping file to generate additional metadata for a document. The specified comma-separated value (csv) file needs to be encoded as UTF8, and consists of a series of rules, with 5 entries per line. The first entry in the line specifies a source metadata value to select from the document being processed, and the second entry is a regular expression the metadata must match for the rule to be applied (Note: the syntax used is Perl's regular expression substitution, where use of parentheses forms capture groups). 
If it does match, then the third entry is what the matching metadata value is transformed into (groups formed with brackets in the source metadata matching term can be referenced as $1, $2 and so on). The fourth entry specifies any modifiers for the substitution, such as 'g' for global and 'i' for case-insensitive. The fifth entry specifies the metadata name that is set with the newly created value. The rules are applied in the order they are provided in the comma-separated value file, so it is permissible for metadata set by one of the earlier rules to then be used in a later matching rule. Destination metadata names that start with '_transient' are not stored in the final document. For an example of a metadata_mapping_file, refer to the one provided in GSDLHOME/etc/metadata_mapping_rules.csv. BaseImporter.OIDtype.auto:Use the OIDtype set in import.pl. BaseImporter.process_exp:A perl regular expression to match against filenames. Matching filenames will be processed by this plugin. For example, using '(?i).html?\$' matches all documents ending in .htm or .html (case-insensitive). BaseImporter.processing_tmp_files:Internal flag, set by converter plugins to indicate that we are processing a tmp file. BaseImporter.smart_block:Block files in a smarter way than just looking at filenames. BaseImporter.store_original_file:Save the original source document as an associated file. Note this is already done for files like PDF, Word etc. This option is only useful for plugins that don't already store a copy of the original file. BaseImporter.file_rename_method:The method to be used in renaming the copy of the imported file and associated files. BaseImporter.rename_method.url:Use url encoding in renaming imported files and associated files. BaseImporter.rename_method.base64:Use base64 encoding in renaming imported files and associated files. BaseImporter.rename_method.none:Don't rename imported files and associated files. 
BibTexPlugin.desc:BibTexPlugin reads bibliography files in BibTex format. BibTexPlugin creates a document object for every reference in the file. This plugin is a subclass of the SplitTextFile class, so if there are multiple records, all are read. BookPlugin.desc:Creates a multi-level document from a document containing <<TOC>> level tags. Metadata for each section is taken from any other tags on the same line as the <<TOC>>. e.g. <<Title>>xxxx<</Title>> sets Title metadata. Everything else between TOC tags is treated as simple html (i.e. no HTMLPlugin type of processing is done, such as processing html links). Expects input files to have a .hb file extension by default (this can be changed by adding a -process_exp option); a file with the same name as the hb file but a .jpg extension is taken as the cover image (jpg files are blocked by this plugin). BookPlugin is a simplification (and extension) of the HBPlugin used by the Humanity Development Library collections. BookPlugin is faster as it expects the input files to be cleaner (the input to the HDL collections contains lots of excess html tags around <<TOC>> tags, uses <<I>> tags to specify images, and simply takes all text appearing after a <<TOC>> tag and on the same line as Title metadata). If you're marking up documents to be displayed in the same way as the HDL collections, use this plugin instead of HBPlugin. CommonUtil.block_exp:Files matching this regular expression will be blocked from being passed to any later plugins in the list. CommonUtil.could_not_open_for_writing:could not open %s for writing CommonUtil.desc:Base utility plugin class that handles filename encoding and file blocking. CommonUtil.encoding.ascii:Plain 7 bit ASCII. This may be a bit faster than using iso_8859_1. Beware of using this when the text may contain characters outside the plain 7 bit ASCII set (e.g. German or French text containing accents); use iso_8859_1 instead. CommonUtil.encoding.unicode:Just unicode. 
CommonUtil.encoding.utf8:Either utf8 or unicode -- automatically detected. CommonUtil.filename_encoding:The encoding of the source file filenames. CommonUtil.filename_encoding.auto:Automatically detect the encoding of the filename. CommonUtil.filename_encoding.auto_language_analysis:Auto-detect the encoding of the filename by analysing it. CommonUtil.filename_encoding.auto_filesystem_encoding:Auto-detect the encoding of the filename using filesystem encoding. CommonUtil.filename_encoding.auto_fl:Uses filesystem encoding then language analysis to detect the filename encoding. CommonUtil.filename_encoding.auto_lf:Uses language analysis then filesystem encoding to detect the filename encoding. CommonUtil.no_blocking:Don't do any file blocking. Any associated files (e.g. images in a web page) will be added to the collection as documents in their own right. CONTENTdmPlugin.desc:Plugin that processes RDF files in exported CONTENTdm collections. ConvertBinaryFile.apply_fribidi:Run the "fribidi" Unicode Bidirectional Algorithm program over the converted file (for right-to-left text). ConvertBinaryFile.convert_to:Plugin converts to TEXT or HTML or various types of Image (e.g. JPEG, GIF, PNG). ConvertBinaryFile.convert_to.auto:Automatically select the format converted to. Format chosen depends on input document type, for example Word will automatically be converted to HTML, whereas PowerPoint will be converted to Greenstone's PagedImage format. ConvertBinaryFile.convert_to.html:HTML format. ConvertBinaryFile.convert_to.text:Plain text format. ConvertBinaryFile.convert_to.paged_text:Sectionalised plain text, where every page's text is its own section. ConvertBinaryFile.convert_to.pagedimg:A series of images. ConvertBinaryFile.convert_to.pagedimg_jpg:A series of images in JPEG format. ConvertBinaryFile.convert_to.pagedimg_gif:A series of images in GIF format. ConvertBinaryFile.convert_to.pagedimg_png:A series of images in PNG format. 
ConvertBinaryFile.convert_to.pagedimgtxt_jpg:A series of images in JPEG format with any extracted text, one for each page. ConvertBinaryFile.convert_to.pagedimgtxt_png:A series of images in PNG format with any extracted text, one for each page. ConvertBinaryFile.desc:This plugin is inherited by such plugins as WordPlugin, PowerPointPlugin, PostScriptPlugin, RTFPlugin and PDFPlugin. It facilitates the conversion of these document types to either HTML, TEXT or a series of images. It works by dynamically loading an appropriate secondary plugin (HTMLPlugin, StructuredHTMLPlugin, PagedImagePlugin or TextPlugin) based on the plugin argument 'convert_to'. ConvertBinaryFile.keep_original_filename:Keep the original filename for the associated file, rather than converting to doc.pdf, doc.doc etc. ConvertBinaryFile.use_strings:If set, a simple strings function will be called to extract text if the conversion utility fails. ConvertToRogPlugin.desc:A plugin that inherits from RogPlugin. CSVFieldSeparator.desc:Helper plugin that works out what the field separator character is. CSVFieldSeparator.csv_field_separator:The character you've consistently used to separate each cell of a row in your csv spreadsheet file. CSV stands for comma-separated values; however, you can specify the csv_field_separator character you used in your csv files here. If you leave this option on auto, the plugin will try to autodetect your csv field separator character. CSVFieldSeparator.metadata_value_separator:The character you've consistently used to separate multiple metadata values for a single metadata field within a cell of the csv spreadsheet. If you used the vertical bar as the separator character, then set metadata_value_separator to \| (backslash vertical bar). CSVFieldSeparator.metadata_separate_fields:A comma separated list of metadata fields that the metadata_value_separator is to be applied to. If left blank then metadata_value_separator is applied to all the metadata fields in the CSV file. 
CSVPlugin.desc:A plugin for files in comma-separated value format. Metadata can be assigned to source documents (specified in the Filename field), or new documents created for each line of the file. CSVPlugin.filename_field:Which field in the CSV file to use for specifying source documents. CSVPlugin.store_field_values_as_document_text:Store all the metadata values as the text of the document. Only applies if there is no source document specified. Useful for searching. CSVPlugin.use_namespace_for_field_names:Prepend a namespace to each field name. The value of this option is the namespace to use, e.g. 'wmtb'. Note: if you want the metadata to be visible in GLI, you will need to use the ex. prefix with your namespace, e.g. 'ex.wmtb', and this will need to be used in format statements. CSVPlugin.no_document_if_source_unspecified:If there is no source document specified, don't create a dummy document. CSVPlugin.no_document_if_source_missing:If there is a specified source document, but it is not there, don't create a dummy document. CSVPlugin.ignore_field:A field name in the CSV file in which to specify that a line should be ignored (by adding a non-empty value). Used, for example, to block lines which are not ready for the collection yet. CSVDeprecatedPlugin.desc:An old plugin for files in comma-separated value format. A new document will be created for each line of the file. DateExtractor.desc:Helper extractor plugin for extracting historical date information from text. DateExtractor.extract_historical_years:Extract time-period information from historical documents. This is stored as metadata with the document. There is a search interface for this metadata, which you can include in your collection by adding the statement "format QueryInterface DateSearch" to your collection configuration file. DateExtractor.maximum_century:The maximum named century to be extracted as historical metadata (e.g. 14 will extract all references up to the 14th century). 
DateExtractor.maximum_year:The maximum historical date to be used as metadata (in a Common Era date, such as 1950). DateExtractor.no_bibliography:Do not try to block bibliographic dates when extracting historical dates. DirectoryPlugin.desc:A plugin which recurses through directories processing each file it finds. DirectoryPlugin.recheck_directories:After the files in an import directory have been processed, re-read the directory to discover any new files created. DirectoryPlugin.use_metadata_files:SUPERSEDED - Add MetadataXMLPlugin to the list of plugins in order to read metadata from metadata XML files. DatabasePlugin.desc:A plugin that imports records from a database. This uses perl's DBI module, which includes back-ends for mysql, postgresql, comma separated values (CSV), MS Excel, ODBC, sybase, etc. Extra modules may need to be installed to use this. See /etc/packages/example.dbi for an example config file. DSpacePlugin.desc:A plugin that takes a collection of documents exported from DSpace and imports them into Greenstone. DSpacePlugin.first_inorder_ext:This is used to identify the primary document file for a DSpace collection document. With this option, the system will work through the defined extension types in sequence to look for the primary document file. DSpacePlugin.first_inorder_mime:This is used to identify the primary document file for a DSpace collection document. With this option, the system will work through the defined mime types in sequence to look for the primary document file. DSpacePlugin.only_first_doc:This is used to identify the primary document file for a DSpace collection document. With this option, the system will treat the first document referenced in the dublin_core metadata file as the primary document file. EmailAddressExtractor.desc:Helper extractor plugin for discovering email addresses in text. EmailAddressExtractor.done_email_extract:done extracting e-mail addresses. 
EmailAddressExtractor.extracting_emails:extracting e-mail addresses EmailAddressExtractor.extract_email:Extract email addresses as metadata. EmailPlugin.desc:A plugin that reads email files. These are named with a simple number (i.e. as they appear in maildir folders) or with the extension .mbx (for mbox mail file format).\nDocument text: The document text consists of all the text after the first blank line in the document.\nMetadata (not Dublin Core!):\n\t$Headers All the header content (optional, not stored by default)\n\t$Subject Subject: header\n\t$To To: header\n\t$From From: header\n\t$FromName Name of sender (where available)\n\t$FromAddr E-mail address of sender\n\t$DateText Date: header\n\t$Date Date: header in GSDL format (eg: 19990924) EmailPlugin.no_attachments:Do not save message attachments. EmailPlugin.headers:Store email headers as "Headers" metadata. EmailPlugin.OIDtype.message_id:Use the message identifier as the document OID. If no message identifier is found, a hash OID will be used. EmailPlugin.split_exp:A perl regular expression used to split files containing many messages into individual documents. EmbeddedMetadataPlugin.desc:Plugin that extracts embedded metadata from a variety of file types. It is based on the CPAN module 'ExifTool', which includes support for over 70 file formats and 20 metadata formats. Highlights include: video formats such as AVI, ASF, FLV, MPEG, OGG Vorbis, and WMV; image formats such as BMP, GIF, JPEG, JPEG 2000 and PNG; audio formats such as AIFF, RealAudio, FLAC, MP3, and WAV; Office document formats such as Encapsulated PostScript, HTML, PDF, and Word. More details are available at the ExifTool home page http://www.sno.phy.queensu.ca/~phil/exiftool/ EmbeddedMetadataPlugin.join_before_split:Join fields with multiple entries (e.g. Authors or Keywords) before they are (optionally) split using the specified separator. 
EmbeddedMetadataPlugin.join_character:The character to use with join_before_split (default is a single space). EmbeddedMetadataPlugin.apply_join_before_split_to_metafields:Use in tandem with join_before_split. A regular expression specifying which metadata fields join_before_split will be applied to. By default, it will apply to any metadata fields whose names end in Keywords. Set the value to .* to apply to all metadata fields. Use the vertical bar as a separator to list specific metadata field names, e.g. a value of Keywords|Title|Creator will match metadata fields whose names are exactly any of Keywords, Title and Creator. EmbeddedMetadataPlugin.trim_whitespace:Trim whitespace from the start and end of any extracted metadata values (Note: this also applies to any values generated through joining with join_before_split or splitting through metadata_field_separator). EmbeddedMetadataPlugin.set_filter_list:A comma-separated list of the metadata sets we would like to retrieve. EmbeddedMetadataPlugin.set_filter_regexp:A regular expression that selects the metadata we would like to retrieve. ExcelPlugin.desc:A plugin for importing Microsoft Excel files (versions 95 and 97). FavouritesPlugin.desc:Plugin to process Internet Explorer Favourites files. FOXPlugin.desc:Plugin to process a Foxbase dbt file. This plugin only provides the basic functionality to read in the dbt and dbf files and process each record. A customized plugin based on this general one would need to be written for a particular database to process the appropriate fields. GreenstoneXMLPlugin.desc:Processes Greenstone Archive XML documents. Note that this plugin does no syntax checking (though the XML::Parser module tests for well-formedness). It's assumed that the Greenstone Archive files conform to their DTD. 
GreenstoneSQLPlugin.desc:Processes the contents of a Greenstone SQL database for metadata and/or full text of documents, and processes Greenstone Archive XML documents for the parts that are not in the database and for document structure. Note that this plugin does no syntax checking (though the XML::Parser module tests for well-formedness). It's assumed that the Greenstone Archive files conform to their DTD. GISExtractor.desc:Helper extractor plugin for extracting placenames from text. Requires the GIS extension to Greenstone. GISExtractor.extract_placenames:Extract placenames from within text and set as metadata. Requires the GIS extension to Greenstone. GISExtractor.gazetteer:Gazetteer to use to extract placenames from within text and set as metadata. Requires the GIS extension to Greenstone. GISExtractor.place_list:When extracting placenames, include a list of placenames at the start of the document. Requires the GIS extension to Greenstone. HathiTrustMETSPlugin.desc:Plugin that processes HathiTrust METS files which are accompanied by page-by-page OCR'd text files (in a subfolder with the same name as the METS file). HathiTrustMETSPlugin.headerpage:Add a top level header page (containing dummy text) to each document. HBPlugin.desc:Plugin which processes an HTML book directory. This plugin is used by the Humanity Development Library collections and does not handle input encodings other than ASCII or extended ASCII. This code is not very clean and could no doubt be made to run faster; by leaving it in this state we hope to encourage the utilisation of BookPlugin instead ;-)\n\nUse BookPlugin if creating a new collection and marking up files like the Humanity Library collections. BookPlugin accepts all input encodings but expects the marked up files to be cleaner than those used by the Humanity Library collections. HBPlugin.encoding.iso_8859_1:Latin1 (western languages) HTMLImagePlugin.aggressiveness:Range of related text extraction techniques to use. 
HTMLImagePlugin.aggressiveness.1:Filename, path, alternative text (ALT attributes in img HTML tags) only. HTMLImagePlugin.aggressiveness.2:All of 1, plus caption where available. HTMLImagePlugin.aggressiveness.3:All of 2, plus near paragraphs where available. HTMLImagePlugin.aggressiveness.4:All of 3, plus previous headers (<h1>, <h2> ...) where available. HTMLImagePlugin.aggressiveness.5:All of 4, plus textual references where available. HTMLImagePlugin.aggressiveness.6:All of 4, plus metadata tags in HTML pages (title, keywords, etc). HTMLImagePlugin.aggressiveness.7:All of 6, 5 and 4 combined. HTMLImagePlugin.aggressiveness.8:All of 7, plus duplicating filename, path, alternative text, and caption (raises ranking of more relevant results). HTMLImagePlugin.aggressiveness.9:All of 1, plus full text of source page. HTMLImagePlugin.caption_length:Maximum length of captions (in characters). HTMLImagePlugin.convert_params:Additional parameters for ImageMagick convert on thumbnail creation. For example, '-raise' will give a three-dimensional effect to thumbnail images. HTMLImagePlugin.desc:A plugin for extracting images and associated text from webpages. HTMLImagePlugin.document_text:Add image text as document:text (otherwise IndexedText metadata field). HTMLImagePlugin.index_pages:Index the pages along with the images. Otherwise reference the pages at the source URL. HTMLImagePlugin.max_near_text:Maximum characters near images to extract. HTMLImagePlugin.min_height:Pixels. Skip images shorter than this. HTMLImagePlugin.min_near_text:Minimum characters of near text or caption to extract. HTMLImagePlugin.min_size:Bytes. Skip images smaller than this. HTMLImagePlugin.min_width:Pixels. Skip images narrower than this. HTMLImagePlugin.neartext_length:Target length of near text (in characters). HTMLImagePlugin.no_cache_images:Don't cache images (point to URL of original). HTMLImagePlugin.smallpage_threshold:Images on pages smaller than this (bytes) will have the page metadata (title, keywords, etc) added. HTMLImagePlugin.textrefs_threshold:Threshold for textual references. Lower values mean the algorithm is less strict. HTMLImagePlugin.thumb_size:Max thumbnail size. Both width and height. HTMLPlugin.assoc_files:Perl regular expression of file extensions to associate with html documents.
HTMLPlugin.desc:This plugin processes HTML files. HTMLPlugin.description_tags:Split document into sub-sections where <Section> tags occur. '-keep_head' will have no effect when this option is set. HTMLPlugin.extract_style:Extract style and script information from the HTML tag and save as DocumentHeader metadata. This will be set in the document page as the _document:documentheader_ macro. HTMLPlugin.file_is_url:Set if input filenames make up the URL of the original source documents, e.g. if a web mirroring tool was used to create the import directory structure. HTMLPlugin.hunt_creator_metadata:Find as much metadata as possible on authorship and place it in the 'Creator' field. HTMLPlugin.keep_head:Don't remove headers from html files. HTMLPlugin.metadata_fields:Comma separated list of metadata fields to attempt to extract. Capitalise this as you want the metadata capitalised in Greenstone, since the tag extraction is case insensitive, e.g. Title,Date. Use 'tag<tagname>' to have the contents of the first <tagname> pair put in a metadata element called 'tagname', e.g. Title,Date,Author<Creator>. HTMLPlugin.metadata_field_separator:Separator character used in multi-valued metadata. Will split a metadata field value on this character, and add each item as individual metadata. HTMLPlugin.no_metadata:Don't attempt to extract any metadata from files. HTMLPlugin.no_strip_metadata_html:Comma separated list of metadata names, or 'all'. Used with -description_tags, it prevents stripping of HTML tags from the values for the specified metadata. HTMLPlugin.nolinks:Don't make any attempt to trap links (setting this flag may improve speed of building/importing but any relative links within documents will be broken). HTMLPlugin.no_image_links:Don't make any attempt to trap image links to allow viewing of images. HTMLPlugin.rename_assoc_files:Renames files associated with documents (e.g. images). Also creates a much shallower directory structure (useful when creating collections to go on CD-ROM). HTMLPlugin.sectionalise_using_h_tags:Automatically create a sectioned document using h1, h2, ... hX tags.
HTMLPlugin.title_sub:Substitution expression to modify string stored as Title. Used by, for example, PDFPlugin to remove "Page 1", etc from text used as the title. HTMLPlugin.tidy_html:If set, converts an HTML document into well-formed XHTML to enable users to view the document in the book format. HTMLPlugin.old_style_HDL:Mark whether the files in this collection use the old-style HDL document tags. BaseMediaConverter.enable_cache:Cache automatically generated files (such as thumbnails and screen-size images) so they don't need to be repeatedly generated. ImageConverter.apply_aspectpad: Pad images with a colour to a specified aspect ratio and orientation. ImageConverter.aspectpad_colour: The desired padding colour. ImageConverter.aspectpad_mode: Padding mode. ImageConverter.aspectpad_mode.al: Preserve the aspect orientation of the original image, but pad a square image to landscape format. ImageConverter.aspectpad_mode.ap: Preserve the aspect orientation of the original image, but pad a square image to portrait format. ImageConverter.aspectpad_mode.l: Force the result orientation to be landscape. ImageConverter.aspectpad_mode.p: Force the result orientation to be portrait. ImageConverter.aspectpad_ratio: The desired aspect ratio. ImageConverter.aspectpad_tolerance: Aspect tolerance. If the difference between the existing and desired aspect is less than the tolerance, no padding will be applied. ImageConverter.converttotype:Convert main image to format 's'. ImageConverter.create_screenview:If set to true, create a screen sized image, and set Screen, ScreenType, screenicon, ScreenWidth, ScreenHeight metadata. ImageConverter.create_thumbnail:If set to true, create a thumbnail version of each image, and add Thumb, ThumbType, thumbicon, ThumbWidth, ThumbHeight metadata. ImageConverter.desc:Helper plugin for image conversion using ImageMagick. ImageConverter.imagemagicknotinstalled:ImageMagick not installed ImageConverter.minimumsize:Ignore images smaller than n bytes.
ImageConverter.noconversionavailable:Image conversion not available ImageConverter.noscaleup:Don't scale up small images when making thumbnails. ImageConverter.screenviewsize:Make screenview images of size nxn. ImageConverter.screenviewtype:Make screenview images in format 's'. ImageConverter.store_original_image: Save the original image as an associated file. Only useful if -converttotype is used, as otherwise the original image is already stored. ImageConverter.disable_auto_orient: Disable ImageMagick from using its auto-orient option, where orientation EXIF metadata stored in an image is used to auto-rotate the generated image to the 'top-left' orientation. Having auto-orient on (which it is by default) is usually the right thing to do: when generating a PNG thumbnail from a JPG image, for example, the PNG thumbnail does not have the ability to store EXIF metadata in it, and so can end up being displayed at an incorrect orientation in Greenstone, despite the original being displayed correctly. ImageConverter.thumbnailsize:Make thumbnails of size nxn. ImageConverter.thumbnailtype:Make thumbnails in format 's'. ImageConverter.win95notsupported: ImageMagick not supported on Win95/98 ImagePlugin.desc:This plugin processes images, adding basic metadata. IndexPlugin.desc:This recursive plugin processes an index.txt file. The index.txt file should contain the list of files to be included in the collection followed by any extra metadata to be associated with each file.\n\nThe index.txt file should be formatted as follows: The first line may be a key (beginning with key:) to name the metadata fields (e.g. key: Subject Organization Date). The following lines will contain a filename followed by the value that each metadata entry is to be set to. (e.g. 'irma/iw097e 3.2 unesco 1993' will associate the metadata Subject=3.2, Organization=unesco, and Date=1993 with the file irma/iw097e if the above key line was used)\n\nNote that if any of the metadata fields use the Hierarchy classifier plugin then the value they're set to should correspond to the first field (the descriptor) in the appropriate classification file.\n\nMetadata values may be named separately using a tag (e.g. <Subject>3.2</Subject>) and this will override any name given to them by the key line. If there's no key line any unnamed metadata value will be named 'Subject'. ISISPlugin.desc:This plugin processes CDS/ISIS databases. For each CDS/ISIS database processed, three files must exist in the collection's import folder: the Master file (.mst), the Field Definition Table (.fdt), and the Cross-Reference File (.xrf). ISISPlugin.subfield_separator:The string used to separate subfields in CDS/ISIS database records. ISISPlugin.entry_separator:The string used to separate multiple values for single metadata fields in CDS/ISIS database records. KeyphraseExtractor.desc:Helper extractor plugin for generating keyphrases from text. Uses the Kea keyphrase extraction system. KeyphraseExtractor.extract_keyphrases:Extract keyphrases automatically with Kea (default settings). KeyphraseExtractor.extract_keyphrases_kea4:Extract keyphrases automatically with Kea 4.0 (default settings). Kea 4.0 is a new version of Kea that has been developed for controlled indexing of documents in the domain of agriculture. KeyphraseExtractor.extract_keyphrase_options:Options for keyphrase extraction with Kea. For example: mALIWEB - use ALIWEB extraction model; n5 - extract 5 keyphrases; eGBK - use GBK encoding. KeyphraseExtractor.keyphrases:keyphrases KeyphraseExtractor.missing_kea:Error: The Kea software could not be found at %s. Please download Kea %s from http://www.nzdl.org/Kea and install it in this directory. LaTeXPlugin.desc:Plugin for LaTeX documents.
LOMPlugin.desc:Plugin for importing LOM (Learning Object Metadata) files. LOMPlugin.root_tag:The DocType of the XML file (or a regular expression that matches the root element). LOMPlugin.check_timestamp:Check timestamps of previously downloaded files, and only download again if the source file is newer. LOMPlugin.download_srcdocs:Download the source document if one is specified (in general^identifier^entry or technical^location). This option should specify a regular expression to match filenames against before downloading. Note, this currently doesn't work for documents outside a firewall. MARCPlugin.desc:Basic MARC plugin. MARCXMLPlugin.desc:MARCXML plugin. MARCXMLPlugin.marc_metadata_mapping_file:Name of file that includes mapping details from MARC values to Greenstone metadata names. Defaults to 'marc2dc.txt' found in the site's etc directory. MediainfoOGVPlugin.desc:Plugin for importing OGV movie files. Requires Mediainfo (mediainfo.sourceforge.net) to be installed to extract metadata. MediainfoOGVPlugin.assoc_field:Name of the metadata field that will hold the movie file's name. MediaWikiPlugin.desc:Plugin for importing MediaWiki web pages. MediaWikiPlugin.show_toc: Add to the collection's About page the 'table of contents' from the MediaWiki website's main page. You need to specify a Perl regular expression in toc_exp below to match the 'table of contents' section. MediaWikiPlugin.delete_toc:Delete the 'table of contents' section on each HTML page. You need to specify a Perl regular expression in toc_exp below to match the 'table of contents' section. MediaWikiPlugin.toc_exp:A Perl regular expression to match the 'table of contents'. The default value matches common MediaWiki web pages. MediaWikiPlugin.delete_nav:Delete the navigation section. You need to specify a Perl regular expression in nav_div_exp below. MediaWikiPlugin.nav_div_exp:A Perl regular expression to match the navigation section. The default value matches common MediaWiki web pages.
MediaWikiPlugin.delete_searchbox:Delete the searchbox section. You need to specify a Perl regular expression in searchbox_div_exp below. MediaWikiPlugin.searchbox_div_exp:A Perl regular expression to match the searchbox section. The default value matches common MediaWiki web pages. MediaWikiPlugin.remove_title_suffix_exp:A Perl regular expression to trim the extracted title. For example, \\s-(.+) will trim title contents after "-". MetadataCSVDeprecatedPlugin.desc:An old plugin for metadata in comma-separated value format. The Filename field in the CSV file is used to determine which document the metadata belongs to. MetadataPass.desc:Helper base class to BaseImporter that supports metadata plugins using the metadata_read pass of import.pl. MetadataXMLPlugin.desc:Plugin that processes metadata.xml files. NutchTextDumpMARCXMLPlugin.keep_urls_file:File path or name of an optional whitelist file containing one URL per line, whose records are to be retained when processing each URL's record in the dump.txt files produced by nutch per website. Records whose URLs are not listed in the file will be discarded. For relative paths, the plugin will look for the file in the collection's etc directory. GreenstoneMETSPlugin.desc:Process Greenstone-style METS documents. MP3Plugin.desc:Plugin for processing MP3 files. MP3Plugin.assoc_images:Use Google image search to locate images related to the MP3 file based on ID3 Title and Artist metadata. MP3Plugin.applet_metadata:Used to store [applet] metadata for each document, containing the necessary HTML for an MP3 audio player applet to play that file. MP3Plugin.metadata_fields:Comma separated list of metadata fields to extract (assuming they are present) in an MP3 file. Use \"*\" to extract all the fields. NulPlugin.desc:Dummy (.nul) file plugin. Used with the files produced by exploding metadata database files. NulPlugin.assoc_field:Name of a metadata field that will be set for each nul file.
NulPlugin.add_metadata_as_text:Add a table of metadata as the text of the document, rather than "This document has no text". NulPlugin.remove_namespace_for_text:Remove namespaces from metadata names in the document text (if add_metadata_as_text is set). OAIPlugin.desc:Basic Open Archives Initiative (OAI) plugin. OAIPlugin.document_field:The metadata element specifying the file name of documents to attach the metadata to. OAIPlugin.metadata_set:Metadata set (namespace prefix) to import all metadata as. OAIPlugin.metadata_set.auto:Use the prefixes specified in the OAI record. OAIPlugin.metadata_set.dc: Use the dc prefix. Will map qualified dc elements into their Greenstone form, e.g. spatial becomes dc.Coverage^spatial. OAIMetadataXMLPlugin.desc:Version of MetadataXMLPlugin that processes metadata.xml files. Additionally, it uses the "dc.Identifier" field and extracts OAI metadata from the specified OAI server (-oai_server_http_path). OAIMetadataXMLPlugin.oai_server_http_path: HTTP path to the OAI server, e.g. http://test.com/oai_server/oai.pl OAIMetadataXMLPlugin.metadata_prefix: OAI metadata prefix - default oai_dc OAIMetadataXMLPlugin.koha_mode: If specified, the plugin will try to generate the oaiextracted.koharecordlink metadata. This metadata contains the link back to the Koha document. OggVorbisPlugin.add_technical_metadata:Add technical (e.g. bitrate) metadata. OggVorbisPlugin.desc:A plugin for importing Ogg Vorbis audio files. OpenDocumentPlugin.desc:Plugin for OASIS OpenDocument format documents (used by OpenOffice 2.0). PagedImagePlugin.desc:Plugin for documents made up of a sequence of images, with optional OCR text for each image. This plugin processes .item files which list the sequence of image and text files, and provide metadata. PagedImagePlugin.documenttype:Set the document type (used for display). PagedImagePlugin.documenttype.auto2:Automatically set document type based on item file format.
Uses 'paged' for documents with a single sequence of pages, and 'hierarchy' for documents with internal structure (i.e. from XML item files containing PageGroup elements). PagedImagePlugin.documenttype.auto3:Automatically set document type based on item file format. Uses 'paged' for documents with a single sequence of pages, and 'pagedhierarchy' for documents with internal structure (i.e. from XML item files containing PageGroup elements). PagedImagePlugin.documenttype.paged2:Paged documents have a linear sequence of pages and no internal structure. They will be displayed with next and previous arrows and a 'go to page X' box. PagedImagePlugin.documenttype.paged3:Paged documents have a linear sequence of pages and no internal structure. They will be displayed with a scrolling list of page images. PagedImagePlugin.documenttype.hierarchy: Hierarchical documents have internal structure and will be displayed with a table of contents. PagedImagePlugin.documenttype.pagedhierarchy: (Greenstone 3 only) These documents have internal structure and sequences of pages. They will be displayed with a table of contents and scrolling lists of pages. PagedImagePlugin.headerpage:Add a top level header page (that contains no image) to each document. PDFPlugin.allowimagesonly:Allow PDF files with no extractable text. Avoids the need to have -complex set. Only useful with -convert_to html. PDFPlugin.complex:Create more complex output. With this option set, the output html will look much more like the original PDF file. For this to function properly you must have Ghostscript installed (on *nix, gs should be on your path, while on Windows you must have gswin32c.exe on your path). PDFPlugin.convert_to.html:Very basic HTML comprising just the extracted text, no images. PDFPlugin.convert_to.pretty_html:Each PDF page as HTML containing selectable text positionally overlaid on top of a textless screenshot of the PDF page.
PDFPlugin.convert_to.paged_pretty_html:Sectionalised pretty_html, where each page's html is its own section. PDFPlugin.deprecated_plugin:*************IMPORTANT******************\nPDFPlugin is being deprecated.\nConsider upgrading to the recommended PDFv2Plugin, which supports newer versions of PDFs.\nAlternatively, if you wish to retain the old style of conversion and are NOT relying on PDFBox,\nchange to PDFv1Plugin.\nIf you are using PDFBox then upgrade to PDFv2Plugin.\n*****************************************\n PDFPlugin.desc:Deprecated plugin that processes PDF documents. Upgrade to PDFv2Plugin for the newest PDF capabilities including pdfbox_conversion, or to PDFv1Plugin if you really want the old pdf to html conversion and aren't using pdfbox_conversion. PDFv1Plugin.desc:Plugin that processes PDF documents using the older pdftohtml tool. Does not support newer PDF versions. PDFv2Plugin.desc:Plugin that processes PDF documents using PDFBox and xpdftools. Supports newer PDF versions. PDFPlugin.html_for_realistic_book:PDFs will be converted to HTML for realistic book functionality. PDFPlugin.nohidden:Prevent pdftohtml from attempting to extract hidden text. This is only useful if the -complex option is also set. PDFPlugin.noimages:Don't attempt to extract images from PDF. PDFv2Plugin.auto_output_default:Defaulting to output format %s PDFPlugin.use_realistic_book:Converts the PDF to a well-formed XHTML document to enable users to view it in the realistic book format. PDFPlugin.use_sections:Create a separate section for each page of the PDF file. PDFPlugin.win_old_pdftotext_unsupported:*** On Windows, PDFPlugin pdfbox_conversion must be turned on for text output. PDFs will be converted to HTML instead.\n*** Use PDFv2Plugin for additional pdf to text conversion options. PDFv1Plugin.win_old_pdftotext_unsupported:*** On Windows, PDFv1Plugin does not support pdf to text.
PDFs will be converted to HTML instead.\n*** Use PDFv2Plugin if you want pdf to actual text conversion. PDFPlugin.zoom:The factor by which to zoom the PDF for output. Only useful if -complex is set. PDFv2Plugin.dpi:The resolution in DPI of background images generated when convert_to is set to any of the pagedimg(txt) and (paged_)pretty_html formats. PostScriptPlugin.desc:This is a \"poor man's\" ps to text converter. If you are serious, consider using the PRESCRIPT package, which is available for download at http://www.nzdl.org/html/software.html PostScriptPlugin.extract_date:Extract date from PS header. PostScriptPlugin.extract_pages:Extract pages from PS header. PostScriptPlugin.extract_title:Extract title from PS header. PowerPointPlugin.desc:A plugin for importing Microsoft PowerPoint files. PowerPointPlugin.windows_scripting:Use Microsoft Windows scripting technology (Visual Basic for Applications) to get PPT to convert documents to various image types (e.g. JPEG, PNG, GIF) rather than rely on the open source package ppttohtml. PowerPointPlugin.convert_to.html_multi:A series of HTML pages, two per slide. One for the slide image, one for the slide text (needs -openoffice_conversion). PowerPointPlugin.convert_to.pagedimg:A series of JPEG images (needs -openoffice_conversion). PowerPointPlugin.convert_to.pagedimg_jpg:A series of images in JPEG format (needs -windows_scripting). PowerPointPlugin.convert_to.pagedimg_gif:A series of images in GIF format (needs -windows_scripting). PowerPointPlugin.convert_to.pagedimg_png:A series of images in PNG format (needs -windows_scripting). PreProcessPlugin.desc:A plugin that can be used to run an external pre-processing command before the main importing phase takes place. With -run_once, this will be run only once (when the import directory is processed), and not on a particular document. Use -process_exp or -process_extension to have the command run on each matching document.
In this case, the original files will be blocked from further processing, unless -no_block_original_file is set. PreProcessPlugin.run_once:Only run this once at the start of import. PreProcessPlugin.process_extension:A simpler version of process_exp which doesn't need regular expressions, e.g. 'txt'. PreProcessPlugin.process_exp:A perl regular expression to match against filenames. Matching filenames will have the exec_cmd run on them. For example, using '(?i).html?\$' matches all documents ending in .htm or .html (case-insensitive). PreProcessPlugin.exec_cmd:The command to run. Use %%%%INPUT_FILE as a placeholder in the command for the input filename (if there is one). %%%%GSDLHOME, %%%%GSDL3HOME, %%%%GSDL3SRCHOME and %%%%GSDLCOLLECTDIR can be used to make full paths to the command (they will be replaced by the equivalent environment variable value). PreProcessPlugin.no_block_original_file:Don't block the matched files from further processing. PrintInfo.bad_general_option:The %s plugin uses an incorrect option. Check your collect.cfg configuration file. PrintInfo.desc:The most basic plugin; handles printing of info (using pluginfo.pl) and parsing of arguments. PrintInfo.site:The name of the Greenstone 3 site. The default site for a GS3 installation is localsite. ProCitePlugin.desc:A plugin for (exported) ProCite databases. ProCitePlugin.entry_separator:The string used to separate multiple values for single metadata fields in ProCite database records. ReadTextFile.could_not_extract_encoding:WARNING: encoding could not be extracted from %s - defaulting to %s ReadTextFile.could_not_extract_language:WARNING: language could not be extracted from %s - defaulting to %s ReadTextFile.could_not_open_for_reading:could not open %s for reading ReadTextFile.default_encoding:Use this encoding if -input_encoding is set to 'auto' and the text categorization algorithm fails to extract the encoding or extracts an encoding unsupported by Greenstone.
This option can take the same values as -input_encoding. ReadTextFile.default_language:If Greenstone fails to work out what language a document is in, the 'Language' metadata element will be set to this value. The default is 'en' (ISO 639 language symbols are used: en = English). Note that if -input_encoding is not set to 'auto' and -extract_language is not set, all documents will have their 'Language' metadata set to this value. ReadTextFile.desc:Base plugin for files that are plain text. ReadTextFile.empty_file:file contains no text ReadTextFile.extract_language:Identify the language of each document and set 'Language' metadata. Note that this will be done automatically if -input_encoding is 'auto'. ReadTextFile.file_has_no_text:ERROR: %s contains no text ReadTextFile.input_encoding:The encoding of the source documents. Documents will be converted from these encodings and stored internally as utf8. ReadTextFile.input_encoding.auto:Use a text categorization algorithm to automatically identify the encoding of each source document. This will be slower than explicitly setting the encoding but will work where more than one encoding is used within the same collection. ReadTextFile.read_denied:Read permission denied for %s ReadTextFile.separate_cjk:Insert spaces between Chinese/Japanese/Korean characters to make each character a word. Use if the text is not segmented. ReadTextFile.unsupported_encoding:WARNING: %s appears to be encoded in an unsupported encoding (%s) - using %s ReadTextFile.wrong_encoding:WARNING: %s was read using %s encoding but appears to be encoded as %s. ReadXMLFile.desc:Base class for XML plugins. ReadXMLFile.xslt:Transform a matching input document with the XSLT in the named file. A relative filename is assumed to be in the collection's file area, for instance etc/mods2dc.xsl. RealMediaPlugin.desc:A plugin for processing Real Media files. ReferPlugin.desc:ReferPlugin reads bibliography files in Refer format.
RogPlugin.desc:Creates simple single-level documents from .rog or .mdb files. RTFPlugin.desc:Plugin for importing Rich Text Format files. SourceCodePlugin.desc:Filename is currently used for Title (optionally minus some prefix). Current languages:\ntext: READMEs/Makefiles\nC/C++ (currently extracts #include statements and C++ class decls)\nPerl (currently only done as text)\nShell (currently only done as text) SourceCodePlugin.remove_prefix:Remove this leading pattern from the filename (e.g. -remove_prefix /tmp/XX/src/). The default is to remove the whole path from the filename. SplitTextFile.desc:SplitTextFile is a plugin for splitting input files into segments that will then be individually processed. This plugin should not be called directly. Instead, if you need to process input files that contain several documents, you should write a plugin with a process function that will handle one of those documents and have it inherit from SplitTextFile. See ReferPlugin for an example. SplitTextFile.split_exp:A perl regular expression to split input files into segments. SplitJSONFile.split_exp:A 'dot notation' string that specifies the (potentially nested) field within the JSON to split on, for example 'corpus.documents' to select the 'documents' field that is itself contained within the 'corpus' field in a JSON file. SplitJSONFile.metadata_exp:An optional comma separated list of 'dot notation' strings that specify the fields -- within the split-up JSON -- to set as metadata, for example 'title,date.created,oclc_refnum->docid'. In the case of 'oclc_refnum->docid' this takes the JSON field 'oclc_refnum' and sets it as the 'docid' metadata in Greenstone. SplitJSONFile.file_exp:An optional 'dot notation' string that specifies the field -- within the split-up JSON -- to use as the file that the metadata in the JSON record being processed maps to.
If the file is not present on the file system, then a Greenstone document is formed with just the metadata in it. StructuredHTMLPlugin.desc:A plugin to process structured HTML documents, splitting them into sections based on style information. StructuredHTMLPlugin.delete_toc:Remove any table of contents, list of figures etc from the converted HTML file. Styles for these are specified by the toc_header option. StructuredHTMLPlugin.title_header:Possible user-defined styles for the title header. StructuredHTMLPlugin.level1_header:Possible user-defined styles for the level1 header in the HTML document (equivalent to <h1>). StructuredHTMLPlugin.level2_header:Possible user-defined styles for the level2 header in the HTML document (equivalent to <h2>). StructuredHTMLPlugin.level3_header:Possible user-defined styles for the level3 header in the HTML document (equivalent to <h3>). StructuredHTMLPlugin.toc_header:Possible user-defined header styles for the table of contents, table of figures etc, to be removed if delete_toc is set. TabSeparatedPlugin.desc: A plugin for tab-separated metadata files. TextPlugin.desc:Creates a simple single-level document. Adds Title metadata from the first line of text (up to 100 characters long). TextPlugin.title_sub:Substitution expression to modify string stored as Title. Used by, for example, PostScriptPlugin to remove "Page 1" etc from text used as the title. UnknownConverterPlugin.desc:If you have a custom conversion tool installed that you're able to run from the command line to convert from an unsupported document format to text, HTML or a series of images in jpg, png or gif form, then provide that command to this plugin. It will then run the command for you, capturing the output for indexing by Greenstone, making the documents (if converted to text or HTML) searchable. Set the -process_extension option to the suffix of files to be converted. Set the -convert_to option to the output format that the conversion command will generate, which will determine the output file's suffix. Set the -exec_cmd option to the command to be run. UnknownConverterPlugin.exec_cmd:Command line command string to execute that will do the conversion. Quoted elements need to have the quotes escaped with a backslash to preserve them. Use %%%%INPUT_FILE and %%%%OUTPUT as placeholders in the command for the input and output filenames, respectively. (You can optionally use %%%%GSDLHOME, %%%%GSDL3HOME, %%%%GSDL3SRCHOME in place of the similarly named environment variables, to set the exec_cmd value to a command that will function across operating systems.) Greenstone will replace all these placeholder variables with the correct values when calling the command. If -convert_to is a pagedimg type, Greenstone sets %%%%OUTPUT to be a directory to contain the expected files and will create an item file collating the parts of the document.
UnknownConverterPlugin.output_file_or_dir_name: Full pathname of the output file, or of the directory (of output files), generated by the conversion. UnknownPlugin.assoc_field:Name of the metadata field that will hold the associated file's name. UnknownPlugin.desc:This is a simple plugin for importing files in formats that Greenstone doesn't know anything about. A fictional document will be created for every such file, and the file itself will be passed to Greenstone as the \"associated file\" of the document. UnknownPlugin.file_format:Type of the file (e.g. MPEG, MIDI, ...) UnknownPlugin.mime_type:Mime type of the file (e.g. image/gif). Google the mime type for your file extension. UnknownPlugin.process_extension:Process files with this file extension. This option is an alternative to process_exp that is simpler to use but less flexible. UnknownPlugin.srcicon:Specify a macro name (without underscores) to use as srcicon metadata. WordPlugin.desc:A plugin for importing Microsoft Word documents. WordPlugin.windows_scripting:Use Microsoft Windows scripting technology (Visual Basic for Applications) to get Word to convert documents to HTML rather than rely on the open source package WvWare. Causes the Word application to open on screen if not already running. WordPlugin.metadata_fields:Used to retrieve metadata from the HTML document converted by VB scripting. Allows users to define a comma separated list of metadata fields to attempt to extract. Use 'tag<tagname>' to have the contents of the first <tagname> pair put in a metadata element called 'tagname'. Capitalise this as you want the metadata capitalised in Greenstone, since the tag extraction is case insensitive. WordPlugin.generate_pdf_as_associated_file:Use this option to generate a PDF version of the Word document, and have it included as an associated file (needs -openoffice_conversion).
ZIPPlugin.desc:Plugin which handles compressed and/or archived input formats. Currently handled formats and file extensions are:\ngzip (.gz, .z, .tgz, .taz)\nbzip (.bz)\nbzip2 (.bz2)\nzip (.zip .jar)\ntar (.tar)\n\nThis plugin relies on the following utilities being present (if trying to process the corresponding formats):\ngunzip (for gzip)\nbunzip (for bzip)\nbunzip2 (for bzip2)\nunzip (for zip)\ntar (for tar)
#
# Download module option descriptions
#
BaseDownload.desc:Base class for Download modules
BaseDownload.bad_general_option:The %s download module uses an incorrect option.
MediaWikiDownload.desc:A module for downloading from MediaWiki websites
MediaWikiDownload.reject_filetype:List of URL patterns to ignore, separated by commas, e.g. *cgi-bin*,*.ppt ignores hyperlinks that contain either 'cgi-bin' or '.ppt'
MediaWikiDownload.reject_filetype_disp:Ignore URL patterns
MediaWikiDownload.exclude_directories:List of directories to exclude (each must be an absolute path to the directory), e.g. /people,/documentation will exclude the 'people' and 'documentation' subdirectories of the site currently being crawled.
MediaWikiDownload.exclude_directories_disp:Exclude directories
OAIDownload.desc:A module for downloading from OAI repositories
OAIDownload.url_disp:Source URL
OAIDownload.url:OAI repository URL
OAIDownload.set_disp:Restrict to set
OAIDownload.set:Restrict the download to the specified set in the repository
OAIDownload.metadata_prefix_disp:Metadata prefix
OAIDownload.metadata_prefix:The metadata format used in the export, e.g. oai_dc, qdc, etc. Press the button to find out what formats are supported.
OAIDownload.get_doc_disp:Get document
OAIDownload.get_doc:Download the source document if one is specified in the record
OAIDownload.get_doc_exts_disp:Only include file types
OAIDownload.get_doc_exts:Permissible filename extensions of documents to get
OAIDownload.max_records_disp:Max records
OAIDownload.max_records:Maximum number of records to download
SRWDownload.desc:A module for downloading from SRW (Search/Retrieve Web Service) repositories
WebDownload.desc:A module for downloading from the Internet via HTTP or FTP
WebDownload.url:Source URL. In the case of HTTP redirects, this value may change
WebDownload.url_disp:Source URL
WebDownload.depth:How many hyperlinks deep to go when downloading
WebDownload.depth_disp:Download Depth
WebDownload.below:Only mirror files below this URL
WebDownload.below_disp:Only files below URL
WebDownload.within:Only mirror files within the same site
WebDownload.within_disp:Only files within site
WebDownload.html_only:Download only HTML files, and ignore associated files, e.g. images and stylesheets
WebDownload.html_only_disp:Only HTML files
WebDownload.proxied_connect_failed_info:Current proxy settings are:
WebDownload.http_proxy_settings:- HTTP host=%s : port=%s
WebDownload.https_proxy_settings:- HTTPS host=%s : port=%s
WebDownload.ftp_proxy_settings:- FTP host=%s : port=%s
WebDownload.proxyless_connect_failed_info:- The external server might not be responding\n- or you might need to switch on proxy settings
WebDownload.connect_failed_info:- or try ticking No Certificate Checking (affects 'https' URLs)\nin File > Preferences > Connection
WgetDownload.desc:Base class that handles calls to wget
WgetDownload.proxy_on:Proxy on
WgetDownload.http_proxy_host:HTTP proxy host
WgetDownload.http_proxy_port:HTTP proxy port
WgetDownload.https_proxy_host:HTTPS proxy host
WgetDownload.https_proxy_port:HTTPS proxy port
WgetDownload.ftp_proxy_host:FTP proxy host
WgetDownload.ftp_proxy_port:FTP proxy port
WgetDownload.user_name:User name
WgetDownload.user_password:User password
WgetDownload.no_check_certificate:No check certificate
WgetDownload.wget_timed_out_warning:WARNING: wget timed out %s times waiting for a response.\n\tThe URL may be inaccessible, or the proxy configuration may be wrong or incomplete.\n
Z3950Download.desc:A module for downloading from Z3950 repositories
Z3950Download.host:Host URL
Z3950Download.host_disp:Host
Z3950Download.port:Port number of the repository
Z3950Download.port_disp:Port
Z3950Download.database:Database to search for records in
Z3950Download.database_disp:Database
Z3950Download.find:Retrieve records containing the specified search term
Z3950Download.find_disp:Find
Z3950Download.max_records:Maximum number of records to download
Z3950Download.max_records_disp:Max Records
#
# Plugout option descriptions
#
BasPlugout.bad_general_option:The %s plugout uses an incorrect option.
BasPlugout.debug:Set debugging mode
BasPlugout.desc:Base class for all the export plugouts.
BasPlugout.group_size:Number of documents to group into one XML file.
BasPlugout.gzip_output:Use gzip to compress the resulting XML documents (don't forget to include ZIPPlugin in your plugin list when building from compressed documents).
# '
BasPlugout.no_auxiliary_databases:Don't generate archivesinf databases - useful when exporting.
# '
BasPlugout.site:The name of the Greenstone 3 site. The default site for a GS3 installation is localsite.
BasPlugout.output_handle:The file descriptor used to receive output data.
BasPlugout.output_info:The reference to an arcinfo object used to store information about the archives.
BasPlugout.verbosity:Controls the quantity of plugout processing output. 0=none, 3=lots.
BasPlugout.xslt_file:Transform a document with the XSLT in the named file.
BasPlugout.subdir_hash_prefix:Specify this flag to not count the word HASH in the split length calculation.
BasPlugout.subdir_split_length:The maximum number of characters before splitting an archives subdirectory.
BasPlugout.no_rss:Suppress the automatic generation of the RSS feed file.
BasPlugout.rss_title:Comma-separated list of metadata fields, listed in order of preference, from which the title for a document's RSS link is to be obtained
DSpacePlugout.desc:DSpace Archive format.
DSpacePlugout.metadata_prefix:Comma-separated list of metadata prefixes to include in the exported data. For example, setting this value to 'dls' will generate a metadata_dls.xml file for each document exported in the format needed by DSpace.
FedoraMETSPlugout.desc:METS format using the Fedora profile.
FedoraMETSPlugout.fedora_namespace:The prefix used in Fedora for process ids (PIDS), e.g. greenstone:HASH0122efe4a2c58d0
GreenstoneXMLPlugout.desc:Greenstone XML Archive format.
GreenstoneMETSPlugout.desc:METS format using the Greenstone profile.
MARCXMLPlugout.desc:MARC XML format.
MARCXMLPlugout.group:Output the MARC XML records into a single file.
MARCXMLPlugout.mapping_file:Use the named mapping file for the transformation.
METSPlugout.desc:Superclass plugout for METS format. Provides common functionality and key abstract methods for profiles such as GreenstoneMETS and FedoraMETS.
METSPlugout.xslt_txt:Transform a METS document's doctxt.xml with the XSLT in the named file.
METSPlugout.xslt_mets:Transform a METS document's docmets.xml with the XSLT in the named file.
GreenstoneSQLPlugout.desc:Output metadata and/or full text to a MySQL database instead of doc.xml. For Greenstone 3, the database name is the GS3 site name. For Greenstone 2, the database name is greenstone2. The basic saveas.options for this plugout are the same as the basic options for the matching GreenstoneSQLPlugin.
#
# GreenstoneSQLPlug strings are shared by both GreenstoneSQLPlugout and GreenstoneSQLPlugin
#
GreenstoneSQLPlug.process_mode:Setting that determines whether full text and/or metadata will be output to a MySQL database instead of to doc.xml during import. Choose one of meta_only, text_only, or all (default).
GreenstoneSQLPlug.process_mode.all:Import stage outputs the full text and metadata to a MySQL database instead of to doc.xml.
GreenstoneSQLPlug.process_mode.meta_only:Import stage outputs the metadata to a MySQL database and any text to doc.xml.
GreenstoneSQLPlug.process_mode.text_only:Import stage outputs the full text to a MySQL database and any metadata to doc.xml.
GreenstoneSQLPlug.db_driver:The database driver. Support has been implemented for MySQL so far, so the default is mysql.
GreenstoneSQLPlug.db_client_user:The username with which you connect to the (My)SQL database, root by default.
GreenstoneSQLPlug.db_client_pwd:The password with which you connect to the (My)SQL database.
GreenstoneSQLPlug.db_host:The hostname on which the (My)SQL database server is running, 127.0.0.1 by default. Other values to try include localhost.
GreenstoneSQLPlug.db_port:If your (My)SQL database server is NOT using the default port, then specify the port number here. Otherwise leave this field empty.
GreenstoneSQLPlug.rollback_on_cancel:Support for undo on cancel. Set to true to support rollbacks on cancel. Transactions are then only committed to the database at the end of import and buildcol. Set to false if you do not want undo support, in which case SQL statements are autocommitted to the database.
gsmysql.backup_on_build_msg:SQL DB CANCEL SUPPORT ON.\n To have the filesystem mimic the Rollback On Cancel behaviour of the GreenstonePlugs,\n you first need to manually back up your collection's 'archives' and 'index' subfolders\n so you can manually restore them on cancel when the SQL database is automatically rolled back.\n \n Example backup commands:\n%s\n If you don't want to continue, press Ctrl-C to cancel now.
gsmysql.restore_backups_on_build_cancel_msg:SQL database rolled back.\n If you backed up your collection's 'archives' and 'index' subfolders,\n then restore the backups now.
#
# Perl module strings
#
classify.could_not_find_classifier:ERROR: Could not find classifier \"%s\"
download.could_not_find_download:ERROR: Could not find download module \"%s\"
plugin.could_not_find_plugin:ERROR: Could not find plugin \"%s\"
plugin.including_archive:including the contents of 1 ZIP/TAR archive
plugin.including_archives:including the contents of %d ZIP/TAR archives
plugin.kill_file:Process killed by .kill file
plugin.n_considered:%d documents were considered for processing
plugin.n_included:%d were processed and included in the collection
plugin.n_rejected:%d were rejected
plugin.n_unrecognised:%d were unrecognised
plugin.no_plugin_could_process:WARNING: No plugin could process %s
plugin.no_plugin_could_recognise:WARNING: No plugin could recognise %s
plugin.no_plugin_could_process_this_file:no plugin could process this file
plugin.no_plugin_could_recognise_this_file:no plugin could recognise this file
plugin.one_considered:1 document was considered for processing
plugin.one_included:1 was processed and included in the collection
plugin.one_rejected:1 was rejected
plugin.one_unrecognised:1 was unrecognised
plugin.see_faillog:See %s for a list of unrecognised and/or rejected documents
PrintUsage.default:Default
PrintUsage.required:REQUIRED
plugout.could_not_find_plugout:ERROR: Could not find plugout \"%s\"