Interchange Databases
=====================
icdatabase.1.66 (Draft)
1. Databases and Interchange
============================
Interchange is database-independent, arguably more so than any other
comparably powerful content management system.
Interchange can use GDBM, DB_File, SQL, LDAP, or in-memory databases.
In most cases, these different database formats should operate the
same when called by Interchange's access methods.
Also, almost all of Interchange's core functions do not use hard-coded
field names; virtually every field can have a configurable name.
Interchange does not require an external SQL database. If you have a
small data set and do not want to integrate your own tool set, you
could use Interchange's internal database. However, the order
management functions of Interchange will be slower and not as robust
without an SQL database. SQL is strongly recommended for at least the
state, country, orderline, transactions, and userdb tables. Any other
tables that will have programmatic updates, such as inventory, will be
best placed in SQL.
If you plan on using the Interchange Admin UI, you should move to
SQL. It provides easy import routines for text files that should
replace text-file uploads.
Keeping a database in an SQL manager makes it easier to integrate
Interchange with other tools. Interchange can be used to maintain a
spreadsheet containing product information through modifying the file
products.txt as needed. If you do not plan to use SQL, references
to SQL, DBI, and DBD can be ignored.
1.1. Text Source Files
----------------------
Interchange reads delimited text files to obtain its initial data.
However, the text files are not the database. They are the source
information for the database tables.
By default, all database source files are located in the products
subdirectory of the catalog directory. The main products database is
in the products/products.txt file in the supplied demo catalog.
Note: If you are using one of the internal database methods, any
changes made to the ASCII source file will be reflected in the
database in the next user session. If the product database contains
less than a thousand records, updates will be instantaneous. If the
product database is larger, updates will take longer. Use the NoImport
directive to stop automatic updating.
In the following configuration directive:
Database products products.txt TAB
the products table will obtain its source information from the file
products.txt. What is done with it depends on the type of underlying
database being used. The different types and their behavior are
described below:
GDBM
The database source file is checked to see if it is newer than
the actual database file, products.gdbm. If it is, the database
table is re-imported from the file.
This behavior can be changed in a few ways. If files should not
be imported unless the .gdbm file disappears, set the NoImport
directive:
NoImport products
If the database source file is only to be imported at catalog
start-up time, use the IMPORT_ONCE modifier:
Database products IMPORT_ONCE 1
GDBM is the default database type if the GDBM_File Perl module is
installed (as it is on LINUX).
DB_File
The database source file is checked to see if it is newer than
the actual database file, products.db. If it is, the database
table is re-imported from the file. You can change this behavior
in the same way as GDBM_File, described above.
DB_File is the default database type if the GDBM_File Perl module
is not installed. This is common on FreeBSD. To specify DB_File
as your database type, set it in catalog.cfg with a Database
directive:
Database products DB_FILE 1
DBI/SQL
If a file named products.sql is in the same directory as
products.txt, the database table will not be imported from the
ASCII source. If there is no products.sql, the following will
occur:
DBI/SQL imports will only happen at catalog configuration time.
Interchange will connect to the SQL database using the
specified DSN. (A DBI parameter meaning "Data Source
Name".)
The table will be dropped with "DROP TABLE products;". This
will occur without warning. NOTE: This can be prevented in
several ways. See NoImport External or the SQL documentation
for more information.
The table will be created. If there are COLUMN_DEF
specifications in catalog.cfg, they will be used. Otherwise,
the key (first field in the text file by default) will be
created with a char(16) type and all other fields will be
created as char(128). The table creation statement will be
written to the error.log file.
The text source file will be imported into the SQL database.
Interchange will place the data in the columns. Data typing
must be user-configured. This means that if the text "none" is
placed in a field defined as a numeric type, the database
import will not succeed; and if the import fails, the catalog
will not become active.
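The default column typing described above can be sketched in Python. This is an illustration of the kind of statement Interchange writes to error.log, not its actual import code; the function name is hypothetical:

```python
# Sketch: the default CREATE TABLE statement when no COLUMN_DEF is given.
# The key (first field of the text file) becomes char(16); every other
# field becomes char(128). Illustration only, not Interchange's code.

def default_create_statement(table, header_line, delimiter="\t"):
    fields = header_line.rstrip("\n").split(delimiter)
    cols = [f"{fields[0]} char(16)"]                      # the key field
    cols += [f"{name} char(128)" for name in fields[1:]]  # everything else
    return f"CREATE TABLE {table} ({', '.join(cols)})"

print(default_create_statement("products", "code\tdescription\tprice\timage"))
# -> CREATE TABLE products (code char(16), description char(128),
#    price char(128), image char(128))
```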
In-Memory
Every time the catalog is configured, the products.txt file is
imported into memory and forms the database. Otherwise, the
database is not changed. The in-memory database is the default
database if there is no GDBM_File or DB_File Perl module
installed; specify it with:
Database products MEMORY 1
1.2. Interchange Database Conventions
-------------------------------------
This section describes naming and file usage conventions used with
Interchange.
Note: Throughout the documentation, the following terms and their
definitions are used interchangeably:
key, code
A reference to the database key. In Interchange, this is usually
the product code or SKU, which is the part number for the
product. Other key values may be used to generate relationships
to other database tables.
It is recommended that the key be the first column of the ASCII
source file, since Interchange's import, export, and search
facilities rely on this practice.
field, column
A vertical column of a database table. One of the columns is
always the key, and it is usually the first one.
table, database
A table in the database. Because Interchange has evolved from a
single-table database to an access method for an unlimited number
of tables (and databases, for that matter), a table will
occasionally be referred to as a database. The only time the term
database refers to something different is when describing the
concept as it relates to SQL, where a database contains a series
of tables. While Interchange cannot create SQL databases, it can
drop and create tables within that database if given the proper
permissions.
If necessary, Interchange can read the data to be placed in tables
from a standard ASCII-delimited file. All of the ASCII source files
are kept in the products directory, which is normally in the catalog
directory (where catalog.cfg is located). The ASCII files can have ^M
(carriage return) characters, but must have a new line character at
the end of the line to work. NOTE: Mac users uploading files must use
ASCII mode, not binary mode.
Interchange's default ASCII delimiter is TAB.
Note: The items must be separated by a single delimiter. The items in
this document are lined up for reading convenience.
TAB
Fields are separated by ^I characters. No whitespace is allowable
at the beginning of the line.
code description price image
SH543 Men's fine cotton shirt 14.95 shirts.jpg
PIPE
Fields are separated by pipe | characters. No whitespace is
allowable at the beginning of the line.
code|description|price|image
SH543|Men's fine cotton shirt|14.95|shirts.jpg
CSV
Fields are enclosed in quotes, separated by commas. No whitespace
should be at the beginning of the line.
"code","description","price","image"
"SH543","Men's fine cotton shirt","14.95","shirts.jpg"
Note: Using the default TAB delimiter is recommended if you plan on
searching the ASCII source file of the database. PIPE works fairly
well, but CSV delimiter schemes might cause problems with searching.
IMPORTANT NOTE: Field names are usually case-sensitive. Use
consistency when naming or you might encounter problems. All lower or
all upper case names are recommended.
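All three delimiter schemes can be read with Python's standard csv module. A minimal sketch using the sample rows above (this is not Interchange's importer, just an illustration of the formats):

```python
# Sketch: reading TAB-, PIPE-, and CSV-delimited source rows with the
# standard csv module. The sample data matches the examples above.
import csv
import io

tab_src = "code\tdescription\tprice\timage\nSH543\tMen's fine cotton shirt\t14.95\tshirts.jpg\n"
pipe_src = "code|description|price|image\nSH543|Men's fine cotton shirt|14.95|shirts.jpg\n"
csv_src = '"code","description","price","image"\n"SH543","Men\'s fine cotton shirt","14.95","shirts.jpg"\n'

def read_source(text, delimiter):
    # DictReader takes field names from the first line, as Interchange does.
    return list(csv.DictReader(io.StringIO(text), delimiter=delimiter))

for src, delim in ((tab_src, "\t"), (pipe_src, "|"), (csv_src, ",")):
    rows = read_source(src, delim)
    assert rows[0]["price"] == "14.95"
    print(rows[0]["code"], rows[0]["description"])
```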
Interchange uses one mandatory database, which is referred to as
the products database. In the supplied demo catalog, it is called
products and the ASCII source is kept in the file products.txt in
the products directory. This is also the default file for
searching with THE SEARCH ENGINE.
Interchange also has two standard, but optional, databases
that are in fixed formats:
shipping.asc
The database of shipping options that is accessed if the
CustomShipping directive is in use. This is a fixed-format
database, and must be created as specified. For more
information, see the Shipping ITL tag in the Interchange Tag
Reference Guide.
salestax.asc
The database of sales tax information if the [salestax] tag
is to be used. A default is supplied. NOTE: Caution, these
things change! This is a fixed-format database, and must be
created as specified. See Sales Tax.
These are never stored in SQL or DBM.
1.3. The Product Database
-------------------------
Each product being sold should be given a product code, usually
referred to as SKU, a short code that identifies the product on the
ordering page and in the catalog. The products.txt file is an
ASCII-delimited list of all the product codes, along with an arbitrary
number of fields which must contain at least the fields description
and price (or however the PriceField and DescriptionField directives
have been set). Any additional information needed in the catalog can
be placed in any arbitrary field. See Interchange Database Capability
for details on the format.
Field names can be case-sensitive depending on the underlying database
type. Unless there are fields named "description" and "price", the
PriceField and DescriptionField directives must be set to use the
[item-price] and [item-description] tags.
The product code, or SKU, must be the first field in the line, and
must be unique. Product codes can contain the characters A-Za-z0-9,
along with hyphen (-), underscore (_), pound sign/hash mark (#), slash
(/), and period (.). Note that slash (/) will interfere with
on-the-fly page references. Avoid if at all possible.
The fields should be separated by one of the approved delimiting
schemes (TAB, PIPE, or CSV), and are case-sensitive in some cases. If
the case of the "description" or "price" fields have been modified,
the PriceField and DescriptionField directives must be appropriately
set.
Note: CSV is not recommended as the scheme for the products database.
It is much slower than TAB- or PIPE-delimited, and dramatically
reduces search engine functionality. No field-specific searches are
possible. Using CSV for any small database that will not be searched
is fine.
IMPORTANT NOTE: The field names must be on the first line of the
products.txt file. These field names must match exactly the field
names of the [item-field] tags in the catalog pages, or the
Interchange server will not access them properly. Field names can
contain the characters A-Za-z0-9 and underscore (_).
More than one database may be used as a products database. If the
catalog directive, ProductFiles, is set to a space-separated list of
valid Interchange database identifiers, those databases will be
searched (in the order specified) for any items that are ordered, or
for product information (as in the [price code] and [field code]
tags).
When the database table source file (i.e., products.txt) changes after
import or edit, a DBM database is re-built upon the next user access.
No restart of the server is necessary.
If changing the database on-the-fly, it is recommended that the file
be locked while it is being modified. Interchange's supplied import
routines do this.
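The freshness check behind automatic re-import (rebuild when the ASCII source is newer than the DBM file, or the DBM file is missing) can be sketched as follows. The function name is hypothetical; Interchange performs this check internally:

```python
# Sketch of the re-import decision: rebuild the DBM table when the ASCII
# source has a newer modification time, or when no DBM file exists yet.
import os
import tempfile

def needs_reimport(source_txt, dbm_file):
    if not os.path.exists(dbm_file):
        return True                      # no DBM file yet: always import
    return os.path.getmtime(source_txt) > os.path.getmtime(dbm_file)

d = tempfile.mkdtemp()
src = os.path.join(d, "products.txt")
dbm = os.path.join(d, "products.gdbm")
open(src, "w").close()
print(needs_reimport(src, dbm))          # True: DBM file missing
open(dbm, "w").close()
os.utime(src, (1000, 1000))              # source older than DBM file
os.utime(dbm, (2000, 2000))
print(needs_reimport(src, dbm))          # False: DBM newer than source
```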
1.4. Multiple Database Tables
-----------------------------
Interchange can manage an unlimited number of arbitrary database
tables. They use the TAB delimiter by default, but several flexible
delimiter schemes are available. These are defined by default:
Type 1 DEFAULT - uses default TAB delimiter
Type 2 LINE
Each field on its own line, a blank line
separates the record. Watch those carriage
returns! Also has a special format when CONTINUE
is set to be NOTES.
Type 3 %%
Fields separated by a \n%%\n combination, records by
\n%%%\n (where \n is a newline). Watch those carriage
returns!
Type 4 CSV
Type 5 PIPE
Type 6 TAB
Type 7 reserved
Type 8 SQL
Type 9 LDAP
The databases are specified in Database directives, as:
Database arbitrary arbitrary.csv CSV
This specifies a Type 4 database, the ASCII version of which is
located in the file arbitrary.csv, and the identifier it will be
accessed under in Interchange is "arbitrary." The DBM file, if any,
will be created in the same directory if the ASCII file is newer, or
if the DBM file does not exist. The files will be created as
arbitrary.db or arbitrary.gdbm, depending on DBM type.
The identifier is case sensitive, and can only contain characters in
the class [A-Za-z0-9_]. Fields are accessed with the [item_data
identifier field] or [data identifier field key] elements. NOTE: Use
of lower-case letters is strongly recommended.
If one of the first six types is specified, the database will
automatically be built in the default Interchange DB style. The type
can be specified with DB_FILE, GDBM, or MEMORY, if the type varies
from that default. They will coexist with an unlimited number of DBI
databases of different types.
In addition to the databases, the session files will be kept in the
default DBM format, and are affected by the following settings.
The order of preference is:
GDBM
This uses the Perl GDBM_File module to build a GDBM database. The
following command will indicate if GDBM is in Perl:
perl -e 'require GDBM_File and print "I have GDBM.\n"'
Installing GDBM_File requires rebuilding Perl after obtaining the
GNU GDBM package, and is beyond the scope of this document. LINUX
will typically have this by default; most other operating systems
will need to specifically build in this capability.
DB_File (Berkeley DB)
This uses the DB_File module to build a Berkeley DB (hash)
database. The following command will indicate if DB_File is in
Perl:
perl -e 'require DB_File and print "I have Berkeley DB.\n"'
Installing DB_File requires rebuilding Perl after obtaining the
Berkeley DB package, and is beyond the scope of this document.
BSDI, FreeBSD, and LINUX will typically have it by default; most
other operating systems will need to specifically build this in.
To use DB_File even though GDBM_File is in Perl, set the
environment variable MINIVEND_DBFILE to a true (non-zero,
non-blank) value:
# csh or tcsh
setenv MINIVEND_DBFILE 1
# sh, bash, or ksh
MINIVEND_DBFILE=1 ; export MINIVEND_DBFILE
Then, re-start the server.
Or, to set a particular table to use Berkeley DB, the DB_FILE
class in catalog.cfg can be specified:
Database arbitrary DB_FILE 1
In-memory
This uses Perl hashes to store the data directly in memory. Every
time the Interchange server is restarted, it will re-import all
in-memory databases for every catalog.
To use in-memory databases despite the presence of GDBM_File or DB_File,
set the environment variable MINIVEND_NODBM as above or specify
the memory type in the Database directive:
Database arbitrary MEMORY 1
Note: The use of memory databases is not recommended.
1.5. Character Usage Restrictions
---------------------------------
To review, database identifiers, field names, and product codes
(database keys) are restricted in the characters they may use. The
following table shows the restrictions:
Legal characters
---------------------
Database identifiers A-Z a-z 0-9 _
Field names A-Z a-z 0-9 _
Database keys (product code/SKU) A-Z a-z 0-9 _ # - . /
Database values Any (subject to field/record delimiter)
Some SQL databases have reserved words which cannot be used as field
names; Interchange databases do not have this restriction.
For easy HTML compatibility, it is not recommended that a / be used in
a part number if using the flypage capability. It can still be called
[page href=flypage arg="S/KU"].
1.6. Database Attributes
------------------------
Especially in SQL databases, there are certain functions that can be
set with additional database attributes. For text import, the CONTINUE
extended database import attribute allows additional control over the
format of imported text.
Note: CONTINUE applies to all types except CSV. (Do not use NOTES
unless using type LINE.)
CONTINUE
One of UNIX, DITTO, LINE, NONE, or NOTES. The default, NONE, is
to simply split the line/record according to the delimiter, with
no possible spanning of records. Setting CONTINUE to UNIX appends
the next line to the current when it encounters a backslash (\)
at the end of a record, just like many UNIX commands and shells.
DITTO is invoked when the key field is blank. It adds the
contents of following fields to the one above, separated by a new
line character. This allows additional text to be added to a
field beyond the 255 characters available with most spreadsheets
and flat-file databases.
Example in catalog.cfg:
Database products products.txt TAB
Database products CONTINUE DITTO
Products.asc file:
code price description
00-0011 500000 The Mona Lisa, one of the world's great masterpieces.
Now at a reduced price!
The description for product 00-0011 will contain the contents of
the description field on both lines, separated by a new line.
Note: Fields are separated by tabs, formatted for reading convenience.
This will work for multiple fields in the same record. If the
field contains any non-empty value, it will be appended.
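The DITTO rule can be sketched in Python. The merge_ditto helper below is hypothetical, not Interchange's importer; it shows how continuation rows with a blank key are folded into the record above:

```python
# Sketch of CONTINUE DITTO: when the key (first) field is blank, append each
# non-empty field to the same field of the previous record, joined by a
# newline. Illustration only.
def merge_ditto(rows):
    merged = []
    for row in rows:
        if row[0] == "" and merged:       # blank key: continuation row
            prev = merged[-1]
            for i, val in enumerate(row):
                if i and val:             # skip the key; only non-empty values
                    prev[i] = prev[i] + "\n" + val
        else:
            merged.append(list(row))
    return merged

rows = [
    ["code", "price", "description"],
    ["00-0011", "500000", "The Mona Lisa, one of the world's great masterpieces."],
    ["", "", "Now at a reduced price!"],
]
out = merge_ditto(rows)
print(out[1][2])     # description spans both source lines
```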
LINE is a special setting so a multi-line field can be used.
Normally, when using the LINE type, each field is on its own
line, and records are separated by one blank line. When using
CONTINUE LINE, there may be some number of fields which are each
on a line, while the last one spans multiple lines up until the
first blank line.
Example in catalog.cfg:
Database products products.txt LINE
Database products CONTINUE LINE
Products.asc file:
code
price
description
00-0011
500000
The Mona Lisa, one of the world's great masterpieces.
Now at a reduced price!
00-0011a
1000
A special frame for the Mona Lisa.
NOTES reads a Lotus Notes "structured text" file. The format is
any number of fields, all except one of which must have a field
name followed by a colon and then the data. There is optional
whitespace after the colon.
Records are separated by a settable delimiting character which
goes on a line by itself, much like a "here document." By
default, it is a form feed (^L) character. The final field begins
at the first blank line and continues to the end of the record.
This final field is named notes_field, unless set as mentioned
below.
Interchange reads the field names from the first paragraph of the
file. The key field should be first, followed by other fields in
any order. If one (and only one) field name has whitespace, then
its name is used for the notes_field. Any characters after a
space or TAB are used as the record delimiter.
If there are none, then the delimiter returns to the default form
feed (^L) and the field name reverts to notes_field. The field in
question will be discarded, but a second field with whitespace
will cause an import error. Following records are then read by
name, and only fields with data in them need be set. Only the
notes_field may contain a new line. It is always the last field
in the record, and begins at the first blank line.
The following example sets the delimiter to a tilde (~) and
renames the notes_field to description.
Example in catalog.cfg:
Database products products.txt LINE
Database products CONTINUE NOTES
Products.asc file:
code
title
price
image
description ~
size
color
title: Mona Lisa
price: 500000
code: 00-0011
image: 00-0011.jpg
The Mona Lisa, one of the world's great masterpieces.
Now at a reduced price!
~
title: The Art Store T-Shirt
code: 99-102
size: Medium, Large*, XL=Extra Large
color: Green, Blue, Red, White*, Black
price: 2000
Extra large 1.00 extra.
~
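Parsing a single NOTES-style record can be sketched as follows. The parse_notes_record helper is hypothetical; the real importer also reads the field-name header and record delimiter from the first paragraph of the file, which this sketch omits:

```python
# Sketch: parse one NOTES-format record -- named "field: value" lines,
# then a blank line, then free text that becomes the notes field.
def parse_notes_record(record, notes_field="notes_field"):
    fields = {}
    lines = record.splitlines()
    i = 0
    while i < len(lines) and lines[i].strip():
        name, _, value = lines[i].partition(":")
        fields[name.strip()] = value.strip()   # optional whitespace after colon
        i += 1
    # The final field begins at the first blank line and runs to the end.
    fields[notes_field] = "\n".join(lines[i + 1:]).strip()
    return fields

record = """title: Mona Lisa
price: 500000
code: 00-0011
image: 00-0011.jpg

The Mona Lisa, one of the world's great masterpieces.
Now at a reduced price!"""

rec = parse_notes_record(record, notes_field="description")
print(rec["code"], rec["price"])
```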
EXCEL
Microsoft Excel is a widely-used tool to maintain Interchange
databases, but has several problems with its standard
TAB-delimited export, like enclosing fields containing commas in
quotes, generating extra carriage returns embedded in records,
and not including trailing blank fields. To avoid problems, use a
text-qualifier of none.
Set the EXCEL attribute to 1 to fix these problems on import:
Database products EXCEL 1
This is normally used only with TAB-delimited files.
LARGE
Interchange databases containing many records can result in a
noticeable slowdown when displayed by the UI.
Set the LARGE attribute to 1 to avoid this problem:
Database transactions LARGE 1
In this case, the UI supplies only input boxes for searching
records in the database, instead of retrieving all the records
from the database, sorting them, and creating lists.
1.7. Dictionary Indexing With INDEX
-----------------------------------
Interchange will automatically build index files for a fast binary
search of an individual field. This type of search is useful for
looking up the author of a book based on the beginning of their last
name, a book title based on its beginning, or other similar
situations.
Such a search requires a dictionary ordered index with the field to be
searched contained in the first field and the database key (product
code) in the second field. If the INDEX field modifier is specified,
Interchange will build the index upon database import:
Database products products.txt TAB
Database products INDEX title
If the title field is the fourth column in the products database
table, a file products.txt.4 will be built, containing two
tab-separated fields something like:
American Gothic 19-202
Mona Lisa 00-0011
Sunflowers 00-342
The Starry Night 00-343
Options can be appended to the field name after a colon (:). The most
useful will be f, which does a case-insensitive sort. The mv_dict_fold
option must be added to the search in this case.
Another option is c, which stands for "comma index." To index on
comma-separated sub-fields within a field, use the :c option:
Database products products.txt TAB
Database products INDEX category:c
This can get slow for larger databases and fields. Interchange will
split the field on a comma (stripping surrounding whitespace) and make
index entries for each one. This allows multiple categories in one
field while retaining the fast category search mechanism. It might
also be useful for a keywords field.
The fast binary search is described in greater detail in THE SEARCH
ENGINE below.
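Index construction can be sketched in Python. The build_index helper below is hypothetical; it mimics the sorted value/key pairs shown above, with the :f (case-insensitive fold) and :c (comma sub-field) options:

```python
# Sketch of dictionary index construction: for a given column, produce
# sorted (value, key) pairs, as written to the tab-separated index file.
# fold=True mimics the :f option; comma=True mimics the :c option.
# Column numbers here are 0-based; the index file name uses 1-based columns.
def build_index(rows, col, fold=False, comma=False):
    entries = []
    for row in rows:
        if comma:
            # :c splits the field on commas, stripping surrounding whitespace
            values = [v.strip() for v in row[col].split(",")]
        else:
            values = [row[col]]
        entries.extend((v, row[0]) for v in values if v)
    keyfunc = (lambda e: (e[0].lower(), e[1])) if fold else None
    return sorted(entries, key=keyfunc)

rows = [
    ["00-0011", "Mona Lisa", "Renaissance, Portrait"],
    ["00-343", "The Starry Night", "Post-Impressionism"],
]
print(build_index(rows, 2, comma=True))
```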
1.8. MEMORY for Memory-Only Databases
-------------------------------------
Interchange's memory-based databases are the fastest possible way to
organize and store frequently used data. To force a database to be
built in memory instead of DBM, use the MEMORY modifier:
Database country country.asc TAB
Database country MEMORY 1
Obviously, large tables will use a great deal of memory, and the data
will need to be re-imported from the ASCII source file at every
catalog reconfiguration or Interchange restart. The big advantage of
using MEMORY is that the database remains open at all times and does
not need to be reinitialized at every connect. Use it for smaller
tables that will be frequently accessed.
Memory tables are read only -- the MEMORY modifier forces IMPORT_ONCE.
1.9. IMPORT_ONCE
----------------
The IMPORT_ONCE modifier tells Interchange not to re-import the
database from the ASCII file every time it changes. Normally,
Interchange does a comparison of the database file modification time
with the ASCII source every time it is accessed, and if the ASCII
source is newer it will re-import the file.
IMPORT_ONCE tells it only to import on a server restart or catalog
reconfiguration:
Database products products.txt TAB
Database products IMPORT_ONCE 1
SQL databases don't normally need this. They will only be imported
once in normal operation. Also see NoImport for a way to guarantee
that the table will never be imported.
IMPORT_ONCE is always in effect for MEMORY databases. A catalog
reconfiguration is required to force a change.
1.10. MIRROR
------------
Additionally, you can have two copies of a table, the regular table
and a memory-resident mirror, by adding to the table definitions:
Database country_memory country_memory.txt TAB
Database country_memory MIRROR country
Database country_memory MEMORY 1
1.11. SQL/DBI parameters
------------------------
1.11.1. AUTO_SEQUENCE
---------------------
Tells Interchange to use an SQL sequence to number new items
inserted into the database.
If you have Interchange create the table, then you need to do:
Database foo foo.txt dbi:mysql:test
Database foo AUTO_SEQUENCE foo_seq
Then on MySQL, Pg, or Oracle, Interchange will create an integer key
type and a sequence (or AUTO_INCREMENT in MySQL) to maintain the
count.
1.11.2. AUTO_SEQUENCE_MAXVAL
----------------------------
Sets the MAXVAL to have in an AUTO_SEQUENCE counter:
Database foo AUTO_SEQUENCE_MAXVAL 1000000
1.11.3. AUTO_SEQUENCE_MINVAL
----------------------------
Sets the MINVAL to have in an AUTO_SEQUENCE counter:
Database foo AUTO_SEQUENCE_MINVAL 10
1.11.4. AUTO_SEQUENCE_START
---------------------------
Sets the starting value for an AUTO_SEQUENCE counter:
Database foo AUTO_SEQUENCE_START 1000
1.11.5. COMPOSITE_KEY
---------------------
If you are using a DBI table with composite keys, where two or more
fields combine to make the unique identifier for a record, you must
tell Interchange so it can request data in the right way. To do this,
set:
Database product_spec product_spec.asc dbi:mysql:foobase
Database product_spec COMPOSITE_KEY sku feature
Database product_spec COLUMN_DEF "sku=varchar(32)"
Database product_spec COLUMN_DEF "feature=varchar(128)"
You may create a custom index for the table if you wish. If you
don't specify a POSTCREATE or INDEX parameter for the table,
Interchange will create a unique index with all composite key
elements at table creation time.
1.11.6. DSN
-----------
The data source name (DSN) for the database. It is beyond the scope of
this document to describe this in detail.
Normally this is set as the type in the initial Database configuration
line, i.e.
Database foo foo.txt dbi:mysql:foobase
This has the same effect:
Database foo foo.txt SQL
Database foo DSN dbi:mysql:foobase
Some other examples of DSN specs:
Database foo DSN dbi:mysql:host=db.you.com;database=foobase
Database foo DSN dbi:Pg:dbname=foobase
Database foo DSN dbi:Oracle:host=myhost.com;sid=ORCL
1.11.7. HAS_TRANSACTIONS
------------------------
Informs Interchange that the SQL database in use has commit() and
rollback() for transactions. For PostgreSQL and Oracle this is set
properly (to 1) by default -- for MySQL and other databases you have
to set it yourself.
1.11.8. HAS_LIMIT
-----------------
Informs Interchange that the SQL database in use has the LIMIT
extension to SQL, limiting the rows returned from queries. It should
be set properly
by default for MySQL, PostgreSQL, and Oracle.
1.11.9. POSTCREATE
------------------
One or more SQL statements that should be performed after Interchange
creates a table.
Database foo POSTCREATE "create unique index foo_idx on foo(key1,key2)"
Database foo POSTCREATE "create index mulkey_idx on foo(mulkey)"
1.11.10. PRECREATE
------------------
One or more SQL statements that should be performed before Interchange
creates a table.
Database foo PRECREATE "drop table foobackup"
Database foo PRECREATE "alter table foo rename to foobackup"
1.11.11. REAL_NAME
------------------
Sometimes it may be convenient to have a table named a consistent
value in Interchange despite its name in the underlying database. For
instance, two divisions of a company may share orders but have
different products tables. You can tell Interchange to name the table
products for its purposes, but use the products_a table for SQL
statements:
Database products REAL_NAME products_a
Of course if you have SQL queries that are passed verbatim to
Interchange (i.e. the [query ...] tag) you must use the REAL_NAME in
those.
1.12. Importing in a Page
-------------------------
To add a data record to a database as a result of an order or other
operation, use Interchange's [import ...] tag.
[import table type*] RECORD [/import]
Named parameters:
[import table=table_name
file=filename*
type=(TAB|PIPE|CSV|%%|LINE)*
continue=(NOTES|UNIX|DITTO)*
separator=c*]
Import one or more records into a database. The type is any of the
valid Interchange delimiter types, with the default being TAB. The
table must already be a defined Interchange database table. It cannot
be created on-the-fly. If on-the-fly functionality is needed, it is time
to use SQL.
The import type selected need not match the type with which the
database was specified. Different delimiters may be used.
The type of LINE and continue setting of NOTES is particularly useful,
for it allows fields to be named and not have to be in any particular
order of appearance in the database. The following two imports are
identical in effect:
[import table=orders]
code: [value mv_order_number]
shipping_mode: [shipping-description]
status: pending
[/import]
[import table=orders]
shipping_mode: [shipping-description]
status: pending
code: [value mv_order_number]
[/import]
The code or key must always be present, and is always named code. If
NOTES mode is not used, the fields must be imported in the same order
as they appear in the ASCII source file.
The file option overrides the container text and imports directly from
a named file based in the catalog directory. To import from
products.txt, specify file="products/products.txt". If the NoAbsolute
directive is set to Yes in interchange.cfg, only relative path names
will be allowed.
The [import ....] TEXT [/import] region may contain multiple records.
If using NOTES mode, a separator must be used, which, by default, is a
form-feed character (^L). See Import Attributes for more information.
1.13. Exporting from a Database
-------------------------------
To export an existing database to its text source file, suitable for
full-text search by Interchange, use the Interchange UI, or create a
page that contains an [export table=TABLENAME] ITL tag (ExportTag).
1.14. Write Control
-------------------
Interchange databases can be written in the normal course of events,
either using the [import ...] tag or with a tag like [data table=table
column=field key=code value=new-value]. To control writing of a global
database, restrict writing to certain catalogs within a series of
subcatalogs, or make a database read-only, see the following:
To enable write control:
Database products WRITE_CONTROL 1
Once this is done, to make a database read only, which won't allow
writing even if [tag flag write]products[/tag] is specified:
Database products READ_ONLY 1
To have control with [tag flag write]products[/tag]:
Database products WRITE_TAGGED 1
To limit write to certain catalogs, set:
Database products WRITE_CATALOG simple=0, sample=1
The "simple" catalog will not be able to write, while "sample" will if
[tag flag write]products[/tag] is enabled. If a database is to always
be writable, without having to specify [tag flag write] ... [/tag],
then define:
Database products WRITE_ALWAYS 1
The default behavior of SQL databases is equivalent to WRITE_ALWAYS,
while the default for GDBM_File, DB_File, and Memory databases is
equivalent to:
Database products WRITE_CONTROL 1
Database products WRITE_TAGGED 1
1.15. Global Databases
----------------------
If a database is to be available to all catalogs on the Interchange
server, it may be defined in interchange.cfg. Any catalog running
under that server will be able to use it. It is writable by any
catalog unless WRITE_CONTROL is used.
2. SQL Support
==============
Interchange can use any of a number of SQL databases through the
powerful Perl DBI/DBD access methods. This allows transparent access
to any database engine that is supported by a DBD module. The current
list includes mSQL, MySQL, Solid, PostgreSQL, Oracle, Sybase,
Informix, Ingres, Dbase, DB2, Fulcrum, and others. Any ODBC (with
appropriate driver) should also be supported.
No SQL database is included with Interchange, but a number are
widely available on the Internet. Most commonly used with Interchange
are PostgreSQL, MySQL, and Oracle. It is beyond the scope of this
document to describe SQL or DBI/DBD. Sufficient familiarity is
assumed.
In most cases, Interchange cannot perform administrative functions,
like creating a database or setting access permissions. This must be
done with the tools provided with an SQL distribution. But, if given a
blank database and the permission to read and write it, Interchange
can import ASCII files and bootstrap from there.
2.1. SQL Support via DBI
------------------------
The configuration of the DBI database is accomplished by setting
attributes in additional Database directives after the initial
defining line as described above. For example, the following defines
the database arbitrary as a DBI database, sets the data source (DSN)
to an appropriate value for an mSQL database named minivend on port
1114 of the local machine:
Database arbitrary arbitrary.asc SQL
Database arbitrary DSN dbi:mSQL:minivend:localhost:1114
As a shorthand method, include the DSN as the type:
Database arbitrary arbitrary.asc dbi:mSQL:minivend:localhost:1114
Supported configuration attributes include (but are not limited to):
DSN
A specification of the DBI driver and its data source. To use the
DBD::mSQL driver for DBI, use:
dbi:mSQL:minivend:othermachine.my.com:1112
where mSQL selects the driver (case IS important), minivend
selects the database, othermachine.my.com selects the host, and
1112 is the port. On many systems, dbi:mSQL:minivend will work
fine. Of course, the minivend database must already exist.
This corresponds to the DBI_DSN environment variable. If the DSN
parameter is not set, the value of DBI_DSN will be used to try to
find the proper database to connect to.
USER
The user name used to log into the database. It is the same as
the environment variable DBI_USER. If a user name is not needed,
just don't set the USER directive.
PASS
The password used to log into the database. It is the same as the
environment variable DBI_PASS. If a password is not needed, just
don't set the PASS directive.
COLUMN_DEF
A comma-separated set of lines in the form NAME=TYPE(N), where
NAME is the name of the field/column, TYPE is the SQL data type
reference, and N is the length (if needed). Most Interchange
fields should be the fixed-length character type, something like
char(128). In fact, this is the default if a type is not chosen
for a column. There can be as many lines as needed. This is not a
DBI parameter; it is specific to Interchange.
NAME
A space-separated list of column names for the table. Normally not
used; Interchange should resolve the column names properly upon
query. Set this if a catalog errors out with "dbi: can't find
field names" or the like. The first field should always be code.
This is not a DBI parameter; it is specific to Interchange. All
columns must be listed, in order of their position in the table.
NUMERIC
Tells Interchange not to quote values for this field, allowing
numeric data types in SQL databases. It is given as a
comma-separated list of column names for the table, in no
particular order. This should be defined whenever a numeric column
is used, because many DBD drivers do not yet support type queries.
UPPERCASE
Tells Interchange to force field names to UPPER case for row
accesses using [item-data ...], [loop-data ...], [item-field ...],
etc. Typically used for Oracle and some other SQL implementations.
DELIMITER
An Interchange delimiter type, either TAB, CSV, PIPE, %%, LINE, or
the corresponding numeric type. The default for SQL databases is
TAB. Use DELIMITER if another type will be used to import. This is
not a DBI parameter; it is specific to Interchange.
KEY
The keying default of code in the first column of the database
can be changed with the KEY directive. Don't use this unless
prepared to alter all searches, imports, and exports accordingly.
It is best to just accept the default and make the first column
the key for any Interchange database.
ChopBlanks, LongReadLen, LongTruncOk, RaiseError, etc.
Sets the corresponding DBI attribute. Of particular interest is
ChopBlanks, which should be set on drivers which by default
return space-padded fixed-length character fields (Solid is an
example).
The supported list as of this release of Interchange is:
ChopBlanks
CompatMode
LongReadLen
LongTruncOk
PrintError
RaiseError
Warn
Issue the shell command perldoc DBI for more information.
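The DSN/USER/PASS fallback to the DBI_* environment variables can be
sketched as follows. This Python sketch illustrates the lookup order
only; the resolve_dsn name is made up, and real connections are of
course made through Perl's DBI.

```python
import os

def resolve_dsn(dsn=None, user=None, password=None):
    """Fall back to the DBI_DSN / DBI_USER / DBI_PASS environment
    variables when the DSN, USER, or PASS settings are absent."""
    dsn = dsn or os.environ.get("DBI_DSN")
    user = user or os.environ.get("DBI_USER")
    password = password or os.environ.get("DBI_PASS")
    driver = database = None
    if dsn:
        # A DSN looks like dbi:mSQL:minivend:othermachine.my.com:1112
        parts = dsn.split(":")
        driver, database = parts[1], parts[2]
    return {"dsn": dsn, "driver": driver, "database": database,
            "user": user, "pass": password}
```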
Here is an example of a completely set up DBI database on MySQL, using
a comma-separated value input, setting the DBI attribute LongReadLen
to retrieve an entire field, and changing some field definitions from
the default char(128):
Database products products.csv dbi:mysql:minivend
Database products USER minivend
Database products PASS nevairbe
Database products DELIMITER CSV
# Set a DBI attribute
Database products LongReadLen 128
# change some fields from the default field type of char(128)
# Only applies if Interchange is importing from ASCII file
# If you set a field to a numeric type, you must set the
# NUMERIC attribute
Database products COLUMN_DEF "code=char(20) NOT NULL primary key"
Database products COLUMN_DEF price=float, discount=float
Database products COLUMN_DEF author=char(40), title=char(64)
Database products COLUMN_DEF nontaxable=char(3)
Database products NUMERIC price
Database products NUMERIC discount
MySQL, DBI, and DBD::mysql must be completely installed and tested,
and the database minivend must already have been created, for this to
work. Permissions are difficult on MySQL. If having trouble, try
starting the MySQL daemon with safe_mysqld --skip-grant-tables & for
testing purposes.
To change to ODBC, the only changes required might be:
Database products DSN dbi:ODBC:TCP/IP localhost 1313
Database products ChopBlanks 1
The DSN setting is specific to an ODBC setup. The ChopBlanks setting
takes care of the space-padding in Solid and some other databases. It
is not specific to ODBC. Once again, DBI, DBD::ODBC, and the
appropriate ODBC driver must be installed and tested.
2.2. SQL Access Methods
-----------------------
An Interchange SQL database can be accessed with the same tags as any
of the other databases can. Arbitrary SQL queries can be passed with
the [query sql="SQL STATEMENT"] ITL tag.
[query
ml=10
more=1
type=list
sp="@@MV_PAGE@@"
sql=|
SELECT sku, description
FROM products
WHERE somecol
BETWEEN '[filter sql][cgi from][/filter]'
AND '[filter sql][cgi to][/filter]'
AND someothercol = '[filter sql][cgi whatever][/filter]'
ORDER BY sku
|]
[list]
sku=[sql-code] - desc=[sql-param description]
[/list]
[on-match]
Something was found
[/on-match]
[no-match]
Nothing was found
[/no-match]
[more-list]
[matches]
[/more-list]
[/query]
Note the filter applied to the [cgi ...] values, which prevents
single quotes (') from destroying the query.
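The point of that filter can be seen in a small sketch. Standard SQL
escapes a single quote inside a string literal by doubling it; the
sql_filter function below is a hypothetical stand-in for the [filter
sql] behavior, shown only to illustrate why unfiltered CGI values are
dangerous inside a quoted SQL literal.

```python
def sql_filter(value):
    """Double single quotes so a value cannot terminate the quoted
    SQL string literal it is embedded in (standard SQL escaping)."""
    return value.replace("'", "''")

# Without the filter, this input would end the literal early and
# inject its own SQL; with it, the quotes stay inert data.
unsafe = "x' OR '1'='1"
clause = "WHERE somecol = '%s'" % sql_filter(unsafe)
```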
2.3. Importing from an ASCII File
---------------------------------
When importing a file for SQL, Interchange by default uses the first
column of the ASCII file as the primary key, with a char(16) type, and
assigns all other columns a char(128) definition. These definitions
can be changed by placing the proper definitions in the COLUMN_DEF
Database directive attribute:
Database products COLUMN_DEF price=char(20), nontaxable=char(3)
This can be set as many times as desired if the definitions will not
fit on one line.
Database products COLUMN_DEF price=char(20), nontaxable=char(3)
Database products COLUMN_DEF description=char(254)
To have an index created automatically, append the index information
and place the whole value in quotes:
Database products COLUMN_DEF "code=char(14) primary key"
The field delimiter to use is TAB by default, but can be changed with
the Database DELIMITER directive:
Database products products.csv dbi:mSQL:minivend:localhost:1114
Database products DELIMITER CSV
To create other secondary keys to speed sorts and searches, do so in
the COLUMN_DEF:
Database products COLUMN_DEF "author=char(64) secondary key"
Or use external database tools. NOTE: Not all SQL databases use the
same index commands.
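The defaults above can be sketched as a small statement builder. This
Python sketch is illustrative only (the function name is made up): the
first column defaults to char(16) primary key, every other column to
char(128), and COLUMN_DEF entries override either default.

```python
def create_table_sql(table, columns, column_def=None):
    """Build a CREATE TABLE statement the way the import defaults
    work: first column char(16) primary key, others char(128),
    with COLUMN_DEF entries overriding the defaults."""
    column_def = column_def or {}
    parts = []
    for i, name in enumerate(columns):
        default = "char(16) primary key" if i == 0 else "char(128)"
        parts.append("%s %s" % (name, column_def.get(name, default)))
    return "CREATE TABLE %s (%s)" % (table, ", ".join(parts))

# Mirrors the COLUMN_DEF examples above:
sql = create_table_sql(
    "products",
    ["code", "price", "nontaxable", "description"],
    {"price": "char(20)", "nontaxable": "char(3)",
     "description": "char(254)"},
)
```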
To use an existing SQL database instead of importing, set the NoImport
directive in catalog.cfg to include any database identifiers not to be
imported:
NoImport products inventory
WARNING: If Interchange has write permission on the products database,
be careful to set the NoImport directive or create the proper .sql
file. If that is not done, and the database source file is changed,
the SQL database could be overwritten. In any case, always back up the
database before enabling it for use by Interchange.
3. Managing DBM Databases
=========================
3.1. Making the Database
------------------------
The DBM databases can be built offline with the offline command. The
directory to be used for output is specified either on the command
line with the -d option, or is taken from the catalog.cfg directive
OfflineDir -- offline in the catalog directory by default. The
directory must exist. The source ASCII files should be present in that
directory, and the DBM files are created there. Existing files will be
overwritten.
offline -c catalog [-d offline_dir]
Do a perldoc VENDROOT/bin/offline for full documentation.
3.2. Updating Individual Records
--------------------------------
If it takes a long time to build a very large DBM database, consider
using the bin/update script to change just one field in a record, or
to add from a corrections list.
The database is specified with the -n option, or is 'products' by
default.
The following updates the products database price field for item
19-202 with the new value 25.00:
update -c catalog -f price 19-202 25.00
More than one field can be updated on a single command line.
update -c catalog -f price -f comment 19-202 25.00 "That pitchfork couple"
The following takes input from file, which must be formatted exactly
like the original database, and adds/corrects any records contained
therein.
update -c catalog -i file
Invoke the command without any arguments for a usage message
describing the options.
4. Other Database Capabilities
==============================
Interchange has a number of other options that can affect operation of
or operations on a defined database.
4.1. Search Modification
------------------------
Normally, Interchange can search any database and will return all
records that match the search specification. Some attributes affect
this.
4.1.1. HIDE_FIELD
-----------------
When set to a field name, e.g.:
Database sometable HIDE_FIELD inactive
Interchange will not return records that have that field (in the
example, inactive) set to a true (non-blank, non-zero) value.
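In effect, HIDE_FIELD acts as a filter on returned records. A minimal
Python sketch of that behavior, assuming string-valued fields as in a
text-source table (the function name is made up):

```python
def visible_records(records, hide_field):
    """Drop records whose hide field holds a true value; blank and
    zero values count as false, anything else as true."""
    def is_true(value):
        return value not in ("", "0", 0, None)
    return [r for r in records if not is_true(r.get(hide_field, ""))]
```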
4.1.2. NO_SEARCH
----------------
An indication that the database should not be searchable by default.
Used to determine the default search files for a product search.
Database sometable NO_SEARCH 1
In the foundation demo, this is used to prevent the options table from
being searched for products.
4.2. Indexing
-------------
You can indicate that a database should be indexed on a field with the
INDEX modifier:
Database sometable INDEX category
This will create an ASCII index on every import, and will also create
an index on the field at SQL creation time.
If you wish to create SQL indices at table creation time without
creating an ASCII index, use the NO_ASCII_INDEX parameter:
Database sometable NO_ASCII_INDEX 1
Of course, you can create an SQL index manually at any time via your
SQL toolset.
5. The Search Engine
====================
Interchange implements a search engine which will search the product
database (or any other file) for items based on customer input. It
uses either forms or link-based searches that are called with the
special page name scan. The search engine uses many special
Interchange tags and variables.
If the search is implemented in a link or a form, it will always
display formatted results on the results page, an Interchange page
that uses some combination of the [search-region], [search-list],
[more-list], [more], and other Interchange tags to format and display
the results. The search results are usually a series of product
codes/SKUs or other database keys, which are then iterated over
similar to the [item-list].
Note: Examples of search forms and result pages are included in the
demos.
Two search engine interfaces are provided, and five types of searching
are available. The default is a text-based search of the first
products database source file (i.e., products.txt). A binary search of
a dictionary-ordered file can be specified. An optional Glimpse search
is enabled by placing the command specification for Glimpse in the
catalog.cfg directive Glimpse. There is a range-based search, used in
combination with one of the above. And finally, there is a
fully-coordinated search with grouping.
The default, a text-based search, sequentially scans the lines in the
target file. By default it returns the first field (delineated by the
delimiter for that database) for every line matching the search
specification. This corresponds to the product code, which is then
used to key specific accesses to the database.
The text-based search is capable of sophisticated field-specific
searches with fully-independent case-sensitivity, substring, and
negated matching.
5.1. The Search Form
--------------------
A number of variables can be set on search forms to determine which
search will be used, what fields in the database it will search, and
what search behavior will be.
Here is a simple search form:
<FORM ACTION="[area search]" METHOD=POST>
<INPUT TYPE="text" NAME="mv_searchspec">
<INPUT TYPE="submit" VALUE="Search">
</FORM>
When the "Search" submit button is pressed (or <Enter> is pressed),
Interchange will search the products.txt file for the string entered
into the text field mv_searchspec, and return the product code
pertaining to that line.
The same search for a fixed string, say "shirt," could be performed
with the use of a hot link, using the special scan URL:
[page search="se=shirt"]See our shirt collection!
The default is to search every field on the line. To match on the
string "shirt" only in the product database field "category," modify
the search:
<FORM ACTION="[area search]" METHOD=POST>
<INPUT TYPE="hidden" NAME="mv_search_field" VALUE="category">
<INPUT TYPE="text" NAME="mv_searchspec">
<INPUT TYPE="submit" VALUE="Search">
</FORM>
In the hot-linked URL search:
[page search="
se=shirt
sf=category
"]See our shirt collection!
To let the user decide on the search parameters, use checkbox or
radio fields to set the fields:
<INPUT TYPE="checkbox" NAME="mv_search_field" VALUE="author"> Search by author
<INPUT TYPE="checkbox" NAME="mv_search_field" VALUE="title"> Search by title
Fields can be stacked. If more than one is checked, all checked fields
will be searched.
5.2. Glimpse
------------
To use the Glimpse search, the Glimpse index must be built based on
files in the ProductDir, or wherever the files to be searched will be
located. If the catalog is in /var/lib/interchange/foundation, the
command line to build the index for the products file would be:
cd /var/lib/interchange/foundation/products
glimpseindex -b -H . products.txt
There are several ways to improve search speed for large catalogs. One
method that works well for large products.txt files is to split the
products.txt file into small index files (in the example, 100 lines)
with the split(1) UNIX/POSIX command. Then, index it with Glimpse:
split -100 products.txt index.txt.
glimpseindex -H /var/lib/interchange/foundation/products index.txt.*
This will dramatically increase search speeds for large catalogs, at
least if the search term is relatively unique. If it is a common
string, in a category search, for example, it is better to use the
text-based search.
To search for numbers, add the -n option to the Glimpse command line.
Note: A large catalog is one of more than several thousand items;
smaller ones have acceptable speed in any of the search modes.
If the Glimpse executable is not found at Interchange startup, the
Glimpse search will be disabled and the regular text-based search used
instead.
There are several things to watch for while using Glimpse, and a
liberal dose of the Glimpse documentation is suggested. In particular,
the spelling error capability will not work in combination with the
field-specific search. Glimpse selects the line, but Interchange's
text-based search routines disqualify it when checking to see if the
search string is within one of the specified fields.
To use field-specific searching on Glimpse, tell it what the field
names are. If the search is on the products database (file), nothing
is needed, since the default is to use the field names from the products
database. If it is some other field layout, specify the file to get
the field names from with mv_field_file (ff).
5.3. Fast Binary Search
-----------------------
Fast binary searching is useful for scanning large databases for
strings that match the beginning of a line. It uses the standard Perl
module Search::Dict, and is enabled through use of the mv_dict_look,
mv_dict_end, mv_dict_limit, mv_dict_fold, and mv_dict_order variables.
The field to search must be the first field in the file, and the
product code should be in the second field, delimited by TAB. Set
mv_return_fields=1 to return the product code in the search.
The search must be done on a dictionary-ordered pre-built index, which
can be produced with the database INDEX modifier. See Dictionary
indexing with INDEX.
If using the mv_dict_look parameter by itself, and the proper index
file is present, Interchange will set the options:
mv_return_fields=1
mv_dict_limit=-1
This will make the search behave much like the simple search described
above, except it will be much faster on large files and will match
only from the beginning of the field. Here is an example. A title
index has been built by including in catalog.cfg:
Database products INDEX title
Note: The ASCII source file must be "touched" to rebuild the index and
the database.
Now, specify in a form:
<FORM ACTION="[area search]" METHOD=POST>
<INPUT TYPE="hidden" NAME="mv_dict_limit" VALUE="title">
<INPUT TYPE="text" NAME="mv_dict_look">
<INPUT TYPE="submit" VALUE="Search">
</FORM>
or in a URL:
[page search="dl=Van Gogh/di=title"]
This search is case-sensitive. To do the same thing
case-insensitively:
Database products INDEX title:f
[page search="dl=Van Gogh/di=title/df=1"]
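The mechanics of a dictionary-ordered prefix search can be sketched
with Python's bisect module. This illustrates the technique, not
Interchange's implementation; the index is assumed to hold one "field
TAB code" pair per sorted line, as the INDEX modifier produces, and
the function name is made up.

```python
import bisect

def dict_look(index_lines, prefix, fold=False):
    """Binary-search a sorted index and return the codes of lines
    whose first field begins with the prefix (cf. mv_dict_look with
    mv_return_fields=1). fold=True matches case-insensitively,
    assuming the index was built case-folded (INDEX title:f)."""
    key = prefix.lower() if fold else prefix
    keys = [line.lower() for line in index_lines] if fold else index_lines
    codes = []
    # Jump straight to the first candidate, then scan while the
    # prefix still matches -- this is what makes it fast on big files.
    for i in range(bisect.bisect_left(keys, key), len(index_lines)):
        if not keys[i].startswith(key):
            break
        field, code = index_lines[i].split("\t", 1)
        codes.append(code)
    return codes
```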
5.4. Coordinated and Joined Searching
-------------------------------------
Interchange will do a complete range of tests on individual columns in
the database. To use this function, set mv_coordinate to Yes (co=yes
in the one-click syntax). In order to use coordinated searching, the
number of search fields must equal the number of search strings.
To make sure that is the case, use the mv_search_map variable. It
allows variables to be mapped to others in the search specification.
For example:
<INPUT TYPE="hidden" NAME="mv_search_map" VALUE="
    mv_searchspec=search1
    mv_searchspec=search2
    mv_searchspec=search3
">
Artist: <INPUT NAME="search1">
Title:  <INPUT NAME="search2">
Genre:  <INPUT NAME="search3">
Even if the user leaves one blank, the search will work.
Leading/trailing whitespace is stripped from all lines in the
mv_search_map variable, so it can be positioned as shown for
convenience.
Coordinated searches may be joined with the output of another table
if one of the mv_search_field values is set to a table:column pair.
Note that this will slow down large searches considerably unless there
is another search specification, as the database must be accessed for
every search line. If there is a search field that qualifies for a
regular expression search function, or a binary search is being done
with mv_dict_look, or an OR search is not being used, the penalty
should not be too great, as only matching lines will cause an access
to the database.
Individual field operations can then be specified with the
mv_column_op (or op) parameter. The operations include:
operation               string   numeric   equivalent
---------               ------   -------   ----------
equal to                eq       ==        =
not equal               ne       !=        <>
greater than            gt       >
less than               lt       <
less than/equal to      le       <=
greater than/equal to   ge       >=
regular expression      rm                 =~ , LIKE
regular expression NOT  rn                 !~
exact match             em
Text::Query::Advanced   aq
Text::Query::Simple     tq
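The string/numeric distinction matters: without nu=1, comparisons are
lexicographic. A Python sketch of a few of the operators above
(illustrative only; the names OPS and column_match are made up, and
Interchange implements these in Perl):

```python
import re

# A few mv_column_op operators as predicates over (field, spec).
OPS = {
    "eq": lambda f, s: f == s,
    "ne": lambda f, s: f != s,
    "gt": lambda f, s: f > s,
    "ge": lambda f, s: f >= s,
    "lt": lambda f, s: f < s,
    "le": lambda f, s: f <= s,
    "rm": lambda f, s: re.search(s, f) is not None,   # regex match
    "rn": lambda f, s: re.search(s, f) is None,       # regex NOT
    "em": lambda f, s: f == s,                        # exact match
}

def column_match(op, field_value, spec, numeric=False):
    """Apply one coordinated-search column test; numeric=True
    (nu=1) compares numbers instead of strings."""
    if numeric and op in ("eq", "ne", "gt", "ge", "lt", "le"):
        field_value, spec = float(field_value), float(spec)
    return OPS[op](field_value, spec)
```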
An example:
[page search="
co=yes
sf=title
se=Sunflowers
op=em
sf=artist
se=Van Gogh
op=rm
"] Sunflowers, Van Gogh
[page search="
co=yes
sf=title
se=Sunflowers
nu=0
op=!~
sf=artist
se=Van Gogh
op=rm
nu=0
sf=inventory:qty
se=1
op=>=
nu=1
"] Any in stock except Sunflowers, Van Gogh
Note that in the second example, nu=0 must be specified even though
that is the default. This is to set the proper correspondence. To
avoid having to do this, use Interchange's option array feature:
[page search.0="
sf=title
se=Sunflowers
op=!~
"
search.1="
sf=artist
se=Van Gogh
"
search.2="
sf=inventory:qty
se=1
op=>=
nu=1
"
] Any in stock except Sunflowers, Van Gogh
The co=yes is assumed when specifying a multiple search.
The second search will check the stock status of the painting provided
there is an inventory table as in some of the Interchange demo
catalogs. If the qty field is greater than or equal to 1, the product
will be picked. If out of stock, it will not be found.
It always helps to have an rm type included in the search. This is
used to pre-screen records so that database accesses only need be made
for already-matching entries. If accesses must be made for every
record, large searches can get quite slow.
The special aq and tq query types only operate if the Text::Query CPAN
module is installed. This allows Altavista-style searches on the
field, using AND, OR, NOT, and NEAR with arbitrarily complex
parentheses.
A useful form for the aq type searches the sku, description, comment,
and category fields in the default products file with Text::Query
syntax. Try the term "painters NEAR set" in the default foundation
example.
5.5. Custom search operators
----------------------------
You can write your own search operator with Interchange's CodeDef. In
interchange.cfg, or in the code directory tree, you can put:
CodeDef find_mirrored SearchOp
CodeDef find_mirrored Routine <<EOR
sub {
    my ($self, $i, $pat, $op) = @_;
    my $reversed = reverse $pat;
    return sub {
        my $data = shift;
        return $data =~ /$reversed/ ? 1 : 0;
    };
}
EOR
The passed parameters are:
o The search object ($self)
o The index into coordinated search array ($i)
o The pattern to match
o The name of the op (find_mirrored in this case)
The routine must return a sub which receives the data to match and
returns 1 if it matches. It DOES NOT honor mv_negate UNLESS you make
it do so.
See Vend::Search::create_text_query for an example of how to return a
proper routine and look in search object for the associated params.
5.6. Specifying a Text-Based Search with SQL Syntax
---------------------------------------------------
If the Perl SQL::Statement module is installed, SQL syntax can be
specified for the text-based search. This is not the same as the
external SQL database search, treated below separately. This works on
the ASCII text source file, not on the actual database.
This syntax allows this form setup:
Artist: <INPUT NAME="artist">
Title:  <INPUT NAME="title">
<INPUT TYPE="hidden" NAME="mv_sql_query"
       VALUE="SELECT code FROM products WHERE artist = artist AND title = title">
If the right hand side of an expression looks like a column, i.e., is
not quoted, the appropriate form variable is substituted. (If used in
a one-click, the corresponding scratch variable is used instead.) The
assumption is reversed for the left-hand side. If it is a quoted
string, the column name is read from the passed values. Otherwise, the
column name is literal.
Search for: <INPUT NAME="searchstring">
Search in <SELECT NAME="column">
<OPTION>title
<OPTION>artist
</SELECT>
<INPUT TYPE="hidden" NAME="mv_sql_query"
       VALUE="SELECT code FROM products WHERE 'column' = searchstring">
Once again, this does not conduct a search on an SQL database, but
formats a corresponding text-based search. Parentheses will have no
effect, and an OR condition will cause all conditions to be OR. The
searches above would be similar to:
[page search="
co=yes
sf=artist
op=rm
se=[value artist]
sf=title
op=rm
se=[value title]
" ]
Search for [value artist], [value title]
[page search="
co=yes
sf=[value column]
op=rm
se=[value searchstring]
" ]
Search for [value searchstring]
in [value column]
5.7. One-Click Searches
-----------------------
Interchange allows a search to be passed in a URL, as shown above.
Just specify the search with the special page parameter search or
special page scan. Here is an example:
[page search="
se=Impressionists
sf=category
"]
Impressionist Paintings
This is the same:
[page scan se=Impressionists/sf=category]
Impressionist Paintings
Here is the same thing from a home page (assuming /cgi-bin/vlink is
the CGI path for Interchange's vlink):
<A HREF="/cgi-bin/vlink/scan/se=Impressionists/sf=category">
Impressionist Paintings
</A>
The two-letter abbreviations are mapped with these letters:
ac mv_all_chars
bd mv_base_directory
bs mv_begin_string
ck mv_cache_key
co mv_coordinate
cs mv_case
cv mv_verbatim_columns
de mv_dict_end
df mv_dict_fold
di mv_dict_limit
dl mv_dict_look
DL mv_raw_dict_look
do mv_dict_order
dr mv_record_delim
em mv_exact_match
er mv_spelling_errors
ff mv_field_file
fi mv_search_file
fm mv_first_match
fn mv_field_names
hs mv_head_skip
ix mv_index_delim
lb mv_search_label
lf mv_like_field
lo mv_list_only
lr mv_search_line_return
ls mv_like_spec
ma mv_more_alpha
mc mv_more_alpha_chars
md mv_more_decade
ml mv_matchlimit
mm mv_max_matches
MM mv_more_matches
mp mv_profile
ms mv_min_string
ne mv_negate
ng mv_negate
np mv_nextpage
nu mv_numeric
op mv_column_op
os mv_orsearch
pf prefix
ra mv_return_all
rd mv_return_delim
rf mv_return_fields
rn mv_return_file_name
rr mv_return_reference
rs mv_return_spec
se mv_searchspec
sf mv_search_field
sg mv_search_group
si mv_search_immediate
sm mv_start_match
sp mv_search_page
sq mv_sql_query
sr mv_search_relate
st mv_searchtype
su mv_substring_match
tf mv_sort_field
to mv_sort_option
un mv_unique
va mv_value
These can be treated just the same as form variables on the page,
except that they can't contain a new line. If using the multi-line
method of specification, the characters will automatically be escaped
for a URL.
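The mapping above can be exercised with a small parser. The following
Python sketch is illustrative (parse_scan is a made-up name, only a
few abbreviations are included, and the :: escape for slashes is not
handled); it shows how repeated parameters stack:

```python
# A subset of the two-letter scan abbreviations listed above.
SCAN_ABBREV = {
    "se": "mv_searchspec",   "sf": "mv_search_field",
    "os": "mv_orsearch",     "su": "mv_substring_match",
    "co": "mv_coordinate",   "op": "mv_column_op",
    "sp": "mv_search_page",  "ml": "mv_matchlimit",
}

def parse_scan(path):
    """Expand a one-click scan path such as 'se=shirt/sf=category'
    into full mv_* names; repeated keys stack into lists."""
    params = {}
    for part in path.split("/"):
        abbrev, _, value = part.partition("=")
        name = SCAN_ABBREV.get(abbrev, abbrev)
        params.setdefault(name, []).append(value)
    return params
```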
IMPORTANT NOTE: An incompatibility in earlier Interchange catalogs is
specifying [page scan/se=searchstring]. This is interpreted by the
parser as [page scan/se="searchstring"] and will cause a bad URL.
Change this to [page scan se=searchstring], or perhaps better yet:
[page search="
se=searchstring
"]
A one-click search may be specified in three different ways.
Original
To do an OR search on the fields category and artist for the
strings "Surreal" and "Gogh," while matching substrings, do:
[page scan se=Surreal/se=Gogh/os=yes/su=yes/sf=artist/sf=category]
Van Gogh -- compare to surrealists
In this method of specification, to replace a / (slash) in a file
name (for the sp, bd, or fi parameter), the shorthand of :: must
be used, i.e., sp=results::standard. (This may not work for some
browsers, so put the page in the main pages directory or define
the page in a search profile.)
Multi-Line
Specify parameters one to a line, as well.
[page scan
se="Van Gogh"
sp=lists/surreal
os=yes
su=yes
sf=artist
sf=category
] Van Gogh -- compare to surrealists
Any "unsafe" characters will be escaped. To search for trailing
spaces (unlikely), quote the string.
Ampersand
Substitute & for / in the specification and be able to use / and
quotes and spaces in the specification.
[page href=scan se="Van Gogh"&sp=lists/surreal&os=yes&su=yes&sf=artist]
Van Gogh -- compare to surrealists
Any "unsafe" characters will be escaped.
5.8. Setting Display Options with mv_value
------------------------------------------
A value can be specified that will be set in the link with the
mv_value parameter. It takes an argument of var=value, just as setting
a normal variable in an Interchange profile. Actually, mv_value is a
misnomer; it will almost never be used in a form, where variable
values can be set directly. Always specify it in a one-click search
with va=var=value.
Example:
[page href=scan
arg="se=Renaissance
se=Impressionists
va=category_name=Renaissance and Impressionist Paintings
os=yes"]Renaissance and Impressionist Paintings
Display the appropriate category on the search results page with
[value category_name].
5.9. In-Page Searches
---------------------
Specify a search inside a page with the [search-region parameters*]
tag. The parameters are the same as in the one-click search, and the
output is always a newline-separated list of the return objects; by
default, a series of item codes.
The [loop ...] tag directly accepts a search parameter. To search for
all products in the categories "Americana" and "Contemporary," do:
[loop search="
se=Americana
se=Contemporary
os=yes
sf=category
"]
Artist: [loop-field artist]
Title: [loop-field title]
[/loop]
The advantage of the in-page search is that searches can be embedded
within searches, and there can be straight unchanging links from
static HTML pages.
To place an in-page search with the full range of display in a normal
results page, use the [search-region] tag the same as above, except
that [search-list], [more-list], and [more] tags can be placed within
it. Use them to display and format the results, including paging. For
example:
[search-region more=1
search="
se=Americana
sf=category
ml=2
"]
[more-list][more][/more-list]
[search-list]
[page [item-code]]
[item-field title], by [item-field artist]
[/search-list]
[no-match]
Sorry, no matches for [value mv_searchspec].
[/no-match]
[/search-region]
Note: The [item-code] above does not need to be quoted because it is
replaced before the [page ...] tag is interpolated. If building large
lists, this is worth doing because unquoted tags are twice as fast to
parse.
To use the same page for search paging, make sure to set the sp=page
parameter.
5.10. Search Profiles
---------------------
An unlimited number of search profiles can be predefined that reside
in a file or files. To use this, make up a series of lines like:
mv_search_field=artist
mv_search_field=category
mv_orsearch=yes
These correspond to the Interchange search variables that can be set
on a form. Set it right on the page that contains the search.
[set artist_profile]
mv_search_field=artist
mv_search_field=category
mv_orsearch=yes
[/set]
This is the same:
[set artist_profile]
sf=artist
sf=category
os=yes
[/set]
Then, in the search form, set a variable with the name of the
profile:
<INPUT TYPE="hidden" NAME="mv_profile" VALUE="artist_profile">
In a one-click search, use the mp modifier:
[page scan se=Leonardo/mp=artist_profile]A left-handed artist
They can also be placed in a file. Define the file name in the
SearchProfile directive. The catalog must be reconfigured for
Interchange to read it. The profile is named by placing a name
following a __NAME__ pragma:
__NAME__ title_search
The __NAME__ must begin the line, and be followed by whitespace and
the name.
The special variable mv_last stops interpretation of search variables.
The following variables are always interpreted:
mv_dict_look
mv_searchspec
Other than that, if mv_last is set in a search profile, and there are
other variables on the search form, they will not be interpreted.
To place multiple search profiles in the same file, separate them with
__END__, which must be on a line by itself.
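A SearchProfile file of this shape is simple to parse. The following
Python sketch (parse_profiles is a hypothetical name, not part of
Interchange) shows the __NAME__ / __END__ structure:

```python
def parse_profiles(text):
    """Split a SearchProfile file into named profiles. A profile
    starts at a __NAME__ line and ends at __END__ on its own line."""
    profiles, current = {}, None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("__NAME__"):
            # __NAME__ begins the line, followed by the profile name.
            name = stripped[len("__NAME__"):].strip()
            current = profiles.setdefault(name, [])
        elif stripped == "__END__":
            current = None                 # profile separator
        elif current is not None and stripped:
            current.append(stripped)       # one var=value per line
    return profiles
```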
5.11. Search Reference
----------------------
The supplied simple/srchform.html and simple/results.html pages show
example search forms. Modify them to present the search in any way
desired. Be careful to use the proper variable names for passing to
Interchange. It is also necessary to copy the hidden variables as-is.
They are required to interpret the request as a search.
Note: The following definitions frequently refer to field name,
column, and column number. All refer to the columns of a searched
text file as separated by delimiter characters.
The field names can be specified in several ways.
ProductFiles
If the file to be searched is left empty in the search form or
definition (it is set with mv_search_file (fi)), the text files
associated with the products databases will be searched, and
field names are already available as named in the first line of
the file(s). This is defined to be products.txt in the
Interchange demo catalogs.
Be careful if using SQL! If the database is changed and not
exported with [tag export products][/tag], searches will not be
successful.
Other database files
If the file or files to be searched are ASCII delimited files,
and have field names specified on the first line of the file,
Interchange will read the first line (of the first file) and
determine the field names.
Other files
If the file or files to be searched are ASCII delimited files,
but don't have field names specified on the first line of the
file, set the variable mv_field_names to a comma-separated list
of field names as they will be referenced.
Fields can also always be specified by an integer column number, with
0 as the first column.
mv_all_chars
Scan abbreviation: ac=[1|0]. Set this if searches are expected to
contain punctuation characters that might be special characters for
Perl. The characters ()[]\$^ are included.
mv_base_directory
Scan abbreviation: bd=/directory/name. In the text search, set to
the directory from which to base file searches. File names
without leading / characters will be based from there. In the
Glimpse search, passed to Glimpse with the -H option, and Glimpse
will look for its indices there. Default is ProductDir.
If an absolute path directory is used, for security enable it in
the users session with:
[set /directory/name]1[/set]
This prevents users from setting an arbitrary value and viewing
arbitrary files.
mv_begin_string
If this is set, the string will only match if it is at the
beginning of a field. The handling is a bit different for the
default AND search compared to the OR search: with OR searches,
any of the words may match at the beginning of the field; with
AND searches, all of them must.
This is a multiple parameter. If mv_coordinate is in force, it
should be set as many times as necessary to match the
field/searchstring combination. If set only once, it applies to
all fields. If set more than once but not as many times as the
fields, it will default to off.
mv_cache_key
Not normally set by the user. It is a value that provides a
pointer to the search, for reference by the [more] paging
function.
mv_case
If this item is set to No, the search will return items without
regard to upper or lower case. This is the default. Set to Yes if
case should be matched. Implement with a checkbox field.
If stacked to match the mv_search_field and mv_searchspec
variables, and mv_coordinate is set, it will operate only for the
corresponding field.
Scan abbreviation: cs
mv_column_op
In the coordinated search, the operation that will be performed
to check the field for a search match. These operations are
supported:
!= Not equal to
!~ Not matching regular expression
< Less than
<= Less than or equal to
<> Not equal to
= Equal to
== Equal to
=~ Matching regular expression
> Greater than
>= Greater than or equal to
em Exact match
eq Equal to
ge Greater than or equal to
gt Greater than
le Less than or equal to
lt Less than
ne Not equal to
rm Matching regular expression
rn Not matching regular expression
If stacked to match the mv_search_field and mv_searchspec
variables, and mv_coordinate is set, it will operate only for the
corresponding field.
Note that several of the operators are the same. They do either
numeric or string comparisons based on the status of mv_numeric (alias
nu) for that column.
mv_coordinate
If this item is set to Yes, and the number of search fields
equals the number of search specs, the search will return only
items that match field to spec. (The search specifications are
set by stacked mv_searchspec and mv_search_field variables.)
Case sensitivity, substring matching, and negation all work on a
field-by-field basis according to the following:
If only one instance of the option is set, it will affect
all fields.
If the number of instances of the option is greater than or
equal to the number of search specs, each will be applied to its
corresponding field independently; extra trailing instances will
be ignored.
If more than one instance of the options are set, but fewer
than the number of search specifications, the default
setting will be used for the trailing unset options.
If a search specification is blank, it will be removed and
all case-sensitivity/negation/substring options will be
adjusted accordingly. If you need a blank string to match
on, use quotes ("").
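As a sketch, a coordinated search might stack fields and specs like
this; the values are illustrative, and it assumes the customary scan
abbreviations co (mv_coordinate), sf (mv_search_field), se
(mv_searchspec), and op (mv_column_op):

    [loop search="
        fi=products
        st=db
        co=yes
        sf=category
        op=em
        se=Paintings
        sf=price
        op=<
        se=100
    "]
    [loop-code] [loop-field title]
    [/loop]

Each sf is matched with the se and op in the same position, so this
returns only rows whose category is exactly Paintings and whose price
is under 100.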
mv_dict_end
If the string at the beginning of a line lexically exceeds this
value, matching will stop. Ignored without mv_dict_look.
mv_dict_fold
Make dictionary matching case-insensitive. Ignored without
mv_dict_look.
Note: This is the reverse sense from mv_case.
mv_dict_limit
Automatically set the limiting string (mv_dict_end) to be one
character greater than the mv_dict_look variable, at the
character position specified. A value of 1, for instance, will
set the limiting string to "fprsythe" if the value of
mv_dict_look is "forsythe". A useful value is -1, which will
increment the last character (setting the mv_dict_end to
"forsythf" in our example). This prevents having to scan the
whole file once a unique match is found.
Note: The order of this and the mv_dict_end variable is significant.
Each will overwrite the other.
If this is set to a non-numeric value, an automatic mode is
entered which looks for a dictionary-indexed file that
corresponds to the file name plus .field, where field is whatever
mv_dict_limit is set to. The actual value of mv_dict_limit is set
to -1. If the file does not exist, the original file is silently
used. Also, the value of mv_return_fields is set to 1 to
correspond to the location of the key in the auto-indexed file.
To illustrate:

    <INPUT TYPE=hidden NAME=mv_dict_limit VALUE=category>

is equal to:

    <INPUT TYPE=hidden NAME=mv_search_file    VALUE="products.txt.category">
    <INPUT TYPE=hidden NAME=mv_return_fields  VALUE=1>
    <INPUT TYPE=hidden NAME=mv_dict_limit     VALUE=-1>

The real utility would be in a form construct like

    Search for
    <SELECT NAME=mv_dict_limit>
    <OPTION>author
    <OPTION>title
    </SELECT>
    beginning with <INPUT NAME=mv_dict_look>

which would allow automatic binary search file selection.
Combined with the INDEX attribute to the Database directive, this
allows fast binary search qualification combined with regular
mv_searchspec text searches.
mv_dict_look
The string at which to begin matching at in a dictionary-based
search. If not set, the mv_dict_end, mv_dict_fold, and
mv_dict_order variables will be ignored. May be set in a search
profile based on other form variables.
mv_dict_order
Make dictionary matching follow dictionary order, where only word
characters and whitespace matter. Ignored without mv_dict_look.
mv_doit
This can be set to search to make a form with a process action be
a search page by default. The mv_todo variable takes precedence.
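As an illustrative sketch, a form posting to the process action could
default to searching like this (this assumes [area process] for the
form action):

    <FORM ACTION="[area process]" METHOD=POST>
    <INPUT TYPE=hidden NAME=mv_doit VALUE=search>
    <INPUT TYPE=text   NAME=mv_searchspec>
    <INPUT TYPE=submit VALUE=Search>
    </FORM>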
mv_exact_match
Normally Interchange searches match words, as opposed to
sentences. This behavior can be overridden with mv_exact_match,
which when set will place quotes around any value in
mv_searchspec or mv_dict_look.
mv_field_file
If you want to search a file which has no field header on the
first line, you can specify a file to get the field names from.
It expects a single line with the field names separated by TAB
characters.
mv_field_names
Deprecated in favor of in-list sorting. Defines the field names
for the file being searched. This guarantees that they will be
available, and prevents a disk access if using named fields on a
search file (that is not the product database ASCII source, where
field names are already known). This must be exactly correct, or
it will result in anomalous search operation. Usually passed in a
hidden field or search profile as a comma-separated list.
Note: Use this on the product database only if planning on both
pre-sorting with mv_sort_field and then post-sorting with
[sort]field:opt[/sort].
mv_first_match
Normally Interchange will return the first page of a search. If
this variable is set, it will start the search return at the
match specified, even if there is only one page. If set to a
value greater than the number of matches, it will act as if no
matches were found.
mv_head_skip
Normally Interchange searches all lines of an index/product file
but the first. Set this to the number of lines to skip at the
beginning of the index. Default is 1 for the text search, which
skips the header line in the product file. Default is 0 for a
Glimpse search.
mv_index_delim
Sets the delimiter for counting fields in a search index. The
default is TAB. It should rarely be changed unless you are
searching a pipe-delimited or colon-delimited file.
mv_like_field
Specifies a field in a database search which should be used for a
screening function based on the SQL like function. Needs
mv_like_spec.
mv_like_spec
The string that should be searched for in mv_like_field. The
behavior of the % character and case-sensitivity depends on your
SQL implementation.
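A hedged sketch, with an illustrative field and pattern, screening a
database search on the title column:

    <INPUT TYPE=hidden NAME=mv_searchtype VALUE=db>
    <INPUT TYPE=hidden NAME=mv_like_field VALUE=title>
    <INPUT TYPE=hidden NAME=mv_like_spec  VALUE="Sun%">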
mv_matchlimit
Function depends upon context. When the search results display is
handled by one of the mechanisms which works with [more] lists
(such as [search-region]), mv_matchlimit determines the number of
results per page. If more matches than mv_matchlimit are found,
the search paging mechanism will be employed if the proper
[more-list] is present. When the search results are displayed as
one continuous list (i.e.: with [loop search="..."]),
mv_matchlimit is equivalent in function to mv_max_matches.
To have no matchlimit, use none instead of a number. all does the
same thing (since returning "all" is just another way of saying
there is no matchlimit).
If no matchlimit is provided, or the setting is invalid (some
other string, or 0), the default is taken from the catalog
variable MV_DEFAULT_MATCHLIMIT; if that is not set, 50 is used.
mv_max_matches
The maximum number of records that will be returned in a search.
Default is unlimited. If search results paging with [more-list]
is to be employed, use mv_matchlimit to set the number of results
per page.
mv_min_string
Sets the minimum size of a search string for a search operation.
Default is 4 for the Glimpse search, and 1 for the text search.
mv_negate
Specifies that records NOT matching the search criteria will be
returned. Default is no. It is not operative for the Glimpse
search.
If stacked to match the mv_search_field and mv_searchspec
variables, and mv_coordinate is set, it will operate only for the
corresponding field.
mv_orsearch
If this item is set to Yes, the search will return items matching
any of the words in searchspec. The default is No.
mv_profile
Selects one of the pre-defined search specifications set by the
SearchProfile directive. If the special variable within that
file, mv_last, is defined, it will prevent the scanning of the
form input for further search modifications. The values of
mv_searchspec and mv_dict_look are always scanned, so specify
this to do the equivalent of setting multiple checkboxes or
radioboxes with one click, while still reading the search input
text.
mv_record_delim
Sets the delimiter for counting records in a search index. The
default is newline, which works for the products and most
line-based index files.
mv_return_fields
The field(s) that should be returned by the match, specified
either by field name or by column number, separated by commas. Do
not list the same field more than once per search. Specify 0 as
the first field to be returned if searching the products
database, since that is the key for accessing database fields.
As with SQL queries, you can use the '*' shortcut to return all
fields. For example:
[loop search="fi=nation/ra=yes/rf=*"]
when used with a hypothetical 'nation' table would be equivalent to:
[loop search="
fi=nation
ra=yes
rf=code,sorder,region,name,tax
"]
as well as:
[loop search="fi=nation/ra=yes/rf=0,1,2,3,4"]
and:
[query sql="select * from nation"][/query]
However, you probably rarely need to use every single field in a row.
For maximum maintainability and execution speed the best practice is
to list by name only the fields you want returned.
mv_return_spec
Returns the string specified as the search (i.e., the value of
mv_searchspec) as the one and only match. Typically used in a
SKU/part number search.
mv_search_field
The field(s) to be searched, specified either by column name or
by column number.
If the number of instances matches the number of fields specified
in the mv_searchspec variable and mv_coordinate is set to true,
each search field (in order specified on the form) will be
matched with each search spec (again in that order).
mv_search_file
In the text search, set this variable to the file(s) to be
scanned for a match. The default, if not set, is to scan the
default ProductFiles (i.e., products.txt). If set multiple times
in a form (for a text search), all of the named files will be
searched. One file name per instance.
In the Glimpse search, follows the Glimpse wildcard-based file
name matching scheme. Use with caution and a liberal dose of the
Glimpse man page.
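For instance, to scan two files in one text search, stack the
variable; the second file name here is hypothetical:

    <INPUT TYPE=hidden NAME=mv_search_file VALUE="products.txt">
    <INPUT TYPE=hidden NAME=mv_search_file VALUE="accessories.txt">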
mv_search_match_count
Set by the search to indicate the total number of matches found.
mv_search_page
The Interchange-style name of the page that should display the
search results. This overrides the default value of search.
mv_searchspec
The actual search string that is typed in by the customer. It is
a text INPUT TYPE=TEXT field, or can be put in a select
(drop-down) list to enable category searches. If multiple
instances are found, they will be concatenated just as if
multiple words had been placed in a text field.
The user can place quotes around words to specify that they match
as a string. To enable this by default, use the mv_exact_match
variable.
If mv_dict_look has a value, and mv_searchspec does not, then
mv_searchspec will be set to the value of mv_dict_look.
If the number of instances matches the number of fields specified
in the mv_search_field variable and mv_coordinate is set to true,
each search field (in order specified on the form) will be
matched with each search spec (again in that order).
mv_searchtype
If set to Glimpse, selects the Glimpse search (if Glimpse is
defined).
If set to db, iterates over every row of the database (not the
associated text source file).
If set to sql, same as db.
If set to text, selects the text-based search.
When using st=db, returned keys may be affected by TableRestrict.
See CATALOG.CFG.
Defaults to text if Glimpse is not defined; defaults to Glimpse
if it is defined. This can allow use of both search types if that
is desirable. For instance, searching for very common strings is
better done by the text-based search. An example might be
searching for categories of items instead of individual items.
mv_small_data
Tells the search engine that there is a small amount of data in the
file and that it should perform the search function on every line.
Normally, when Interchange can find a fixed search expression it
produces a "screening" function which will allow records to be quickly
rejected when they don't match. If there are fewer than 50 records in
the file or database, this may be counterproductive.
mv_sort_field
The file field(s) the search is to be sorted on, specified in one
of two ways. If the file(s) to be searched have a header line
(the first line) that contains delimiter-separated field names,
it can be specified by field name. It can also be specified by
column number (the code or key is specified with a value of 0,
for both types). These can be stacked if coming from a form or
placed in a single specification separated by commas.
Note: If specifying a sort for the product database, mv_field_names
must be specified if doing a fieldname-addressed post-sort.
mv_sort_option
The way that each field should be sorted. The flags are r, n, and
f, reverse, numeric, and case-insensitive respectively. These can
be stacked if coming from a form or placed in a single
specification separated by commas. The stacked options will be
applied to the sort fields as they are defined, presuming those
are stacked.
mv_spelling_errors
The number of spelling errors that will be tolerated. Ignored
unless using Glimpse. For a large table, limit this to two.
mv_substring_match
If mv_substring_match is set to Yes, matches on substrings as
well as whole words. Typically set this for dictionary-based
searches.
If stacked to match the mv_search_field and mv_searchspec
variables and mv_coordinate is set, it will operate only for the
corresponding field.
mv_unique
If set to a true value, causes the sort to return only unique
results. This operates on whatever the search return is, as
defined by mv_return_fields.
mv_value
This is normally only used in the one-click search
(va=var=value). It allows setting of a session variable based on
the clicked link, which makes for easy definition of headers and
other display choices. (If you have had trouble using
mv_searchspec for this before, this is what you need.)
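A one-click sketch, with an illustrative variable name and value,
setting the session variable banner along with the search (this
assumes the usual [page scan/...] one-click link form):

    [page scan/sf=category/se=Paintings/va=banner=Paintings]See the paintings</A>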
5.12. The Results Page
----------------------
Once a search has been completed, there needs to be a way of
presenting the output. By default, the SpecialPage search is used. It
is set to results in the distribution demo, but any number of search
pages can be specified by passing the value in the search form
specified in the variable mv_search_page.
On the search page, some special Interchange tags are used to format
the otherwise standard HTML. Each of the iterative tags is applied to
every code returned from the search. This is normally the product
code, but could be a key to any of the arbitrary databases. The value
placed by the [item-code] tag is set to the first field returned from
the search.
The basic structure looks like this:
[search-region]
[search-list]
your iterating code, once for each match
[/search-list]
[no-match]
Text / tags to be output if no matches found (optional but recommended)
[/no-match]
[more-list]
More / paging area (optional)
[/more-list]
[/search-region]
Tip for catalogs upgraded from Minivend 3: A
[search-list][/search-list] must always be surrounded by a
[search-region][/search-region] pair. This is a change from Minivend
3.
[search-list]
Starts the representation of a search list. Interchange tags can
be embedded in the search list, yielding a table or formatted
list of items with part number, description, price, and
hyperlinks to order or go to its catalog page.
The example tags shown have an item- prefix, which is the
default. Set any prefix desired with the prefix parameter to
[search-region]:
[search-region prefix=my]
[search-list]
SKU: [my-code]
Title: [my-data products title]
[/search-list]
[/search-region]
The standard set of Interchange iterative ITL tags are available.
They are interpolated in this order:
[item-alternate N] true [else] false [/else] [/item-alternate]
[if-item-param named_field] true [else] false [/else] [/if-item-param]
[item-param named_field]
[if-item-pos N] true [else] false [/else] [/if-item-pos]
[item-pos N]
[if-item-field products_field] true [else] false [/else] [/if-item-field]
[item-field products_column]
[item-increment]
[item-accessories]
[item-code]
[item-description]
[if-item-data table column] true [else] false [/else] [/if-item-data]
[item-data table column]
[item-price N* noformat=1*]
[item-calc] [/item-calc]
[item-change marker]
[condition]variable text[/condition]
true
[else] false [/else]
[/item-change marker]
[item-last] condition [/item-last]
[item-next] condition [/item-next]
Note: those that reference the shopping cart do not apply, i.e.,
[item-quantity], [item-modifier ...] and friends.
[/search-list]
Ends the search list.
[no-match]
Starts the region of the search results page that should be
returned if there is no match (and no error) for the search. If
this is not on the page, the special page nomatch will be
displayed instead.
[/no-match]
Ends the no match region.
[sort database:field:option* database:field:option*]
Sorts the search list return based on database fields. If no
options are supplied, sorts according to the return code. See
SORTING.
This is slow, and it is far better to pre-sort the return in the
search specification.
[item-change marker]
Active only within [search-list][/search-list].
Along with the companion [/item-change marker], surrounds a
region which should only be output when a field (or other
repeating value) changes its value. This allows indented lists
similar to database reports to be easily formatted. The repeating
value must be a tag interpolated in the search process, such as
[item-field field] or [item-data database field].
Of course, this will only work as expected when the search
results are properly sorted.
The marker field is mandatory, and is also arbitrary, meaning
that any marker can be selected as long as it matches the marker
associated with [/item-change marker]. The value to be tested is
contained within a [condition]value[/condition] tag pair. The
[item-change marker] tag also processes an [else] [/else] pair
for output when the value does not change. The tags may be nested
as long as the markers are different.
The following is a simple example for a search list that has a
field category and subcategory associated with each item:

    <TABLE BORDER=1>
    [search-list]
    <TR>
    <TD>
        [item-change cat]
            [condition][item-field category][/condition]
            [item-field category]
        [else]
            &nbsp;
        [/item-change cat]
    </TD>
    <TD>
        [item-change subcat]
            [condition][item-field subcategory][/condition]
            [item-field subcategory]
        [else]
            &nbsp;
        [/item-change subcat]
    </TD>
    <TD>[item-field name]</TD>
    </TR>
    [/search-list]
    </TABLE>

The above should output a table that only shows the category and
subcategory once, while showing the name for every product. (The
&nbsp; will prevent blanked table cells if using a border.)
[/item-change marker]
Companion to [item-change marker].
[matches]
Replaced with the range of match numbers displayed by the search
page. Looks something like "1-50". Make sure to insert this item
between a [more-list] and [/more-list] element pair.
[match-count]
Replaced with the total number of matches. This tag works even on
[query] searches where [value mv_search_match_count] isn't set
unless the query is applied to a non-SQL database. Make sure to
insert this item between a [more-list] and [/more-list] element
pair.
[more-list next_img* prev_img* page_img* border* border_current*]
Starts the section of the search page which is only displayed if
there are more matches than specified in mv_matchlimit. If there
are fewer matches than the number in mv_matchlimit, all text/html
between the [more-list] and [/more-list] elements is stripped.
Use in conjunction with the [more] element to place pointers to
additional pages of matches.
If the optional arguments next_img, prev_img, and/or page_img are
present, they represent image files that will be inserted instead
of the standard 'Next,' 'Previous,' and page number. If prev_img
is none, then no previous link will be output. If page_img is
none, then no links to pages of matches will be output. These are
URLs, are substituted for with ImageDir and friends, and will be
encased in IMG tags. Lastly, border is the border width to use
for those images.
In addition, if page_img is used, it will be passed an argument
of the digit that is to be represented. This would allow an image
generator program to be used, generating page numbers on the fly.
The border and border_current values are integers indicating the
border width that should be put around images in the page_img
selection. The border_current value is used for the current page
if set.
Examples:
[more-list next.gif prev.gif page_num.cgi 3] causes anchors of:
Previous
Page 1
Page 2
Next
[more-list next.gif prev.gif page_num.cgi] causes anchors of:
Previous
Page 1
Page 2
Next
[more-list next.gif prev.gif 0 0] causes anchors of:
Previous
Page 1
Page 2
Next
To set custom text for the "Previous" and "Next" usually used,
define the next_img, prev_img, and page_img with
[next-anchor][/next-anchor], [prev-anchor][/prev-anchor],
[first-anchor][/first-anchor], [last-anchor][/last-anchor] and
[page-anchor][/page-anchor]. The string $PAGE$ will be replaced
with the page number in the latter. The same example:
[more-list]
[first-anchor] First [/first-anchor]
[next-anchor] Forward | [/next-anchor]
[prev-anchor] Back [/prev-anchor]
[last-anchor] Last [/last-anchor]
[page-anchor] Page $PAGE$ (matches $MINPAGE$-$MAXPAGE$) | [/page-anchor]
[more]
[/more-list]
will display Forward | Page 1 (matches 1-50) | Page 2 (matches
51-77) | Back for 2 pages. Note that the following tokens are
replaced with the page number, the minimum match on the page, and
the maximum match on the page:
$PAGE$ Page number
$MINPAGE$ Minimum match on page
$MAXPAGE$ Maximum match on page
You can customize the HTML hyperlink with [link-template]
[/link-template]. This is useful for adding a JavaScript onclick
attribute, or setting the link target to a different window, etc.
[link-template]$ANCHOR$[/link-template]
There are two tokens you can use as many times as needed in
[link-template], which will be replaced as follows:
$URL$ The URL for the 'more' page in question
$ANCHOR$ The page number or the word "Next" or "Previous"
for the link in question.
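For instance, to open each link in a named window (the target name is
illustrative):

    [more-list]
    [link-template]<A HREF="$URL$" TARGET="results">$ANCHOR$</A>[/link-template]
    [more]
    [/more-list]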
If you have many pages of matches and don't wish to have them all
displayed at once, set [decade-next][/decade-next] and
[decade-prev][/decade-prev]. If you set them empty, a search with 31
pages will display pages 1-10 like:
Previous 1 2 3 4 5 6 7 8 9 10 [more>>] Next
and pages 11-20 like:
Previous [<<more] 11 12 13 14 15 16 17 18 19 20 [more>>] Next
If set to [decade-next](higher)[/decade-next] and
[decade-prev](lower)[/decade-prev], the following will be
displayed:
Previous (lower) 11 12 13 14 15 16 17 18 19 20 (higher) Next
Of course, image-based anchors can be used as well.
[/more-list]
Companion to [more-list].
[more]
Inserts a series of hyperlinks that will call up the next matches
in a series. They look like this:
Previous 1 2 3 4 5 6 Next
The current page will not be a hyperlink. Every time the new link
is pressed, the list is re-built to correspond to the current
page. If there is no Next or Previous page, that link will not be
shown.
See the search.html file for examples. Make sure to insert this
item between a [more-list] and [/more-list] element pair.
[process-search]
Outputs the complete URL for a search, including Interchange
session tags. Used as the ACTION value for the search form. This
is exactly the same as [area search].
6. Sorting
==========
Interchange has standard sorting options for sorting the search lists,
loop lists, and item lists based on the contents of database fields.
In addition, it adds list slices for limiting the displayed entries
based on a start value and chunk size (or start and end value, from
which a chunk size is determined). All accept a standard format sort
tag which must be directly after the list call:
[loop 4 3 2 1]
[sort -2 +2]
[loop-code]
[/loop]
[search-list]
[sort products:category:f]
[item-price] [item-description]
[/search-list]
[item-list]
[sort products:price:rn]
[item-price] [item-code]
[/item-list]
[loop search="ra=yes"]
[sort products:category products:title]
[loop-field category] [loop-field title]
[/loop]
All sort situations, [search-list], [loop] lists, [tag each table], and
[item-list], take options of the form:
[sort database:field:option* -n +n =n-n ... ]
database
The Interchange database identifier. This must be supplied and
should normally be 'products' if using the default name for the
database.
field
The field (column) of the database to be sorted on.
option
None, any, or combinations of the options:
f case-insensitive sort (folded) (mutually exclusive of n)
n numeric order (mutually exclusive of f)
r reverse sort
-n
The starting point of the list to be displayed, beginning at 1
for the first entry.
+n
The number of entries to display in this list segment.
=n-n
The starting and ending point of the list display. This is an
alternative to -n and +n. They should be specified in only one
form. If both are specified, the last one will take effect.
...
Don't literally type the ... in. It indicates that multiple sort
levels may be specified. Many sort levels with large databases
will be quite slow.
Multiple levels of sort are supported, and database boundaries on
different sort levels can be crossed. Cross-database sorts on the same
level are not supported. If using multiple product databases, they
must be sorted with embedded Perl. This is actually a feature in some
cases: all items in a used-products database can be displayed before
or after the new ones in products.
Examples, all based on the simple demo:
Loop list
[loop 00-0011 19-202 34-101 99-102]
[sort products:title]
[loop-code] [loop-field title]
[/loop]
Will display:
34-101 Family Portrait
00-0011 Mona Lisa
19-202 Radioactive Cats
99-102 The Art Store T-Shirt
Alternatively:
[loop 00-0011 19-202 34-101 99-102]
[sort products:title -3 +2]
[loop-code] [loop-field title]
[/loop]
Displays:
19-202 Radioactive Cats
99-102 The Art Store T-Shirt
The tag [sort products:title =3-4] is equivalent to the above.
Search list
A search of all products (i.e.,
http://yoursystem.com/cgi-bin/simple/scan/ra=yes):
[search-list]
[sort products:artist products:title:rf]
[item-field artist] [item-field title]
[/search-list]
will display:
Gilded Frame
Grant Wood American Gothic
Jean Langan Family Portrait
Leonardo Da Vinci Mona Lisa
Salvador Dali Persistence of Memory
Sandy Skoglund Radioactive Cats
The Art Store The Art Store T-Shirt
Vincent Van Gogh The Starry Night
Vincent Van Gogh Sunflowers
Note the reversed order of the title for Van Gogh and the
presence of the accessory item Gilded Frame at the front of the
list (it has no artist field and, as such, sorts first).
Adding a slice option:
[search-list]
[sort products:artist products:title:rf =6-10]
[item-field artist] [item-field title]
[/search-list]
will display:
Sandy Skoglund Radioactive Cats
The Art Store The Art Store T-Shirt
Vincent Van Gogh The Starry Night
Vincent Van Gogh Sunflowers
If the end value/chunk size exceeds the size of the list, only
the elements that exist will be displayed, starting from the
start value.
Shopping cart
[item-list]
[sort products:price:rn]
[item-price] [item-code]
[/item-list]
will display the items in the shopping cart sorted on their
price, with the most expensive shown first. NOTE: This is based
on the database field and doesn't take quantity price breaks or
discounts into account. Modifier values or quantities cannot be
sorted.
Complete database contents
[tag each products]
[sort products:category products:title]
[loop-field category] [loop-field title]
[/tag]
A two level sort that will sort products based first on their
category, then on their title within the category.
Note that large lists may take some time to sort. If a product
database contains many thousands of items, using the [tag each
products] sort is not recommended unless planning on caching or
statically building pages.
7. Shipping
===========
Interchange has a powerful custom shipping facility that performs UPS
and other shipper lookups, as well as a flexible rule-based facility
for figuring cost by other methods.
7.1. Shipping Cost Database
---------------------------
The shipping cost database (located in ProductDir/shipping.asc) is a
tab-separated ASCII file with eight fields: code, text description,
criteria (quantity or weight, for example), minimum number, maximum
number, cost, query, and options. None of the fields are
case-sensitive.
To define the shipping database in a catalog configuration file, set
the Variable MV_SHIPPING to what would be its contents.
To set the file to be something other than shipping.asc in the
products directory, set the Special directive:
Special shipping.asc /home/user/somewhere/shipping_defs
There are two styles of setting which can be mixed in the same file.
The first is line-based and expects six or more TAB-separated fields.
They would look like:
default No shipping weight 0 99999999 0
upsg UPS Ground weight 0 0 e Nothing to ship!
upsg UPS Ground weight 0 150 u Ground [default zip 98366] 3.00
upsg UPS Ground weight 150 999999 e @@TOTAL@@ lbs too heavy for UPS
The second is a freeform method, introduced by a mode: Description
line. The individual settings are called out by indented parameters.
The below is identical to the above:
upsg: UPS Ground
criteria weight
min 0
max 0
cost e Nothing to ship!
min 0
max 150
cost u
table Ground
geo zip
default_geo 98366
adder 3
min 150
max 999999
cost e @@TOTAL@@ lbs too heavy for UPS
The second format has several advantages. Multiple lines can be
spanned with the <<MARKER here-document syntax, allowing, for
example, embedded Perl code in the criteria:

    criteria  <<EOF
    [perl]
        return 'weight' if $Values->{country} eq 'US';
        return 'weight' if ! $Values->{country};
        # Return blank, don't want UPS
        return '';
    [/perl]
    EOF
The definable fields are, in order, for the tab-separated format:
MODE
The unique identifier for that shipping method. It may be
repeated as many times as needed.
DESCRIPTION
Text to describe the method (can be accessed on a page with the
[shipping-description] element).
CRITERIA
Whether shipping is based on weight, quantity, price, etc. Valid
Interchange tags can be placed in the field to do a dynamic
lookup. If a number is returned, that is used as the accumulated
criteria. That is, the total of weight, quantity, or price as
applied to all items in the shopping cart.
See Criteria Determination below.
MINIMUM
The low bound of quantity/weight/criteria this entry applies to.
MAXIMUM
The high bound of quantity/weight/criteria this entry applies to.
The first found entry is used in case of ties.
COST
The method of developing cost. It can be a number which will be
used directly as the shipping cost, or a function, determined by
a single character at the beginning of the field:
f Formula (ITL tags OK, evaluated as Perl)
x Multiplied by a number
[uA-Z] UPS-style lookup
m Interchange chained cost lookup (all items summed together)
i Interchange chained cost lookup (items summed individually)
NEXT
The next field supplies an alternative shipping mode to
substitute if the cost of the current one is zero.
ZONE
The UPS zone that is being defined.
QUERY
Interchange tags which will return a SQL query to select lines
matching this specification. The current mode is replaced with
this selection. If there is a query parameter of ?, it will be
replaced with the mode name.
QUAL
The geographic qualification (if any) for this mode.
PERL
Perl code that is read and determines the criterion, not the
cost. Use the cost option with an "f" prefix to supply Perl code
to determine cost.
TOTAL
Set to the accumulated criterion before passing to Perl.
OPT
Used to maintain UPS and freeform options. Normally these are set
by separate lines in the shipping definition.
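As an illustrative sketch (the mode names and numbers are
hypothetical), a flat charge, a formula, and a multiplier could look
like this in the tab-separated format:

    flat   Flat Rate    quantity  0    10      5.00
    flat   Flat Rate    quantity  10   999999  f 5.00 + @@TOTAL@@ * .25
    pctp   Price-based  price     0    999999  x .05

The first line charges a fixed 5.00 for up to 10 items, the second
computes 5.00 plus .25 per item via a formula, and the third
multiplies the accumulated cart price by .05.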
7.2. Criteria Determination
---------------------------
The criteria field varies according to whether it is the first field
in the shipping file exactly matching the mode identifier. In that
case, it is called the main criterion. If it is in subsidiary shipping
lines matching the mode (with optional appended digits), it is called
a qualifying criterion. The difference is that the main criterion
returns the basis for the calculation (i.e., weight or quantity),
while the qualifying criterion determines whether the individual line
may match the conditions.
The return must be one of:
quantity
The literal value quantity as the main criterion will simply
count the number of items in the shopping cart and return it as
the accumulated criteria. If using a database table field named
quantity, use the table::field notation.
o or