SQL*Loader Field List Reference
This chapter describes the field-list portion of the SQL*Loader control file.
Specifying the Position of a Data Field
To load data from the data file, SQL*Loader must know the length and location of the field. To specify the position of a field in the logical record, use the POSITION clause in the column specification. The position may either be stated explicitly or relative to the preceding field. Arguments to POSITION must be enclosed in parentheses. The start, end, and integer values are always in bytes, even if character-length semantics are used for a data file.
The syntax for the position specification (pos_spec) clause is as follows:
(Syntax diagram: pos_spec.gif)
Table 10-1 describes the parameters for the position specification clause.
Table 10-1 Parameters for the Position Specification Clause

- start: The starting column of the data field in the logical record. The first byte position in a logical record is 1.
- end: The ending position of the data field in the logical record. Either start-end or start:end is acceptable. If you omit end, then the length of the field is derived from the datatype in the data file. Note that CHAR data specified without start or end, and without a length specification (CHAR(n)), is assumed to have a length of 1. If it is impossible to derive a length from the datatype, then an error message is issued.
- *: Specifies that the data field follows immediately after the previous field. If you use * for the first data field in the control file, then that field is assumed to be at the beginning of the logical record. When you use * to specify position, the length of the field is derived from the datatype.
- +integer: You can use an offset, specified as +integer, to offset the current field from the next position after the end of the previous field. A number of bytes, as specified by +integer, are skipped before reading the value for the current field.
You may omit POSITION entirely. If you do, then the position specification for the data field is the same as if POSITION(*) had been used.
Using POSITION with Data Containing Tabs
When you are determining field positions, be alert for tabs in the data file. Suppose you use the SQL*Loader advanced SQL string capabilities to load data from a formatted report. You would probably first look at a printed copy of the report, carefully measure all character positions, and then create your control file. In such a situation, it is highly likely that when you attempt to load the data, the load will fail with multiple "invalid number" and "missing field" errors.
These kinds of errors occur when the data contains tabs. When printed, each tab expands to consume several columns on the paper. In the data file, however, each tab is still only one character. As a result, when SQL*Loader reads the data file, the POSITION specifications are wrong.
To fix the problem, inspect the data file for tabs and adjust the POSITION specifications, or else use delimited fields.
See Also:
“Specifying Delimiters”
Using POSITION with Multiple Table Loads
In a multiple table load, you specify multiple INTO TABLE clauses. When you specify POSITION(*) for the first column of the first table, the position is calculated relative to the beginning of the logical record. When you specify POSITION(*) for the first column of subsequent tables, the position is calculated relative to the last column of the last table loaded.
Thus, when a subsequent INTO TABLE clause begins, the position is not set to the beginning of the logical record automatically. This allows multiple INTO TABLE clauses to process different parts of the same physical record. For an example, see "Extracting Multiple Logical Records".
A logical record might contain data for one of two tables, but not both. In this case, you would reset POSITION. Instead of omitting the position specification or using POSITION(*+n) for the first field in the INTO TABLE clause, use POSITION(1) or POSITION(n).
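For example, a control file sketch along these lines (the table names, WHEN conditions, and field layout are hypothetical, not taken from this guide's case studies) resets the scan for each table by specifying POSITION(1:1) explicitly:

INTO TABLE dept_east
WHEN region = 'E'
  (region POSITION(1:1)  CHAR,
   dname  POSITION(3:16) CHAR)
INTO TABLE dept_west
WHEN region = 'W'
  (region POSITION(1:1)  CHAR,
   dname  POSITION(3:16) CHAR)

Because the first field of the second INTO TABLE clause names POSITION(1:1) rather than POSITION(*), its fields are read from the start of the logical record instead of continuing from where the previous table's last field ended.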
Examples of Using POSITION
siteid POSITION (*) SMALLINT
siteloc POSITION (*) INTEGER
If these were the first two column specifications, then siteid would begin in column 1, and siteloc would begin in the column immediately following.
ename POSITION (1:20) CHAR
empno POSITION (22-26) INTEGER EXTERNAL
allow POSITION (*+2) INTEGER EXTERNAL TERMINATED BY "/"
Column ename is character data in positions 1 through 20, followed by column empno, which is presumably numeric data in columns 22 through 26. Column allow is offset from the next position (27) after the end of empno by +2, so it starts in column 29 and continues until a slash is encountered.
Specifying Columns and Fields
You may load any number of a table’s columns. Columns defined in the database, but not specified in the control file, are assigned null values.
A column specification is the name of the column, followed by a specification for the value to be put in that column. The list of columns is enclosed by parentheses and separated with commas as follows:
(columnspec, columnspec, ...)
Each column name (unless it is marked FILLER) must correspond to a column of the table named in the INTO TABLE clause. A column name must be enclosed in quotation marks if it is a SQL or SQL*Loader reserved word, contains special characters, or is case sensitive.
If the value is to be generated by SQL*Loader, then the specification includes the RECNUM, SEQUENCE, or CONSTANT parameter. See "Using SQL*Loader to Generate Data for Input".
If the column’s value is read from the data file, then the data field that contains the column’s value is specified. In this case, the column specification includes a column name that identifies a column in the database table, and a field specification that describes a field in a data record. The field specification includes position, datatype, null restrictions, and defaults.
It is not necessary to specify all attributes when loading column objects. Any missing attributes will be set to NULL.
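As a minimal sketch (the table and column names are hypothetical), a column list that pairs each database column with either a field specification or a generated value might look like this:

(empno    POSITION(1:4)   INTEGER EXTERNAL,
 ename    POSITION(6:15)  CHAR,
 loaddate SYSDATE,
 deptno   POSITION(17:18) INTEGER EXTERNAL NULLIF deptno=BLANKS)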
Specifying Filler Fields
A filler field, specified by BOUNDFILLER or FILLER, is a data-file mapped field that does not correspond to a database column. Filler fields are assigned values from the data fields to which they are mapped.
Keep the following in mind regarding filler fields:
- The syntax for a filler field is the same as that for a column-based field, except that a filler field's name is followed by FILLER.
- Filler fields have names but they are not loaded into the table.
- Filler fields can be used as arguments to init_specs (for example, NULLIF and DEFAULTIF).
- Filler fields can be used as arguments to directives (for example, SID, OID, REF, and BFILE). To avoid ambiguity, if a filler field is referenced in a directive, such as BFILE, and that field is declared in the control file inside of a column object, then the field name must be qualified with the name of the column object. This is illustrated in the following example:

  LOAD DATA
  INFILE *
  INTO TABLE BFILE1O_TBL REPLACE
  FIELDS TERMINATED BY ','
  (
    emp_number char,
    emp_info_b column object
    (
      bfile_name FILLER char(12),
      emp_b BFILE(constant "SQLOP_DIR", emp_info_b.bfile_name) NULLIF emp_info_b.bfile_name = 'NULL'
    )
  )
  BEGINDATA
  00001,bfile1.dat,
  00002,bfile2.dat,
  00003,bfile3.dat,

- Filler fields can be used in field condition specifications in NULLIF, DEFAULTIF, and WHEN clauses. However, they cannot be used in SQL strings.
- Filler field specifications cannot contain a NULLIF or DEFAULTIF clause.
- Filler fields are initialized to NULL if TRAILING NULLCOLS is specified and applicable. If another field references a nullified filler field, then an error is generated.
- Filler fields can occur anyplace in the data file, including inside the field list for an object or inside the definition of a VARRAY.
- SQL strings cannot be specified as part of a filler field specification, because no space is allocated for fillers in the bind array.
Note:
The information in this section also applies to specifying bound fillers by using BOUNDFILLER. The only exception is that with bound fillers, SQL strings can be specified as part of the field, because space is allocated for them in the bind array.
A sample filler field specification looks as follows:
field_1_count FILLER char,
field_1 varray count(field_1_count)
(
  filler_field1 char(2),
  field_1 column object
  (
    attr1 char(2),
    filler_field2 char(2),
    attr2 char(2),
  )
  filler_field3 char(3),
)
filler_field4 char(6)
Specifying the Datatype of a Data Field
The datatype specification of a field tells SQL*Loader how to interpret the data in the field. For example, a datatype of INTEGER specifies binary data, while INTEGER EXTERNAL specifies character data that represents a number. A CHAR field can contain any character data.
Only one datatype can be specified for each field; if a datatype is not specified, then CHAR is assumed.
“SQL*Loader Datatypes” describes how SQL*Loader datatypes are converted into Oracle datatypes and gives detailed information about each SQL*Loader datatype.
Before you specify the datatype, you must specify the position of the field.
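For instance, a field list along these lines (the field names, positions, and lengths are hypothetical) mixes the three cases just described:

-- binary integer, character digits, and free-form character data
(empid    POSITION(1:4)   INTEGER(4),
 salary   POSITION(6:13)  INTEGER EXTERNAL,
 comments POSITION(15:34) CHAR)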
Specifying Field Conditions
A field condition is a statement about a field in a logical record that evaluates as true or false. It is used in the WHEN, NULLIF, and DEFAULTIF clauses.
Note:
If a field used in a clause evaluation has a NULL value, then that clause will always evaluate to FALSE. This feature is illustrated in Example 10-5.
A field condition is similar to the condition in the CONTINUEIF clause, with two important differences. First, positions in the field condition refer to the logical record, not to the physical record. Second, you can specify either a position in the logical record or the name of a field in the data file (including filler fields).
Note:
A field condition cannot be based on fields in a secondary data file (SDF).
The syntax for the field_condition clause is as follows:
(Syntax diagram: fld_cond.gif)
The syntax for the pos_spec clause is as follows:
(Syntax diagram: pos_spec.gif)
Table 10-4 describes the parameters used for the field condition clause. For a full description of the position specification parameters, see Table 10-1.
Table 10-4 Parameters for the Field Condition Clause

- pos_spec: Specifies the starting and ending position of the comparison field in the logical record. It must be surrounded by parentheses. Either start-end or start:end is acceptable. The starting location can be specified as a column number, or as * (next column), or as *+n (next column plus an offset). If you omit an ending position, then the length of the field is determined by the length of the comparison string. If the lengths are different, then the shorter field is padded. Character strings are padded with blanks, hexadecimal strings with zeros.
- start: Specifies the starting position of the comparison field in the logical record.
- end: Specifies the ending position of the comparison field in the logical record.
- full_fieldname: full_fieldname is the full name of a field specified using dot notation. If the field col2 is an attribute of a column object col1, then when referring to col2 in one of the directives, you must use the notation col1.col2. The column name and the field name referencing or naming the same entity can be different, because the column name never includes the full name of the entity (no dot notation).
- operator: A comparison operator for either equal or not equal.
- char_string: A string of characters enclosed in single or double quotation marks that is compared to the comparison field. If the comparison is true, then the current record is inserted into the table.
- X'hex_string': A string of hexadecimal digits, where each pair of digits corresponds to one byte in the field. It is enclosed in single or double quotation marks. If the comparison is true, then the current record is inserted into the table.
- BLANKS: Enables you to test a field to see if it consists entirely of blanks. BLANKS is required when you are loading delimited data and you cannot predict the length of the field, or when you use a multibyte character set that has multiple blanks.
Comparing Fields to BLANKS
The BLANKS parameter makes it possible to determine if a field of unknown length is blank.
For example, use the following clause to load a blank field as null:
full_fieldname ... NULLIF column_name=BLANKS
The BLANKS parameter recognizes only blanks, not tabs. It can be used in place of a literal string in any field comparison. The condition is true whenever the column is entirely blank.
The BLANKS parameter also works for fixed-length fields. Using it is the same as specifying an appropriately sized literal string of blanks. For example, the following specifications are equivalent:
fixed_field CHAR(2) NULLIF fixed_field=BLANKS
fixed_field CHAR(2) NULLIF fixed_field="  "
There can be more than one blank in a multibyte character set. It is a good idea to use the BLANKS parameter with these character sets instead of specifying a string of blank characters. The character string will match only a specific sequence of blank characters, while the BLANKS parameter will match combinations of different blank characters. For more information about multibyte character sets, see "Multibyte (Asian) Character Sets".
Comparing Fields to Literals
When a data field is compared to a literal string that is shorter than the data field, the string is padded. Character strings are padded with blanks, for example:
NULLIF (1:4)=" "
This example compares the data in position 1:4 with 4 blanks. If position 1:4 contains 4 blanks, then the clause evaluates as true.
Hexadecimal strings are padded with hexadecimal zeros, as in the following clause:
NULLIF (1:4)=X'FF'
This clause compares position 1:4 to hexadecimal 'FF000000'.
Using the WHEN, NULLIF, and DEFAULTIF Clauses
The following information applies to scalar fields. For nonscalar fields (column objects, LOBs, and collections), the WHEN, NULLIF, and DEFAULTIF clauses are processed differently because nonscalar fields are more complex.
The results of a WHEN, NULLIF, or DEFAULTIF clause can be different depending on whether the clause specifies a field name or a position.
- If the WHEN, NULLIF, or DEFAULTIF clause specifies a field name, then SQL*Loader compares the clause to the evaluated value of the field. The evaluated value takes trimmed whitespace into consideration. See "Trimming Whitespace" for information about trimming blanks and tabs.
- If the WHEN, NULLIF, or DEFAULTIF clause specifies a position, then SQL*Loader compares the clause to the original logical record in the data file. No whitespace trimming is done on the logical record in that case.
Different results are more likely if the field has whitespace that is trimmed, or if the WHEN, NULLIF, or DEFAULTIF clause contains blanks or tabs or uses the BLANKS parameter. If you require the same results for a field specified by name and for the same field specified by position, then use the PRESERVE BLANKS option. The PRESERVE BLANKS option instructs SQL*Loader not to trim whitespace when it evaluates the values of the fields.
The results of a WHEN, NULLIF, or DEFAULTIF clause are also affected by the order in which SQL*Loader operates, as described in the following steps. SQL*Loader performs these steps in order, but it does not always perform all of them. Once a field is set, any remaining steps in the process are ignored. For example, if the field is set in Step 5, then SQL*Loader does not move on to Step 6.
1. SQL*Loader evaluates the value of each field for the input record and trims any whitespace that should be trimmed (according to existing guidelines for trimming blanks and tabs).
2. For each record, SQL*Loader evaluates any WHEN clauses for the table.
3. If the record satisfies the WHEN clauses for the table, or no WHEN clauses are specified, then SQL*Loader checks each field for a NULLIF clause.
4. If a NULLIF clause exists, then SQL*Loader evaluates it.
5. If the NULLIF clause is satisfied, then SQL*Loader sets the field to NULL.
6. If the NULLIF clause is not satisfied, or if there is no NULLIF clause, then SQL*Loader checks the length of the field from field evaluation. If the field has a length of 0 from field evaluation (for example, it was a null field, or whitespace trimming resulted in a null field), then SQL*Loader sets the field to NULL. In this case, any DEFAULTIF clause specified for the field is not evaluated.
7. If any specified NULLIF clause is false or there is no NULLIF clause, and if the field does not have a length of 0 from field evaluation, then SQL*Loader checks the field for a DEFAULTIF clause.
8. If a DEFAULTIF clause exists, then SQL*Loader evaluates it.
9. If the DEFAULTIF clause is satisfied, then the field is set to 0 if the field in the data file is a numeric field. It is set to NULL if the field is not a numeric field. The following fields are numeric fields and will be set to 0 if they satisfy the DEFAULTIF clause:
   - BYTEINT
   - SMALLINT
   - INTEGER
   - FLOAT
   - DOUBLE
   - ZONED
   - (packed) DECIMAL
   - Numeric EXTERNAL (INTEGER, FLOAT, DECIMAL, and ZONED)
10. If the DEFAULTIF clause is not satisfied, or if there is no DEFAULTIF clause, then SQL*Loader sets the field with the evaluated value from Step 1.
The order in which SQL*Loader operates could cause results that you do not expect. For example, the DEFAULTIF clause may look like it is setting a numeric field to NULL rather than to 0.
Note:
As demonstrated in these steps, the presence of NULLIF and DEFAULTIF clauses results in extra processing that SQL*Loader must perform. This can affect performance. Note that during Step 1, SQL*Loader will set a field to NULL if its evaluated length is zero. To improve performance, consider whether it might be possible for you to change your data to take advantage of this. The detection of NULLs as part of Step 1 occurs much more quickly than the processing of a NULLIF or DEFAULTIF clause.
For example, a CHAR(5) will have zero length if it falls off the end of the logical record or if it contains all blanks and blank trimming is in effect. A delimited field will have zero length if there are no characters between the start of the field and the terminator.
Also, for character fields, NULLIF is usually faster to process than DEFAULTIF (the default for character fields is NULL).
Examples of Using the WHEN, NULLIF, and DEFAULTIF Clauses
Example 10-2 through Example 10-5 clarify the results for different situations in which the WHEN, NULLIF, and DEFAULTIF clauses might be used. In the examples, a blank or space is indicated with a period (.). Assume that col1 and col2 are VARCHAR2(5) columns in the database.
Example 10-2 DEFAULTIF Clause Is Not Evaluated
The control file specifies:
(col1 POSITION (1:5),
 col2 POSITION (6:8) INTEGER EXTERNAL DEFAULTIF col1 = 'aname')
The data file contains:
aname...
In Example 10-2, col1 for the row evaluates to aname. col2 evaluates to NULL with a length of 0 (it is ... but the trailing blanks are trimmed for a positional field).
When SQL*Loader determines the final loaded value for col2, it finds no WHEN clause and no NULLIF clause. It then checks the length of the field, which is 0 from field evaluation. Therefore, SQL*Loader sets the final value for col2 to NULL. The DEFAULTIF clause is not evaluated, and the row is loaded as aname for col1 and NULL for col2.
Example 10-3 DEFAULTIF Clause Is Evaluated
The control file specifies:
. . .
PRESERVE BLANKS
. . .
(col1 POSITION (1:5),
 col2 POSITION (6:8) INTEGER EXTERNAL DEFAULTIF col1 = 'aname')
The data file contains:
aname...
In Example 10-3, col1 for the row again evaluates to aname. col2 evaluates to '...' because trailing blanks are not trimmed when PRESERVE BLANKS is specified.
When SQL*Loader determines the final loaded value for col2, it finds no WHEN clause and no NULLIF clause. It then checks the length of the field from field evaluation, which is 3, not 0.
Then SQL*Loader evaluates the DEFAULTIF clause, which evaluates to true because col1 is aname, which is the same as aname.
Because col2 is a numeric field, SQL*Loader sets the final value for col2 to 0. The row is loaded as aname for col1 and as 0 for col2.
Example 10-4 DEFAULTIF Clause Specifies a Position
The control file specifies:
(col1 POSITION (1:5),
 col2 POSITION (6:8) INTEGER EXTERNAL DEFAULTIF (1:5) = BLANKS)
The data file contains:
.....123
In Example 10-4, col1 for the row evaluates to NULL with a length of 0 (it is ..... but the trailing blanks are trimmed). col2 evaluates to 123.
When SQL*Loader sets the final loaded value for col2, it finds no WHEN clause and no NULLIF clause. It then checks the length of the field from field evaluation, which is 3, not 0.
Then SQL*Loader evaluates the DEFAULTIF clause. It compares (1:5), which is ....., to BLANKS, which evaluates to true. Therefore, because col2 is a numeric field (INTEGER EXTERNAL is numeric), SQL*Loader sets the final value for col2 to 0. The row is loaded as NULL for col1 and 0 for col2.
Example 10-5 DEFAULTIF Clause Specifies a Field Name
The control file specifies:
(col1 POSITION (1:5),
 col2 POSITION(6:8) INTEGER EXTERNAL DEFAULTIF col1 = BLANKS)
The data file contains:
.....123
In Example 10-5, col1 for the row evaluates to NULL with a length of 0 (it is ..... but the trailing blanks are trimmed). col2 evaluates to 123.
When SQL*Loader determines the final value for col2, it finds no WHEN clause and no NULLIF clause. It then checks the length of the field from field evaluation, which is 3, not 0.
Then SQL*Loader evaluates the DEFAULTIF clause. As part of the evaluation, it checks to see that col1 is NULL from field evaluation. It is NULL, so the DEFAULTIF clause evaluates to false. Therefore, SQL*Loader sets the final value for col2 to 123, its original value from field evaluation. The row is loaded as NULL for col1 and 123 for col2.
Loading Data Across Different Platforms
When a data file created on one platform is to be loaded on a different platform, the data must be written in a form that the target system can read. For example, if the source system has a native, floating-point representation that uses 16 bytes, and the target system’s floating-point numbers are 12 bytes, then the target system cannot directly read data generated on the source system.
The best solution is to load data across an Oracle Net database link, taking advantage of the automatic conversion of datatypes. This is the recommended approach, whenever feasible, and means that SQL*Loader must be run on the source system.
Problems with interplatform loads typically occur with native datatypes. In some situations, it is possible to avoid problems by lengthening a field by padding it with zeros, or to read only part of the field to shorten it (for example, when an 8-byte integer is to be read on a system that uses 4-byte integers, or the reverse). Note, however, that incompatible datatype implementation may prevent this.
If you cannot use an Oracle Net database link and the data file must be accessed by SQL*Loader running on the target system, then it is advisable to use only the portable SQL*Loader datatypes (for example, CHAR, DATE, VARCHARC, and numeric EXTERNAL). Data files written using these datatypes may be longer than those written with native datatypes. They may take more time to load, but they transport more readily across platforms.
If you know in advance that the byte ordering schemes or native integer lengths differ between the platform on which the input data will be created and the platform on which SQL*Loader will be run, then investigate the possible use of the appropriate technique to indicate the byte order of the data or the length of the native integer. Possible techniques for indicating the byte order are to use the BYTEORDER parameter or to place a byte-order mark (BOM) in the file. Both methods are described in "Byte Ordering". It may then be possible to eliminate the incompatibilities and achieve a successful cross-platform data load. If the byte order is different from the SQL*Loader default, then you must indicate a byte order.
Byte Ordering
Note:
The information in this section is only applicable if you are planning to create input data on a system that has a different byte-ordering scheme than the system on which SQL*Loader will be run. Otherwise, you can skip this section.
SQL*Loader can load data from a data file that was created on a system whose byte ordering is different from the byte ordering on the system where SQL*Loader is running, even if the data file contains certain nonportable datatypes.
By default, SQL*Loader uses the byte order of the system where it is running as the byte order for all data files. For example, on a Sun Solaris system, SQL*Loader uses big-endian byte order. On an Intel or an Intel-compatible PC, SQL*Loader uses little-endian byte order.
Byte order affects the results when data is written and read an even number of bytes at a time (typically 2 bytes, 4 bytes, or 8 bytes). The following are some examples of this:
- The 2-byte integer value 1 is written as 0x0001 on a big-endian system and as 0x0100 on a little-endian system.
- The 4-byte integer 66051 is written as 0x00010203 on a big-endian system and as 0x03020100 on a little-endian system.
Byte order also affects character data in the UTF16 character set if it is written and read as 2-byte entities. For example, the character ‘a’ (0x61 in ASCII) is written as 0x0061 in UTF16 on a big-endian system, but as 0x6100 on a little-endian system.
All Oracle-supported character sets, except UTF16, are written one byte at a time. So, even for multibyte character sets such as UTF8, the characters are written and read the same way on all systems, regardless of the byte order of the system. Therefore, data in the UTF16 character set is nonportable because it is byte-order dependent. Data in all other Oracle-supported character sets is portable.
Byte order in a data file is only an issue if the data file that contains the byte-order-dependent data is created on a system that has a different byte order from the system on which SQL*Loader is running. If SQL*Loader knows the byte order of the data, then it swaps the bytes as necessary to ensure that the data is loaded correctly in the target database. Byte swapping means that data in big-endian format is converted to little-endian format, or the reverse.
To indicate byte order of the data to SQL*Loader, you can use the BYTEORDER parameter, or you can place a byte-order mark (BOM) in the file. If you do not use one of these techniques, then SQL*Loader will not correctly load the data.
See Also:
Case study 11, Loading Data in the Unicode Character Set, for an example of how SQL*Loader handles byte swapping. (See "SQL*Loader Case Studies" for information on how to access case studies.)
Specifying Byte Order
To specify the byte order of data in the input data files, use the following syntax in the SQL*Loader control file:
(Syntax diagram: byteorder.gif)
The BYTEORDER parameter has the following characteristics:
- BYTEORDER is placed after the LENGTH parameter in the SQL*Loader control file.
- It is possible to specify a different byte order for different data files. However, the BYTEORDER specification before the INFILE parameters applies to the entire list of primary data files.
- The BYTEORDER specification for the primary data files is also used as the default for LOBFILEs and SDFs. To override this default, specify BYTEORDER with the LOBFILE or SDF specification.
- The BYTEORDER parameter is not applicable to data contained within the control file itself.
- The BYTEORDER parameter applies to the following:
  - Binary INTEGER and SMALLINT data
  - Binary lengths in varying-length fields (that is, for the VARCHAR, VARGRAPHIC, VARRAW, and LONG VARRAW datatypes)
  - Character data for data files in the UTF16 character set
  - FLOAT and DOUBLE datatypes, if the system where the data was written has a compatible floating-point representation with that on the system where SQL*Loader is running
- The BYTEORDER parameter does not apply to any of the following:
  - Raw datatypes (RAW, VARRAW, or VARRAWC)
  - Graphic datatypes (GRAPHIC, VARGRAPHIC, or GRAPHIC EXTERNAL)
  - Character data for data files in any character set other than UTF16
  - ZONED or (packed) DECIMAL datatypes
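As a hedged sketch (the file, table, and field names are hypothetical), a control file that declares little-endian input for binary integer data places BYTEORDER before the INFILE clause so that it applies to all primary data files:

LOAD DATA
BYTEORDER LITTLE ENDIAN
INFILE 'emp.dat'
INTO TABLE emp
(empno POSITION(1:2)  INTEGER(2),
 ename POSITION(3:22) CHAR)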
Using Byte Order Marks (BOMs)
Data files that use a Unicode encoding (UTF-16 or UTF-8) may contain a byte-order mark (BOM) in the first few bytes of the file. For a data file that uses the character set UTF16, the values {0xFE,0xFF} in the first two bytes of the file are the BOM indicating that the file contains big-endian data. The values {0xFF,0xFE} are the BOM indicating that the file contains little-endian data.
If the first primary data file uses the UTF16 character set and it also begins with a BOM, then that mark is read and interpreted to determine the byte order for all primary data files. SQL*Loader reads and interprets the BOM, skips it, and begins processing data with the byte immediately after the BOM. The BOM setting overrides any BYTEORDER specification for the first primary data file. BOMs in data files other than the first primary data file are read and used for checking for byte-order conflicts only. They do not change the byte-order setting that SQL*Loader uses in processing the data file.
In summary, the precedence of the byte-order indicators for the first primary data file is as follows:
- BOM in the first primary data file, if the data file uses a Unicode character set that is byte-order dependent (UTF16) and a BOM is present
- BYTEORDER parameter value, if specified before the INFILE parameters
- The byte order of the system where SQL*Loader is running
For a data file that uses a UTF8 character set, a BOM of {0xEF,0xBB,0xBF} in the first 3 bytes indicates that the file contains UTF8 data. It does not indicate the byte order of the data, because data in UTF8 is not byte-order dependent. If SQL*Loader detects a UTF8 BOM, then it skips it but does not change any byte-order settings for processing the data files.
SQL*Loader first establishes a byte-order setting for the first primary data file using the precedence order just defined. This byte-order setting is used for all primary data files. If another primary data file uses the character set UTF16 and also contains a BOM, then the BOM value is compared to the byte-order setting established for the first primary data file. If the BOM value matches the byte-order setting of the first primary data file, then SQL*Loader skips the BOM, and uses that byte-order setting to begin processing data with the byte immediately after the BOM. If the BOM value does not match the byte-order setting established for the first primary data file, then SQL*Loader issues an error message and stops processing.
If any LOBFILEs or secondary data files are specified in the control file, then SQL*Loader establishes a byte-order setting for each LOBFILE and secondary data file (SDF) when it is ready to process the file. The default byte-order setting for LOBFILEs and SDFs is the byte-order setting established for the first primary data file. This is overridden if the BYTEORDER parameter is specified with a LOBFILE or SDF. In either case, if the LOBFILE or SDF uses the UTF16 character set and contains a BOM, the BOM value is compared to the byte-order setting for the file. If the BOM value matches the byte-order setting for the file, then SQL*Loader skips the BOM, and uses that byte-order setting to begin processing data with the byte immediately after the BOM. If the BOM value does not match, then SQL*Loader issues an error message and stops processing.
In summary, the precedence of the byte-order indicators for LOBFILEs and SDFs is as follows:
- BYTEORDER parameter value specified with the LOBFILE or SDF
- The byte-order setting established for the first primary data file
Note:
If the character set of your data file is a Unicode character set and there is a byte-order mark in the first few bytes of the file, then do not use the SKIP parameter. If you do, then the byte-order mark will not be read and interpreted as a byte-order mark.
Suppressing Checks for BOMs
A data file in a Unicode character set may contain binary data that matches the BOM in the first bytes of the file. For example, the integer(2) value 0xFEFF (65279 decimal) matches the big-endian BOM in UTF16. In that case, you can tell SQL*Loader to read the first bytes of the data file as data and not test for a BOM by specifying the BYTEORDERMARK parameter with the value NOCHECK. The syntax for the BYTEORDERMARK parameter is:
(Syntax diagram: byteordermark.gif)
BYTEORDERMARK NOCHECK indicates that SQL*Loader should not test for a BOM and should read all the data in the data file as data.
BYTEORDERMARK CHECK tells SQL*Loader to test for a BOM. This is the default behavior for a data file in a Unicode character set. But this specification may be used in the control file for clarification. It is an error to specify BYTEORDERMARK CHECK for a data file that uses a non-Unicode character set.
The BYTEORDERMARK parameter has the following characteristics:
- It is placed after the optional BYTEORDER parameter in the SQL*Loader control file.
- It applies to the syntax specification for primary data files, and also to LOBFILEs and secondary data files (SDFs).
- It is possible to specify a different BYTEORDERMARK value for different data files; however, the BYTEORDERMARK specification before the INFILE parameters applies to the entire list of primary data files.
- The BYTEORDERMARK specification for the primary data files is also used as the default for LOBFILEs and SDFs, except that the value CHECK is ignored in this case if the LOBFILE or SDF uses a non-Unicode character set. This default setting for LOBFILEs and secondary data files can be overridden by specifying BYTEORDERMARK with the LOBFILE or SDF specification.
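A hedged sketch (the data file, table, and field names are hypothetical) of suppressing the BOM check for a UTF16 data file whose first bytes are real data rather than a byte-order mark:

LOAD DATA
CHARACTERSET UTF16
BYTEORDER BIG ENDIAN
BYTEORDERMARK NOCHECK
INFILE 'codes.dat'
INTO TABLE codes
(code POSITION(1:4) CHAR)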
Loading All-Blank Fields
Fields that are totally blank cause the record to be rejected. To load one of these fields as NULL, use the NULLIF clause with the BLANKS parameter.
If an all-blank CHAR field is surrounded by enclosure delimiters, then the blanks within the enclosures are loaded. Otherwise, the field is loaded as NULL.
A DATE or numeric field that consists entirely of blanks is loaded as a NULL field.
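For instance, a field specification along these lines (the field name and positions are hypothetical) loads an all-blank field as NULL instead of causing the record to be rejected:

comments POSITION(20:39) CHAR NULLIF comments=BLANKS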
See Also:
- Case study 6, Loading Data Using the Direct Path Load Method, for an example of how to load all-blank fields as NULL with the NULLIF clause. (See "SQL*Loader Case Studies" for information on how to access case studies.)
- "Trimming Whitespace"
- "How the PRESERVE BLANKS Option Affects Whitespace Trimming"
How the PRESERVE BLANKS Option Affects Whitespace Trimming
To prevent whitespace trimming in all CHAR, DATE, and numeric EXTERNAL fields, you specify PRESERVE BLANKS as part of the LOAD statement in the control file. However, there may be times when you do not want to preserve blanks for all CHAR, DATE, and numeric EXTERNAL fields. Therefore, SQL*Loader also enables you to specify PRESERVE BLANKS as part of the datatype specification for individual fields, rather than specifying it globally as part of the LOAD statement.
In the following example, assume that PRESERVE BLANKS has not been specified as part of the LOAD statement, but you want the c1 field to default to zero when blanks are present. You can achieve this by specifying PRESERVE BLANKS on the individual field. Only that field is affected; blanks will still be removed on other fields.
c1 INTEGER EXTERNAL(10) PRESERVE BLANKS DEFAULTIF c1=BLANKS
In this example, if PRESERVE BLANKS were not specified for the field, then it would result in the field being improperly loaded as NULL (instead of as 0).
There may be times when you want to specify PRESERVE BLANKS as an option to the LOAD statement and have it apply to most CHAR, DATE, and numeric EXTERNAL fields. You can override it for an individual field by specifying NO PRESERVE BLANKS as part of the datatype specification for that field, as follows:
c1 INTEGER EXTERNAL(10) NO PRESERVE BLANKS
How [NO] PRESERVE BLANKS Works with Delimiter Clauses
The PRESERVE BLANKS option is affected by the presence of the delimiter clauses, as follows:
- Leading whitespace is left intact when optional enclosure delimiters are not present
- Trailing whitespace is left intact when fields are specified with a predetermined size
For example, consider the following field, where underscores represent blanks:
__aa__,
Suppose this field is loaded with the following delimiter clause:
TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
In such a case, if PRESERVE BLANKS is specified, then both the leading whitespace and the trailing whitespace are retained. If PRESERVE BLANKS is not specified, then the leading whitespace is trimmed.
Now suppose the field is loaded with the following clause:
TERMINATED BY WHITESPACE
In such a case, if PRESERVE BLANKS is specified, then it does not retain the space at the beginning of the next field, unless that field is specified with a POSITION clause that includes some of the whitespace. Otherwise, SQL*Loader scans past all whitespace at the end of the previous field until it finds a nonblank, nontab character.
See Also:
“Trimming Whitespace”
Applying SQL Operators to Fields
A wide variety of SQL operators can be applied to field data with the SQL string. This string can contain any combination of SQL expressions that are recognized by the Oracle database as valid for the VALUES clause of an INSERT statement. In general, any SQL function that returns a single value that is compatible with the target column's datatype can be used. SQL strings can be applied to simple scalar column types and also to user-defined complex types such as column objects and collections. See the information about expressions in the Oracle Database SQL Language Reference.
The column name and the name of the column in a SQL string bind variable must, with the interpretation of SQL identifier rules, correspond to the same column. But the two names do not necessarily have to be written exactly the same way, as in the following example of specifying the control file:
LOAD DATA
INFILE *
APPEND INTO TABLE XXX
(
  "Last" position(1:7) char "UPPER(:\"Last\")"
  first  position(8:15) char "UPPER(:first || :FIRST || :\"FIRST\")"
)
BEGINDATA
Phil Grant
Jason Taylor
Note the following about the preceding example:
- If, during table creation, a column identifier is declared using double quotation marks because it contains lowercase and/or special-case letters (as in the column named "Last" above), then the column name in the bind variable must exactly match the column name used in the CREATE TABLE statement.
- If a column identifier is declared without double quotation marks during table creation (as in the column name first above), then because first, FIRST, and "FIRST" all point to the same column, any of these written formats in a SQL string bind variable would be acceptable.
The following requirements and restrictions apply when you are using SQL strings:
- If your control file specifies character input that has an associated SQL string, then SQL*Loader makes no attempt to modify the data. This is because SQL*Loader assumes that character input data that is modified using a SQL operator will yield results that are correct for database insertion.
- The SQL string appears after any other specifications for a given column.
- The SQL string must be enclosed in double quotation marks.
- To enclose a column name in quotation marks within a SQL string, you must use escape characters. In the preceding example, Last is enclosed in double quotation marks to preserve the mixed case, and the double quotation marks necessitate the use of the backslash (escape) character.
- If a SQL string contains a column name that references a column object attribute, then the full object attribute name must be used in the bind variable. Each attribute name in the full name is an individual identifier. Each identifier is subject to the SQL identifier quoting rules, independent of the other identifiers in the full name. For example, suppose you have a column object named CHILD with an attribute name of "HEIGHT_%TILE". (Note that the attribute name is in double quotation marks.) To use the full object attribute name in a bind variable, any one of the following formats would work:
  - :CHILD."HEIGHT_%TILE"
  - :child."HEIGHT_%TILE"
  Enclosing the full name (:"CHILD.HEIGHT_%TILE") generates a warning message that the quoting rule on an object attribute name used in a bind variable has changed. The warning is only to suggest that the bind variable be written correctly; it will not cause the load to abort. The quoting rule was changed because enclosing the full name in quotation marks would have caused SQL to interpret the name as one identifier rather than a full column object attribute name consisting of multiple identifiers.
- The SQL string is evaluated after any NULLIF or DEFAULTIF clauses, but before a date mask.
- If the Oracle database does not recognize the string, then the load terminates in error. If the string is recognized, but causes a database error, then the row that caused the error is rejected.
- SQL strings are required when using the EXPRESSION parameter in a field specification.
- The SQL string cannot reference fields that are loaded using OID, SID, REF, or BFILE. Also, it cannot reference filler fields.
- In direct path mode, a SQL string cannot reference a VARRAY, nested table, or LOB column. This also includes a VARRAY, nested table, or LOB column that is an attribute of a column object.
- The SQL string cannot be used on RECNUM, SEQUENCE, CONSTANT, or SYSDATE fields.
- The SQL string cannot be used on LOBs, BFILEs, XML columns, or a file that is an element of a collection.
- In direct path mode, the final result that is returned after evaluation of the expression in the SQL string must be a scalar datatype. That is, the expression may not return an object or collection datatype when performing a direct path load.
Referencing Fields
To refer to fields in the record, precede the field name with a colon (:). Field values from the current record are substituted. A field name preceded by a colon (:) in a SQL string is also referred to as a bind variable. Note that bind variables enclosed in single quotation marks are treated as text literals, not as bind variables.
The following example illustrates how a reference is made to both the current field and to other fields in the control file. It also illustrates how enclosing bind variables in single quotation marks causes them to be treated as text literals. Be sure to read the notes following this example to help you fully understand the concepts it illustrates.
LOAD DATA
INFILE *
APPEND INTO TABLE YYY
(
  field1 POSITION(1:6) CHAR "LOWER(:field1)"
  field2 CHAR TERMINATED BY ','
    NULLIF ((1) = 'a') DEFAULTIF ((1)= 'b')
    "RTRIM(:field2)"
  field3 CHAR(7) "TRANSLATE(:field3, ':field1', ':1')",
  field4 COLUMN OBJECT
  (
    attr1 CHAR(3) "UPPER(:field4.attr3)",
    attr2 CHAR(2),
    attr3 CHAR(3) ":field4.attr1 + 1"
  ),
  field5 EXPRESSION "MYFUNC(:FIELD4, SYSDATE)"
)
BEGINDATA
ABCDEF1234511 ,:field1500YYabc
abcDEF67890 ,:field2600ZZghl
Notes About This Example:
- In the following line, :field1 is not enclosed in single quotation marks and is therefore interpreted as a bind variable:
  field1 POSITION(1:6) CHAR "LOWER(:field1)"
- In the following line, ':field1' and ':1' are enclosed in single quotation marks and are therefore treated as text literals and passed unchanged to the TRANSLATE function:
  field3 CHAR(7) "TRANSLATE(:field3, ':field1', ':1')"
  For more information about the use of quotation marks inside quoted strings, see "Specifying File Names and Object Names".
- For each input record read, the value of the field referenced by the bind variable will be substituted for the bind variable. For example, the value ABCDEF in the first record is mapped to the first field, field1. This value is then passed as an argument to the LOWER function.
- A bind variable in a SQL string need not reference the current field. In the preceding example, the bind variable in the SQL string for field FIELD4.ATTR1 references field FIELD4.ATTR3. The field FIELD4.ATTR1 is still mapped to the values 500 and 600 in the input records, but the final values stored in its corresponding columns are ABC and GHL.
- field5 is not mapped to any field in the input record. The value that is stored in the target column is the result of executing the MYFUNC PL/SQL function, which takes two arguments. The use of the EXPRESSION parameter requires that a SQL string be used to compute the final value of the column because no input data is mapped to the field.
Common Uses of SQL Operators in Field Specifications
SQL operators are commonly used for the following tasks:
- Loading external data with an implied decimal point:
  field1 POSITION(1:9) DECIMAL EXTERNAL(8) ":field1/1000"
- Truncating fields that could be too long:
  field1 CHAR TERMINATED BY "," "SUBSTR(:field1, 1, 10)"
Combinations of SQL Operators
Multiple operators can also be combined, as in the following examples:
field1 POSITION(*+3) INTEGER EXTERNAL "TRUNC(RPAD(:field1,6,'0'), -2)"
field1 POSITION(1:8) INTEGER EXTERNAL "TRANSLATE(RTRIM(:field1),'N/A', '0')"
field1 CHAR(10) "NVL( LTRIM(RTRIM(:field1)), 'unknown' )"
Using SQL Strings with a Date Mask
When a SQL string is used with a date mask, the date mask is evaluated after the SQL string. Consider a field specified as follows:
field1 DATE "dd-mon-yy" "RTRIM(:field1)"
SQL*Loader internally generates and inserts the following:
TO_DATE(RTRIM(<field1_value>), 'dd-mon-yy')
Note that when using the DATE field datatype, it is not possible to have a SQL string without a date mask. This is because SQL*Loader assumes that the first quoted string it finds after the DATE parameter is a date mask. For instance, the following field specification would result in an error (ORA-01821: date format not recognized):
field1 DATE "RTRIM(TO_DATE(:field1, 'dd-mon-yyyy'))"
In this case, a simple workaround is to use the CHAR datatype.
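A hedged sketch of that workaround, assuming the target column is of type DATE: declare the field as CHAR and perform the conversion inside the SQL string.

field1 CHAR "TO_DATE(RTRIM(:field1), 'dd-mon-yyyy')"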
Interpreting Formatted Fields
It is possible to use the TO_CHAR operator to store formatted dates and numbers. For example:
field1 ... "TO_CHAR(:field1, '$09999.99')"
This example could store numeric input data in formatted form, where field1 is a character column in the database. This field would be stored with the formatting characters (dollar sign, period, and so on) already in place.
You have even more flexibility, however, if you store such values as numeric quantities or dates. You can then apply arithmetic functions to the values in the database, and still select formatted values for your reports.
An example of using the SQL string to load data from a formatted report is shown in case study 7, Extracting Data from a Formatted Report. (See “SQL*Loader Case Studies” for information on how to access case studies.)
Using SQL Strings to Load the ANYDATA Database Type
The ANYDATA database type can contain data of different types. To load the ANYDATA type using SQL*Loader, it must be explicitly constructed by using a function call. The function is invoked using support for SQL strings as has been described in this section.
For example, suppose you have a table with a column named miscellaneous which is of type ANYDATA. You could load the column by doing the following, which would create an ANYDATA type containing a number.
LOAD DATA
INFILE *
APPEND INTO TABLE ORDERS
(
  miscellaneous CHAR "SYS.ANYDATA.CONVERTNUMBER(:miscellaneous)"
)
BEGINDATA
4
There can also be more complex situations in which you create an ANYDATA type that contains a different type depending upon the values in the record. To do this, you could write your own PL/SQL function that would determine what type should be in the ANYDATA type, based on the value in the record, and then call the appropriate ANYDATA.Convert*() function to create it.
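As an illustration only (the function name make_anydata is hypothetical and not part of any Oracle-supplied package), such a function might convert numeric-looking values to numbers and fall back to character data otherwise:

CREATE OR REPLACE FUNCTION make_anydata (p_val IN VARCHAR2)
  RETURN SYS.ANYDATA
IS
BEGIN
  -- Try to interpret the field value as a number first.
  RETURN SYS.ANYDATA.ConvertNumber(TO_NUMBER(p_val));
EXCEPTION
  WHEN VALUE_ERROR OR INVALID_NUMBER THEN
    -- Not numeric; store the raw text instead.
    RETURN SYS.ANYDATA.ConvertVarchar2(p_val);
END make_anydata;
/

The control file would then reference it in the SQL string, for example miscellaneous CHAR "make_anydata(:miscellaneous)".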
See Also:
- Oracle Database SQL Language Reference for more information about the ANYDATA database type
- Oracle Database PL/SQL Packages and Types Reference for more information about using ANYDATA with PL/SQL
Using SQL*Loader to Generate Data for Input
The parameters described in this section provide the means for SQL*Loader to generate the data stored in the database record, rather than reading it from a data file. The following parameters are described:
- CONSTANT Parameter
- EXPRESSION Parameter
- RECNUM Parameter
- SYSDATE Parameter
- SEQUENCE Parameter
Loading Data Without Files
It is possible to use SQL*Loader to generate data by specifying only sequences, record numbers, system dates, constants, and SQL string expressions as field specifications.
SQL*Loader inserts as many records as are specified by the LOAD statement. The SKIP parameter is not permitted in this situation.
SQL*Loader is optimized for this case. Whenever SQL*Loader detects that only generated specifications are used, it ignores any specified data file—no read I/O is performed.
In addition, no memory is required for a bind array. If there are any WHEN clauses in the control file, then SQL*Loader assumes that data evaluation is necessary, and input records are read.
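As a hedged sketch (the table, column names, and row count are hypothetical), a load that uses only generated field specifications might look like the following. The OPTIONS (LOAD=n) clause supplies the number of records to insert, and the named data file is ignored because no field reads from it:

OPTIONS (LOAD=1000)
LOAD DATA
INFILE 'dummy.dat'
APPEND INTO TABLE audit_stub
(batch_id   CONSTANT 'BATCH_42',
 row_seq    SEQUENCE(COUNT,1),
 created_on SYSDATE)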
Setting a Column to a Constant Value
This is the simplest form of generated data. It does not vary during the load or between loads.
CONSTANT Parameter
To set a column to a constant value, use CONSTANT followed by a value:
CONSTANT value
CONSTANT data is interpreted by SQL*Loader as character input. It is converted, as necessary, to the database column type.
You may enclose the value within quotation marks, and you must do so if it contains whitespace or reserved words. Be sure to specify a legal value for the target column. If the value is bad, then every record is rejected.
Numeric values larger than 2^32 – 1 (4,294,967,295) must be enclosed in quotation marks.
Note:
Do not use the CONSTANT parameter to set a column to null. To set a column to null, do not specify that column at all. Oracle automatically sets that column to null when loading the record. The combination of CONSTANT and a value is a complete column specification.
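For example, a hypothetical field specification that stamps every loaded row with the same source-system label:

source_system CONSTANT 'legacy_hr'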
Setting a Column to an Expression Value
Use the EXPRESSION parameter after a column name to set that column to the value returned by a SQL operator or specially written PL/SQL function. The operator or function is indicated in a SQL string that follows the EXPRESSION parameter. Any arbitrary expression may be used in this context provided that any parameters required for the operator or function are correctly specified and that the result returned by the operator or function is compatible with the datatype of the column being loaded.
EXPRESSION Parameter
The combination of column name, EXPRESSION parameter, and a SQL string is a complete field specification:
column_name EXPRESSION "SQL string"
In both conventional path mode and direct path mode, the EXPRESSION parameter can be used to load the default value into column_name:
column_name EXPRESSION "DEFAULT"
Note that if DEFAULT is used and the mode is direct path, then use of a sequence as a default will not work.
Setting a Column to the Data File Record Number
Use the RECNUM parameter after a column name to set that column to the number of the logical record from which that record was loaded. Records are counted sequentially from the beginning of the first data file, starting with record 1. RECNUM is incremented as each logical record is assembled. Thus it increments for records that are discarded, skipped, rejected, or loaded. If you use the option SKIP=10, then the first record loaded has a RECNUM of 11.
RECNUM Parameter
The combination of column name and RECNUM is a complete column specification.
column_name RECNUM
Setting a Column to the Current Date
A column specified with SYSDATE gets the current system date, as defined by the SQL language SYSDATE parameter. See the section on the DATE datatype in Oracle Database SQL Language Reference.
SYSDATE Parameter
The combination of column name and the SYSDATE parameter is a complete column specification.
column_name SYSDATE
The database column must be of type CHAR or DATE. If the column is of type CHAR, then the date is loaded in the form 'dd-mon-yy.' After the load, it can be loaded only in that form. If the system date is loaded into a DATE column, then it can be loaded in a variety of forms that include the time and the date.
A new system date/time is used for each array of records inserted in a conventional path load and for each block of records loaded during a direct path load.
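A small sketch (the column names are hypothetical) showing how these generated specifications sit alongside ordinary fields:

(ename     POSITION(1:20) CHAR,
 load_seq  RECNUM,
 loaded_at SYSDATE)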
Setting a Column to a Unique Sequence Number
The SEQUENCE parameter ensures a unique value for a particular column. SEQUENCE increments for each record that is loaded or rejected. It does not increment for records that are discarded or skipped.
SEQUENCE Parameter
The combination of column name and the SEQUENCE parameter is a complete column specification.
(Syntax diagram: sequence.gif)
Table 10-6 describes the parameters used for column specification.
Table 10-6 Parameters Used for Column Specification

- column_name: The name of the column in the database to which to assign the sequence.
- SEQUENCE: Use the SEQUENCE parameter to specify the value for a column.
- COUNT: The sequence starts with the number of records already in the table plus the increment.
- MAX: The sequence starts with the current maximum value for the column plus the increment.
- integer: Specifies the specific sequence number to begin with.
- incr: The value that the sequence number is to increment after a record is loaded or rejected. This is optional. The default is 1.
If a record is rejected (that is, it has a format error or causes an Oracle error), then the generated sequence numbers are not reshuffled to mask this. If four rows are assigned sequence numbers 10, 12, 14, and 16 in a particular column, and the row with 12 is rejected, then the three rows inserted are numbered 10, 14, and 16, not 10, 12, and 14. This allows the sequence of inserts to be preserved despite data errors. When you correct the rejected data and reinsert it, you can manually set the columns to agree with the sequence.
Case study 3, Loading a Delimited Free-Format File, provides an example of using the SEQUENCE parameter. (See "SQL*Loader Case Studies" for information on how to access case studies.)
Generating Sequence Numbers for Multiple Tables
Because a unique sequence number is generated for each logical input record, rather than for each table insert, the same sequence number can be used when inserting data into multiple tables. This is frequently useful.
Sometimes, however, you might want to generate different sequence numbers for each INTO TABLE clause. For example, your data format might define three logical records in every input record. In that case, you can use three INTO TABLE clauses, each of which inserts a different part of the record into the same table. When you use SEQUENCE(MAX), SQL*Loader will use the maximum from each table, which can lead to inconsistencies in sequence numbers.
To generate sequence numbers for these records, you must generate unique numbers for each of the three inserts. Use the number of table-inserts per record as the sequence increment, and start the sequence numbers for each insert with successive numbers.
Example: Generating Different Sequence Numbers for Each Insert
Suppose you want to load the following department names into the dept table. Each input record contains three department names, and you want to generate the department numbers automatically.
Accounting     Personnel      Manufacturing
Shipping       Purchasing     Maintenance
...
You could use the following control file entries to generate unique department numbers:
INTO TABLE dept
 (deptno SEQUENCE(1, 3),
  dname POSITION(1:14) CHAR)
INTO TABLE dept
 (deptno SEQUENCE(2, 3),
  dname POSITION(16:29) CHAR)
INTO TABLE dept
 (deptno SEQUENCE(3, 3),
  dname POSITION(31:44) CHAR)
The first INTO TABLE clause generates department number 1, the second number 2, and the third number 3. They all use 3 as the sequence increment (the number of department names in each record). This control file loads Accounting as department number 1, Personnel as 2, and Manufacturing as 3.
The sequence numbers are then incremented for the next record, so Shipping loads as 4, Purchasing as 5, and so on.