
TDF Diagnostic Specification, Issue 3.0

January 1998



1 - Introduction
2 - Diagnostic SORTs
3 - Reserved diagnostic TOKENs
4 - Proposed changes

1. Introduction

The TDF diagnostic information is intended to convey all the information used by current source-level debuggers that would conventionally be part of an object file. Any particular installer will only use those parts of this information which its native object format can represent.

The version of the diagnostics described here is the first version. It has only been tested with TDF produced from C programs. There are known to be certain deficiencies relative to other languages (in particular to FORTRAN). A later version will correct these deficiencies. The changes already envisaged are detailed in section 4, and would have minimal (if any) impact on C producers.

The diagnostic system introduces one new type of TDF linkable entity, and currently adds two new units to the bitstream representation of TDF.

Much of the actual annotation of procedure bodies is currently done by reserved TOKENs, which installers recognize specially. These TOKENs are described in section 3.

There is a resemblance between the TDF diagnostic information and Unix International's DWARF format. DWARF has similar aims to the TDF diagnostics, and ensuring that complete DWARF information could be generated provided a useful check during the development of the TDF diagnostics. However the TDF diagnostics are intended to be architecture (and format) neutral. No inference should be made about any link (present or future) between DWARF and TDF diagnostics.




TDF Diagnostic Specification, Issue 3.0

January 1998



2.1 - DIAG_DESCRIPTOR
2.1.1 - diag_desc_id
2.1.2 - diag_desc_struct
2.1.3 - diag_desc_typedef
2.2 - DIAG_UNIT
2.2.1 - build_diag_unit
2.3 - DIAG_TAG
2.3.1 - make_diag_tag
2.4 - DIAG_TAGDEF
2.4.1 - make_diag_tagdef
2.5 - DIAG_TYPE_UNIT
2.5.1 - build_diagtype_unit
2.6 - DIAG_TYPE
2.6.1 - diag_type_apply_token
2.6.2 - diag_array
2.6.3 - diag_bitfield
2.6.4 - diag_enum
2.6.5 - diag_floating_variety
2.6.6 - diag_loc
2.6.7 - diag_proc
2.6.8 - diag_ptr
2.6.9 - diag_struct
2.6.10 - diag_type_null
2.6.11 - diag_union
2.6.12 - diag_variety
2.6.13 - use_diag_tag
2.7 - ENUM_VALUES
2.7.1 - make_enum_values_list
2.8 - DIAG_FIELD
2.8.1 - make_diag_field
2.9 - DIAG_TQ
2.9.1 - add_diag_const
2.9.2 - add_diag_volatile
2.9.3 - diag_tq_null
2.10 - FILENAME
2.10.1 - filename_apply_token
2.10.2 - make_filename
2.11 - SOURCEMARK
2.11.1 - make_sourcemark

2. Diagnostic SORTs

The diagnostic SORTs and their constructors defined in this section are listed above as a summary.


2.1. DIAG_DESCRIPTOR

Number of encoding bits: 2
Is coding extendable: yes

DIAG_DESCRIPTORs are used to associate names in the source program with diagnostic items.

2.1.1. diag_desc_id

Encoding number: 1

	src_name:	TDFSTRING(k, n)
	whence:		SOURCEMARK
	found_at:	EXP POINTER(al)
	type:		DIAG_TYPE
		   -> DIAG_DESCRIPTOR
Generates a descriptor for an identifier (of DIAG_TYPE type), whose source name was src_name from source location whence. The EXP found_at describes how to access the value. Note that the EXP need not be unique (e.g. FORTRAN EQUIVALENCE might be implemented this way).
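As an informal sketch only (the TAG count_tag, the DIAG_TYPE DT_int and the SOURCEMARK sm are assumed names, not part of the specification, and the TDFSTRING is written as a quoted literal), a C global variable int count; introduced with a variable TAG might be described by:

	diag_desc_id("count", sm, obtain_tag(count_tag), DT_int)

where DT_int would be a DIAG_TYPE made with diag_variety from the VARIETY used for int, and obtain_tag(count_tag) is the POINTER through which a debugger can reach the value.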

2.1.2. diag_desc_struct

Encoding number: 2

	src_name:	TDFSTRING(k, n)
	whence:		SOURCEMARK
	new_type:	DIAG_TYPE
		   -> DIAG_DESCRIPTOR
Generates a descriptor whose source name was src_name. new_type must be either a DIAG_STRUCT, DIAG_UNION or DIAG_ENUM.

This construct is obsolete.

2.1.3. diag_desc_typedef

Encoding number: 3

	src_name:	TDFSTRING(k, n)
	whence:		SOURCEMARK
	new_type:	DIAG_TYPE
		   -> DIAG_DESCRIPTOR
Generates a descriptor for a type new_type whose source name was src_name. Note that diag_desc_typedef is used for associating a name with a type, rather than for any name given in the initial description of the type (e.g. in C this is used for typedef, not for struct/union/enum tags).


2.2. DIAG_UNIT

Number of encoding bits: 0
Is coding extendable: no
Unit identification: diagdef

A DIAG_UNIT is a TDF unit containing DIAG_DESCRIPTORs. A DIAG_UNIT is used to contain descriptions of items outside procedure bodies (e.g. global variables, global type definitions).

2.2.1. build_diag_unit

Encoding number: 0

	no_labels:	TDFINT
	descriptors:	SLIST(DIAG_DESCRIPTOR)
		   -> DIAG_UNIT
Create a DIAG_UNIT containing DIAG_DESCRIPTORs. no_labels is the number of local labels used in descriptors (for conditionals).


2.3. DIAG_TAG

Number of encoding bits: 1
Is coding extendable: yes
Linkable entity identification: diagtag

DIAG_TAGs are used inter alia to break cyclic diagnostic types. They are (TDF) linkable entities. A DIAG_TAG is made from a number, and used in use_diag_tag to obtain the DIAG_TYPE associated with that number by make_diag_tagdef.

2.3.1. make_diag_tag

Encoding number: 1

	num:		TDFINT
		   -> DIAG_TAG
Create a DIAG_TAG from num.


2.4. DIAG_TAGDEF

Number of encoding bits: 1
Is coding extendable: yes

DIAG_TAGDEFs associate DIAG_TAGs with DIAG_TYPEs.

2.4.1. make_diag_tagdef

Encoding number: 1

	tno:		TDFINT
	dtype:		DIAG_TYPE
		   -> DIAG_TAGDEF
Associates tag number tno with dtype.


2.5. DIAG_TYPE_UNIT

Number of encoding bits: 0
Is coding extendable: no
Unit identification: diagtype

A DIAG_TYPE_UNIT is a TDF unit containing DIAG_TAGDEFs.

2.5.1. build_diagtype_unit

Encoding number: 0

	no_labels:	TDFINT
	tagdefs:	SLIST(DIAG_TAGDEF)
		   -> DIAG_TYPE_UNIT
Create a DIAG_TYPE_UNIT containing DIAG_TAGDEFs. no_labels is the number of local labels used in tagdefs (for conditionals).


2.6. DIAG_TYPE

Sortname: foreign_sort("diag_type")
Number of encoding bits: 4
Is coding extendable: yes

DIAG_TYPEs are used to provide diagnostic information about data types.

2.6.1. diag_type_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM
		   -> DIAG_TYPE
The token is applied to the arguments to give a DIAG_TYPE. If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

2.6.2. diag_array

Encoding number: 2

	element_type:	DIAG_TYPE
	stride:		EXP OFFSET(p,p)
	lower_bound:	EXP INTEGER(v)
	upper_bound:	EXP INTEGER(v)
	index_type:	DIAG_TYPE
		   -> DIAG_TYPE
An array of element_type objects. stride is the OFFSET between elements of the array (i.e. p is described by element_type). The bounds are in general not runtime constants, hence the values are EXPs (not, say, SIGNED_NATs). The VARIETY v is described by index_type. As in core TDF, there is no multi-dimensional array primitive.
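For illustration only (DT_int is an assumed DIAG_TYPE for int and v its VARIETY; none of these names come from the specification), a C array declared as int a[10] might be described by:

	diag_array(DT_int,
	           offset_pad(alignment(sh_int), shape_offset(sh_int)),
	           make_int(v, 0),
	           make_int(v, 9),
	           DT_int)

using the padded size of an int as the stride between elements, and bounds 0 and 9 of the index type.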

2.6.3. diag_bitfield

Encoding number: 3

	type:		DIAG_TYPE
	number_of_bits:	NAT
		   -> DIAG_TYPE
Describes a bitfield occupying number_of_bits bits which, when extracted, will have DIAG_TYPE type.

2.6.4. diag_enum

Encoding number: 4

	base_type:	DIAG_TYPE
	enum_name:	TDFSTRING(k, n)
	values:		LIST(ENUM_VALUES) 
		   -> DIAG_TYPE
An enumeration to be stored in an object of type base_type. If enum_name is a string containing zero characters this signifies no source tag.

2.6.5. diag_floating_variety

Encoding number: 5

	var:		FLOATING_VARIETY
		   -> DIAG_TYPE
Creates a DIAG_TYPE to describe a FLOATING_VARIETY var.

2.6.6. diag_loc

Encoding number: 6

	object:		DIAG_TYPE
	qualifier:	DIAG_TQ
		   -> DIAG_TYPE
Records the existence of an item of DIAG_TYPE object, qualified by qualifier. diag_loc is used for variables (which may of course not actually occupy a memory location).

2.6.7. diag_proc

Encoding number: 7

	params:		LIST(DIAG_TYPE)
	optional_args:	BOOL
	result_type:	DIAG_TYPE
		   -> DIAG_TYPE
Describes a procedure whose parameters have the DIAG_TYPEs given in params. optional_args is true if and only if the make_proc which this diag_proc describes had vartag present.

2.6.8. diag_ptr

Encoding number: 8

	object:		DIAG_TYPE
	qualifier:	DIAG_TQ
		   -> DIAG_TYPE
Describes a pointer to an object of DIAG_TYPE object. The DIAG_TQ qualifier qualifies the pointer, not the object pointed to.

2.6.9. diag_struct

Encoding number: 9

	tdf_shape:	SHAPE
	src_name:	TDFSTRING(k, n)
	fields:		LIST(DIAG_FIELD) 
		   -> DIAG_TYPE
Describes a structure. If src_name is a string containing zero characters this signifies no source tag for the whole structure. tdf_shape allows the total size to be computed.

2.6.10. diag_type_null

Encoding number: 10

		   -> DIAG_TYPE
A null DIAG_TYPE.

2.6.11. diag_union

Encoding number: 11

	tdf_shape:	SHAPE
	src_name:	TDFSTRING(k, n)
	fields:		LIST(DIAG_FIELD)
		   -> DIAG_TYPE
Describes a union. If src_name is a string containing zero characters this signifies no source tag for the whole union. tdf_shape allows the total size to be computed.

2.6.12. diag_variety

Encoding number: 12

	var:		VARIETY
		   -> DIAG_TYPE
Creates a DIAG_TYPE to describe an integer VARIETY var.

2.6.13. use_diag_tag

Encoding number: 13

	dtag:		DIAG_TAG
		   -> DIAG_TYPE
Obtains the DIAG_TYPE associated with DIAG_TAG dtag.


2.7. ENUM_VALUES

Number of encoding bits: 0
Is coding extendable: no

2.7.1. make_enum_values_list

Encoding number: 0

	value:		EXP sh
	src_name:	TDFSTRING(k, n)
		   -> ENUM_VALUES
ENUM_VALUES describe elements of an enumerated type. src_name is the source language name. value evaluates to a value of SHAPE sh. Note that all members of a LIST(ENUM_VALUES) must have the same sh.
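As a hedged illustration (v is an assumed VARIETY and the string literals are written informally), the C enumeration enum colour {RED, GREEN} stored in an int might give rise to two ENUM_VALUES:

	make_enum_values_list(make_int(v, 0), "RED")
	make_enum_values_list(make_int(v, 1), "GREEN")

which would then be collected into the LIST(ENUM_VALUES) supplied as the values argument of diag_enum.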


2.8. DIAG_FIELD

Number of encoding bits: 0
Is coding extendable: no

2.8.1. make_diag_field

Encoding number: 0

	field_name:	TDFSTRING(k, n)
	found_at:	EXP OFFSET(ALIGNMENT whole, ALIGNMENT this_field)
	field_type:	DIAG_TYPE
		   -> DIAG_FIELD
DIAG_FIELDs describe one field of a structure or union. field_name is the source language name. found_at is the OFFSET between whole (the enclosing structure or union), and this field (this_field). field_type is the DIAG_TYPE of the field.
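As a sketch (O_d and DT_double are assumed names, not defined by the specification, and the TDFSTRING is written as a quoted literal), the field d of the C structure struct {int i; double d;} might be described by:

	make_diag_field("d", O_d, DT_double)

where O_d is the EXP OFFSET from the start of the structure to d, built with the usual OFFSET arithmetic (offset_zero, offset_add, offset_pad and shape_offset).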


2.9. DIAG_TQ

Number of encoding bits: 2
Is coding extendable: yes

DIAG_TQs are type qualifiers, used to qualify DIAG_TYPEs. A DIAG_TQ is constructed from diag_tq_null and the various add_diag_XXX operations.

2.9.1. add_diag_const

Encoding number: 1

	qual:		DIAG_TQ
		   -> DIAG_TQ
Marks a DIAG_TQ type qualifier as being const in the ANSI C sense.

2.9.2. add_diag_volatile

Encoding number: 2

	qual:		DIAG_TQ
		   -> DIAG_TQ
Marks a DIAG_TQ type qualifier as being volatile in the ANSI C sense.

2.9.3. diag_tq_null

Encoding number: 3

		   -> DIAG_TQ
Create a null DIAG_TQ type qualifier.


2.10. FILENAME

Sortname: foreign_sort("~diag_file")
Number of encoding bits: 2
Is coding extendable: yes

FILENAMEs record details of source files used in producing a CAPSULE. They can be tokenised for abbreviation.

2.10.1. filename_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM
		   -> FILENAME
The token is applied to the arguments to give a FILENAME. If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

2.10.2. make_filename

Encoding number: 2

	date:		NAT
	machine:	TDFSTRING(k1, n1)
	file:		TDFSTRING(k2, n2)
		   -> FILENAME
Create a FILENAME for file file, dated date (a UNIX timestamp; seconds since 1 Jan 1970) on machine machine.


2.11. SOURCEMARK

Number of encoding bits: 1
Is coding extendable: yes

A SOURCEMARK records a location in the source program. Present SOURCEMARKs assume that a location can be described by one or two numbers within a FILENAME.

2.11.1. make_sourcemark

Encoding number: 1

	file:		FILENAME
	line_no:	NAT
	char_offset:	NAT
		   -> SOURCEMARK
Create a SOURCEMARK referencing the char_offset'th character on line line_no in file file.

char_offset is counted from 1, 0 meaning that no character offset is available.




TDF Diagnostic Specification, Issue 3.0

January 1998



3.0.1 - ~exp_to_source
3.0.2 - ~diag_id_source
3.0.3 - ~diag_type_scope
3.0.4 - ~diag_tag_scope

3. Reserved diagnostic TOKENs

Reserved TOKENs were used for diagnostic extensions to EXPs, to avoid adding new constructs to the contents of an existing UNIT. All other parts of the diagnostic system occur in other UNITs.

3.0.1. ~exp_to_source

	body:		EXP sh
	from:		SOURCEMARK
	to:		SOURCEMARK
		   -> EXP sh
Records that the EXP body arose from translating the program text between SOURCEMARK from and SOURCEMARK to (inclusive).

3.0.2. ~diag_id_source

	body:		EXP sh
	name:		TDFSTRING(k, n)
	access:		EXP POINTER(al)
	type:		DIAG_TYPE
		   -> EXP sh
Within the EXP body a variable named name of DIAG_TYPE type can be accessed via the EXP access.

3.0.3. ~diag_type_scope

	body:		EXP sh
	name:		TDFSTRING(k, n)
	type:		DIAG_TYPE
		   -> EXP sh
Within the EXP body a source language type named name of DIAG_TYPE type is valid.

3.0.4. ~diag_tag_scope

	body:		EXP sh
	name:		TDFSTRING(k, n)
	type:		DIAG_TYPE
		   -> EXP sh
This TOKEN is obsolete.




TDF Diagnostic Specification, Issue 3.0

January 1998



4.1 - Language features currently missing
4.1.1 - Data types
4.1.2 - C++ requirements
4.1.3 - FORTRAN requirements
4.1.4 - Other requirements
4.2 - Areas for further abstraction
4.2.1 - Compilation related
4.2.2 - C related
4.2.3 - Naming of types
4.3 - Postscript: ANDF-DE

4. Proposed changes

It is thought likely that the new TDF entities described above will eventually be incorporated into the main TDF specification.

In several places below the absence of "standardised methods" is noted. These are cases where TDF can express some operation in several ways, and the installer cannot be expected to spot all of them and generate new diagnostic info.


4.1. Language features currently missing

The following sections list some of the language features known not to be supported by the current specification. It is not intended to be exhaustive.

4.1.1. Data types

4.1.2. C++ requirements

4.1.3. FORTRAN requirements

4.1.4. Other requirements


4.2. Areas for further abstraction

4.2.1. Compilation related

How a running program has been created from several components is of interest when debugging. The present system cannot record all details of how a program has been created. In particular there is no indication of the source language of any piece of TDF, nor of the full name of any of the source files.

4.2.2. C related

At present there is no defined link between the fundamental C types and the VARIETYs etc. used for them. Present installers for 32 bit machines cannot distinguish between int and long when generating diagnostics, other than by means of the standard token names which form part of the C producer language interface.

4.2.3. Naming of types

At present some DIAG_TYPEs have names and some do not. I suspect we should make a separate is_named operation and remove the other names.


4.3. Postscript - ANDF-DE

As this section makes clear, the TDF Diagnostic Specification was only ever really intended to deal with C. As of 1997, a more extensive diagnostic extension to TDF, ANDF-DE, is under development by DDC-I. This has been designed with the requirements of C, C++ and Ada in mind. It is intended that eventually ANDF-DE will be incorporated into the TDF specification, and that the diagnostic format described here will be deprecated.



Footnotes

TDF Guide, Issue 4.0

January 1998



Footnote 1

There are facilities to allow extensions to the number of constructors, so it is not quite as simple as this


Footnote 2

The "tld" UNITs gives usage information for namess to aid the linker, tld, to discover which namess have definitions and some usage information. The C producer also optionally constructs "diagnostics" UNITs (to give run-time diagnostic information).


Footnote 3

There is a similar distinction between tags introduced to be locals of a procedure using identify and variable (see section 5.3.1)


Footnote 4

Note that this is not generally true for C bitfields; most C ABIs have (different) rules for inserting padding bits depending on the size of the bitfield and its relation to the natural alignments. This is a fruitful source of errors in data exchange between different C ABIs. For more on similar limitations of bitfields in TDF, see Assigning and extracting bitfields.


Footnote 5

The vararg constructions in C are implemented by giving more actuals than formals; the extra parameters are accessed by offset arithmetic with a pointer to a formal, using parameter_alignment to pad the offsets.


Footnote 6

If a formal parameter is to be used in this way, it should be marked as having out_par ACCESS in its corresponding TAGSHACC in callers_intro.

Footnote 7

However see also initial_value in section 3.2


Footnote 8

Exercise for the reader: what are the SORTs of these parameters?

The current C producer does this for some of the constructs, but not in any systematic manner; perhaps it will change.


Footnote 9

The order-specifying constructors are conditional, identify, repeat, labelled, sequence and variable


Footnote 10

A sufficient condition for not side-effecting in this sense is that there are no apply_procs or local_allocs in E; that any assignments in E are to variables defined in E; and that any branches in E are to labels defined in conditionals in E


Footnote 11

There are analogous rules for labelled and repeat with unused LABELs.


Footnote 12

This has to be modified if B contains any uses of local_free_all or last_local.


Footnote 13

However, we may find that the mapping of a constraint allows extra relationships for a class of architectures which do not hold in all generality; this may mean that some constructions are defined on this class while still being undefined in others (see section 13).


Footnote 14

I could equally have given simply shape_offset(sh_int) for S_i, but the above formulation is more uniform with respect to selection OFFSETs.


Footnote 15

For most architectures, these definitions depend only on a few constants, such as the maximum length of a bitfield, expressed as tokens for the target. The precise specification of such target-dependent tokens is of current interest but outside the scope of this document.




A Guide to the TDF Specification, Issue 4.0

January 1998



1 - Introduction
2 - SORTs and TOKENs
3 - CAPSULEs and UNITs
4 - SHAPEs, ALIGNMENTs and OFFSETs.
5 - Procedures and Locals
6 - Control Flow within procedures
7 - Values, variables and assignments.
8 - Operations
9 - Constants
10 - Tokens and APIs
11 - TDF transformations
12 - TDF expansions of offsets
13 - Models of the TDF algebra
14 - Conclusion

1 Introduction

This memo is intended to be a fairly detailed commentary on the specification of TDF, a kind of Talmud to the Torah. If it conflicts with the specification document, it is wrong. The aim is to elucidate the various constructions of TDF, giving examples of usage both from the point of view of a producer of TDF and of how it is used to construct programs on particular platforms using various installers or translators. In addition, some attempt is made to give the reasons why the particular constructions have been chosen. Most of the commentary is a distillation of questions and answers raised by people trying to learn TDF from the specification document.

Throughout this document, references like (S5.1) are headings in the TDF specification, Issue 4.0. I use the term "compiling" or "producing" to mean the production of TDF from some source language and "translating" to mean making a program for some specific platform from TDF.

I use the first person where I am expressing my own opinions or preferences; these should not be taken as official opinions of DRA or the TenDRA team.




TDF Guide, Issue 4.0

January 1998



8.1 - VARIETY and overflow
8.1.1 - ERROR_TREATMENT
8.2 - Division and remainder
8.3 - change_variety
8.4 - and, or, not, xor
8.5 - Floating-point operations, ROUNDING_MODE
8.6 - change_bitfield_to_int, change_int_to_bitfield
8.7 - make_compound, make_nof, n_copies

8 Operations

Most of the arithmetic operations of TDF have familiar analogues in standard languages and processors. They differ principally in how error conditions (e.g. numeric overflow) are handled. There is a wide diversity in error handling in both languages and processors, so TDF tries to reduce it to the simplest primitive level compatible with their desired operation in languages and their implementation on processors. Before delving into the details of error handling, it is worthwhile revisiting the SHAPEs and ranges in arithmetic VARIETYs.


8.1. VARIETY and overflow

An INTEGER VARIETY, for example, is defined by some range of signed natural numbers. A translator will fit this range into some possibly larger range which is convenient for the processor in question. For example, the integers with variety(1,10) would probably be represented as unsigned characters with range (0..255), a convenient representation for both storage and arithmetic.

The question then arises of what is meant by overflow in an operation which is meant to deliver an integer of this VARIETY - is it when the integer result is outside the range (1..10) or outside the range (0..255)? For purely pragmatic reasons, TDF chooses the latter - the result has overflowed when it is outside its representational range (0..255). If the program insists that it must be within (1..10), then it can always test for it. If the program uses the error handling mechanism and the result is outside (1..10) but still within the representational limits then, in order for the program to be portable, the error handling actions must in some sense be "continuous" with the normal action. This would not be the case if, for example, the value was used to index an array with bounds (1..10), but will usually be the case where the value is used in further arithmetic operations which have similar error handling. The arithmetic will continue to give the mathematically correct result provided the representational bounds are not exceeded.

The limits in a VARIETY are there to provide a guide to its representation, and not to give hard limits to its possible values. This choice is consistent with the general TDF philosophy of how exceptions are to be treated. If, for example, one wishes to do array-bound checking, then it must be done by explicit tests on the indices and jumping to some exception action if they fail. Similarly, explicit tests can be made on an integer value, provided its representational limits are not exceeded. It is unlikely that a translator could produce any more efficient code, in general, if the tests were implicit. The representational limits can be exceeded in arithmetic operations, so facilities are provided either to ignore the overflow, to jump to a label, or to obey a TDF exception handler if it happens.

8.1.1. ERROR_TREATMENT

Taking integer addition as an example, plus has signature:

	ov_err: 	ERROR_TREATMENT
	arg1: 	EXP INTEGER(v)
	arg2: 	EXP INTEGER(v)
		   -> 	EXP INTEGER(v)
The result of the addition has the same integer VARIETY as its parameters. If the representational bounds of v are exceeded, then the action taken depends on the ERROR_TREATMENT ov_err.

The ERROR_TREATMENT impossible is an assertion by the producer that overflow will not occur; on its head be it if it does.

The ERROR_TREATMENTs continue and wrap give "fixup" values for the result. For continue the fixup value is undefined. For wrap, the answer will be modulo 2 to the power of the number of bits in the representational variety. Thus, integer arithmetic with a byte representational variety is done modulo 256. This just corresponds to what happens in most processors and, incidentally, to the definition of C.
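By way of illustration only, the following C fragment (ordinary C, not producer output) shows the modulo-256 behaviour that wrap prescribes for a byte representational variety:

#include <stdio.h>

int main(void)
{
	unsigned char x = 200, y = 100;	/* byte representational variety */
	unsigned char sum = x + y;	/* 300 reduced modulo 256 */
	printf("%d\n", sum);		/* prints 44 */
	return 0;
}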

The ERROR_TREATMENT that one would use if one wished to jump to a label is error_jump:

	lab: 	LABEL
		   -> 	ERROR_TREATMENT
A branch to lab will occur if the result overflows.
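A minimal sketch of its use (a, b and v are assumed, and ov_lab is the label introduced by the enclosing conditional): the expression below delivers the sum of a and b, or the fixup value 0 if the addition overflows:

	conditional(ov_lab,
		plus(error_jump(ov_lab), obtain_tag(a), obtain_tag(b)),
		make_int(v, 0))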

The ERROR_TREATMENT trap(overflow) will raise a TDF exception (see section 6.3 on page 35) with ERROR_CODE overflow if overflow occurs.


8.2. Division and remainder

The various constructors involving integer division (e.g. div1, rem1) have two ERROR_TREATMENT parameters, one for overflow and one for divide-by-zero, e.g. div1 is:

	div_by_zero_error:	ERROR_TREATMENT
	ov_err:	ERROR_TREATMENT
	arg1:	EXP INTEGER(v)
	arg2:	EXP INTEGER(v)
		   -> EXP INTEGER(v)
There are two different kinds of division operators (with corresponding remainder operators) defined. The operators div2 and rem2 are those generally implemented directly by processor instructions, giving the sign of the remainder the same as the sign of the quotient. The other pair, div1 and rem1, is less commonly implemented in hardware, but has rather more consistent mathematical properties; here the sign of the remainder is the same as the sign of the divisor. Thus, div1(x, 2) is the same as shift_right(x, 1), which is only true for div2 if x is positive. The two pairs of operations give the same results if both operands have the same sign. The constructors div0 and rem0 allow the translator to choose whichever of the two forms of division is convenient - the producer is saying that he does not care which is used, as long as they are pairwise consistent. The precise definition of the divide operations is given in (S7.4).
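The difference only shows up when the operands have different signs. As an informal C illustration (div1 here is a hypothetical helper written in terms of C's truncating operators, which behave like div2/rem2, as defined since C99):

#include <stdio.h>

/* Floor-style division: the remainder takes the sign of the divisor,
   as for the TDF div1/rem1 pair. */
int div1(int x, int y)
{
	int q = x / y;
	if ((x % y != 0) && ((x < 0) != (y < 0)))
		q -= 1;	/* round towards minus infinity */
	return q;
}

int main(void)
{
	printf("div2(-7, 2) = %d, rem2(-7, 2) = %d\n", -7 / 2, -7 % 2);	/* -3, -1 */
	printf("div1(-7, 2) = %d, rem1(-7, 2) = %d\n",
		div1(-7, 2), -7 - 2 * div1(-7, 2));	/* -4, 1 */
	return 0;
}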


8.3. change_variety

Conversions between the various INTEGER varieties are provided for by change_variety:

	ov_err:	ERROR_TREATMENT
	r: 	VARIETY
	arg1: 	EXP INTEGER(v)
		   -> 	EXP INTEGER(r)
If the value arg1 is outside the limits of the representational variety of r, then the ERROR_TREATMENT ov_err will be invoked.


8.4. and, or, not, xor

The standard logical operations, and, not, or and xor, are provided for all integer varieties. Since integer varieties are defined to be represented in twos-complement, the results of these operations are well defined.


8.5. Floating-point operations, ROUNDING_MODE

All of the floating-point (including complex) operations include ERROR_TREATMENTs. If the result of a floating-point operation cannot be represented in the desired FLOATING_VARIETY, the error treatment is invoked. If the ERROR_TREATMENT is wrap or impossible, the result is undefined; otherwise the jump operates in the same way as for integer operations. Both floating_plus and floating_mult are defined as n-ary operations. In general, floating addition and multiplication are not associative, but a producer may not care about the order in which they are to be performed. Making them appear as though they were associative allows the translator to choose an order which is convenient to the hardware.

Conversions from integer to floating are done by float_int and from floating to integers by round_with_mode. This latter constructor has a parameter of SORT ROUNDING_MODE which effectively gives the IEEE rounding mode to be applied to the float to produce its integer result.

One can extract the real and imaginary parts of a complex FLOATING using real_part and imaginary_part. A complex FLOATING can be constructed using make_complex. Normal complex arithmetic applies to all the other FLOATING constructors except for those explicitly excluded (e.g. floating_abs, floating_max, etc.).


8.6. change_bitfield_to_int, change_int_to_bitfield

There are two bit-field operations, change_bitfield_to_int and change_int_to_bitfield, to transform between bit-fields and integers. If the varieties do not fit, the result is undefined; the producer can always get it right.


8.7. make_compound, make_nof, n_copies

There is one operation to make values of COMPOUND SHAPE, make_compound:

	arg1: 	EXP OFFSET(base, y)
	arg2: 	LIST(EXP)
		   -> EXP COMPOUND(sz)
The OFFSET arg1 is evaluated as a translate-time constant to give sz, the size of the compound object. The EXPs of arg2 are alternately OFFSETs (also translate-time constants) and values which will be placed at those offsets. This constructor is used to construct values given by structure displays; in C, these only occur with constant values in global definitions. It is also used to provide union injectors; here sz would be the size of the union and the list would probably have two elements with the first being an offset_zero.

Constant sized array values may be constructed using make_nof, make_nof_int (see section 8.7 on page 42), and n_copies. Again, they only occur in C as constants in global definitions.




TDF Guide, Issue 4.0

January 1998



9.1 - _cond constructors
9.2 - Primitive constant constructors

9 Constants

The representation of constants clearly has peculiar difficulties in any architecture neutral format. Leaving aside any problems of how numbers are to be represented, we also have the situation where a "constant" can have different values on different platforms. An obvious example would be the size of a structure which, although it is a constant of any particular run of a program, may have different values on different machines. Further, this constant is in general the result of some computation involving the sizes of its components which are not known until the platform is chosen. In TDF, sizes are always derived from some EXP OFFSET constructed using the various OFFSET arithmetic operations on primitives like shape_offset and offset_zero. Most such EXP OFFSETs produced are in fact constants of the platform; they include field displacements of structure as well as their sizes. TDF assumes that, if these EXPs can be evaluated at translate-time (i.e. when the sizes and alignments of primitive objects are known), then they must be evaluated there. An example of why this is so arises in make_compound; the SHAPE of its result EXP depends on its arg1 EXP OFFSET parameter and all SHAPEs must be translate-time values.

An initialisation of a TAGDEF is a constant in this sense *; this allows one to ignore any difficulties about their order of evaluation in the UNIT and consequently the order of evaluation of UNITs. Once again all the EXPs which are initialisations must be evaluated before the program is run; this obviously includes any make_proc or make_general_proc. The limitation on an initialisation EXP to ensure this is basically that one cannot take the contents of a variable declared outside the EXP after all tokens and conditional evaluation is taken into account. In other words, each TDF translator effectively has a TDF interpreter which can do evaluation of expressions (including conditionals etc.) involving only constants such as numbers, sizes and addresses of globals. This corresponds very roughly to the kind of initialisations of globals that are permissible in C; for a more precise definition, see (S7.3).


9.1. _cond constructors

Another place where translate-time evaluation of constants is mandated is in the various _cond constructors which give a kind of "conditional compilation" facility; every SORT which has a SORTNAME, other than TAG, TOKEN and LABEL, has one of these constructors, e.g. exp_cond:

	control: 	EXP INTEGER(v)
	e1: 	BITSTREAM EXP x
	e2: 	BITSTREAM EXP y
		   -> 	EXP x or EXP y
The constant, control, is evaluated at translate time. If it is not zero the entire construction is replaced by the EXP in e1; otherwise it is replaced by the one in e2. In either case, the other BITSTREAM is totally ignored; it does not even need to be sensible TDF. This kind of construction is used extensively in C pre-processing directives, e.g.:

#if (sizeof(int) == sizeof(long)) ...


9.2. Primitive constant constructors

Integer constants are constructed using make_int:

	v:	VARIETY
	value: 	SIGNED_NAT
		   -> 	EXP INTEGER(v)
The SIGNED_NAT value is an encoding of the binary value required for the integer; this value must lie within the limits given by v. I have been rather slip-shod in writing down examples of integer constants earlier in this document; where I have written 1 as an integer EXP, for example, I should have written make_int(v, 1) where v is some appropriate VARIETY.

Constants for both floats and strings use STRINGs. A constant string is just a particular example of make_nof_int:

	v:	VARIETY
	str: 	STRING(k, n)
		   -> 	EXP NOF(n, INTEGER(v))
Each unsigned integer in str must lie in the variety v and the result is the constant array whose elements are the integers considered to be of VARIETY v. An ASCII C constant string might have v = variety(-128, 127) and k = 7; however, make_nof_int can be used to make strings of any INTEGER VARIETY; the elements of a Unicode string, for example, would be integers of size 16 bits.

A floating constant uses a STRING which contains the ASCII characters of an expansion of the number to some base in make_floating:

	f: 	FLOATING_VARIETY
	rm: 	ROUNDING_MODE
	sign:	BOOL
	mantissa: 	STRING(k, n)
	base:	NAT
	exponent: 	SIGNED_NAT
		   -> 	EXP FLOATING(f)
For a normal floating point number, each integer in mantissa is either the ASCII `.'-symbol or the ASCII representation of a digit of the representation in the given base; i.e. if c is the ASCII symbol, the digit value is c-'0'. The resulting floating point number has SHAPE FLOATING(f) and value mantissa * base^exponent, rounded according to rm. Usually the base will be 10 (sometimes 2) and the rounding mode to_nearest. Any floating-point evaluation of expressions done at translate-time will be done to an accuracy greater than that implied by the FLOATING_VARIETY involved, so that floating constants will be as accurate as the platform permits.
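For example (writing the STRING informally as a quoted literal, and assuming that the BOOL false denotes a non-negative constant), the constant 3.14159 of FLOATING_VARIETY f could be written either as:

	make_floating(f, to_nearest, false, "3.14159", 10, 0)

or equivalently, without the `.'-symbol, as:

	make_floating(f, to_nearest, false, "314159", 10, -5)

since 314159 * 10^-5 = 3.14159.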

The make_floating construct does not apply to a complex FLOATING_VARIETY f; to construct a complex constant, use make_complex with two make_floating arguments.

Constants are also provided to give unique null values for pointers, label values and procs i.e.: make_null_ptr, make_null_local_lv and make_null_proc. Any significant use of these values (e.g. taking the contents of a null pointer) is undefined, but they can be assigned and used in tests in the normal way.




TDF Guide, Issue 4.0

January 1998



10.1 - Application programming interfaces
10.2 - Linking to APIs
10.2.1 - Target independent headers, unique_extern
10.3 - Language programming interfaces

10 Tokens and APIs

All of the examples of the use of TOKENs so far given have really been as abbreviations for commonly used constructs, e.g. the EXP OFFSETs for fields of structures. However, the real justification for TOKENs is their use as abstractions for things defined in libraries or application program interfaces (APIs).


10.1. Application programming interfaces

APIs usually do not give complete language definitions of the operations and values that they contain; generally, they are defined informally in English giving relationships between the entities within them. An API designer should allow implementors the opportunity of choosing actual definitions which fit their hardware and the possibility of changing them as better algorithms or representations become available.

The most commonly quoted example is the representation of the type FILE and its related operations in C. The ANSI C definition gives no common representation for FILE; its implementation is defined to be platform-dependent. A TDF producer can assume nothing about FILE; not even that it is a structure. The only things that can alter or create FILEs are also entities in the Ansi-C API and they will always refer to FILEs via a C pointer. Thus TDF abstracts FILE as a SHAPE TOKEN with no parameters, make_tok(T_FILE) say. Any program that uses FILE would have to include a TOKDEC introducing T_FILE:

make_tokdec(T_FILE, empty, shape())
and anywhere that it wished to refer to the SHAPE of FILE it would do:

shape_apply_token(make_tok(T_FILE), ())
Before this program is translated on a given platform, the actual SHAPE of FILE must be supplied. This would be done by linking a TDF CAPSULE which supplies the TOKDEF for the SHAPE of FILE which is particular to the target platform.

Many of the C operations which use FILEs are explicitly allowed to be expanded as either procedure calls or as macros. For example, putc(c,f) may be implemented either as a procedure call or as the expansion of a macro which uses the fields of f directly. Thus, it is quite natural for putc(c, f) to be represented in TDF as an EXP TOKEN with two EXP parameters which allows it to be expanded in either way. Of course, this would be quite distinct from the use of putc as a value (as a proc parameter of a procedure for example) which would require some other representation. One such representation that comes to mind might be simply to make a TAGDEC for the putc value, supplying its TAGDEF in the Ansi API CAPSULE for the platform. This might prove to be rather short-sighted, since it denies us the possibility that the putc value itself might be expanded from other values and hence it would be better as another parameterless TOKEN. I have not come across an actual API expansion for the putc value as other than a simple TAG; however the FILE* value stdin is sometimes expressed as:

#define stdin &_iob[0]
which illustrates the point. It is better to have all of the interface of an API expressed as TOKENs to give both generality and flexibility across different platforms.


10.2. Linking to APIs

In general, each API requires platform-dependent definitions to be supplied by a combination of TDF linking and system linking for that platform. This is illustrated in the following diagram giving the various phases involved in producing a runnable program.

There will be CAPSULEs for each API on each platform giving the expansions for the TOKENs involved, usually as uses of identifiers which will be supplied by system linking from some libraries. These CAPSULEs would be derived from the header files on the platform for the API in question, usually using some automatic tools. For example, there will be a TDF CAPSULE (derived from <stdio.h>) which defines the TOKEN T_FILE as the SHAPE for FILE, together with definitions for the TOKENs for putc, stdin, etc., in terms of identifiers which will be found in the library libc.a.

10.2.1. Target independent headers, unique_extern

Any producer which uses an API will use system independent information to give the common interface TOKENs for this API. In the C producer, this is provided by header files using pragmas, which tell the producer which TOKENs to use for the particular constructs of the API. In any target-independent CAPSULE which uses the API, these TOKENs would be introduced as TOKDECs and made globally accessible by using make_linkextern. For a world-wide standard API, the EXTERNAL "name" for a TOKEN used by make_linkextern should be provided by an application of unique_extern on a UNIQUE drawn from a central repository of names for entities in standard APIs; this repository would form a kind of super-standard for naming conventions in all possible APIs. The mechanism for controlling this super-standard has yet to be set up, so at the moment all EXTERN names are created by string_extern.

An interesting example in the use of TOKENs comes in abstracting field names. Often, an API will say something like "the type Widget is a structure with fields alpha, beta ..." without specifying the order of the fields or whether the list of fields is complete. The field selection operations for Widget should then be expressed using EXP OFFSET TOKENs; each field would have its own TOKEN giving its offset which will be filled in when the target is known. This gives implementors on a particular platform the opportunity to reorder fields or add to them as they like; it also allows for extension of the standard in the same way.
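A sketch of what such a use might look like (T_Widget_alpha is a hypothetical TOKEN name and w an assumed TAG holding a pointer to a Widget): selecting the alpha field could be written:

add_to_ptr(obtain_tag(w), exp_apply_token(make_tok(T_Widget_alpha), ()))

so that only the CAPSULE defining T_Widget_alpha for the target needs to know the field's actual displacement.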

The most common SORTs of TOKENs used for APIs are SHAPEs to represent types, and EXPs to represent values, including procedures and constants. NATs and VARIETYs are also sometimes used where the API does not specify the types of integers involved. The other SORTs are rarely used in APIs; indeed it is difficult to imagine any realistic use of TOKENs of SORT BOOL. However, the criterion for choosing which SORTs are available for TOKENisation is not their immediate utility, but that the structural integrity and simplicity of TDF is maintained. It is fairly obvious that having BOOL TOKENs will cause no problems, so we may as well allow them.


10.3. Language programming interfaces

So far, I have been speaking as though a TOKENised API could only be some library interface, built on top of some language, like xpg3, posix, X etc. on top of C. However, it is possible to consider the constructions of the language itself as ideal candidates for TOKENisation. For example, the C for-statement could be expressed as a TOKEN with four parameters *. This TOKEN could be expanded in TDF in several different ways, all giving the correct semantics of a for-statement. A translator (or other tools) could choose the expansion it wants depending on context and the properties of the parameters. The C producer could give a default expansion which a lazy translator writer could use, but others might use expansions which might be more advantageous. This idea could be extended to virtually all the constructions of the language, giving what is in effect a C-language API; perhaps this might more properly be called a language programming interface (LPI). Thus, we would have TOKENs for C for-statements, C conditionals, C procedure calls, C procedure definitions etc. *.

The notion of a producer for any language working to an LPI specific to the constructs of the language is very attractive. It could use different TOKENs to reflect the subtle differences between uses of similar constructs in different languages which might be difficult or impossible to detect from their expansions, but which could allow better optimisations in the object code. For example, Fortran procedures are slightly different from C procedures in that they do not allow aliasing between parameters and globals. While application of the standard TDF procedure calls would be semantically correct, knowledge that the non-aliasing rule applies would allow some procedures to be translated to more efficient code. A translator without knowledge of the semantics implicit in the TOKENs involved would still produce correct code, but one which knew about them could take advantage of that knowledge.

I also think that LPIs would be a very useful tool for crystallising ideas on how languages should be translated, allowing one to experiment with expansions not thought of by the producer writer. This decoupling is also an escape clause allowing the producer writer to defer the implementation of a construct completely to translate-time or link-time, as is done at the moment in C for off-stack allocation. As such it also serves as a useful test-bed for TOKEN constructions which may in future become new constructors of core TDF.




TDF Guide, Issue 4.0

January 1998



11.1 - Transformations as definitions
11.1.1 - Examples of transformations
11.1.2 - Programs with undefined values

11 TDF transformations

TDF to TDF transformations form the basis of most of the tools of TDF, including translators. TDF has a rich set of easily performed transformations; this is mainly due to its algebraic nature, the liberality of its sequencing rules, and the ease with which one can introduce new names over limited scopes. For example, a translator is always free to transform:

assign(e1, e2)
to:

identify(empty, new_tag, e1, assign(obtain_tag(new_tag), e2))
i.e. identify the evaluation of the left-hand side of the assignment with a new TAG and use that TAG as the left-hand operand of a new assignment in the body of the identification. Note that the reverse transformation is only valid if the evaluation of e1 does not side-effect the evaluation of e2. A producer would have had to use the second form if it wished to evaluate e1 before e2. The definition of assign allows its operands to be evaluated in any order while identify insists that the evaluation of its definition is conceptually complete before starting on its body.

Why would a translator wish to make the more complicated form from the simpler one? This would usually depend on the particular forms of e1 and e2 as well as the machine idioms available for implementing the assignment. If, for example, the joint evaluation of e1 and e2 used more evaluation registers than is available, the transformation is probably a good idea. It would not necessarily commit one to putting the new tag value into the stack; some other more global criteria might lead one to allocate it into a register disjoint from the evaluation registers. In general, this kind of transformation is used to modify the operands of TDF constructions so that the code-production phase of the translator can just "churn the handle" knowing that the operands are already in the correct form for the machine idioms.

Transformations like this are also used to give optimisations which are largely independent of the target architecture. In general, provided that the sequencing rules are not violated, any EXP construction, F(X), say, where X is some inner EXP, can be replaced by:

 identify(empty, new_tag, X, F(obtain_tag(new_tag))). 
This includes the extraction of expressions which are constant over a loop; if F was some repeat construction and one can show that the EXP X is invariant over the repeat, the transformation does the constant extraction.

Most of the transformations performed by translators are of the above form, but there are many others. Particular machine idioms might lead to transformations like changing a test (i>=1) to (i>0) because the test against zero is faster; replacing multiplication by a constant integer by shifts and adds because multiplication is slow; and so on. Target independent transformations include things like procedure inlining and loop unrolling. Often these target independent transformations can be profitably done in terms of the TOKENs of an LPI; loop unrolling is an obvious example.


11.1. Transformations as definitions

As well as being a vehicle for expressing optimisation, TDF transformations can be used as the basis for defining TDF. In principle, if we were to define all of the allowable transformations of the TDF constructions, we would have a complete definition of TDF as the initial model of the TDF algebra. This would be a fairly impracticable project, since the totality of the rules including all the simple constructs would be very unwieldy, difficult to check for inconsistencies and would not add much illumination to the definition. However, knowledge of allowable transformations of TDF can often answer questions of the meaning of diverse constructs by relating them to a single construct. What follows is an alphabet of generic transformations which can often help to answer knotty questions. Here, E[X \ Y] denotes an EXP E with all internal occurrences of X replaced by Y.

If F is any non order-specifying * EXP constructor and E is one of the EXP operands of F, then:
{A}	F(... , E, ...) ⇒ identify(empty, newtag, E, F(... , obtain_tag(newtag), ...))

If E is a non side-effecting * EXP and none of the variables used in E are assigned to in B:
{B}	identify(v, tag, E, B) ⇒ B[obtain_tag(tag) \ E]

If all uses of tg in B are of the form contents(shape(E), obtain_tag(tg)):
{C}	variable(v, tg, E, B) ⇒ identify(v, nt, E, B[contents(shape(E), obtain_tag(tg)) \ obtain_tag(nt)])

{D}	sequence((S1, ... , Sn), sequence((P1, ..., Pm), R)) ⇔ sequence((S1, ..., Sn, P1, ..., Pm), R)

If Si = sequence((P1, ..., Pm), R):
{E}	sequence((S1, ... , Sn), T) ⇔ sequence((S1, ..., Si-1, P1, ..., Pm, R, Si+1, ..., Sn), T)

{F}	E ⇔ sequence(( ), E)

If D is either identify or variable:
{G}	D(v, tag, sequence((S1, ..., Sn), R), B) ⇒ sequence((S1, ..., Sn), D(v, tag, R, B))

If Si is an EXP BOTTOM, then:
{H}	sequence((S1, S2, ... Sn), R) ⇒ sequence((S1, ... Si-1), Si)

If E is an EXP BOTTOM, and if D is either identify or variable:
{I}	D(v, tag, E, B) ⇒ E

If Si is make_top(), then:
{J}	sequence((S1, S2, ... Sn), R) ⇔ sequence((S1, ... Si-1, Si+1, ... Sn), R)

If Sn is an EXP TOP:
{K}	sequence((S1, ... Sn), make_top()) ⇔ sequence((S1, ..., Sn-1), Sn)

If E is an EXP TOP and E is not side-effecting, then:
{L}	E ⇒ make_top()

If C is some non order-specifying and non side-effecting constructor, and Si is C(P1, ..., Pm) where P1..m are the EXP operands of C:
{M}	sequence((S1, ..., Sn), R) ⇒ sequence((S1, ..., Si-1, P1, ..., Pm, Si+1, ..., Sn), R)

If none of the Si use the label L:
{N}	conditional(L, sequence((S1, ..., Sn), R), A) ⇒ sequence((S1, ..., Sn), conditional(L, R, A))

If there are no uses of L in X *:
{O}	conditional(L, X, Y) ⇒ X

{P}	conditional(L, E, goto(Z)) ⇒ E[L \ Z]

If EXP X contains no use of the LABEL L:
{Q}	conditional(L, conditional(M, X, Y), Z) ⇒ conditional(M, X, conditional(L, Y, Z))

{R}	repeat(L, I, E) ⇒ sequence((I), repeat(L, make_top(), E))

{S}	repeat(L, make_top(), E) ⇒ conditional(Z, E[L \ Z], repeat(L, make_top(), E))

If there are no uses of L in E:
{T}	repeat(L, make_top(), sequence((S, E), make_top())) ⇒ conditional(Z, S[L \ Z], repeat(L, make_top(), sequence((E, S), make_top())))

If f is a procedure defined * as:

	make_proc(rshape, formal1..n, vtg, B(return R1, ..., return Rm))

where:

	formali = make_tagshacc(si, vi, tgi)

and B is an EXP with all of its internal return constructors indicated parametrically, then, if Ai has SHAPE si:
{U}	apply_proc(rshape, f, (A1, ... , An), V) ⇒
		variable(empty, newtag, make_value((rshape = BOTTOM) ? TOP : rshape),
			labelled((L),
				variable(v1, tg1, A1, ... ,
				 variable(vn, tgn, An,
				  variable(empty, vtg, V,
				   B(sequence(assign(obtain_tag(newtag), R1), goto(L)),
				     ... ,
				     sequence(assign(obtain_tag(newtag), Rm), goto(L)))))
				 ... ),
				contents(rshape, obtain_tag(newtag))))

{V}	assign(E, make_top()) ⇒ sequence((E), make_top())

{W}	contents(TOP, E) ⇒ sequence((E), make_top())

{X}	make_value(TOP) ⇒ make_top()

{Y}	component(s, contents(COMPOUND(S), E), D) ⇒ contents(s, add_to_ptr(E, D))

{Z}	make_compound(S, ((E1, D1), ..., (En, Dn))) ⇒
		variable(empty, nt, make_value(COMPOUND(S)),
			sequence((assign(add_to_ptr(obtain_tag(nt), D1), E1),
				  ... ,
				  assign(add_to_ptr(obtain_tag(nt), Dn), En)),
				 contents(S, obtain_tag(nt))))

11.1.1. Examples of transformations

Any of these transformations may be performed by the TDF translators. The most important is probably {A}, which allows one to reduce all of the EXP operands of suitable constructors to obtain_tags. The expansion rules for identification, {G}, {H} and {I}, give definition to complicated operands as well as strangely formed ones, e.g. return(... return(X)...). Rule {A} also illustrates neatly the lack of ordering constraints on the evaluation of operands. For example, mult(et, exp1, exp2) could be expanded by applications of {A} to either:

identify(empty, t1, exp1, 
	 identify(empty, t2, exp2, mult(et, obtain_tag(t1), obtain_tag(t2))) )
or:

identify(empty, t2, exp2, 
	 identify(empty, t1, exp1, mult(et, obtain_tag(t1), obtain_tag(t2))) )
Both orderings of the evaluations of exp1 and exp2 are acceptable, regardless of any side-effects in them. There is no requirement that both expansions should produce the same answer for the multiplications; the only person who can say whether either result is "wrong" is the person who specified the program.

Many of these transformations often only come into play when some previous transformation reveals some otherwise hidden information. For example, after procedure in-lining given by {U} or loop un-rolling given by {S}, a translator can often deduce the behaviour of a _test constructor, replacing it by either a make_top or a goto. This may allow one to apply either {J} or {H} to eliminate dead code in sequences and in turn {N} or {P} to eliminate entire conditions and so on.

Application of transformations can also give expansions which are rather pathological in the sense that a producer is very unlikely to form them. For example, a procedure which returns no result would have result statements of the form return(make_top()). In-lining such a procedure by {U} would have a form like:

variable(empty, nt, make_value(TOP),
	labelled( (L), 
	... sequence((assign(obtain_tag(nt), make_top())), 
goto(L)) ...
	contents(TOP, obtain_tag(nt))
	)
)
The rules {V}, {W} and {X} allow this to be replaced by:

variable(empty, nt, make_top(),
	labelled( (L), 
	... sequence((obtain_tag(nt)), goto(L)) ...
	sequence((obtain_tag(nt)), make_top())
	)
)
The obtain_tags can be eliminated by rule {M} and then the sequences by {F}. Successive applications of {C} and {B} then give:

labelled( (L), 
	... goto(L) ...
	make_top()
	)

11.1.2. Programs with undefined values

The definitions of most of the constructors in the TDF specification are predicated by some conditions; if these conditions are not met the effect and result of the constructor is not defined for all possible platforms
*. Any value which is dependent on the effect or result of an undefined construction is also undefined. This is not the same as saying that a program is undefined if it can construct an undefined value - the dynamics of the program might be such that the offending construction is never obeyed.




TDF Guide, Issue 4.0

January 1998



12.1 - Bitfield offsets

12 TDF expansions of offsets

Consider the C structure defined by:

typedef struct{ int i; double d; char c;} mystruct;
Given that sh_int, sh_char and sh_double are the SHAPEs for int, char and double, the SHAPE of mystruct is constructed by:

SH_mystruct = compound(S_c) 
where:
S_c = offset_add(O_c, shape_offset(sh_char))
where:
O_c = offset_pad(alignment(sh_char), S_d)
where:
S_d = offset_add(O_d, shape_offset(sh_double))
where:
O_d = offset_pad(alignment(sh_double), S_i)
where*:
S_i = offset_add(O_i, shape_offset(sh_int))
and:
O_i = offset_zero(alignment(sh_int))

Each of S_c, S_d and S_i gives the minimum "size" of the space required up to and including the fields c, d and i respectively. Each of O_c, O_d and O_i gives the OFFSET "displacement" from a pointer to a mystruct required to select the fields c, d and i respectively. The C program fragment:

mystruct s;
.... s.d = 1.0; ...
would translate to something like:

variable(empty, tag_s, make_value(compound(S_c)),
	sequence( ... 
	 assign(add_to_ptr(obtain_tag(tag_s), O_d), 1.0)
	 ...
	)
)

Each of the OFFSET expressions above is an ideal candidate for tokenisation; a producer would probably define tokens for each of them and use exp_apply_token to expand them at each of their uses.

From the definition, we find that:

S_c = shape_offset(SH_mystruct)
i.e. an OFFSET(alignment(sh_int) ∪ alignment(sh_char) ∪ alignment(sh_double), {})
This would not be the OFFSET required to describe sizeof(mystruct) in C, since this is defined to be the difference between successive elements of an array of mystructs. The sizeof OFFSET would have to pad S_c to the alignment of SH_mystruct:

offset_pad(alignment(SH_mystruct), S_c)
This is the OFFSET that one would use to compute the displacement of an element of an array of mystructs using offset_mult with the index.
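To relate this to C itself, a small sketch (not from the specification; it assumes a hosted C implementation) prints the values a typical translator would compute for O_d and for the padded S_c:

#include <stdio.h>
#include <stddef.h>

typedef struct { int i; double d; char c; } mystruct;

int main(void)
{
	/* offsetof(mystruct, d) corresponds to the OFFSET O_d above */
	printf("O_d    = %zu\n", offsetof(mystruct, d));
	/* sizeof(mystruct) corresponds to offset_pad(alignment(SH_mystruct), S_c) */
	printf("sizeof = %zu\n", sizeof(mystruct));
	return 0;
}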

The most common use of OFFSETs is in add_to_ptr to compute the address of a structure or array element. Looking again at its signature in a slightly different form:

	arg1: 	EXP POINTER(y ∪ A)
	arg2: 	EXP OFFSET(y, z)
		   -> EXP POINTER(z)
	... for any ALIGNMENT A
one sees that arg2 can measure an OFFSET from a value of a "smaller" alignment than the value pointed at by arg1. If arg2 were O_d, for example, then arg1 could be a pointer to any structure of the form struct {int i; double d; ...}, not just mystruct. The general principle is that an OFFSET to a field constructed in this manner is independent of any fields after it, corresponding to normal usage in both languages and machines. A producer for a language which conflicts with this would have to produce less obvious TDF, perhaps by re-ordering the fields, padding the offsets by later alignments or taking maxima of the sizes of the fields.
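The same independence can be seen directly in C; this sketch (hypothetical structure names, assuming a typical platform) shows that the displacement of d depends only on the fields declared before it:

#include <stddef.h>

struct prefix { int i; double d; };                 /* just the leading fields */
struct longer { int i; double d; char c[16]; };     /* extra fields afterwards */

/* Both displacements are determined only by the fields before d, so an
   OFFSET like O_d could be applied to pointers to either structure. */
static const size_t off_prefix = offsetof(struct prefix, d);
static const size_t off_longer = offsetof(struct longer, d);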


12.1. Bitfield offsets

Bitfield offsets are governed by rather stricter rules. In order to extract or assign a bitfield, we have to find an integer variety which entirely contains the bitfield. Suppose that we wished to extract a bitfield by:

bitfield_contents(v, p:POINTER(X), b:OFFSET(Y, B))
Y must be an alignment(I) where I is some integer SHAPE, contained in X. Further, this has to be equivalent to:

bitfield_contents(v, add_to_ptr(p, d:OFFSET(Y,Y)), b':OFFSET(Y, B))
for some d and b' such that:

offset_pad(v, shape_offset(I)) >= b'
and
offset_add(offset_pad(v, offset_mult(d, sizeof(I))), b') = b
Clearly, we have a limitation on the length of bitfields to the maximum integer variety available; in addition, we cannot have a bitfield which overlaps two such varieties.
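On a byte-addressed machine a translator might implement such an extraction along the following lines; this is only a sketch, assuming 32-bit containing words, fields that do not straddle a word and an unsigned bitfield variety:

#include <stdint.h>

/* Extract an unsigned bitfield of 'bits' bits (bits <= 32) starting 'bitoff'
   bits after the suitably aligned byte pointed at by p.  The containing
   32-bit word plays the role of the integer variety I above; bitoff/32
   corresponds to d and bitoff%32 to b'. */
static uint32_t extract_bitfield(const unsigned char *p,
                                 unsigned bitoff, unsigned bits)
{
	const uint32_t *word = (const uint32_t *)(p + (bitoff / 32) * 4);
	unsigned shift = bitoff % 32;
	uint32_t mask = (bits == 32) ? 0xffffffffu : ((1u << bits) - 1u);
	return (*word >> shift) & mask;
}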

The difficulties inherent in this may be illustrated by attempting to construct an array of bitfields using the nof constructor. Assuming a standard architecture, we find that we cannot usefully define an object of SHAPE nof(N, bitfield(bfvar_bits(b, M))) without padding this shape out to some integer variety which can contain M bits. In addition, they can only be usefully indexed (using bitfield_contents) either if M is some power of 2 or M*N is less than the length of the maximum integer variety. Thus a producer must be sure that these conditions hold if he is to generate and index this object simply. Even here he is in some difficulty, since he does not know the representational varieties of the integers available to him; also it is difficult for him to ensure that the alignment of the entire array is in some sense minimal. Similar difficulties occur with bitfields in structures - they are not restricted to arrays.

The solution to this conundrum in its full generality requires knowledge of the available representational varieties. Particular languages have restrictions which mean that sub-optimal solutions will satisfy their specifications on the use of bitfields. For example, C is satisfied with bitfields of maximum length 32 and simple alignment constraints. However, for the general optimal solution, I can see no reasonable alternative to the installer defining some tokens to produce bitfield offsets which are guaranteed to obey the alignment rules and also give optimal packing of fields and alignments of the total object for the platform in question. I believe that three tokens are sufficient to do this; these are analogous to the constructors offset_zero, offset_pad and offset_mult with ordinary alignments and their signatures could be:

~Bitfield_offset_zero:
	n:	NAT
	issigned:	BOOL
		   -> EXP OFFSET(A, bfvar_bits(issigned, n))
	Here the result is a zero offset to the bitfield with `minimum' integer variety alignment A.

~Bitfield_offset_pad:
	n:	NAT
	issigned:	BOOL
	sh:	SHAPE
		   -> EXP OFFSET(alignment(sh) ∪ A, bfvar_bits(issigned, n))
	Here the result is the shape_offset of sh padded with the `minimum' alignment A so that it can accommodate the bitfield. Note that this may involve padding sh with the alignment of the maximum integer variety if there are not enough bits left at the end of sh.

~Bitfield_offset_mult:
	n:	NAT
	issigned:	BOOL
	ind:	EXP INTEGER(v)
		   -> EXP OFFSET(A, bfvar_bits(issigned, n))
	Here the result is an offset which gives the displacement of the ind-th element of an array of n-bit bitfields with `minimum' alignment A. Note that this will correspond to a normal multiplication only if n is a power of 2 or ind*n <= the length of the maximum integer variety.

These tokens can be expressed in TDF if the lengths of the available varieties are known, i.e., they are installer defined*. They ought to be used in place of offset_zero, offset_pad and offset_mult wherever the alignment or shape (required to construct a SHAPE or an argument to the bitfield constructs) is a pure bitfield. The constructor nof should never be used on a pure bitfield; instead it should be replaced by:

S = compound(~Bitfield_offset_mult(M, b, N))
to give a shape, S, representing an array of N M-bit bitfields. This may not be just N*M bits; for example ~Bitfield_offset_mult may be implemented to pack an array of 3-bit bitfields as 10 fields to a 32-bit word. In any case, one would expect that normal rules for offset arithmetic are preserved, e.g.

offset_add(~Bitfield_offset_pad(M, b, S), size(bitfield(bfvar_bits(b, N))) )
   =  ~Bitfield_offset_mult(M, b, N+1)

where size(X) = offset_pad(alignment(X), shape_offset(X))
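One plausible installer definition of the third of these tokens, for a 32-bit word machine which never lets a field straddle a word, is sketched below in C (illustrative only; a real installer would define it as TDF):

/* Bit displacement of the ind-th element of an array of n-bit bitfields
   (1 <= n <= 32), packing as many fields as fit into each 32-bit word.
   For n = 3 this packs 10 fields to a word, as mentioned above. */
static unsigned long bitfield_offset_mult(unsigned n, unsigned long ind)
{
	unsigned per_word = 32 / n;                           /* fields per word */
	return (ind / per_word) * 32 + (ind % per_word) * n;  /* in bits */
}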


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/guide/guide15.html100644 1750 1750 34062 6466607535 20133 0ustar brooniebroonie Models of the TDF algebra

TDF Guide, Issue 4.0

January 1998

next section
previous section current document TenDRA home page document index


13.1 - Model for a 32-bit standard architecture
13.1.1 - Alignment model
13.1.2 - Offset and pointer model
13.1.3 - Size model
13.2 - Model for machines like the iAPX-432
13.2.1 - Alignment model
13.2.2 - Offset and pointer model
13.2.3 - Size model
13.2.4 - Offset arithmetic

13 Models of the TDF algebra

TDF is a multi-sorted abstract algebra. Any implementation of TDF is a model of this algebra, formed by a mapping of the algebra into a concrete machine. An algebraic mapping gives a concrete representation to each of the SORTs in such a way that the representation of any construction of TDF is independent of context; it is a homomorphism. In other words if we define the mapping of a TDF constructor, C, as MAP[C] and the representation of a SORT, S, as REPR[S] then:

REPR[ C(P1, ..., Pn) ] = MAP[C]( REPR[P1], ..., REPR[Pn] )
Any mapping has to preserve the equivalences of the abstract algebra, such as those exemplified by the transformations {A} - {Z} in section 11.1 on page 48. Similarly, the mappings of any predicates on the constructions, such as those giving "well-formed" conditions, must be satisfied in terms of the mapped representations.

In common with most homomorphisms, the mappings of constructions can exhibit more equivalences than are given by the abstract algebra. The use of these extra equivalences is the basis of most of the target-dependent optimisations in a TDF translator; it can make use of "idioms" of the target architecture to produce equivalent constructions which may work faster than the "obvious" translation. In addition, we may find that more predicates are satisfied in a mapping than would be in the abstract algebra. A particular concrete mapping might allow more constructions to be well-formed than are permitted in the abstract; a producer can use this fact to target its output to a particular class of architectures. In this case, the producer should produce TDF so that any translator not targeted to this class can fail gracefully.

Giving a complete mapping for a particular architecture here is tantamount to writing a complete translator. However, the mappings for the small but important sub-algebra concerned with OFFSETs and ALIGNMENTs illustrate many of the main principles. What follows are two sets of mappings for disparate architectures; the first gives a more or less standard meaning to ALIGNMENTs but the second may be less familiar.


13.1. Model for a 32-bit standard architecture

Almost all current architectures use a "flat-store" model of memory. There is no enforced segregation of one kind of data from another - in general, one can access one unit of memory as a linear offset from any other. Here, TDF ALIGNMENTs are a reflection of constraints for the efficient access of different kinds of data objects - usually one finds that 32-bit integers are most efficiently accessed if they start at 32 bit boundaries and so on.

13.1.1. Alignment model

The representation of ALIGNMENT in a typical standard architecture is a single integer where:

REPR [ { } ] = 1
REPR[ {bitfield} ] = 1
REPR[ {char_variety} ] = 8
REPR[ {short_variety} ] = 16
Otherwise, for all other primitive ALIGNMENTS a:

REPR [ {a} ] = 32
The representation of a compound ALIGNMENT is given by:

REPR[ A ∪ B ] = Max(REPR[ A ], REPR[ B ])
i.e. MAP[ unite_alignment ] = Max
while the ALIGNMENT inclusion predicate is given by:

REPR[ A ⊇ B ] = REPR[ A ] ≥ REPR[ B ]

All the constructions which make ALIGNMENTs are represented here and they will always reduce to an integer known at translate-time. Note that the mappings for ∪ and ⊇ must preserve the basic algebraic properties derived from sets; for example the mapping of ∪ must be idempotent, commutative and associative, which is true for Max.
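A minimal C sketch of this model (illustrative only, not part of any translator) represents an ALIGNMENT as the number of bits it denotes:

/* 32-bit flat-store model: an ALIGNMENT is an integer number of bits. */
typedef unsigned repr_alignment;

static repr_alignment unite_alignment(repr_alignment a, repr_alignment b)
{
	return a > b ? a : b;          /* MAP[unite_alignment] = Max */
}

static int alignment_includes(repr_alignment a, repr_alignment b)
{
	return a >= b;                 /* REPR[A includes B] = REPR[A] >= REPR[B] */
}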

13.1.2. Offset and pointer model

Most standard architectures use byte addressing; to address bits requires more complication. Hence, a value with SHAPE POINTER(A) where REPR[ A ] ≠ 1 is represented by a 32-bit byte address.

We are not allowed to construct pointers where REPR[ A ] = 1, but we still have offsets whose second alignment is a bitfield. Thus offsets to bitfields are represented differently from offsets to other alignments:

A value with SHAPE OFFSET(A, B) where REPR[ B ] ≠ 1 is represented by a 32-bit byte-offset.

A value with SHAPE OFFSET(A, B) where REPR[ B ] = 1 is represented by a 32-bit bit-offset.

13.1.3. Size model

In principle, the representation of a SHAPE is a pair of an ALIGNMENT and a size, given by shape_offset applied to the SHAPE. This pair is a constant which can be evaluated at translate time. The construction shape_offset(S) has SHAPE OFFSET(alignment(S), { }) and hence is represented by a bit-offset:

REPR[ shape_offset(top()) ] = 0
REPR[ shape_offset(integer(char_variety)) ] = 8
REPR[ shape_offset(integer(short_variety)) ] = 16
.... etc. for other numeric varieties
REPR[ shape_offset(pointer(A)) ]= 32
REPR[ shape_offset(compound(E)) ] = REPR[ E ]
REPR[ shape_offset(bitfield(bfvar_bits(b, N))) ] = N
REPR[ shape_offset(nof(N, S)) ] = N * REPR[ offset_pad(alignment(S), shape_offset(S)) ]
         where S is not a bitfield shape
Similar considerations apply to the other offset-arithmetic constructors. In general, we have:

REPR[ offset_zero(A) ] = 0		for all A

REPR[ offset_pad(A, X: OFFSET(C, D)) ]
		= ((REPR[ X ] + REPR[ A ] - 1)/REPR[ A ])*REPR[ A ]/8
	if REPR[ A ] ≠ 1 ∧ REPR[ D ] = 1
Otherwise:
REPR[ offset_pad(A, X: OFFSET(C, D)) ]
		= ((REPR[ X ] + REPR[ A ] - 1)/REPR[ A ])*REPR[ A ]

REPR[ offset_add(X: OFFSET(A, B), Y: OFFSET(C, D)) ]
		= REPR[ X ]*8 + REPR[ Y ]
	if REPR[ B ] ≠ 1 ∧ REPR[ D ] = 1
Otherwise:
REPR[ offset_add(X, Y) ] = REPR[ X ] + REPR[ Y ]

REPR[ offset_max(X: OFFSET(A, B), Y: OFFSET(C, D)) ]
		= Max(REPR[ X ], 8*REPR[ Y ])
	if REPR[ B ] = 1 ∧ REPR[ D ] ≠ 1
REPR[ offset_max(X: OFFSET(A, B), Y: OFFSET(C, D)) ]
		= Max(8*REPR[ X ], REPR[ Y ])
	if REPR[ D ] = 1 ∧ REPR[ B ] ≠ 1
Otherwise:
REPR[ offset_max(X, Y) ] = Max(REPR[ X ], REPR[ Y ])

REPR[ offset_mult(X, E) ] = REPR[ X ] * REPR[ E ]
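The bit/byte distinction in these rules can be made concrete with a small C sketch (illustrative only; alignments are taken as the integer representations of section 13.1.1, and only the offset_add and offset_max rules are shown):

/* x, y are offset representations; repr_b, repr_d are REPR of the offsets'
   second alignments (1 means a bitfield, so that offset is held in bits). */
static long repr_offset_add(long x, unsigned repr_b, long y, unsigned repr_d)
{
	if (repr_b != 1 && repr_d == 1)
		return x * 8 + y;      /* byte-offset plus bit-offset, result in bits */
	return x + y;                  /* otherwise both are in the same units */
}

static long repr_offset_max(long x, unsigned repr_b, long y, unsigned repr_d)
{
	if (repr_b == 1 && repr_d != 1)
		y *= 8;                /* scale the byte-offset up to bits */
	else if (repr_d == 1 && repr_b != 1)
		x *= 8;
	return x > y ? x : y;
}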


A translator working to this model maps ALIGNMENTs into the integers and their inclusion constraints into numerical comparisons. As a result, it will correctly allow many OFFSETs which are disallowed in general; for example, OFFSET({pointer}, {char_variety}) is allowed since REPR[ {pointer} ] ≥ REPR[ {char_variety} ]. Rather fewer of these extra relationships are allowed in the next model considered.


13.2. Model for machines like the iAPX-432

The iAPX-432 does not have a linear model of store. The address of a word in store is a pair consisting of a block-address and a displacement within that block. In order to take full advantage of the protection facilities of the machine, block-addresses are strictly segregated from scalar data like integers, floats, displacements etc. There are at least two different kinds of blocks, one which can only contain block-addresses and the other which contains only scalar data. There are clearly difficulties here in describing data-structures which contain both pointers and scalar data.

Let us assume that the machine has bit-addressing to avoid the bit complications already covered in the first model. Also assume that instruction blocks are just scalar blocks and that block addresses are aligned on 32-bit boundaries.

13.2.1. Alignment model

An ALIGNMENT is represented by a pair consisting of an integer, giving the natural alignment for scalar data, and a boolean to indicate the presence of a block-address. Denote this by:

(s: alignment_of_scalars, b: has_blocks)
We then have:

REPR[ alignment({ }) ] = (s: 1, b: FALSE)
REPR[ alignment({char_variety}) ] = (s: 8, b: FALSE)
... etc. for other numerical and bitfield varieties.
REPR[ alignment({pointer}) ] = (s: 32, b: TRUE)
REPR[ alignment({proc}) ] = (s: 32, b: TRUE)
REPR[ alignment({local_label_value}) ] = (s: 32, b: TRUE)
The representation of a compound ALIGNMENT is given by:

REPR[ A ∪ B ] = (s: Max(REPR[ A ].s, REPR[ B ].s), b: REPR[ A ].b ∨ REPR[ B ].b )

and their inclusion relationship is given by:

REPR[ A ⊇ B ] = (REPR[ A ].s ≥ REPR[ B ].s) ∧ (REPR[ A ].b ∨ ¬ REPR[ B ].b)
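Again as a minimal C sketch (illustrative only), the pair representation and its two operations could look like this:

/* iAPX-432 style model: scalar alignment in bits plus a has-blocks flag. */
struct repr_alignment_432 {
	unsigned s;    /* alignment_of_scalars */
	int b;         /* has_blocks */
};

static struct repr_alignment_432 unite_alignment_432(struct repr_alignment_432 a,
                                                     struct repr_alignment_432 b)
{
	struct repr_alignment_432 r;
	r.s = a.s > b.s ? a.s : b.s;   /* Max of the scalar alignments */
	r.b = a.b || b.b;              /* a block-address in either operand */
	return r;
}

static int alignment_includes_432(struct repr_alignment_432 a,
                                  struct repr_alignment_432 b)
{
	return (a.s >= b.s) && (a.b || !b.b);
}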

13.2.2. Offset and pointer model

A value with SHAPE POINTER A where ¬ REPR[ A ].b is represented by a pair consisting of a block-address of a scalar block and an integer bit-displacement within that block. Denote this by:

(sb: scalar_block_address, sd: bit_displacement)
A value with SHAPE POINTER A where REPR[ A ].b is represented by a quad word consisting of two block-addresses and two bit-displacements within these blocks. One of these block-addresses will contain the scalar information, pointed at by one of the bit-displacements; similarly, the other pair will point at where the block-addresses in the data are held. Denote this by:

(sb: scalar_block_address, ab: address_block_address,
  sd: scalar_displacement, ad: address_displacement )

A value with SHAPE OFFSET(A, B) where ¬ REPR[ A ].b is represented by an integer bit-displacement.

A value with SHAPE OFFSET(A, B) where REPR[ A ].b is represented by a pair of bit-displacements, one relative to a scalar-block and the other to an address-block. Denote this by:

( sd: scalar_displacement, ad: address_displacement )

13.2.3. Size model

The sizes given by shape_offset are now:

REPR[shape_offset(integer(char_variety)) ] = 8
... etc. for other numerical and bitfield varieties.
REPR[ shape_offset(pointer(A)) ] = ( REPR[ A ].b ) ? (sd: 64, ad: 64) : (sd: 32, ad: 32)
REPR[ shape_offset(offset(A, B)) ] = (REPR[ A ].b) ? 64 : 32
REPR[ shape_offset(proc) ] = (sd: 32, ad: 32)
REPR[ shape_offset(compound(E)) ] = REPR[ E ]
REPR[ shape_offset(nof(N, S)) ] 
                         = N * REPR[ offset_pad(alignment(S), shape_offset(S)) ]
REPR[ shape_offset(top) ] = 0

13.2.4. Offset arithmetic

The other OFFSET constructors are given by:

REPR[ offset_zero(A) ] = 0		if ¬ REPR[ A ].b
REPR[ offset_zero(A) ] = (sd: 0, ad: 0)		if REPR[ A ].b

REPR[ offset_add(X: OFFSET(A,B), Y: OFFSET(C, D)) ] = REPR[ X ] + REPR[ Y ]
	if ¬ REPR[ A ].b ∧ ¬ REPR[ C ].b
REPR[ offset_add(X: OFFSET(A,B), Y: OFFSET(C, D)) ]
                       = ( sd: REPR[ X ].sd + REPR[ Y ].sd, ad: REPR[ X ].ad + REPR[ Y ].ad )
	if REPR[ A ].b ∧ REPR[ C ].b
REPR[ offset_add(X: OFFSET(A,B), Y: OFFSET(C, D)) ]
                       = ( sd: REPR[ X ].sd + REPR[ Y ], ad: REPR[ X ].ad )
	if REPR[ A ].b ∧ ¬ REPR[ C ].b

REPR[ offset_pad(A, Y: OFFSET(C, D)) ] = (REPR[ Y ] + REPR[ A ].s - 1)/REPR[ A ].s
	if ¬ REPR[ A ].b ∧ ¬ REPR[ C ].b
REPR[ offset_pad(A, Y: OFFSET(C, D)) ]
                       = ( sd: (REPR[ Y ] + REPR[ A ].s - 1)/REPR[ A ].s, ad: REPR[ Y ].ad )
	if REPR[ C ].b
REPR[ offset_pad(A, Y: OFFSET(C, D)) ]
                       = ( sd: (REPR[ Y ] + REPR[ A ].s - 1)/REPR[ A ].s, ad: 0 )
	if REPR[ A ].b ∧ ¬ REPR[ C ].b

REPR[ offset_max(X: OFFSET(A,B), Y: OFFSET(C, D)) ]
                       = Max(REPR[ X ], REPR[ Y ])
	if ¬ REPR[ A ].b ∧ ¬ REPR[ C ].b
REPR[ offset_max(X: OFFSET(A,B), Y: OFFSET(C, D)) ]
                       = ( sd: Max(REPR[ X ].sd, REPR[ Y ].sd),
                             ad: Max(REPR[ X ].ad, REPR[ Y ].ad) )
	if REPR[ A ].b ∧ REPR[ C ].b
REPR[ offset_max(X: OFFSET(A,B), Y: OFFSET(C, D)) ]
                       = ( sd: Max(REPR[ X ].sd, REPR[ Y ]), ad: REPR[ X ].ad )
	if REPR[ A ].b ∧ ¬ REPR[ C ].b
REPR[ offset_max(X: OFFSET(A,B), Y: OFFSET(C, D)) ]
                       = ( sd: Max(REPR[ Y ].sd, REPR[ X ]), ad: REPR[ Y ].ad )
	if REPR[ C ].b ∧ ¬ REPR[ A ].b

REPR[ offset_subtract(X: OFFSET(A,B), Y: OFFSET(C, D)) ]
                       = REPR[ X ] - REPR[ Y ]
	if ¬ REPR[ A ].b ∧ ¬ REPR[ C ].b
REPR[ offset_subtract(X: OFFSET(A,B), Y: OFFSET(C, D)) ]
                       = ( sd: REPR[ X ].sd - REPR[ Y ].sd, ad: REPR[ X ].ad - REPR[ Y ].ad )
	if REPR[ A ].b ∧ REPR[ C ].b
REPR[ offset_subtract(X: OFFSET(A,B), Y: OFFSET(C, D)) ]
                       = REPR[ X ].sd - REPR[ Y ]
	if REPR[ A ].b ∧ ¬ REPR[ C ].b
.... and so on.
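As with the first model, a small C sketch (illustrative only) may help; it shows offset_add on the pair representation, with plain scalar offsets carried in the sd field:

struct pair_offset { long sd, ad; };   /* scalar and address displacements */

/* xb, yb record whether the corresponding OFFSET's first alignment contains
   block-addresses; if not, only the sd field of that operand is meaningful. */
static struct pair_offset pair_offset_add(struct pair_offset x, int xb,
                                          struct pair_offset y, int yb)
{
	struct pair_offset r;
	r.sd = x.sd + y.sd;
	r.ad = (xb && yb) ? x.ad + y.ad : x.ad;   /* y has no address part otherwise */
	return r;
}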

Unlike the previous one, this model of ALIGNMENTs would reject OFFSETs such as OFFSET({long_variety}, {pointer}) but not OFFSET( {pointer}, {long_variety}) since:

 REPR[ {long_variety} ⊇ {pointer} ] = FALSE
but:
 REPR[ {pointer} ⊇ {long_variety} ] = TRUE

This just reflects the fact that there is no way that one can extract a block-address necessary for a pointer from a scalar-block, but since the representation of a pointer includes a scalar displacement, one can always retrieve a scalar from a pointer to a pointer.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/guide/guide16.html100644 1750 1750 2260 6466607535 20107 0ustar brooniebroonie Conclusion

TDF Guide, Issue 4.0

January 1998

next section
previous section current document TenDRA home page document index


14 Conclusion

This commentary is not complete. I have tended to go into considerable detail into aspects which I consider might be unfamiliar and skip over others which occur in most compiling systems. I also have a tendency to say things more than once, albeit in different words; however if something is worth saying, it is worth saying twice.

I shall continue tracking further revisions of the TDF specification in later releases or appendices to this document.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/guide/guide3.html100644 1750 1750 3665 6466607535 20035 0ustar brooniebroonie Introduction

TDF Guide, Issue 4.0

January 1998

next section previous section current document TenDRA home page document index


1 Introduction

This memo is intended to be a fairly detailed commentary on the specification of TDF, a kind of Talmud to the Torah. If it conflicts with the specification document, it is wrong. The aim is to elucidate the various constructions of TDF, giving examples of usage both from the point of view of a producer of TDF and of how it is used to construct programs on particular platforms using various installers or translators. In addition, some attempt is made to give the reasons why the particular constructions have been chosen. Most of the commentary is a distillation of questions and answers raised by people trying to learn TDF from the specification document.

Throughout this document, references like (S5.1) are headings in the TDF specification, Issue 4.0. I use the term "compiling" or "producing" to mean the production of TDF from some source language and "translating" to mean making a program for some specific platform from TDF.

Changes between Issue 3.0 and Issue 4.0 are marked by change bars. Also I use the first person where I am expressing my own opinions or preferences; these should not be taken as official opinions of DRA or the TenDRA team.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/guide/guide4.html100644 1750 1750 26401 6466607535 20047 0ustar brooniebroonie SORTs and TOKENs

TDF Guide, Issue 4.0

January 1998

next section previous section current document TenDRA home page document index


2.1 - Token applications and first-class SORTs
2.2 - Token definitions and declarations
2.3 - A simple use of a TOKEN
2.4 - Second class SORTs

2 SORTs and TOKENs

In the syntax of languages like C or Pascal, we find various syntactic units like <Expression>, <Identifier> etc. A SORT bears the same relation to TDF as these syntactic units bear to the language; roughly speaking, the syntactic unit <Expression> corresponds to the SORT EXP and <Identifier> to TAG. However, instead of using BNF to compose syntactic units from others, TDF uses explicit constructors to compose its SORTs; each constructor uses other pieces of TDF of specified SORTs to make a piece of its result SORT. For example, the constructor plus uses an ERROR_TREATMENT and two EXPs to make another EXP.

At the moment, there are 58 different SORTs, from ACCESS to VARIETY, given in tables 1 and 2. Some of these have familiar analogues in standard language construction, as with EXP and TAG above. Others will be less familiar since TDF must concern itself with issues not normally addressed in language definitions. For example, the process of linking together TDF programs is at the root of the architecture neutrality of TDF and so must form an integral part of its definition. On the other hand, TDF is not meant to be a language readily accessible to the human reader or writer; computers handle it much more easily. Thus a great many choices have been made in the definition which would be intolerable in a standard language definition for the human programmer but which, paradoxically enough, make it much simpler for a computer to produce and analyse TDF.

The SORTs and constructors in effect form a multi-sorted algebra. There were two principal reasons for choosing this algebraic form of definition. First, it is easy to extend - a new operation on existing constructs simply requires a new constructor. Secondly, the algebraic form is highly amenable to the automatic construction of programs. Large parts of both TDF producers and TDF translators have been created by automatic transformation of the text of the specification document itself, by extracting the algebraic signature and constructing C programs which can read or produce TDF. To this extent, one can regard the specification document as a formal description of the free algebra of TDF SORTs and constructors. Of course, most of the interesting parts of the definition of TDF lie in the equivalences of parts of TDF, so this formality only covers the easy bit.

Another distinction between the TDF definition and language syntactic description is that TDF is to some extent conscious of its own SORTs so that it can specify a new construction of a given SORT. The analogy in normal languages would be that one could define a new construction with new syntax and say this is an example of an <Expression>, for example; I don't know of any standard language which permits this, although those of you with a historical bent might remember Algol-N which made a valiant attempt at it. Of course, the algebraic method of description makes this much easier to specify, rather than having to provide syntax for the new construction in a language.


2.1. Token applications and first-class SORTs

A new construction is introduced by the SORT TOKEN; the constructors involving TOKENs allow one to give an expansion for the TOKEN in terms of other pieces of TDF, possibly including parameters. We can encapsulate a (possibly parameterised) fragment of TDF of a suitable SORT by giving it a TOKEN as identification. Not all of the SORTs are available for this kind of encapsulation - only those which have a SORTNAME constructor (from access to variety). These are the "first-class" SORTs given in table 1 on page 8. Each of these has an appropriate _apply_token constructor (e.g. exp_apply_token) to give the expansion. Most of these also have _cond constructors (e.g. see exp_cond in section 9.1 on page 43) which allow translate-time conditional expansion of the SORT.



Every TOKEN has a result SORT, i.e. the SORT of its resulting expansion; before it can be expanded, one must have its parameter SORTs. Thus, you can regard a TOKEN as having a type defined by its result and parameter SORTs and the _apply_token as the operator which expands the encapsulation and substitutes the parameters.

However, if we look at the signature of exp_apply_token:

	token_value:	TOKEN 
	token_args:	BITSTREAM param_sorts(token_value)
		   -> EXP x

we are confronted by the mysterious BITSTREAM where one might expect to find the actual parameters of the TOKEN.

To explain BITSTREAMs requires a diversion into the bit-encoding of TDF. Constructors for a particular SORT are represented in a number of bits depending on the number of constructors for that SORT; the context will determine the SORT required, so no more bits are required. Thus since there is only one constructor for UNITs, no bits are required to represent make_unit; there are about 120 different constructors for EXPs so 7 bits are required to cover all the EXPs. The parameters of each constructor have known SORTs and so their representations are just concatenated after the representation of the constructor*. While this is a very compact representation, it suffers from the defect that one must decode it even just to skip over it. This is very irksome in some applications, notably the TDF linker, which is not interested in detailed expansions. Similarly, in translators there are places where one wishes to skip over a token application without knowledge of the SORTs of its parameters. Thus a BITSTREAM is just an encoding of some TDF, preceded by the number of bits it occupies. Applications can then skip over BITSTREAMs trivially. Similar considerations apply to BYTESTREAMs used elsewhere; here the encoding is preceded by the number of bytes in the encoding and is aligned to a byte boundary to allow fast copying.
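The point of the length prefix can be shown with a small C sketch (illustrative only; read_tdfint is a placeholder for decoding the length, whose real TDF encoding is not shown here):

#include <stdint.h>

struct bitreader {
	const unsigned char *buf;   /* encoded TDF */
	uint64_t pos;               /* current position, in bits */
};

/* Placeholder: decode the integer at the current position. */
extern uint64_t read_tdfint(struct bitreader *r);

/* A BITSTREAM is its length in bits followed by that many bits, so a
   linker or translator can step over it without decoding the contents. */
static void skip_bitstream(struct bitreader *r)
{
	uint64_t nbits = read_tdfint(r);
	r->pos += nbits;
}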


2.2. Token definitions and declarations

Thus the token_args parameter of exp_apply_token is just the BITSTREAM formed from the actual parameters in the sequence described by the definition of the token_value parameter. This will be given in a TOKEN_DEFN somewhere with constructor token_definition:

	result_sort:	SORTNAME
	tok_params:	LIST(TOKFORMALS)
	body:	result_sort
		   -> TOKEN_DEFN
The result_sort is the SORT of the construction of body; e.g. if result_sort is formed from exp then body would be constructed using the EXP constructors and one would use exp_apply_token to give the expansion. The list tok_params gives the formal parameters of the definition in terms of TOKFORMALS constructed using make_tok_formals:

	sn:	SORTNAME
	tk:	TDFINT
		   -> TOKFORMALS
The TDFINT tk will be the integer representation of the formal parameter expressed as a TOKEN whose result sort is sn (see more about name representation in section 3.1 on page 13). To use the parameter in the body of the TOKEN_DEFN, one simply uses the _apply_token appropriate to sn. Note that sn may be a TOKEN but the result_sort may not.

Hence the BITSTREAM param_sorts(token_value) in the actual parameter of exp_apply_token above is simply formed by the catenation of constructions of the SORTs given by the SORTNAMEs in the tok_params of the TOKEN being expanded.

Usually one gives a name to a TOKEN_DEFN to form a TOKDEF using make_tokdef:

	tok:	TDFINT
	signature:	OPTION(STRING)
	def:	BITSTREAM TOKEN_DEFN
		   -> TOKDEF
Here, tok gives the name that will be used to identify the TOKEN whose expansion is given by def. Any use of this TOKEN (e.g. in exp_apply_token) will be given by make_token(tok). Once again, a BITSTREAM is used to encapsulate the TOKEN_DEFN.

The significance of the signature parameter is discussed in section 3.2.2 on page 18.

Often, one wishes a token without giving its definition - the definition could, for example, be platform-dependent. A TOKDEC introduces such a token using make_tokdec:

	tok:	TDFINT
	signature:	OPTION(STRING)
	s:	SORTNAME
		   -> TOKDEC
Here the SORTNAME, s, is given by token:

	result:	SORTNAME
	params:	LIST(SORTNAME)
		   -> SORTNAME
which gives the result and parameter SORTs of tok.

One can also use a TOKEN_DEFN in an anonymous fashion by giving it as an actual parameter of a TOKEN which itself demands a TOKEN parameter. To do this one simply uses use_tokdef :

	tdef:	BITSTREAM TOKEN_DEFN
		   -> TOKEN

2.3. A simple use of a TOKEN

The crucial use of TOKENs in TDF is to provide abstractions of APIs (see section 10 on page 45) but they are also used as shorthand for commonly occurring constructions. For example, given the TDF constructor plus, mentioned above, we could define a plus with only two EXP parameters more suitable to C by using the wrap constructor as the ERROR_TREATMENT:

make_tokdef (C_plus, empty,
	token_definition(
		exp(),
		(make_tokformals(exp(), l), make_tokformals(exp(), r)),
		plus(wrap(), exp_apply_token(l, ()), exp_apply_token(r, ()))
	)
)
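A use of this token for, say, a + b could then be written along the lines of exp_apply_token(make_token(C_plus), (obtain_tag(a), obtain_tag(b))), the two actual parameters being encoded as the BITSTREAM described above.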

2.4. Second class SORTs

Second class SORTs (given in table 2 on page 11) cannot be TOKENised. These are the "syntactic units" of TDF which the user cannot extend; he can only produce them using the constructors defined in core-TDF.

Some of these constructors are implicit. For example, there are no explicit constructors for LIST or SLIST which are both used to form lists of SORTs; their construction is simply part of the encoding of TDF. However, it is foreseen that LIST constructors would be highly desirable and there will probably be extensions to TDF to promote LIST from a second-class SORT to a first-class one. This will not apply to SLIST or to the other SORTs which have implicit constructions. These include BITSTREAM, BYTESTREAM, TDFINT, TDFIDENT and TDFSTRING.




Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/guide/guide5.html100644 1750 1750 42177 6466607535 20060 0ustar brooniebroonie CAPSULEs and UNITs

TDF Guide, Issue 4.0

January 1998

next section previous section current document TenDRA home page document index


3.1 - make_capsule and name-spaces
3.1.1 - External linkages
3.1.2 - UNITs
3.1.3 - make_unit
3.1.4 - LINK
3.2 - Definitions and declarations
3.2.1 - Scopes and linking
3.2.2 - Declaration and definition signatures
3.2.3 - STRING

3 CAPSULEs and UNITs

A CAPSULE is typically the result of a single compilation - one could regard it as being the TDF analogue of a Unix .o file. Just as with .o files, a set of CAPSULEs can be linked together to form another. Similarly, a CAPSULE may be translated to make a program for some platform, provided certain conditions are met. One of these conditions is obviously that a translator exists for the platform, but there are others. They basically state that any names that are undefined in the CAPSULE can be supplied by the system in which it is to be run. For example, the translator could produce assembly code with external identifiers which will be supplied by some system library.


3.1. make_capsule and name-spaces

The only constructor for a CAPSULE is make_capsule. Its basic function is to compose together UNITs which contain the declarations and definitions of the program. The signature of make_capsule looks rather daunting and is probably best represented graphically.

The diagram gives an example of a CAPSULE using the same components as in the following text.

Each CAPSULE has its own name-space, distinct from all other CAPSULEs' name-spaces and also from the name-spaces of its component UNITs (see section 3.1.2 on page 14). There are several different kinds of names in TDF and each name-space is further subdivided into one for each kind of name. The number of different kinds of names is potentially unlimited but only three are used in core-TDF, namely "tag", "token" and "al_tag". Those names in a "tag" name-space generally correspond to identifiers in normal programs and I shall use these as the paradigm for the properties of them all.

The actual representation of a "tag" name in a given name-space is an integer, described as SORT TDFINT. These integers are drawn from a contiguous set starting from 0 up to some limit given by the constructor which introduces the name-space. For CAPSULE name-spaces, this is given by the capsule_linking parameter of make_capsule:

capsule_linking: SLIST(CAPSULE_LINK)
In the most general case in core-TDF, there would be three entries in the list introducing limits using make_capsule_link for each of the "tag", "token" and "al_tag" name-spaces for the CAPSULE. Thus if:

capsule_linking = (make_capsule_link("tag", 5), 
			            make_capsule_link("token", 6), 
			            make_capsule_link("al_tag", 7))
there are 5 CAPSULE "tag" names used within the CAPSULE, namely 0, 1, 2, 3 and 4; similarly there are 6 "token" names and 7 "al_tag" names.

3.1.1. External linkages

The context of usage will always determine when and how an integer is to be interpreted as a name in a particular name-space. For example, a TAG in a UNIT is constructed by make_tag applied to a TDFINT which will be interpreted as a name from that UNIT's "tag" name-space. An integer representing a name in the CAPSULE name-space would be found in a LINKEXTERN of the external_linkage parameter of make_capsule.

external_linkage: SLIST(EXTERN_LINK)
Each EXTERN_LINK is itself formed from an SLIST of LINKEXTERNs given by make_extern_link. The order of the EXTERN_LINKs determines which name-space one is dealing with; they are in the same order as given by the capsule_linking parameter. Thus, with the capsule_linking given above, the first EXTERN_LINK would deal with the "tag" name-space; each of its component LINKEXTERNs constructed by make_linkextern would be identifying a tag number with some name external to the CAPSULE; for example one might be:

make_linkextern (4, string_extern("printf"))
This would mean: identify the CAPSULE's "tag" 4 with a name called "printf", external to the module. The name "printf" would be used for linkage external to the CAPSULE; any name required outside the CAPSULE would have to be linked like this.

3.1.2. UNITs

This name "printf", of course, does not necessarily mean the C procedure in the system library. This depends both on the system context in which the CAPSULE is translated and also the meaning of the CAPSULE "tag" name 4 given by the component UNITs of the CAPSULE in the groups parameter of make_capsule:

groups: SLIST(GROUP)
Each GROUP in the groups SLIST will be formed by sets of UNITs of the same kind. Once again, there are a potentially unlimited number of kinds of UNITs but core-TDF only uses those named "tld", "al_tagdefs", "tagdecs", "tagdefs", "tokdecs" and "tokdefs"*. These names will appear (in the same order as in groups) in the prop_names parameter of make_capsule, one for each kind of UNIT appearing in the CAPSULE:

prop_names: SLIST(TDFIDENT)
Thus if:

prop_names = ("tagdecs", "tagdefs")
then, the first element of groups would contain only "tagdecs" UNITs and the second would contain only "tagdefs" UNITs. A "tagdecs" UNIT contains things rather like a set of global identifier declarations in C, while a "tagdefs" UNIT is like a set of global definitions of identifiers.

3.1.3. make_unit

Now we come to the construction of UNITs using make_unit, as in the diagram below.

First we give the limits of the various name-spaces local to the UNIT in the local_vars parameter:

local_vars: SLIST(TDFINT)
Just in the same way as with external_linkage, the numbers in local_vars correspond (in the same order) to the spaces indicated in capsule_linking in section 3.1 on page 13. With our example, the first element of local_vars gives the number of "tag" names local to the UNIT, the second gives the number of "token" names local to the UNIT etc. These will include all the names used in the body of the UNIT. Each declaration of a TAG, for example, will use a new number from the "tag" name-space; there is no hiding or reuse of names within a UNIT.

3.1.4. LINK

Connections between the CAPSULE name-spaces and the UNIT name-spaces are made by LINKs in the lks parameter of make_unit:

lks: SLIST(LINKS)
Once again, lks is effectively indexed by the kind of name-space. Each LINKS is an SLIST of LINKs, each of which establishes an identity between names in the CAPSULE name-space and names in the UNIT name-space. Thus if the first element of lks contains:

make_link(42, 4)
then, the UNIT "tag" 42 is identical to the CAPSULE "tag" 4.

Note that names from the CAPSULE name-space only arise in two places, LINKs and LINKEXTERNs. Every other use of names is derived from some UNIT name-space.


3.2. Definitions and declarations

The encoding in the properties: BYTESTREAM parameter of a UNIT is a PROPS, for which there are five constructors corresponding to the kinds of UNITs in core-TDF, make_al_tagdefs, make_tagdecs, make_tagdefs, make_tokdefs and make_tokdecs. Each of these will declare or define names in the appropriate UNIT name-space which can be used by make_link in the UNIT's lks parameter as well as elsewhere in the properties parameter. The distinction between "declarations" and "definitions" is rather similar to C usage; a declaration provides the "type" of a name, while a definition gives its meaning. For tags, the "type" is the SORT SHAPE (see below). For tokens, the "type" is a SORTNAME constructed from the SORTNAMEs of the parameters and result of the TOKEN using token:

	params:	LIST(SORTNAME)
	result:	SORTNAME
		   -> SORTNAME
Taking make_tagdefs as a paradigm for PROPS, we have:

	no_labels: 	TDFINT
	tds: 	SLIST(TAGDEF)
		   -> TAGDEF_PROPS
The no_labels parameter introduces the size of yet another name-space local to the PROPS, this time for the LABELs used in the TAGDEFs. Each TAGDEF in tds will define a "tag" name in the UNIT's name-space. The order of these TAGDEFs is immaterial since the initialisations of the tags are values which can be solved at translate time, load time or as unordered dynamic initialisations.

There are three constructors for TAGDEFs, each with slightly different properties. The simplest is make_id_tagdef:

	t:	TDFINT	
	signature:	OPTION(STRING)
	e:	EXP x
		   -> TAGDEF
Here, t is the tag name and the evaluation of e will be the value (of SHAPE x) given by an obtain_tag(t) in an EXP. Note that t is not a variable; the value of obtain_tag(t) will be invariant. The signature parameter gives a STRING (see section 3.2.3 on page 18) which may be used as a name for the tag, external to TDF, and also as a check introduced by the producer that a tagdef and its corresponding tagdec have the same notion of the language-specific type of the tag.

The two other constructors for TAGDEF, make_var_tagdef and common_tagdef both define variable tags and have the same signature:

	t:	TDFINT
	opt_access:	OPTION(ACCESS)
	signature:	OPTION(STRING)
	e:	EXP x
		   -> TAGDEF
Once again t is the tag name but now e is the initialisation of the variable t. A use of obtain_tag(t) will give a pointer to the variable (of SHAPE POINTER x), rather than its contents*. There can only be one make_var_tagdef of a given tag in a program, but there may be more than one common_tagdef, possibly with different initialisations; however these initialisations must overlap consistently just as in common blocks in FORTRAN.

The ACCESS parameter gives various properties required for the tag being defined and is discussed in section 5.3.2 on page 30.

The initialisation EXPs of TAGDEFs will be evaluated before the "main" program is started. An initialisation EXP must either be a constant (in the sense of section 9 on page 43) or reduce (either directly or by token or _cond expansions) to an initial_value:

	init:	EXP s
		   -> EXP s
The translator will arrange that init will be evaluated once only before any procedure application, other than those themselves involved in initial_values, but after any constant initialisations. The order of evaluation of different initial_values is arbitrary.

3.2.1. Scopes and linking

Only names introduced by AL_TAGDEFS, TAGDEFS, TAGDECs, TOKDECs and TOKDEFs can be used in other UNITs (and then, only via the lks parameters of the UNITs involved). You can regard them as being similar to C global declarations. Token definitions include their declarations implicitly; however this is not true of tags. This means that any CAPSULE which uses or defines a tag across UNITs must include a TAGDEC for that tag in its "tagdecs" UNITs. A TAGDEC is constructed using either make_id_tagdec , make_var_tagdec or common_tagdec, all with the same form:

	t_intro:	TDFINT
	acc:		OPTION(ACCESS)
	signature:	OPTION(STRING)
	x:		SHAPE
		   -> TAGDEC
Here the tagname is given by t_intro; the SHAPE x will define the space and alignment required for the tag (this is analogous to the type in a C declaration). The acc field will define certain properties of the tag not implicit in its SHAPE; I shall return to the kinds of properties envisaged in discussing local declarations in section 5.3 on page 30.

Most of a program will appear in the "tagdefs" UNITs - they will include the definitions of the procedures of the program, which in turn will include local definitions of tags for the locals of the procedures.

The standard TDF linker allows one to link CAPSULEs together using the name identifications given in the LINKEXTERNs, perhaps hiding some of them in the final CAPSULE. It does this just by generating a new CAPSULE name-space, grouping together component UNITs of the same kind and replacing their lks parameters with values derived from the new CAPSULE name-space without changing the UNITs' name-spaces or their props parameters. The operation of grouping together UNITs is effectively assumed to be associative, commutative and idempotent, e.g. if the same tag is declared in two capsules it is assumed to be the same thing. It also means that there is no implied order of evaluation of UNITs or of their component TAGDEFs.

Different languages have different conventions for deciding how programs are actually run. For example, C requires the presence of a suitably defined "main" procedure; this is usually enforced by requiring the system ld utility to bind the name "main" along with the definitions of any library values required. Otherwise, the C conventions are met by standard TDF linking. Other languages have more stringent requirements. For example, C++ requires dynamic initialisation of globals, using initial_value. As the only runnable code in TDF is in procedures, C++ would probably require an additional linking phase to construct a "main" procedure which calls the initialisation procedures of each CAPSULE involved if the system linker did not provide suitable C++ linking.

3.2.2. Declaration and definition signatures

The signature arguments of TAGDEFs and TAGDECs are designed to allow a measure of cross-UNIT checking when linking independently compiled CAPSULEs. Suppose that we have a tag, t, used in one CAPSULE and defined in another; the first CAPSULE would have to have a TAGDEC for t whose TAGDEF is in the second. The signature STRING of both could be arranged to represent the language-specific type of t as understood at compilation-time. Clearly, when the CAPSULEs are linked the types must be identical and hence their STRING representation must be the same - a translator will reject any attempt to link definitions and declarations of the same object with different signatures.

Similar considerations apply to TOKDEFs and TOKDECs; the "type" of a TOKEN may not have any familiar analogue in most HLLs, but the principle remains the same.

3.2.3. STRING

The SORT STRING is used in various constructs other than declarations and definitions. It is a first-class SORT with string_apply_token and string_cond. A primitive STRING is constructed from a TDFSTRING(k, n), which is an encoding of n integers, each of k bits, using make_string:

	arg:	TDFSTRING(k, n)
		   -> STRING(k, n)
STRINGs may be concatenated using concat_string:

	arg1:	STRING(k, n)
	arg2:	STRING(k,m)
		   -> STRING(k, n+m)
Being able to compose strings, including token applications etc, means that late-binding is possible in signature checking in definitions and declarations. This late-binding means that the representation of platform-dependent HLL types need only be fully expanded at install-time and hence the types could be expressed in their representational form on the specific platform.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/guide/guide6.html100644 1750 1750 57506 6466607535 20063 0ustar brooniebroonie SHAPEs, ALIGNMENTs and OFFSETs.

TDF Guide, Issue 4.0

January 1998

next section previous section current document TenDRA home page document index


4.1 - Shapes
4.1.1 - TOP, BOTTOM, LUB
4.1.2 - INTEGER
4.1.3 - FLOATING and complex
4.1.4 - BITFIELD
4.1.5 - PROC
4.1.6 - Non-primitive SHAPEs
4.2 - Alignments
4.2.1 - ALIGNMENT constructors
4.2.2 - Special alignments
4.2.3 - AL_TAG, make_al_tagdef
4.3 - Pointer and offset SHAPEs
4.3.1 - OFFSET
4.4 - Compound SHAPEs
4.4.1 - Offset arithmetic with compound shapes
4.4.2 - offset_mult
4.4.3 - OFFSET ordering and representation
4.5 - BITFIELD alignments

4 SHAPEs, ALIGNMENTs and OFFSETs.

In most languages there is some notion of the type of a value. This is often an uncomfortable mix of a definition of a representation for the value and a means of choosing which operators are applicable to the value. The TDF analogue of the type of a value is its SHAPE (S3.20). A SHAPE is only concerned with the representation of a value, being an abstraction of its size and alignment properties. Clearly an architecture-independent representation of a program cannot say, for example, that a pointer is 32 bits long; the size of pointers has to be abstracted so that translations to particular architectures can choose the size that is apposite for the platform.


4.1. Shapes

There are ten different basic constructors for the SORT SHAPE from bitfield to top as shown in table 3. SHAPEs arising from those constructors are used as qualifiers (just using an upper case version of the constructor name) to various SORTs in the definition; for example, EXP TOP is an expression with top SHAPE. This is just used for definitional purposes only; there is no SORT SHAPENAME as one has SORTNAME.

In the TDF specification of EXPs, you will observe that the EXPs in constructor signatures are all qualified by a SHAPE name; for example, a parameter might be EXP INTEGER(v). This merely means that for the construct to be meaningful the parameter must be derived from a constructor defined to be an EXP INTEGER(v). You might be forgiven for assuming that TDF is hence strongly-typed by its SHAPEs. This is not true; the producer must get it right. There are some checks in translators, but these are not exhaustive and are more for the benefit of translator writers than for the user. A tool for testing the SHAPE correctness of a TDF program would be useful but has yet to be written.

4.1.1. TOP, BOTTOM, LUB

Two of the SHAPE constructions are rather specialised; these are TOP and BOTTOM. The result of any expression with a TOP shape will always be discarded; examples are those produced by assign and integer_test. A BOTTOM SHAPE is produced by an expression which will leave the current flow of control, e.g. goto. The significance of these SHAPEs only really impinges on the computation of the shapes of constructs which have alternative expressions as results. For example, the result of conditional is the result of one of its component expressions. In this case, the SHAPE of the result is described as the LUB of the SHAPEs of the components. This simply means that if one of the component SHAPEs is TOP then the resulting SHAPE is TOP; if one is BOTTOM then the resulting SHAPE is the SHAPE of the other; otherwise both component SHAPEs must be equal and this is the resulting SHAPE. Since this operation is associative, commutative and idempotent, we can speak quite unambiguously of the LUB of several SHAPEs.

4.1.2. INTEGER

Integer values in TDF have shape INTEGER(v) where v is of SORT VARIETY. The constructor for this SHAPE is integer with a VARIETY parameter. The basic constructor for VARIETY is var_limits which has a pair of signed natural numbers as parameters giving the limits of possible values that the integer can attain. The SHAPE required for a 32 bit signed integer would be:

integer(var_limits(-2^31, 2^31 - 1))
while an unsigned char is:

integer(var_limits(0, 255))
A translator should represent each integer variety by an object big enough (or bigger) to contain all the possible values within the limits of the VARIETY. That being said, I must confess that most current translators do not handle integers of more than the maximum given naturally by the target architecture, but this will be rectified in due course.

The other way of constructing a VARIETY is to specify the number of bits required for its 2s-complement representation using var_width:

	signed_width:	BOOL
	width:	NAT
		   -> VARIETY
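For example, the VARIETY of a 16-bit signed integer could equally be written var_width(true, 16).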

4.1.3. FLOATING and complex

Similarly, floating point and complex numbers have shape FLOATING qualified by a FLOATING_VARIETY.

A FLOATING_VARIETY for a real number is constructed using fvar_parms:

	base:	NAT
	mantissa_digits:	NAT
	minimum_exponent:	NAT
	maximum_exponent:	NAT
		   -> FLOATING_VARIETY
A FLOATING_VARIETY specifies the base, number of mantissa digits, and maximum and minimum exponent. Once again, it is intended that the translator will choose a representation which will contain all possible values, but in practice only those which are included in IEEE float, double and extended are actually implemented.

Complex numbers have a floating variety constructed by complex_parms which has the same signature as fvar_parms. The representation of these numbers is likely to be a pair of real numbers each defined as if by fvar_parms with the same arguments. The real and imaginary parts of a complex number can be extracted using real_part and imaginary_part; these could have been injected into the complex number using make_complex or any of the complex operations. Many translators will simply transform complex numbers into COMPOUNDs consisting of two floating point numbers, transforming the complex operations into floating point operations on the fields.

4.1.4. BITFIELD

A number of contiguous bits have shape BITFIELD, qualified by a BITFIELD_VARIETY (S3.4) which gives the number of bits involved and whether these bits are to be treated as signed or unsigned integers. Current translators put a maximum of 32 or 64 on the number of bits.

4.1.5. PROC

The representational SHAPE of procedure values is given by PROC with constructor proc. I shall return to this in the description of the operations which use it.

4.1.6. Non-primitive SHAPEs

The construction of the other four SHAPEs involves either existing SHAPEs or the alignments of existing SHAPEs. These are constructed by compound, nof , offset and pointer. Before describing these, we require a digression into what is meant by alignments and offsets.


4.2. Alignments

In most processor architectures there are limitations on how one can address particular kinds of objects in convenient ways. These limitations are usually defined as part of the ABI for the processor. For example, in the MIPs processor the fastest way to access a 32-bit integer is to ensure that the address of the integer is aligned on a 4-byte boundary in the address space; obviously one can extract a mis-aligned integer but not in one machine instruction. Similarly, 16-bit integers should be aligned on a 2-byte boundary. In principle, each primitive object could have similar restrictions for efficient access and these restrictions could vary from platform to platform. Hence, the notion of alignment has to be abstracted to form part of the architecture independent TDF - we cannot assume that any particular alignment regime will hold universally.

The abstraction of alignments clearly has to cover compound objects as well as primitive ones like integers. For example, if a field of a structure in C is to be accessed efficiently, then the alignment of the field will influence the alignment of the structure as a whole; the structure itself could be a component of a larger object whose alignment must then depend on the alignment of the structure, and so on. In general, we find that a compound alignment is given by the maximum alignment of its components, regardless of the form of the compound object e.g. whether it is a structure, union, array or whatever.

This gives an immediate handle on the abstraction of the alignment of a compound object - it is just the set of abstractions of the alignments of its components. Since "maximum" is associative, commutative and idempotent, the component sets can be combined using normal set-union rules. In other words, a compound alignment is abstracted as the set of alignments of the primitive objects which make up the compound object. Thus the alignment abstraction of a C structure with only float fields is the singleton set containing the alignment of a float while that of a C union of an int and this structure is a pair of the alignments of an int and a float.

4.2.1. ALIGNMENT constructors

The TDF abstraction of an alignment has SORT ALIGNMENT. The constructor, unite_alignments, gives the set-union of its ALIGNMENT parameters; this would correspond to taking a maximum of two real alignments in the translator.

The constructor, alignment, gives the ALIGNMENT of a given SHAPE according to the rules given in the definition. These rules effectively define the primitive ALIGNMENTs as in the ALIGNMENT column of table 3. Those for PROC, all OFFSETs and all POINTERs are constants regardless of any SHAPE qualifiers. Each of the INTEGER VARIETYs, each of the FLOATING VARIETYs and each of the BITFIELD VARIETYs has its own ALIGNMENT. These ALIGNMENTs will be bound to values apposite to the particular platform at translate-time. The ALIGNMENT of TOP is conventionally taken to be the empty set of ALIGNMENTs (corresponding to the minimum alignment on the platform).

The alignment of a procedure parameter clearly has to include the alignment of its SHAPE; however, most ABIs will mandate a greater alignment for some SHAPEs e.g. the alignment of a byte parameter is usually defined to be on a 32-bit rather than an 8-bit boundary. The constructor, parameter_alignment, gives the ALIGNMENT of a parameter of given SHAPE.

4.2.2. Special alignments

There are several other special ALIGNMENTs.

The alignment of a code address is {code} given by code_alignment; this will be the alignment of a pointer given by make_local_lv giving the value of a label.

The other special ALIGNMENTs are considered to include all of the others, but remain distinct. They are all concerned with offsets and pointers relevant to procedure frames, procedure parameters and local allocations and are collectively known as frame alignments. These frame alignments differ from the normal alignments in that their mapping to a given architecture is rather more than just saying that it describes some n-bit boundary. For example, alloca_alignment describes the alignment of dynamic space produced by local_alloc (roughly the C alloca). Now, an ABI could specify that the alloca space is a stack disjoint from the normal procedure stack; thus manipulations of space at alloca_alignment may involve different code to space generated in other ways.

Similar considerations apply to the other special alignments, callees_alignment(b), callers_alignment(b) and locals_alignment. The first two give the alignments of the bases of the two different parameter spaces in procedures (q.v.) and locals_alignment gives the alignment of the base of locally declared tags within a procedure. The exact interpretation of these depends on how the frame stack is defined in the target ABI, e.g. does the stack grow downwards or upwards?

The final special alignment is var_param_alignment. This describes the alignment of a special kind of parameter to a procedure which can be of arbitrary length (see section 5.1.1 on page 27).

4.2.3. AL_TAG, make_al_tagdef

Alignments can also be named as AL_TAGs using make_al_tagdef. There is no corresponding make_al_tagdec since AL_TAGs are implicitly declared by their constructor, make_al_tag. The main reason for having names for alignments is to allow one to resolve the ALIGNMENTs of recursive data structures. If, for example, we have mutually recursive structures, their ALIGNMENTs are best named and given as a set of equations formed by AL_TAGDEFs. A translator can then solve these equations trivially by substitution; this is easy because the only significant operation is set-union.
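
As an illustration, consider the mutually recursive C structures below (a sketch, not part of the definition). To form the SHAPE of each structure the producer needs POINTER(alignment of the other); naming the two ALIGNMENTs as AL_TAGs breaks the circularity, and both equations then reduce by substitution to the set containing the pointer and int ALIGNMENTs.

	struct a { struct b *pb; int i; };   /* alignment(a) = {pointer, int} */
	struct b { struct a *pa; int j; };   /* alignment(b) = {pointer, int} */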


4.3. Pointer and offset SHAPEs

A pointer value must have a form which reflects the alignment of the object that it points to; for example, in the MIPs processor, the bottom two bits of a pointer to an integer must be zero. The TDF SHAPE for a pointer is POINTER qualified by the ALIGNMENT of the object pointed to. The constructor pointer uses this alignment to make a POINTER SHAPE.

4.3.1. OFFSET

Expressions which give sizes or offsets in TDF have an OFFSET SHAPE. These are always described as the difference between two pointers. Since the alignments of the objects pointed to could be different, an OFFSET is qualified by these two ALIGNMENTs. Thus an EXP OFFSET(X,Y) is the difference between an EXP POINTER(X) and an EXP POINTER(Y). In order for the alignment rules to apply, the set X of alignments must include Y. The constructor offset uses two such alignments to make an OFFSET SHAPE. However, many instances of offsets will be produced implicitly by the offset arithmetic, e.g., offset_pad:

	a: 	ALIGNMENT
	arg1: 	EXP OFFSET(z, t)
		   -> EXP OFFSET(z ∪ a, a)
This gives the next OFFSET greater than or equal to arg1 at which an object of ALIGNMENT a can be placed. It should be noted that the calculation of shapes and alignments is entirely a translate-time activity; only EXPs should produce runnable code. This code, of course, may depend on the shapes and alignments involved; for example, offset_pad might round up arg1 to be a multiple of four bytes if a was an integer ALIGNMENT and z was a character ALIGNMENT. Translators also do extensive constant analysis, so if arg1 was a constant offset, then the round-off would be done at translate-time to produce another constant.
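
On a conventional linear-addressed machine, where offsets and alignments both reduce to byte counts, the effect of offset_pad is just the usual round-up-to-a-multiple calculation. A sketch in C, assuming the alignment is a power of two:

	/* Round the byte offset off up to the next multiple of align; for
	   constant arguments a translator would fold this at translate-time. */
	unsigned long pad_offset(unsigned long off, unsigned long align)
	{
		return (off + align - 1) & ~(align - 1);
	}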



4.4. Compound SHAPEs

The alignments of compound SHAPEs (i.e. those arising from the constructors compound and nof) are derived from the constructions which produced the SHAPE. To take the easy one first, the constructor nof has signature:

	n:	 NAT
	s: 	SHAPE
		   -> SHAPE
This SHAPE describes an array of n values all of SHAPE s; note that n is a natural number and hence is a constant known to the producer. Throughout the definition this is referred to as the SHAPE NOF(n, s). The ALIGNMENT of such a value is alignment(s); i.e. the alignment of an array is just the alignment of its elements.

The other compound SHAPEs are produced using compound:

	sz: 	EXP OFFSET(x, y)
		   -> SHAPE
The sz parameter gives the minimum size which can accommodate the SHAPE.

4.4.1. Offset arithmetic with compound shapes

The constructors offset_add, offset_zero and shape_offset are used together with offset_pad to implement (inter alia) selection from structures represented by COMPOUND SHAPEs. Starting from the zero OFFSET given by offset_zero, one can construct an EXP which is the offset of a field by padding and adding offsets until the required field is reached. The value of the field required could then be extracted using component or add_to_ptr. Most producers would define a TOKEN for the EXP OFFSET of each field of a structure or union used in the program simply to reduce the size of the TDF.

The SHAPE of a C structure consisting of a char followed by an int would require x to be the set consisting of the two INTEGER VARIETY alignments, one for int and one for char, and sz would probably have been constructed like:

sz = offset_add(offset_pad(int_al, shape_offset(char)), shape_offset(int))
The various rules for the ALIGNMENT qualifiers of the OFFSETs give the required SHAPE; these rules also ensure that offset arithmetic can be implemented simply using integer arithmetic for standard architectures (see section 13.1 on page 56). Note that the OFFSET computed here is the minimum size for the SHAPE. This would not in general be the same as the difference between successive elements of an array of these structures, which would have SHAPE OFFSET(x, x) as produced by offset_pad(x, sz). For examples of the use of OFFSETs to access and create structures, see section 12 on page 53.
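
For the char-then-int structure above, a C view of what these calculations typically fold to (assuming a 1 byte char and a 4 byte, 4-aligned int) is sketched below; offsetof(struct s, i) corresponds to the padded OFFSET of the int field and sizeof(struct s) to the array stride offset_pad(x, sz).

	#include <stdio.h>
	#include <stddef.h>

	struct s { char c; int i; };

	int main(void)
	{
		/* Typically prints "4 8" on a 32-bit or 64-bit ABI. */
		printf("%zu %zu\n", offsetof(struct s, i), sizeof(struct s));
		return 0;
	}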

4.4.2. offset_mult

In C, all structures have size known at translate-time. This means that OFFSETs for all field selections of structures and unions are translate-time constants; there is never any need to produce code to compute these sizes and offsets. Other languages (notably Ada) do have variable size structures and so sizes and offsets within these structures may have to be computed dynamically. Indexing in C will require the computation of dynamic OFFSETs; this would usually be done by using offset_mult to multiply an offset expression representing the stride by an integer expression giving the index:

	arg1: 	EXP OFFSET(x, x)
	arg2: 	EXP INTEGER(v)
		   -> EXP OFFSET(x, x)
and using add_to_ptr with a pointer expression giving the base of the array with the resulting OFFSET.
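
The C-level picture is the familiar one: on a linear-addressed machine the dynamic OFFSET is just the index multiplied by the stride, and add_to_ptr is byte addition. A sketch:

	/* a[i] computed explicitly: offset_mult(stride, i) followed by add_to_ptr. */
	int index_array(int *a, long i)
	{
		char *base = (char *)a;
		return *(int *)(base + i * (long)sizeof(int));
	}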

4.4.3. OFFSET ordering and representation

There is an ordering defined on OFFSETs with the same alignment qualifiers, as given by offset_test and offset_max having properties like:

shape_offset(S) ≥ offset_zero(alignment(S))
A ≥ B 	iff offset_max(A,B) = A
offset_add(A, B) ≥ A 	where B ≥ offset_zero(some compatible alignment)
In most machines, OFFSETs would be represented as single integer values with the OFFSET ordering corresponding to simple integer ordering. The offset_add constructor just translates to simple addition with offset_zero as 0, with similar correspondences for the other offset constructors. You might well ask why TDF does not simply use integers for offsets, instead of introducing the rather complex OFFSET SHAPE. The reasons are twofold. First, following the OFFSET arithmetic rules concerned with the ALIGNMENT qualifiers will ensure that one never extracts a value from a pointer with the wrong alignment by, for example, applying contents to an add_to_ptr. This frees TDF from having to define the effect of strange operations like forming a float by taking the contents of a pointer to a character which may be mis-aligned with respect to floats - a heavy operation on most processors. The second reason is quite simple; there are machines which cannot represent OFFSETs by a single integer value.

The iAPX-432 is a fairly extreme example of such a machine; it is a "capability" machine which must segregate pointer values and non-pointer values into different spaces. On this machine a value of SHAPE POINTER({pointer, int}) (e.g. a pointer to a structure containing both integers and pointers) could have two components; one referring to the pointers and another to the integers. In general, offsets from this pointer would also have two components, one to pick out any pointer values and the other the integer values. This would obviously be the case if the original POINTER referred to an array of structures containing both pointers and integers; an offset to an element of the array would have SHAPE OFFSET({pointer, int},{pointer , int}); both elements of the offset would have to be used as displacements to the corresponding elements of the pointer to extract the structure element. The OFFSET ordering is now given by the comparison of both displacements. Using this method, one finds that pointers in store to non-pointer alignments are two words in different blocks and pointers to pointer-alignments are four words, two in one block and two in another. This sounds a very unwieldy machine compared to normal machines with linear addressing. However, who knows what similar strange machines will appear in future; the basic conflicts between security, integrity and flexibility that the iAPX-432 sought to resolve are still with us. For more on the modelling of pointers and offsets see section 13 on page 56.


4.5. BITFIELD alignments

Even in standard machines, one finds that the size of a pointer may depend on the alignment of the data pointed at. Most machines do not allow one to construct pointers to bits with the same facility as other alignments. This usually means that pointers in memory to BITFIELD VARIETYs must be implemented as two words with an address and bit displacement. One might imagine that a translator could implement BITFIELD alignments so that they are the same as the smallest natural alignment of the machine and avoid the bit displacement, but this is not the intention of the definition. On any machine for which it is meaningful, the alignment of a BITFIELD must be one bit; in other words successive BITFIELDs are butted together with no padding bits *. Within the limits of what one can extract from BITFIELDs, namely INTEGER VARIETYs, this is how one should implement non-standard alignments, perhaps in constructing data, such as protocols, for exchange between machines. One could implement some Ada representational statements in this way; certainly the most commonly used ones.
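
For example, a translator following this rule would butt the C bitfields below together in a single 32-bit integer variety (a sketch; the exact layout of C bitfields is implementation-defined, but this is the packing the rule describes):

	struct packed_flags {
		unsigned live : 1;
		unsigned kind : 3;
		unsigned size : 12;
		unsigned id   : 16;   /* 1 + 3 + 12 + 16 = 32 bits, no padding bits */
	};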

TDF Version 3.0 does not allow one to construct a pointer of SHAPE POINTER(b) where b consists entirely of bitfield alignments; this relieves the translators of the burden of doing general bit-addressing. Of course, this simply shifts the burden to the producer. If the high level language requires the construction of a pointer to an arbitrary bit position, then the producer is required to represent such a pointer as a pair consisting of a pointer to some alignment including the required bitfield and an offset from this alignment to the bitfield. For example, Ada may require the construction of a pointer to a boolean for use as the parameter to a procedure; the SHAPE of the representation of this Ada pointer could be a COMPOUND formed from a POINTER({x,b}) and an OFFSET({x, b}, b) where b is a 1 bit bitfield alignment. To access the boolean, the producer would use the elements of this pair as arguments to bitfield_assign and bitfield_contents (see Assigning and extracting bitfields on page 39).
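
A sketch of what bitfield_contents might reduce to on a typical 32-bit little-endian target, given the pointer half and the bit-offset half of such a pair (the names and layout here are purely illustrative):

	/* Extract nbits bits (nbits < 32) starting bit_off bits into the word
	   addressed by p; the (p, bit_off) pair plays the role of the producer's
	   POINTER/OFFSET pair. */
	unsigned read_bitfield(const unsigned *p, unsigned bit_off, unsigned nbits)
	{
		return (*p >> bit_off) & ((1u << nbits) - 1u);
	}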




5.1 - make_proc and apply_proc
5.1.1 - vartag, varparam
5.2 - make_general_proc and apply_general_proc
5.2.1 - tail_call
5.2.2 - PROCPROPS
5.3 - Defining and using locals
5.3.1 - identify, variable
5.3.2 - ACCESS
5.3.2.1 - Locals model
5.3.2.2 - Access "hints"
5.3.3 - current_env, env_offset
5.3.4 - local_alloc, local_free_all, last_local
5.4 - Heap storage

5 Procedures and Locals

All procedures in TDF are essentially global; the only values which are accessible from the body of a procedure are those which are derived from global TAGs (introduced by TAGDEFs or TAGDECs), local TAGs defined within the procedure and parameter TAGs of the procedure.

All executable code in TDF will arise from an EXP PROC made by either make_proc or make_general_proc. They differ in their treatment of how space for the actual parameters of a call is managed; in particular, is it the caller or the callee which deallocates the parameter space?

With make_proc, this management is conceptually done by the caller at an apply_proc; i.e. the normal C situation. This suffers from the limitation that tail-calls of procedures are then only possible in restricted circumstances (e.g. the space for the parameters of the tail-call must be capable of being included in caller's parameters) and could only be implemented as an optimisation within a translator. A producer could not predict these circumstances in a machine independent manner, whether or not it knew that a tail-call was valid.

An alternative would be to make the management of parameter space the responsibility of the called procedure. Rather than do this, make_general_proc (and apply_general_proc) splits the parameters into two sets, one whose allocation is the responsibility of the caller and the other whose allocation is dealt with by the callee. This allows an explicit tail_call to be made to a procedure with new callee parameters; the caller parameters for the tail_call will be the same as (or some initial subset of) the caller parameters of the procedure containing the tail_call .

A further refinement of make_general_proc is to allow access to the caller parameter space in a postlude at the call of the procedure using an apply_general_proc. This allows simple implementations of Ada out_parameters, or more generally, multiple results of procedures.


5.1. make_proc and apply_proc

The make_proc constructor has signature:

	result_shape:	SHAPE
	params_intro:	LIST(TAGSHACC)
	var_intro:	OPTION(TAGACC)
	body:	EXP BOTTOM
		   -> 	EXP PROC
The params_intro and var_intro parameters introduce the formal parameters of the procedure which may be used in body. The procedure result will have SHAPE result_shape and will usually be given by some return construction within body. The basic model is that the actual parameters will be copied by value into these formals (into space supplied by some apply_proc) and the body will treat this space effectively as local variables.

Each straightforward formal parameter is introduced by an auxiliary SORT TAGSHACC using make_tagshacc:

	sha:	SHAPE
	opt_access:	OPTION(LIST(ACCESS))
	tg_intro:	TAG POINTER(alignment(sha))
		   -> TAGSHACC

Within body, the formal will be accessed using tg_intro; it is always considered to be a pointer to the space of SHAPE sha allocated by apply_proc, hence the pointer SHAPE.

For example, if we had a simple procedure with one integer parameter, var_intro would be empty and params_intro might be:

params_intro = make_tagshacc( integer(v), empty, make_tag(13))
Then, TAG 13 from the enclosing UNIT's name-space is identified with the formal parameter with SHAPE POINTER(INTEGER(v)). Any use of obtain_tag(make_tag(13)) in body will deliver a pointer to the integer parameter. I shall return to the meaning of opt_access and the ramifications of the scope and extent of TAGs involved in conjunction with local declarations in section 5.3.1 on page 30.
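
The C procedure this fragment corresponds to is simply the following (a sketch):

	int succ(int n)   /* params_intro: one TAGSHACC with SHAPE integer(v) */
	{
		/* Within the body, TAG 13 denotes a POINTER(integer(v)) to the
		   space holding n; reading n amounts to
		   contents(integer(v), obtain_tag(make_tag(13))). */
		return n + 1;
	}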

Procedures, whether defined by make_proc or make_general_proc, will usually terminate and deliver their result with a return:

	arg1:	EXP x
		   -> EXP BOTTOM
Here x must be identical to the result_shape of the call of the procedure. There may be several returns in body, and the SHAPE x in each will be the same. Some languages allow different types to be returned depending on the particular call. The producer must resolve this issue. For example, C allows one to deliver void if the resulting value is not used. In TDF a dummy value must be provided at the return; for example, make_value(result_shape).

Note that the body has SHAPE BOTTOM since all possible terminations of a procedure have SHAPE BOTTOM.

Procedures defined by make_proc are called using apply_proc:

	result_shape:	SHAPE
	arg1:	EXP PROC
	arg2:	LIST(EXP)
	varparam:	OPTION(EXP)
		   -> EXP result_shape
Here arg1 is the procedure to be called and arg2 gives the actual parameters. There must be at least as many actual parameters as given (with the same SHAPE) in the params_intro of the corresponding make_proc for arg1 *. The values of arg2 will be copied into space managed by the caller.

The SHAPE of the result of the call is given by result_shape which must be identical to the result_shape of the make_proc.

5.1.1. vartag, varparam

Use of the var_intro OPTION in make_proc and the corresponding varparam in apply_proc allows one to have a parameter of any SHAPE, possibly differing from call to call where the actual SHAPE can be deduced in some way by the body of the make_proc . One supplies an extra actual parameter, varparam, which usually would be a structure grouping some set of values. The body of the procedure can then access these values using the pointer given by the TAG var_intro, using add_to_ptr with some computed offsets to pick out the individual fields.

This is a slightly different method of giving a variable number of parameters to a procedure, rather than simply giving more actuals than formals. The principal difference is in the alignment of the components of varparam; these will be laid out according to the default padding defined by the component shapes. In most ABIs, this padding is usually different to the way parameters are laid out; for example, character parameters are generally padded out to a full word. Thus a sequence of parameters of given shape has a different layout in store to the same sequence of shapes in a structure. If one wished to pass an arbitrary structure to a procedure, one would use the varparam option rather than passing the fields individually as extra actual parameters.


5.2. make_general_proc and apply_general_proc

A make_general_proc has signature:

	result_shape:	SHAPE
	prcprops:	OPTION(PROCPROPS)
	caller_intro:	LIST(TAGSHACC)
	callee_intro:	LIST(TAGSHACC)
	body:	EXP BOTTOM
		   -> EXP PROC
Here the formal parameters are split into two sets, caller_intro and callee_intro, each given by a list of TAGSHACCs just as in make_proc. The distinction between the two sets is that the make_general_proc is responsible for deallocating any space required for the callee parameter set; this really only becomes obvious at uses of tail_call within body.

The result_shape and body have the same general properties as in make_proc. In addition, prcprops gives other information both about body and the way that the procedure is called. PROCPROPS are a set drawn from check_stack, inline, no_long_jump_dest, untidy, var_callees and var_callers. The set is composed using add_procprops. The PROCPROPS no_long_jump_dest is a property of body only; it indicates that none of the labels within body will be the target of a long_jump construct. The other properties should also be given consistently at all calls of the procedure; they are discussed in section 5.2.2 on page 29.

A procedure, p, constructed by make_general_proc is called using apply_general_proc:

	result_shape:	SHAPE
	prcprops:	OPTION(PROCPROPS)
	p:	EXP PROC
	caller_params:	LIST(OTAGEXP)
	callee_params:	CALLEES
	postlude:	EXP TOP
		   -> EXP result_shape
The actual caller parameters are given by caller_params as a list of OTAGEXPs constructed using make_otagexp:

	tgopt: 	OPTION(TAG x)
	e:	EXP x
		   -> OTAGEXP
Here, e is the value of the parameter and tgopt, if present, is a TAG which will be bound to the final value of the parameter (after body is evaluated) in the postlude expression of the apply_general_proc *. Clearly, this allows one to use a caller parameter as an extra result of the procedure; for example, as in Ada out-parameters.

The actual callee_params may be constructed in three different ways. The usual method is to use make_callee_list, giving a list of actual EXP parameters, corresponding to the callee_intro list in the obvious way. The constructor same_callees allows one to use the callees of the current procedure as the callees of the call; this, of course, assumes that the formals of the current procedure are compatible with the formals required for the call. The final method allows one to construct a dynamically sized set of CALLEES; make_dynamic_callees takes a pointer and a size (expressed as an OFFSET) to make the CALLEES; this will be used in conjunction with a var_callees PROCPROPS (see section 5.2.2 on page 29).

Some procedures can be expressed using either make_proc or make_general_proc. For example:

make_proc(S, L, empty, B) = make_general_proc(S, var_callers, L, empty, B)

5.2.1. tail_call

Often the result of a procedure, f, is simply given by the call of another (or the same) procedure, g. In appropriate circumstances, the same stack space can be used for the call of g as the call of f. This can be particularly important where heavily recursive routines are involved; some languages even use tail recursion as the preferred method of looping.

One condition for such a tail call to be applicable is knowing that g does not require any pointers to locals of f; this is often implicit in the language involved. Equally important is that the action on the return from f is indistinguishable from the return from g. For example, if it were the caller's responsibility to pop the space for the parameters on return from a call, then the tail call of g would only work if g had the same parameter space as f.

This is the justification for splitting the parameter set of a general proc; it is (at least conceptually) the caller's responsibility for popping the caller-parameters only - the callee-parameters are dealt with by the procedure itself. Hence we can define tail_call which uses the same caller-parameters, but a different set of callee-parameters:

	prcprops:	OPTION(PROCPROPS)
	p:	EXP PROC
	callee_params:	CALLEES
		   -> EXP BOTTOM
The procedure p will be called with the same caller parameters as the current procedure and the new callee_params and return to the call site of the current procedure. Semantically, if S is the return SHAPE of the current procedure, and L is its caller-parameters:

tail_call(P, p, C) = return(apply_general_proc(S, P, p, L, C, make_top()))

However an implementation is expected to conserve stack by using the same space for the call of p as the current procedure.
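
As an illustration, a producer for a language which guarantees tail-call elimination could emit the recursive call in the C sketch below as an explicit tail_call, so that arbitrarily deep recursion runs in constant stack space:

	long sum_to(long n, long acc)
	{
		if (n == 0)
			return acc;
		return sum_to(n - 1, acc + n);   /* candidate for an explicit tail_call */
	}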

5.2.2. PROCPROPS

The presence of var_callees (or var_callers) means that the procedure can be called with more actual callee (or caller) parameters than are indicated in callee_intro (or caller_intro ). These extra parameters would be accessed within body using offset calculations with respect to the named parameters. The offsets should be calculated using parameter_alignment to give the packing of the parameter packs.

The presence of untidy means that body may be terminated by an untidy_return. This returns the result of the procedure as in return, but the lifetime of the local space of the procedure is extended (in practice this is performed by not returning the stack to its original value at the call). A procedure containing an untidy_return is a generalisation of a local_alloc(see section 5.3.4 on page 32). For example the procedure could do some complicated local allocation (a triangular array, say) and untidily return a pointer to it so that the space is still valid in the calling procedure. The space will remain valid for the lifetime of the calling procedure unless some local_free is called within it, just as if the space had been generated by a local_alloc in the calling procedure.

The presence of inline is just a hint to the translator that the procedure body is a good candidate for inlining at the call.

The presence of check_stack means that the static stack requirements of the procedure will be checked on entry to see that they do not exceed the limits imposed by set_stack_limit; if they are exceeded a TDF exception with ERROR_CODE stack_overflow (see section 6.3 on page 35) will be raised.


5.3. Defining and using locals

5.3.1. identify, variable

Local definitions within the body of a procedure are given by two EXP constructors which permit one to give names to values over a scope given by the definition. Note that this is somewhat different to declarations in standard languages where the declaration is usually embedded in a larger construct which defines the scope of the name; here the scope is explicit in the definition. The reason for this will become more obvious in the discussion of TDF transformations. The simpler constructor is identify:

	opt_access:	OPTION(ACCESS)
	name_intro:	TAG x
	definition:	EXP x
	body:	 EXP y
		   -> EXP y
The definition is evaluated and its result is identified with the TAG given by name_intro within its scope body. Hence the use of any obtain_tag(name_intro) within body is equivalent to using this result. Anywhere else, obtain_tag(name_intro ) is meaningless, including in other procedures.

The other kind of local definition is variable:

	opt_access:	OPTION(ACCESS)
	name_intro:	TAG x
	init:	EXP x
	body:	 EXP y
		   -> EXP y
Here the init EXP is evaluated and its result serves as an initialisation of space of SHAPE x local to the procedure. The TAG name_intro is then identified with a pointer to that SPACE within body. A use of obtain_tag(name_intro) within body is equivalent to using this pointer and is meaningless outside body or in other procedures. Many variable declarations in programs are uninitialised; in this case, the init argument could be provided by make_value which will produce some value with SHAPE given by its parameter.
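
Roughly, in C terms (a sketch; the correspondence is only approximate, since C has no direct analogue of identify):

	int count_to(int limit)
	{
		/* identify: the name is bound directly to a value which never
		   changes within its scope. */
		const int n = limit + 1;

		/* variable: the TAG is bound to a pointer to space in the frame,
		   initialised here to 0; later reads and writes go through that
		   pointer (contents and assign). */
		int i = 0;
		while (i < n)
			i = i + 1;
		return i;
	}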

5.3.2. ACCESS

The ACCESS SORT given in tag declarations is a way of describing a list of properties to be associated with the tag. They are basically divided into two classes, one which describes global properties of the tag with respect to the model for locals and the other which gives "hints" on how the value will be used. Any of these can be combined using add_access.

5.3.2.1. Locals model

At the moment the first class of ACCESS constructors comprises standard_access (the default), visible, out_par and long_jump_access.

The basic model used for the locals and parameters of a procedure is a frame within a stack of nested procedure calls. One could implement a procedure by allocating space according to SHAPEs of all of the parameter and local TAGs so that the corresponding values are at fixed offsets either from the start of the frame or some pointer within it.

Indeed, if the ACCESS opt_access parameter in a TAG definition is produced by visible, then a translator is almost bound to do just that for that TAG. This is because it allows for the possibility of the value to be accessed in some way other than by using obtain_tag which is the standard way of recovering the value bound to the TAG. The principal way that this could happen within TDF is by the combined use of env_offset to give the offset and current_env to give a pointer to the current frame (see section 5.3.3 on page 31).

The out_par ACCESS is only applicable to caller parameters of procedures; it indicates that the value of the TAG concerned will be accessed by the postlude part of an apply_general_proc. Hence, the value of the parameter must be accessible after the call; usually this will be on the stack in the caller's frame.

The long_jump_access flag is used to indicate that the tag must be available after a long_jump. In practice, if either visible or long_jump_access is set, most translators would allocate the space for the declaration on the main-store stack rather than in an available register. If it is not set, then a translator is free to use its own criteria for whether space which can fit into a register is allocated on the stack or in a register, provided there is no observable difference (other than time or program size) between the two possibilities.

Some of these criteria are rather obvious; for example, if a pointer to a local variable is passed outside the procedure in an opaque manner, then it is highly unlikely that one can allocate the variable in a register. Some might be less obvious. If the only uses of a TAG t were in obtain_tag(t)s which are operands of contents or the left-hand operands of assigns, most ABIs would allow the tag to be placed in a register. We do not necessarily have to generate a pointer value if it can be subsumed by the operations available.

5.3.2.2. Access "hints"

A variable tag with ACCESS constant is a write-once value; once it is initialised the variable will always contain the initialisation. In other words the tag is a pointer to a constant value; translators can use this information to apply various optimisations.

A POINTER tag with ACCESS no_other_read or no_other_write is asserting that there are no "aliased" accesses to the contents of the pointer. For example, when applied to a parameter of a procedure, it is saying that the original pointer of the tag is distinct from any other tags used (reading/writing) in the lifetime of the tag. These other tags could either be further parameters of the procedure or globals. Clearly, this is useful for describing the limitations imposed by Fortran parameters, for example.

5.3.3. current_env, env_offset

The constructor current_env gives a pointer to the current procedure frame of SHAPE POINTER(fa) where fa depends on how the procedure was defined and will be some set of the special frame ALIGNMENTs. This set will always include locals_alignment - the alignment of any locals defined within the procedure. If the procedure has any caller-parameters, the set will also include callers_alignment(b) where b indicates whether there can be a variable number of them; similarly for callee-parameters.

Offsets from the current_env of a procedure to a tag declared in the procedure are constructed by env_offset:

	fa:	ALIGNMENT
	y:	ALIGNMENT
	t:	TAG x
		   -> EXP OFFSET(fa,y)
The frame ALIGNMENT fa will be the appropriate one for the TAG t; i.e. if t is a local then the fa will be locals_alignment; if t is a caller parameter, fa will be callers_alignment(b); if t is a callee_parameter, fa will be callees_alignment(b). The alignment y will be the alignment of the initialisation of t.

The offset arithmetic operations allow one to access the values of tags non-locally using values derived from current_env and env_offset. They are effectively defined by the following identities:

If TAG t is derived from a variable definition 
	add_to_ptr(current_env(), env_offset(locals_alignment, A, t)) = obtain_tag(t)
if TAG t is derived from an identify definition:
	contents(S, add_to_ptr(current_env(), env_offset(locals_alignment, A, t))) = obtain_tag(t)
if TAG t is derived from a caller parameter:
	add_to_ptr(current_env(), env_offset(callers_alignment(b), A, t)) = obtain_tag(t)
if TAG t is derived from a callee parameter:
	add_to_ptr(current_env(), env_offset(callees_alignment(b), A, t)) = obtain_tag(t)
These identities are valid throughout the extent of t, including in inner procedure calls. In other words, one can dynamically create a pointer to the value by composing current_env and env_offset.

The importance of this is that env_offset(t) is a constant OFFSET and can be used anywhere within the enclosing UNIT, in other procedures or as part of a constant TAGDEF; remember that the TDFINT underlying t is unique within the UNIT. The result of a current_env could be passed to another procedure (as a parameter, say) and this new procedure could then access a local of the original by using its env_offset. This would be the method one would use to access non-local, non-global identifiers in a language which allowed one to define procedures within procedures, such as Pascal or Algol. Of course, given the stack-based model, the value given by current_env becomes meaningless once the procedure in which it is invoked is exited.
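
A rough C picture of the mechanism (purely illustrative; the real frame layout is the translator's business): the procedure that owns the locals hands out its current_env as a frame pointer, and another procedure reaches a particular local at a constant offset from it.

	#include <stddef.h>

	struct frame { int a; double b; };   /* stands for the owning procedure's frame */

	double read_b(void *env)             /* env plays the role of current_env()     */
	{
		/* offsetof here plays the role of the constant env_offset for b. */
		return *(double *)((char *)env + offsetof(struct frame, b));
	}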

5.3.4. local_alloc, local_free_all, last_local

The size of stack frame produced by variable and identify definitions is a translate-time constant since the frame is composed of values whose SHAPEs are known. TDF also allows one to produce dynamically sized local objects which are conceptually part of the frame. These are produced by local_alloc:

	arg1:	EXP OFFSET(x, y)
		   -> EXP POINTER(alloca_alignment)
The operand arg1 gives the size of the new object required and the result is a pointer to the space for this object "on top of the stack" as part of the frame. The quotation marks indicate that a translator writer might prefer to maintain a dynamic stack as well as static one. There are some disadvantages in putting everything into one stack which may well out-weigh the trouble of maintaining another stack which is relatively infrequently used. If a frame has a known size, then all addressing of locals can be done using a stack-front register; if it is dynamically sized, then another frame-pointer register must be used - some ABIs make this easy but not all. The majority of procedures contain no local_allocs, so their addressing of locals can always be done relative to a stack-front; only the others have to use another register for a frame pointer.

The alignment of the pointer result is alloca_alignment, which must include all SHAPE alignments.
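
Since local_alloc is roughly the C alloca, the same idea can be sketched in C (alloca is a common but non-standard extension, declared in <alloca.h> on many systems):

	#include <alloca.h>
	#include <stdio.h>
	#include <string.h>

	void shout(const char *s)
	{
		size_t n = strlen(s) + 1;
		char *copy = alloca(n);   /* like local_alloc: space on top of the frame */
		memcpy(copy, s, n);
		puts(copy);               /* the space vanishes when shout returns       */
	}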

There are two constructors for releasing space generated by local_alloc. To release all such space generated in the current procedure one does local_free_all(); this reduces the size of the current frame to its static size.

The other constructor is local_free which is effectively a "pop" to local_alloc's "push":

	a:	EXP OFFSET(x, y)
	p:	 EXP POINTER(alloca_alignment)
		   -> 	EXP TOP
Here p must evaluate to a pointer generated either by local_alloc or last_local. The effect is to free all of the space locally allocated after p. The usual implementation (with a downward growing stack) of this is that p becomes the "top of stack" pointer.

The use of a procedure with an untidy_return is just a generalisation of the idea of local_alloc and the space made available by its use can be freed in the same way as normal local allocations. Of course, given that it could be the result of the procedure it can be structured in an arbitrarily complicated way.


5.4. Heap storage

At the moment, there are no explicit constructors for creating dynamic off-stack storage in TDF. Any off-stack storage requirements must be met by the API in which the system is embedded, using the standard procedural interface. For example, the ANSI C API allows the creation of heap space using standard library procedures like malloc.




6.1 - Unconditional flow
6.1.1 - sequence
6.2 - Conditional flow
6.2.1 - labelled, make_label
6.2.2 - goto, make_local_lv, goto_local_lv, long_jump, return_to_label
6.2.3 - integer_test, NTEST
6.2.4 - case
6.2.5 - conditional, repeat
6.3 - Exceptional flow

6 Control Flow within procedures


6.1. Unconditional flow

6.1.1. sequence

To perform a sequential set of operations in TDF, one uses the constructor sequence:

	statements:	LIST(EXP)
	result:	EXP x
		   -> EXP x
Each of the statements is evaluated in order and its result discarded. Then, result is evaluated and its result is the result of the sequence.

A translator is free to rearrange the order of evaluation if there is no observable difference other than in time or space. This applies anywhere I say "something is evaluated and then ...". We find this kind of statement in definitions of local variables in section 5.3 on page 30, and in the controlling parts of the conditional constructions below.

For a more precise discussion of allowable reorderings see (S7.14) .


6.2. Conditional flow

6.2.1. labelled, make_label

All simple changes of flow of control within a TDF procedure are done by jumps or branches to LABELs, mirroring what actually happens in most computers. There are three constructors which introduce LABELs; the most general is labelled which allows arbitrary jumping between its component EXPs:

	placelabs_intro:	LIST(LABEL)
	starter:	EXP x
	places:	LIST(EXP)
		   -> EXP w
Each of the EXPs in places is labelled by the corresponding LABEL in placelabs_intro; these LABELs are constructed by make_label applied to a TDFINT uniquely drawn from the LABEL name-space introduced by the enclosing PROPS. The evaluation starts by evaluating starter; if this runs to completion the result of the labelled is the result of starter. If there is some jump to a LABEL in placelabs_intro then control passes to the corresponding EXP in places and so on. If any of these EXPS runs to completion then its result is the result of the labelled; hence the SHAPE of the result, w, is the LUB of the SHAPEs of the component EXPs.

Note that control does not automatically pass from one EXP to the next; if this is required the appropriate EXP must end with an explicit goto.

6.2.2. goto, make_local_lv, goto_local_lv, long_jump, return_to_label

The unconditional goto is the simplest method of jumping. In common with all the methods of jumping using LABELs, its LABEL parameter must have been introduced in an enclosing construction, like labelled, which scopes it.

One can also pick up a label value of SHAPE POINTER {code} (usually implemented as a program address) using make_local_lv for later use by an "indirect jump" such as goto_local_lv . Here the same prohibition holds - the construction which introduced the LABEL must still be active.

The construction goto_local_lv only permits one to jump within the current procedure; if one wished to do a jump out of a procedure into a calling one, one uses long_jump which requires a pointer to the destination frame (produced by current_env in the destination procedure) as well as the label value. If a long_jump is made to a label, only those local TAGs which have been defined with a visible ACCESS are guaranteed to have preserved their values; the translator could allocate the other TAGs in scope as registers whose values are not necessarily preserved.

A slightly "shorter" long jump is given by return_to_label. Here the destination of the jump must a label value in the calling procedure. Usually this value would be passed as parameter of the call to provide an alternative exit to the procedure.

6.2.3. integer_test, NTEST

Conditional branching is provided by the various _test constructors, one for each primitive SHAPE except BITFIELD. I shall use integer_test as the paradigm for them all:

	nt:	NTEST
	dest: 	LABEL
	arg1: 	EXP INTEGER(v)
	arg2:	EXP INTEGER(v)
		   -> 	EXP TOP
The NTEST nt chooses a dyadic test (e.g. =, >=, <, etc.) that is to be applied to the results of evaluating arg1 and arg2. If arg1 nt arg2 then the result is TOP; otherwise control passes to the LABEL dest. In other words, integer_test acts like an assertion where if the assertion is false, control passes to the LABEL instead of continuing in the normal fashion.

Some of the constructors for NTESTs are disallowed for some _tests (e.g. proc_test) while others are redundant for some _tests; for example, not_greater_than is the same as less_than_or_equal for all except possibly floating_test, where the use of NaNs (in the IEEE sense) as operands may give different results.

6.2.4. case

There are only two other ways of changing flow of control using LABELs. One arises in ERROR_TREATMENTs which will be dealt with in the arithmetic operations. The other is case:

	exhaustive:	BOOL
	control:	EXP INTEGER(v)
	branches:	LIST(CASELIM)
		   -> 	EXP (exhaustive ? BOTTOM : TOP)
Each CASELIM is constructed using make_caselim:

	branch:	LABEL
	lower:	SIGNED_NAT
	upper:	SIGNED_NAT
		   -> CASELIM
In the case construction, the control EXP is evaluated and tested to see whether its value lies inclusively between some lower and upper in the list of CASELIMs. If so, control passes to the corresponding branch. The order in which these tests are performed is undefined, so it is probably best if the tests are exclusive. The exhaustive flag being true asserts that one of the branches will be taken and so the SHAPE of the result is BOTTOM. Otherwise, if none of the branches are taken, its SHAPE is TOP and execution carries on normally.
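
For instance, a producer translating the C switch below would emit a single case whose CASELIMs cover the two ranges (a sketch):

	const char *classify(int c)
	{
		switch (c) {
		case '0': case '1': case '2': case '3': case '4':
		case '5': case '6': case '7': case '8': case '9':
			return "digit";   /* CASELIM with lower '0', upper '9'         */
		case 'a': case 'b': case 'c':
			return "small";   /* CASELIM with lower 'a', upper 'c'         */
		default:
			return "other";   /* not exhaustive: control falls out of case */
		}
	}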

6.2.5. conditional, repeat

Besides labelled, two other constructors, conditional and repeat, introduce LABELs which can be used with the various jump instructions. Both can be expressed as labelled, but have extra constraints which make assertions about the use of the LABELs introduced and could usually be translated more efficiently; hence producers are advised to use them where possible. A conditional expression or statement would usually be done using conditional:

	alt_label_intro:	LABEL
	first:	EXP x
	alt:	EXP y
		   -> 	EXP(LUB(x, y))
Here first is evaluated; if it terminates normally, its result is the result of the conditional. If a jump to alt_label_intro occurs then alt is evaluated and its result is the result of the conditional. Clearly, this, so far, is just the same as labelled((alt_label_intro ), first, (alt)). However, conditional imposes the constraint that alt cannot use alt_label_intro. All jumps to alt_label_intro are "forward jumps" - a useful property to know in a translator.

Obviously, this kind of conditional is rather different to those found in traditional high-level languages which usually have three components, a boolean expression, a then-part and an else-part. Here, the first component includes both the boolean expression and the then-part; usually we find that it is a sequence of the tests (branching to alt_label_intro ) forming the boolean expression followed by the else-part. This formulation means that HLL constructions like "andif" and "orelse" do not require special constructions in TDF.
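
A sketch of the point in C: the short-circuit condition below simply becomes two tests inside first, each branching to alt_label_intro, followed by the then-part; no special TDF construct is needed.

	struct node { struct node *next; int value; };

	int head_is_positive(const struct node *p)
	{
		if (p != NULL && p->value > 0)   /* two _tests branching to the alt LABEL */
			return 1;                /* remainder of first: the then-part     */
		return 0;                        /* alt: the else-part                    */
	}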

A simple loop can be expressed using repeat:

	repeat_label_intro:	LABEL
	start: 	EXP TOP
	body: 	EXP y
		   -> EXP y
The EXP start is evaluated, followed by body which is labelled by repeat_label_intro. If a jump to repeat_label_intro occurs in body, then body is re-evaluated. If body terminates normally then its result is the result of the repeat. This is just the same as:

labelled((repeat_label_intro), sequence((start), goto(repeat_label_intro)), (body))
except that no jumps to repeat_label_intro are allowed in start - a useful place to do initialisations for the loop.

Just as with conditionals, the tests for continuing or breaking the loop are included in body and require no special constructions.


6.3. Exceptional flow

A further way of changing the flow of control in a TDF program is by means of exceptions. TDF exceptions currently arise from three sources - overflow in arithmetic operations with trap ERROR_TREATMENT (see section 8.1.1 on page 40), an attempt to access a value via a nil pointer using assign_with_mode, contents_with_mode or move_some (see section 7.3 on page 38), or a stack overflow on procedure entry with PROCPROPS check_stack (see section 5.2.2 on page 29) or a local_alloc_check.

Each of these exceptions has an ERROR_CODE ascribed to it, namely overflow, nil_access and stack_overflow. Each ERROR_CODE can be made into a distinct NAT by means of the constructor error_val; these NATs could be used, for example, to discriminate the different kinds of errors using a case construction.

When one of these exceptions is raised, the translator will arrange that a TOKEN, ~Throw, is called with the appropriate ERROR_CODE as its (sole) parameter. Given that every language has a different way of both characterising and handling exceptions, the exact expansion of ~Throw must be given by the producer for that language - usually it will involve doing a long_jump to some label specifying a signal handler and translating the ERROR_CODE into its language-specific representation.

The expansion of ~Throw forms part of the language specific environment required for the translation of TDF derived from the language, just as the exact shape of FILE must be given for the translation of C.




7.1 - contents
7.2 - assign
7.3 - TRANSFER_MODE operations
7.4 - Assigning and extracting bitfields

7 Values, variables and assignments.

TAGs in TDF fulfil the role played by identifiers in most programming languages. One can apply obtain_tag to find the value bound to the TAG. This value is always a constant over the scope of a particular definition of the TAG. This may sound rather strange to those used to the concepts of left-hand and right-hand values in C, for example, but is quite easily explained as follows.

If a TAG, id, is introduced by an identify, then the value bound is fixed by its definition argument. If, on the other hand, v was a TAG introduced by a variable definition, then the value bound to v is a pointer to fixed space in the procedure frame (i.e. the left-hand value in C).


7.1. contents

In order to get the contents of this space (the right-hand value in C), one must apply the contents operator to the pointer:

contents(shape(v), obtain_tag(v))
In general, the contents constructor takes a SHAPE and an expression delivering a pointer:

	s:	SHAPE
	arg1: 	EXP POINTER(x)
		   -> 	EXP s
It delivers the value of SHAPE s, pointed at by the evaluation of arg1. The alignment of s need not be identical to x. It only needs to be included in it; this would allow one, for example, to pick out the first field of a structure from a pointer to it.


7.2. assign

A simple assignment in TDF is done using assign:

	arg1: 	EXP POINTER(x)
	arg2: 	EXP y
		   -> EXP TOP
The EXPs arg1 and arg2 are evaluated (no ordering implied) and the value of SHAPE y given by arg2 is put into the space pointed at by arg1. Once again, the alignment of y need only be included in x, allowing the assignment to the first field of a structure using a pointer to the structure. An assignment has no obvious result so its SHAPE is TOP.
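
In C terms (a sketch): for a variable v the TAG stands for the pointer, contents fetches the right-hand value and assign stores through it.

	void example(void)
	{
		int v = 0;     /* variable definition: the TAG denotes a pointer to v's space */
		int x = v;     /* right-hand side: contents(integer(...), obtain_tag(v))      */
		v = x + 3;     /* assign(obtain_tag(v), ...)                                   */
	}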

Some languages give results to assignments. For example, C defines the result of an assignment to be its right-hand expression, so that if the result of (v = exp) was required, it would probably be expressed as:

identify(empty, newtag, exp, 
	sequence((assign(obtain_tag(v), obtain_tag(newtag))), obtain_tag(newtag)))

From the definition of assign, the destination argument, arg1, must have a POINTER shape. This means that given the TAG id above, assign(obtain_tag(id), lhs) is only legal if the definition of its identify had a POINTER SHAPE. A trivial example would be if id was defined:

identify(empty, id, obtain_tag(v), assign(obtain_tag(id), lhs))
This identifies id with the variable v which has a POINTER SHAPE, and assigns lhs to this pointer. Given that id does not occur in lhs, this is identical to:

 assign(obtain_tag(v), lhs).
Equivalences like this are widely used for transforming TDF in translators and other tools (see section 11 on page 48).


7.3. TRANSFER_MODE operations

The TRANSFER_MODE operations allow one to do assignment and contents operations with various qualifiers to control how the access is done in a more detailed manner than the standard contents and assign operations.

For example, the value assigned in assign has some fixed SHAPE; its size is known at translate-time. Variable sized objects can be moved by move_some:

	md: 	TRANSFER_MODE
	arg1: 	EXP POINTER x
	arg2:	EXP POINTER y
	arg3: 	EXP OFFSET(z, t)
		   -> 	EXP TOP
The EXP arg1 is the destination pointer, and arg2 is a source pointer. The amount moved is given by the OFFSET arg3.

The TRANSFER_MODE md parameter controls the way that the move will be performed. If overlap is present, then the translator will ensure that the move is equivalent to moving the source into new space and then copying it to the destination; it would probably do this by choosing a good direction in which to step through the value. The alternative, standard_transfer_mode, indicates that it does not matter.

If the TRANSFER_MODE trap_on_nil is present and arg1 is a nil pointer, a TDF exception with ERROR_CODE nil_access is raised.

There are variants of both the contents and assign constructors. The signature of contents_with_mode is:

	md:	TRANSFER_MODE
	s:	SHAPE
	arg1:	EXP POINTER(x)
		   -> EXP s
Here, the only significant TRANSFER_MODE constructors md are trap_on_nil and volatile. The latter is principally intended to implement the C volatile construction; it certainly means that the contents_with_mode operation will never be "optimised" away.

Similar considerations apply to assign_with_mode; here the overlap TRANSFER_MODE is also possible with the same meaning as in move_some.


7.4. Assigning and extracting bitfields

Since pointers to bits are forbidden, two special operations are provided to extract and assign bitfields. These require the use of a pointer value and a bitfield offset from the pointer. The signature of bitfield_contents which extracts a bitfield in this manner is:

	v:	BITFIELD_VARIETY
	arg1:	EXP POINTER(x)
	arg2:	EXP OFFSET(y,z)
		   -> EXP bitfield(v)
Here arg1 is a pointer to an alignment x which includes v, the required bitfield alignment. In practice, x must include an INTEGER VARIETY whose representation can contain the entire bitfield. Thus on a standard architecture, if v is a 15 bit bitfield, x must include at least a 16 bit integer variety; a 27 bit bitfield would require a 32 bit integer variety and so on. Indeed the constraint is stronger than this since there must be an integer variety, accessible from arg1, which entirely contains the bitfield.

This constraint means that producers cannot expect that arbitrary bitfield lengths can be accommodated without extra padding; clearly it also means that the maximum bitfield length possible is the maximum size of integer variety that can be implemented on the translator concerned (this is defined to be at least 32). On standard architectures, the producer can expect that an array of bitfields of length 2^n will be packed without padding; this, of course, includes arrays of booleans. For structures of several different bitfields, he can be sure of no extra padding bits if the total number of bits involved is less than or equal to 32; similarly if he can subdivide the bitfields so that each of the subdivisions (except the last) is exactly equal to 32 and the last is less than or equal to 32. This could be relaxed to 64 bits if the translator deals with 64 bit integer varieties, but would require that conditional TDF is produced to maintain portability to 32 bit platforms - and on these platforms the assurance of close packing would be lost.

Since a producer is ignorant of the exact representational varieties used by a translator, the onus is on the translator writer to provide standard tokens which can be used by a producer to achieve both optimum packing of bitfields and minimum alignments for COMPOUNDs containing them (see Bitfield offsets on page 54). These tokens would allow one to construct an offset of the form OFFSET(x, b) (where b is some bitfield alignment and x is the `minimum' alignment which could contain it) in a manner analogous to the normal padding operations for offsets. This offset could then be used both in the construction of a compound shape and in the extraction and assignment constructors.

The assignment of bitfields follows the same pattern with the same constraints using bitfield_assign:

	arg1:	EXP POINTER(x)
	arg2:	EXP OFFSET(y,z)
	arg3:	EXP BITFIELD_VARIETY(v)
		   -> EXP TOP


Part of the TenDRA Web.
Crown Copyright © 1998.

kó¥Iàð‘2ïT›Iþ>O†I»0Ǭ`H2|+È1„Ì3Gz3βăš2)QsM.±SÎ<å|ÄÌ7ÿ”7Ï$´PCETQGYIôHíƒQ2'¥tC/ÍTQK×ìÔÓ>ãÄ4Ôƒæd2Ô6b¬TѲpF9Ó4RJ]­õ 3«›‘¿ gÒDŒHq]dSPsmôÕÎöë’v{»Í]wNƒÎ8RëMö^‘víιrþVi=нïPÔœ®0ö¶ cà Ùá`Å-¤N@–<¤RŠÕ‹®ÜÝŒ÷Ê•åšwá’ÕØÙä6Q•ð —™• æÔSQÁé¢Õ°ãiyjÔžÕñ”OI®Új†ñÍúA éí:“‘›—¯.Ûë©EN{’«Én[’±µŽÛгí­Ûͻػ?Ù;ðPÄÅÛpAÏNñJÿæÚqM·>VòÉÜò;¤Í¡sÏ?=tÑG'½tÓOG]…;tendra-doc-4.1.2.orig/doc/guide/table5.gif100644 1750 1750 15220 6466607535 17640 0ustar brooniebroonieGIF87a!i€ÿÿÿ,!iþŒ©Ëí£œ´Ú‹³f ûtàH–扦êʶî Ç ˆô|bãåò ‡Ä¢ñˆ„Ðv fÍÇ&§ÔªõŠÍj•¹žÎ»ó&œÛ²ùŒN«×L±my{¥ëºýŽÏë)ÎÏS÷÷äw@¶gxˆ˜¨hÔæ'FX¸HYiy‰ÙùÑGHæ™9JZjzŠšªºÊÚêú +;K[k{‹›«»ËÛëû ,N^n~Žž®¾ÎÞîþ/?O_oO?Ì˯]äÏÕ$lÿa Øjà.„ \ÈpVÄU¨ˆJ¡.þŒ-ZáhJc.!=V!IJ$.”)MRa™Iå-˜1]N¡yI¦-šb}4Ä'G_u*Óæ†¤•tÖ‚ ‰Ð§}‚Ö˜áÓªÓL•bèºh«ÄU³ö)+5kZ:] z=V‘Øndç˜m{5­Þžqø%÷í„Àˆæö«KõîÔ6Zm4aÁJ î#GÒÔÇ]´bÍÛ0e"“ †ÞP{=e»Yòè!¥÷œ~Å“¯îÍnô®õûñl µõܾÐ4¨9F“¿(n\nlç ‡ú8*])mÏòNÌ9¸$ì_‹GUÔáuX‰Æ…mâI‹È-ØÜn7v÷¢€íò †„Q ‰õÅÈŸPÝ- yؽq€æ©§MbIܦ}·—2˜GRÎ8æ‘DiWr,)cèÉÉa .$—jù퉞Dv¡zbx•J¦pÆ9aƒÁ±E] %F¢‹ZB‡¢¤n†ø('Öe*) ûñÆt™ªd…UŠ ¦#ž—%—¨i,Ú‘‡Ü¥Y¤‰¶ ö븞âªcàõ•þ&‚´v¹Q°'ÖŠä±ÎùºìΖ1lI³âæ†ÔV»íH×Öáj˜|z[+¸á®4.ÖRêfl~²Ð®Iõt/ù^±¯Aý¾ôïߤîâ^†gÁ)Z&ÌÀÛ(¼p³oq¨7ìeÅ'8 Çüf¬1R ã1#!«P2i)¿òɲ5›OÌ2ÏLsÍ6ߌsÎ:ïÌsÍä2 + ‘m)CôÆõÌtÓN? uÔROMõÍ?¯”´ G·5\-gÜÈ­~ ¶“LvÙƒ‰ýK×ÖªýòÁ8”åMxYkÝgV‘æà˜u§ªe†‘z} Pç“ì`|m>Þsð%ß{ˆÚ5·™ª=ùS´E;àûÞ°k‚·åx³i®O†þ ÷ÚÓ)~¬,K¾Úðôi„ä#KH•ûµo¨¦•šyû¾÷C®û÷“Cš½íl8݃ŒŽZ×¼- |ü—#âEyQDûò'½üj@™ý¢¥=ôÕfWIr›ýzç›ÏÈz*‚Þ¼ æYOv¯+Òû&Ÿä0?´à©0(& Âx¶û`CH=òІ£ ÜîT¸;ÁñP^àÛÜþg¸%)Òï†8ÀwvØ)æME÷“ª¥>&=Q‚[D™î4G,ÞŠw»Ñcö–,¤aoŽXüTÞ*õÉuª‡`4’¥FUÇè‰IŽ:âZR8A>e|)ð]S0qŽbtI”ñP=YaÒÁÊ䇘ÕÁ5E ‰ ù–0¸ˆm¾x¥ei6¹¡-–¸T-{aËçôò+¿LÛa†é˳­‹ë#b˜¢Ä &™|(&/eH0ymœ&5ãdMN~æ,¼:bøÚ´ºo²N™m³¢EgÎxõçcê';kYA®‘—œžhê¹N].{‡œÓ6ßH€VæžÀ\xþ„ÂsâŽ` åJ8VASÔn˜ËB0ñQz…­háÚÎkšt†O©²TºÒJÄ¥ëÂZL:šÞr'7ÅéNt*L˜Þ¨Z#ªÅÄÙS™vèô2¦B™Š™ªIuªT­ªU¯ŠÕ¬žŸB)T%óÕJ: aµÏÕf’T–ÒŸ5éêJËzÂ]"5­peUJçšÔºîè¬JKZÓkØúGQ‡š!_»EbçY:rS-ËÓ‚e“‰¦޶Ite HåÉë¡®]¢¢y­Î.ñŽ5Ž ÇR°´mYŽÀþÉÎ,UžˆŸá§§ ¢)"¬_ýkOׯïB>,Óo××ÏÙ]ª²ÆowóBÆ ºFz°-ãÑx¬¢µBH{£¿ó†Ô‰è-‡ü˘­¦Ó¦ã|®”päYNIxÈ­G\_ÏÐ7‹ì^DÑ(`P‚›‡£ŒÎhÚò"1’ªê ("hbEÆW3çéabFå×¾>w © ïò¶Eeˆ‹›XÿL²Wú©- KÑhîøŽú»“']oNÑHCö._ŸÂÅÒØOØE \ÒÆíoªÌ0QÊU¼öô´|`óRÆZO73êÊkõë_Q#çYºÕ¤y6Æö\þÑ>sEШ 3]—¦ÕD+zÑŒn´£Mç±ØY˸!t. 
×/£ÐOÕt_Õõ¤ŸÇìê¥ø÷¡Pn²€DP…Ϲ•‚û8·\+÷ YÌŽxy»lÕ†÷Hù®Öö¸íþä6·¹iìx=nïŸ+p¸ðŠú&](QÚ¢è¹ÑƒÞàî~¸òkóÁ_N¢[#1åÏ+ ³Wî’Ã^Z7o{KÈ?ÛB¾"·¢Þ_R™ÆD/l åyߨªéM!÷óÜà;b k»\úl¯?‚=ñ¹îúœðܰ ™Â*ìòÖ1¾xhãqíeìZ¬Þ¸ŸœÁ­wåÚ÷'©àó¬Ûã%ÿ\ºro¦÷dm]ø¸¹×'Šk/x2>‰çžùtPo*ÒñúŸÃåüÔY<ãç'¦Åw¤¾uÁmüsItºÂ¿.IW¿øÆ;¼Š‡ßM†˜÷Hovò3÷¾þJ~É´ÛñÑ_ƒ_æ½yFÖMÎÿîXA$tº‡v¿Fn˜*‰D‰ƒqkãL®§~ý·@äGe6—yô–L÷f åS—¤y¦ö'.Çøªg­pw…ik&‚j^õ‚ǃo5ƒ¨g°vƒ!Èi²f/£|˧t%i0èWõ&q×Mû*˜]-¨pÓ[!CB«År8uTrª÷dÄ…{EHƒÓ§}K1Uè|å7}åWu6¦*A‡bèwu,fwFp8fè^öà|çGxì%rq5PP¸p¼§Je˜w€3xÎćúå‡vˆgWm`ˆƒ÷åxæç2x¨^ Æþ†1–_¡÷‡çgWi…ÃÆlYƒ‡GFXŒø{:—phrfõg5Ègzd‡ÐB‰z÷LDx)z½†X Äcûs)ùµD¥„Có‡MøpÕ{HÈX'n 4;Òb~ÃA5QÙz¼…6²I瀨DmFYüÅj`% Û¥KXÛgXyÔ(ð$Šûx‚b'rw‰þl9y»vAÌEvÒ€ØYþ–‘Ç6q§A&é`Ù€oþ1ü'\Vp°Çsç㕞h“›(nÖå?\HÒ'c{x•î2Nkx=^Y•åd_7xºçB9Ù~w9Ey©HÂ÷”"ùk„‰v\™EíãE)¶b”‡³‹ÏâaØDI[eW“•òy¯‚næ&“™¹OXÙuÄFR/'c±—\B¹ˆjivm9wZ>hl“U£4’•9†¢ã‘U@¹’,E’»yrI–’¨9–»—‡l©–«ù˜j›’F´ šøGEu)œEç››9HWÙmwF‚:âVh{|Ù}¶¨’'YtIVk·åŠþÝ’•s§­EJQù•—)—ØTø¸[÷'o”@5Å„ÂÒ‹7Ç5MéÖöl19‹ôhꈂÖšü‰Kò8 î¡²¤¡òènÇ'^dŠ¿ <ÿÈ.|e:=ùŠîo'*Œ÷j³YNQ$vû¢måŽ"öN°ÇO¢T˜†¢9ueÝÇ”$ný’£Œe¡¹Òw ¨M Xq0j;ÚU‰Qæ¢J¸e1j¥1ˆ¥ÆYG¨¤úø¡ýfÚ¤ÄB¡Ãt¦:‡™&£Z‰CúShºŽUš¢pªŒ^º§rE§¥6§qŒzJ¤|Z¨r*.åÈ¨ê¨ ©Uƒ§šŽƒÚ§„þj–š¨›š©_¨èh§-5©ªkµ¤YŠú§¼9˜D‰k/SúRÍ7¢ T¢ºF–¨Z‘‡“:ÚètKBr¾!¡Sdõ‡M eyÀmÚÖ™6JZ F¥2Õ."‘"«zåDÅŠ;Ÿ÷FŸ©¬ýi«¥vBH]­ENƒâ…œq®©Š±…8§XIéeŠ­Ä*{ÿ• ÙdÉ¥Gõbè£c ¤ú„¨ão™…Èe鉴dž«>h —ߺS{¯Q)€sw®º[ÿºŸ‰(ŠÜsȘ«+;‘øž ˆ_„t*þZ(ZÔ-')‰ôd•Éšø ¯ÛÊ+Ýê™8k˜4þI@šæêˆŠwaÁFz‡i^Œù‰/k{ VdxfÉwx®‹‡¬ây¬&H TD_€§“à)¤@‘´±©˜[–*f–{}lë˜)‘YvoÓ8¬öنș~´r7Y¶¥7®ìIle „ñ¯Ðô±ãO·Šî¯(€T[µ÷i±DX¯²:o§&‘)@ˤöúXõÙ¥˜º#Aꋦ»J9·aqªv††z§ˆ:j¥:S£ê¤~z¨€Jºí¸ªÉÈ©2¨©³û©:¼š;¼Ï葪¼Ë˼Íë¼ê`»lZ©žº»©ê‚Å+i´¡Øk½Â »¢*»Æû½¦½ÚB»KupéZPþ ëº'‹²ôÓTi>%ç±Lñ[1‹R4êÛ=ô[v,yµ4«ý[Iº…xè†#QSË¢ÅGÀ,ÚeI¡IVwi`U›PGd=˾!k·%B®d;ŠúÁ­ªŸ—‹4:üEN…7?ŠPKÉmà'¹Ÿ””ûºÂ­Ç)¡!®+KxD7GÖ†T_v'_BÓc ×ðÄA8f´qgˆa‰s=¬ ¥ é: Ki¿Óò´Ov†ã7yÂ" É·ÒVu‹^R¹Íé¸ ¨¥S…©<‘ÅAi,,ì#ž;”Vb[¹žÄM®F¤B]Sì”+Ÿ(„9?²me|JçyÁ§ã°Ò¹þœ,D]hÇ<‘Z8y‡¢¶¤§ÀxìÇZÉšY´A IG«…EÆót”òyÃ’,¬5š:Ãgü)¶Æv8Y<@pŒ}‘åÉØ‡wŒ}¶GËÄÉkìoté®Cè²é›ÄÖwF-'¢æCI »X:«`™üÆáŒA}H´(SoüÛQÞ§8|œoÀ¹kÚ<®ëëÌ!Z#"Ìp7zÎA4ž«—M(ÌH\Æ´² ÖLb*—>1ë¬H¼¥“ ÂÄ÷·Ë:‰pcËáÊÁ,\À¬«îû%VLÁjàôf°óÛÑ᛽¸»º{©ªš»ƒX½ÛK½/ý»¼Ó¾ë½0ÝÒ2þmÓÚ»¤8Ó–¼ÏkÔGÔI=Žåë’ÓÔ悔D]Ó7ݽÀ;Õ;MÕ4ýÔmz»ã[»*]Õ$Ƽ£Ê6lñüʦ¹ºyëÂW,YzG€ïç™ìjpòûc£h ¼êwM†wÝ ·Ò{@ª›¬V‡Ø" I3Ã(9”XèµÌ7Ø6'•¦aº©ƒºuÝÁì+ÙêVz‚m²„Âø…ˆý¬áÕ™Œ­µ+ °×™uÙyMù¸3†H±ì)k½ºìsË[wQòZGS(×Ï (Ü—³)ÙÐó—r‚M:n©u*Çξִ »vÎ¥Å`¬œkø{4^X:h¸¦ÉÐÍuÄ܉›-Æjwþf=ÐÉݯªÇ–¦w7ûfÉÍ]”„Q=€È)‰Üsû }ç m×ø«ZŸ=À»)}áÁÄå}uðí·cKŸ8©om-×äÖ¬½œ<Ñ—¸·— &®$&ÞñÉá³—Û(¿­Ö*'*{ÖÒ¤àÇ|€ÙìàW˜ÍïšÉ}¬¨®½‰Í¯•áàÁ]ŒßÑâm¬Æ]l¢=[‹=\xCÎáˆI.´&Zv9ÜsÉÛøƒ[ Îz-¯W¸ã ‰½Ò¾üªòGHVä\É™ «Ur”å±åä½Ñ$À¯Œ‰ú2:Á Œÿ†¹›…Ë(øØ&½jPIÂà Ø”Éçá™ÕXûܶ±þ‚9åLÝNÓBýÓV½ÕÚÔç X`ýº< ¾.Ýè‚zÕâËBž¦ŽDßs É.êä¾ œ°Ðó‘”§Úe®á,=ê>½Ãf[N¨mz¹þã‹~½§¾ÒÛaËý†{Œ¸p^ ±þÕ¤~}mG‡1×êx÷è+íè+»GÞ¸ú ×Ýå^íí¤¾Í‚²9ìÿ¬ë@®é—Õ`¹³{‹mչЎÙèþéÔ®è~ì›ÕúRçOï\-½žŽg”ÞDjʽ¡^ê©ñuz¼Ôô¦ŸìÝþホÕ@ðœ^é _iJmò'ò)O5¯£¼.ëÖµXñºxñ:ó Zónþº¦æ›ó½”ñêßÓ髿yÖ–Ñ™k’÷¶êÏÊ·FßÍé-© %Ýç{íQV‡Š·nÍ\ãæí¿Îªè:™œâ¤­‡WO+IïÞýZ¯o ,œ_´mìu˜ÚßYв¼]dÈ.‹âœæíºpÂZ&ݺÅÕM³í0ÙJu©}z[ÃJ9º‹¡Ü¶Õìõ›¯ß=Ùµ®ÎÒ‰!çÉæp;Ýh˜…Ú=òhì¶;úœý€À½w<9ž-ä­óÖãðsí¦¸O¾í»l§¿ ˆâ¶GÎ{RVç,ÀiŒ’îù)E«5ûéy‰k+TNyáîxãÇgë'æþ’ÂÒÌßÈãœ$~ÙN« HÉÐÒ#ÝÌݶ\s¯÷øµëÞüæ‰üänâ_KA0áÖøŒr¶ié£)Ïè5«Ò¼’;ÑT]ÙÖ}%úÄ.ªâqÛáÞï1œÙi¼Í‚CG1V1??)KiAiLävÓÀEÜæôWå à Å#Qzqhµ;_®ñdýž/L=¿ ÀxàÚ¢ú©Ì‰§!')a$+1)29Ëþ¬0/;}6GEGQMWùTY_ae?e:Kg[psyÉ\{Ç‚‡‰g‹‘‡•—_›£¥] §3w­‰«³+·¹¿—½Á±ÇsÅÍûÐÓÙŸÛeËßþW×å=ëïÃñMãõCûSÿ<'0=‚–vK¸“A†KÀD”8‘bE‹1fÔ¸‘cGA†9’dI“'Q¦T¹’¥È‡ _žŠ9‘C†ühJ±y3gÏH>}íÌ)4!Q úŒ Äy”SN¡ÒŠJ-é˪¯N—)×¢Z_xkMl½R×`¡CD-Ö±ºT mTué[G ôÔ%kI“@V ¹µ—ð™š Ë ~D®ßY¿y’N¼opa9Âl’D,›úÞéPT[§|%|‰b„4>ÇN܆cµ²ÔÅ™/Ô颣 ›Ý·GóŽC XÔ6žä~÷ÝÙAtBãÑÛnrþó¸Ï[Û’Cóï9ÒÑ÷žÜx­Ç«s,v}\}àÙx×¾lyj¶‚º^ ø#jM;ª Õ¾ÓÁ‘é¢y­=1þ2áŸê¨³ 2ÝÞóo¶ÞÌÛ! 
‡#!*ö 4;îQÐÒÚ 0>ùNœo;Ñ`Äî9ˆœ0-ÎÖ¢16­Ž“ëFÍH4 B¤Xl±È˜zgHH,ïHæž<ˆ/íòZr.+ÍŠË(:²Iµä’6/·´îÉ2Åô‡Ì4dM6'ysš/§’sN8¿ZO;¥áOuÎôóAóùS©@[JTÑEmÔÑG!TÒI)môOB‘ÁÔЧô„SSm6UÌÍOûþ 3Ô‹¤óCSOu-ÐKYm5ËNÙ$5[e]W^@“+Çý†Š5#Ù•X]•\rJ¶N[d@Ñ öÌ? ŠÖtuÕÅ6{ÂåÍÎ£ÉØøC4̸Èç¼mpÇ ¹oǴ£7·Ì®bÛ¡œU¯¾cyš–? ßÐ8â¼»ËÃå¨õŒ·úÞ“naå.m- ¯¡Ï°nûmßq÷jiŒÎb{+ü.Â76×`UŽvJòb$[M2ô×'o™…Œ)‘aYv“ß•Q _fÊë…7HïãL³ÿUóÅ«9–Ðå–gÚ•9Ì9D‚¹Ž9âžG$ûæ”OÜà ‰ƒŒSc(»î8ê}»V²y_ô_ŸUwm í-Úy¿ýìÆà0¶z_¯[œW½oByˆîR粑”œ\/ùi!UÌ8gøø…Uà\1uOÊYßÊuOa}ÐÙk­Ýöl<¿¥÷‘Wßݜ߯!þkÐ…OÇx…TG>yÙ­Tú驯Þúë±Ï^{F ;tendra-doc-4.1.2.orig/doc/guide/table6.gif100644 1750 1750 21243 6466607535 17643 0ustar brooniebroonieGIF87a±€ÿÿÿ,±þŒ©Ëí£œ´ÚKÞxi¶qЕHxˆ)j„Ÿ ÇòL×öçúÞ‘©xµ€ Uè#:”GÌé:%[#ïŠÍj·Ü®7;-J!ÂæhR6[^Ðáø|NçôºýŽÏ/‰¬(Ûè&ÖVõT˜à’ôW¨ÔÇ8V&Ä¢7IYiy‰HØöä–&·òZǤ¸éIjÊ)õÉg•);K[këÄ(Z:šJJÅ«©yJ(|bÜ Ëtû -)Ò©köú·«íi­  ‡­š›">­¾ÎÞNcâ×|XnTÛ{OÎý½ìJŒM?w <ˆ0¡Â… :|1¢Ä‰+Z¼ˆ1£Æþ;zü2¤È‘$Kš<‰2¥Ê•,[º| 3¦Ì™4kÚ¼i‘Î<{úü 4¨Ð¡D‹=Š4©Ò¥L›:} 5ªÔ©T«Z½Š5«Ö­\»zý 6¬Ø±dËš=‹6­Úµ á¼âv ¸o]Ò­‹#î]w{ñ¦ìëw†Þ¶e. c0ÃÈE2nìaÈ-S® x¡åË7sŽY¡çÏX$ñÊ¥‘à[°þ: ЬÐ&L£ZÖLv‰Ñ_ZŸµ¯¶ƒ€EZ¬ïÞÈ£~s\íűù÷Ζïz̈e”£Œ[øÕÚ›ûJíû³é¸Ôˆo®^Æò.ÛõVK.wbõþ¡SΚy¦ýðŸ¶Í&Y û ÔŸ=7ÌÇE}öÄ’ßi®t¶]øZyÃøgÎ8èé– {»õ'`/z˜ˆˆW ‰“ÓŒø‘WËvøí‡Ìy _r5^Wam,¢#<;~LJ€Ž¤v‰x'N“TFcâV:÷dzæ8e•W^ç£::8L™ñ•8 z)R¸`Ž *˜O^fæ„÷²…¤-ñ'žÀ-èIj’–%A‡:£ Î)š£$-ÊY¢|I:¥—YÚŽ¦˜²ã)dœ‚úiH¡66*†¥‚t*b©š¹êG­öê4³ÆÊ"®Ô*®Ýê¯XúºëZÆ‹þl²Ê.Ël³Î> m´ÒNKmµÖ^‹m¶Ún«ÕŒÄ> ,4À~{ɸu…»¹™ûºªk»8¹›+¼É{½rÚ»Fp®—Û¿y!cbƈ¦À° i‘¸méÎì)o/N:ä ÑiÈ碙y,h£8ikÃ#¯ÉðÉ\J Û‡‰¸¾“†˜ô¸·bÊPìŒ|¬H[ËLj<´2G–2=w Pšp ãoÞ4 ÏÄ¢(’HÀS7rtmpZM0Ö¹QªØ)/6èÜ]}s9üÁ)È»ƒ¶}#Ó¿8œƒÕg«Í7:Þè³JˆúçÏqºÔ,nàÙW§Ò9§ñk#þáÝÙC?I4ƒŠ\°%ì]|Ø9(vå#z†Hå6yÞଫn9á²wíµ’þ\)²ê(™4Ÿ’SÎãë›<‡y|«ØŽéz|ô®ÿ‡wéÈq=xê²éޏŸ×ø ûæÃû}¹[^åÛ0ÝÕ‘7(<ûé€>·Í6¢NKúã;?&|:€í'Þ#I£¾­®oÕó_È,ÑìxJC˜ÂÐg²¨ùé6¼ ˆT6±m`-Irëw¶šIíi7ëØÏdgÁ.AO{Š;aêPØŠCá•ÉB»ÝdŽ_à’O+â.Õ”ËVâ¡ÝŒ¸Ãð© ‰ù!LôU?&6QŠItâþþ¨è+Úå†Ñ"†E("苚ù¢eáE2~‰Šg,Q¹Ç8ÊqŽtÄŠ·˜ØFL¤Q8’b‡ÈÇF…‘‹Ùc _6ÈÞ ò…4#!ÅÈ.:R‘8Œ$ÑLñ@ŠÅìˆ54©¥ý(~³^ÕF×¼BÐ@š¤Ï9¶±p½Htù TŸct2o pSûÕG’4ã á ÏÇŸ¹¹%ƒ¢ä] w †3”½ë¤-KƒK§õO^µ¬ .ƒG^jSûz•’8%—é3Ÿ!“ãøîíúÂò¼Ê*°4ßëàŽþðBÙ²A†q9ãedeŽïÆ.?¨Îæ‘@þhÞ„­þ>¦ª…VˆŽ›  FÔ»yšÈͬpy^Q˜"äͳه ÓÒ°¿Ifö&·2Âý5fsÝö,þu¶vi¥ÐÖE>¨=p§Oíê@Ǧ‘²#þâd;ç:ÕŸáø’Ø³J½gÐ^>´ü¬iUüòšãÔÑJà­NOðlVÊ|¾ˆa…ÞÛ!=}ÓZ¥mM•|ÜRØ.òY3ñk7ÏÔ>x¾{.ô²2IæÛ–mº4®(›/ÛVGäõµw¹ufÿøŒŸôÍÏüïåŽyÒ‚6¢¯è{Ü1{°6•_8“û9ïKLþã׉ŸøH‡þÀù‰Xûƒn¦?½?k,dPãd¨7±Fqÿs qo¬BmK$* Ø~ÿVT×c±€@älHgHçpçæ¶ü6‚xÀQnËs‚ÚV‚ׂ,HÃxò<æòƒ¼6mVRަ"€GU;ht3O:²sŶ'PêG„†6@ƒ7= ‡qa'Qàõ„hð‚õÄté´4€–„š…\Ö³…\h„r–ys#Ƨ^}7Hâ†;˜‚¿ÂE?ˆ€%Æu…‡ZW‡q‡°÷Wƒ/x]»ÇTgø€Ä²‚ÔÀˆØ…‘‰77‰ X‰–˜†–þ‰k¶‰ ׉¥÷‰ÆŠ¢…kâa¤fW¨^¥¸„ØRE5a(„®x…°xrT6‹!hжx‹FÄf’:D’!¹’hˆ’iF³öXà6+äC78€Ž÷Š' 06QˆrÉþ`Ld€I›ÁŠ8“Ur\‰:S„]¦Mßt]¶B fš§ƒE¹U€‘A9X‹^ó‹„F¢x|U‚t»EW•rkVzu´”i[I£]K—CÕŠ:5Ò–•Õ–(ö~h†ÿÇtR5j[“qrMuYw–cQ8‰Xjù7Â@~S²™Ò׋f–¹çfHvBj‚s2µ\Û7f¨•…ÅÅRùfâ× 7|s8WÅ“Ȇ§5S—©f„×tÄõÀ÷M©EŽÐ¨•diÄN‹ØÇ›ñuX;6tí;”·c®‰vB)v²i™P˜”f?ßã{ŸÕ§P܉{G%_¤'†þY?5Âg¤ét I´Ä|Ù œjÈ{KåQØ…6,E8Ñçd?“˜fŸ’HbÚÎwž Šj’ySzYÇY”X ]6çX•×—ã7[/eetW8•ó?o¢^•@khw’)›Ô!c±F‚ jNq†AÕ„»)ARC…¼y–ךkÇOHXs4Z@­gYÀtÖ…y Ge'J Nçu®™¦(bù£#7•ÙÕ¦œö’’L¹¶y\ n¹¹Râ^Ù’ù’Ïùk*q ž¹¥Ãb§À“0¸§ š“ÄÄbж1õUbã—@ú§÷Ùµ”7c*h‘F<Îes¹óþ­Ø¨½$¤AC€Ñè Âø])4œ`h©Î9£ðé©§ªf3E6› ¨°†…³é¢o'XÐW†FõmFQ¼—×g¯ «*ùjᔢ깣Ý_~ÇvìÉy6™‹ý°˜÷§•j—Å:­UªŸ—fÖ©Ÿ³ø¬ ­Q‰Rr‰à Ÿð¨­Uõ?\飮c¢ÝÊ?_ÅšÝwp 'ŸéÚªʨíê«'ªˆ‹ 5äJf¦d>4¹Aåj®úšŠ¬*¬þ™» íñx’ª©Ë©»‘ )jÚ“Ûl‹¦ä&²ÈF²^ú-y»p—(a)›§±Â²Å(³ëˆ'+m2ë²d§³×–²=þ«?»/~t‘G‹´I«´W!“bJ´õv³3»*5Ë Q+o¸BµºµB{/O;FV˵ÓçµhdµR[*Y«‰Œ$±êw¨h)LÐDC ‹zÅâ’õzMÃcHv;c§r¥-æŒ7*?tú¯óå¨Ùª1涉û“GÆ2H5Gã#šSݸê*¸Ûsó‡6j[ Ã@×#°~Ù8fÊ’^xcã(­ÛT@Ù¦í…0pÖu·xu¢#Kª;êjZ礭ú·}R™[ç˜ôi\Á[\–çR}[[Ù5jÔJ¶ªäUw5:­Ù™¼·šªY=ÞI½Öûs¥«Z_ض9žú©·¹«ª¤š\þÁå¢$j¶#é…®¸ùe|•‡µŠ©àºhBÅ„nF “©žÂLYe¹8Ë®ºW“ ;V(¤J»¨yVÆ©¾Wë½Gø—nE¿if¿[Y¥V‡ßÔO³åŸ~÷¿ÉÚ¾ÖGIö›iKwRYÎ'½ ¾E…¢ØÀ4왘› ×÷¾ŸI¸IJY,C´iœ”9¯5å‡ê­9Œ.´Š5¨¼…ÛÂO,ÀxƼøg†°ºš¾ZìÃÙ…|îy¦Uœ}ö^¬ç¤Œ+dR]™É£¿Å]Øêz@¤”i„¶Ç™ka;zÜKÅÙJÁ(!Á–o|œEc+p[ 
³œ¨Èoú¥A»È…z³†œþ‘†LÉÏèmˆ¬Gtë± J,‘LŠÛÉ(éÈ É„§çºËJ|6kl üh©Œ¥³ª½ôª½…¯)åY´ü—\F%«ÊêúJʮ »ª9¹$Á’"Ì”xÊ·lÌ%Œ¿' <¿CT8W­Ñ ´n:Íuºœ¨©¬º>´ÙË€sÌX+Ê%Y¹Ÿfˆ9°à;·ÛTeìlËâ Éù̑ܧà ¶íì’ëÍ1kÀý<É §<«ÐL²ÍÈ—;²ÍÏzj´K‹Ñ­ÑÝM /w<–ûüÏÒìÏszÐ]ÒxÈ;+Ò&ÝÃP›ÒûxÒHж»c¯ˆ·/÷CÀþl³›€ùGª”+¬„êIhz-×m$ì·MŒLÒX•ïÎ+½g'§TÎÌcŠ[À©Ô,V—’K¦÷Ô`LÌÛ› œQ¬»–¶ PF)ŒEºâë«W­M¬&× ¥ñ'yšºk9hùÂpÝ×ÇŒñ‡ ‡k˜6¨ÕZ¶RÝa~¶ jÛG@²ZV‰¤6“u8UµZš±5_”™O0,:b÷d`}ÆK«È 3¬¤×Ú»Úçs£­W ¬ÌMb˜“vÏë®AíªN_È{ÎC¯ª·W§I°}ü®³ܪMí¥iZ¶ØIB¾0L•¹Z«49 ÂC­>±V­yß˰ëHÖ,Â×wÜþ•«‡ÏvåÉÌûIž…«ždñ Ýw ی߲ª¢ëÕ}(¥Ü]ÁwIˆÉ—©¯mƪ ­brÚ²Ò˜X¶GîÎ3-ÈNîÒÈÉÛ‚ÇÞÔG­Ó2Êëš®ÉGs).±f=¸a~åH{^BÞcÙÖ_¸sèŠÖ`g¥"ÜXA^R^·2LÞÛI¡q­Ô-Ú­œ&¢|¯«vÎ!ê¿…Âç þŠàÛ¬s+è|ŠÃo®{RÙ›tEÂí~¤õ詺æÞÍ^1Lå]M†_|ë™Ë¼YeNÅáM¡^̨xta†·‰™«LìNìÓÞÊŸBë=NÕEû¨[˜íDnÉøíÈäUÎÐåÎÒÝе®îèþͽîÚïîÎíV®ÒáÎFÌ®ïûÎïýîïÿð/ðOðoðð ¯ð Ïð ïðñ/ñOñoñøþäÝÒ÷îÑí:î(-Ñæ¾ñ`¾í0mï$ñ1­æ!Ÿî3éì¥d#Ö.ó//sïCóÆŠž–þíêœlÔüÛ'ÞÌ3óÕ¸ÝG–}üæ:íKïæs)zNÿÝR ôwÚ¥Ù‰ó\né<&•ZÍ!ÝC«nßU[íwÂ|+÷:¿w–Ó¤M¶ºâ êÈ.˜Bß3ú.ÔûnÅvß½TOÖ9Ï¿—úŬzó/÷ö|šådø‡»°<ô4Ïô‹¿XÀäìÊyö#ªà•/£xøi¿tʽžm<+ºÖñ  ËœÜ¬ßMA)SÔ.üU?«±móþ \ú»jžJw2ºønßxŒþ}Îtû[¯Á§:›L¿¦lo¾þý¼/ýˆÛŸ2O -–•“kíD qÖúå.¡Gqû¶T]¹³Tª)­—ö"3þLЫ€¤Xèå¸õˆÆRꬆ Y˜QÓÒ\>{P6”ŒúpÔ EÙò`ç\4ƱԅŽ–ÐÕ—ïª,“#ŸºÀ÷ ÷P†ØÌ˜vÄòÔÆ¥òü"—¼xÛ(5çŽû §œ~.…ÞFg¶DST/×\”8HˆÍ`»jWQ%WK¹È6 O\]´0GËlˆ›[~’˜gIymmÓ@i;±Š~'M?s«Š¤O¡œÓÕ×ÙÝßá þ‘äâåëñëCóùûýiÿìgc™@ƒ¯ÙAøŒáB‡ƒF”(kbÅuû,fÔØ¬Æï<~9’dÀ%Qfì˜ßI–/aÆ #“æÄ•5չĹ“§D=:»´Q£GþDº´¡¦|žF9 ܶqÒ*e‡NªÍ{]ä*TçX°g$„Ì¢ÖtnÏæî̦î~š›L‘W*];e-­lÔ¦íZLñ2-¨ ‡ƒ$øá ‡…]¶Z—d0…¯¼e•u‹<±èXQ oîYn—k”®ût¥GÛ†’tÛQ‚70y©±à0OÆCÒ%n5ë¯tµg"ÃTþr13çÅ!œz·ÞžgÂQÕLÈ+Ãlæ°%ʱ̸査 žô¦ê…U"„&ýlO¹Üj2µëM4€êë¹S‹å½œâëµw3.½Øqöµf£ßD웄|šÄoº,<Í8{¯5 +£FAôÖP&:òÈñ¦Ä|0¥“¼ó¥»jQÈ#7ÒH$ÅŠ°H&¡ü.J¦ˆ”jÉ)±èÊ,+ª2ª-¹ S1ƒòò)0ÉLB5w2“J6á,1NšÜ\ Í9ñt2ÏÎôürÏ—~ TÐA -ÔÐCMTÑEmÔÑG!TÒI)­ÔÒK1Í´ÑÕîüsLÆöêÔÓÏ4UÕSW©N¤Jm5ÍXe%æÕ£h­•Ë\uýâV£xí5Ê`…UáW‹MÖ.e:¶LfE EÐNÑŸŽz ìŸ EÉ LÊr24‚ªí‹2ƒãV¼‹ŒT· z„“é–ü<›‡C“(RÕxÜB©~‹‚ªP „ÈÆÈ@Éõ5rŒ}ëÚn÷ «³‚ØZÑ@Uøq,:XÐãïàr ûŽ»ƒäDz±vÜÍÂ%^ÇÃ%3ˆ2F ü:¾ñŠ ¤•6¹s©¶FÌr,yšlmíÙÀ_¤Ôt;ÍñRžzºÛ¬Âê«?q°êG0øE¬ >šíšÕ&jÞ¬V6þgÎÌè…/¤ªõÚÆz=}ðÞŽ¾rã•‹}qÞ­¯¹6¹@Ø2”ÍB¿×ã8É[¾Ë²Þ°ªm¤ù5–pŽ /%¬)ÇŽ<_¤C[îdzè?V!¯øoš1¼!Æ }ëÄüo^ÈWœÞØ]Ž>ãíq™:‰÷ëdæ¶ wÐU×Ù‘ 3Üo÷sƒ{fZ·øõœñ+a{·šØÿ@-ëEt+>òhìµqû;D_#$óÔ1S»ØLík›×®—?ïe8ÜÐ÷WŸ°-IiLY¹8·µUådD»ŽŒô×±¾\Ž:Ù5>(BÃ$Ïr>C„¸´–ƒeþLŒeðà Ÿ½î}%ŒŸÝŠ'íBLÄ‚Ìhf%y!ñGL”Ù.rÄYN‰1" v8*#JÑY’b²¶xÄ.Jé‹Â #´ÆÈ“3–ñIlDWÄ`åFŸ`LK›¿Ö¸§4¶IŽÄ™€`’G0ÂW}ÌÊÙÂliî}†¤¢ I.r/t’dÆFˆ;G>’S›Œäì4dI‰î‚ÓñdE‰¯Sj ”¹”çx³ÂU.-•Ûš¥Ió¹@Žf@ÒYåqRK0Na02üÈ"É Ä$²˜5æ- Mç(¯ÒŒÓ3UÕL:YsNØ4•6—ÈM8yS‹„–8ljÎ5QÑOêœþ•;…bNdÁ3Läô8­HÏ"ê“…Ë‚?³dÏ?ás—Å£¦šP….”¡ uèC!Q‰NtQ4¨¿äù¬‹k£—̨;Ê$êñ£d é‘FJª’ªñ¤HJ)žúÊ– é¥Ý\)gzPЭ‰üÖ•ìø®Œql™sẀ6£fð‘LM.µõ þPÍ›ézÙVøæ*9q;D\Ñ[8Œ®rž³I‹Ê£ÞŒUÑ9Φ¶JÆt3¥PyQB®|”«Ã¨ù?‘íâw0+^=º¾¢ñ(g×ÓYÏK´‘ÁuD[xZԹ͘ZÝàó&TÐŽ,7½ÔdþU #´šåê]ÈŠ¸Ê’±;uªa1©Ú´HM¶Ñøàq¦VÖšö:ŒEÖººzv(Cà]ÛÀïÝÌC©]­qW«¸êv6`þ4ØÊ&8Ù-5¨ 3M©¹E®·ƒmëy‘×»éѰS5|ïx™¢ç^1¾V­† Å{]ÄÉ ¸ª½í(½+XÚÄuè;l0\ÊÓNø^ËÝ­8ªÇ³ßeBxÓ“ ‹€X_á8­§ë»ý~{/ !u¸ZóXð¼–ݳWƒ ~áÌÎ×¾²™'/ùêPåÖË/¸hqƒ&—²Ó/z9:[‰‘ÚŸçZË|#ñó*{•û\ùèErþáØ‡(”!Ü+áËÙuZ8ôàÑ.[¥NÎæC‘ñ0 Œšn5§¨´¨"÷,Æ>Cω©Ââ §‰è˜FÑô¢¡Ühù4Òø•t¨$]é’úÒaV´¸ÖEeÈX}ìú§¡CiqI6“:Œ3µO¨Åvˆ¬^q«ˆT/æìÕ¤ü#kû!ÒOˆ¤ë…q ×’ôp³n.Ùö¡l®Úl,N`'·ayF” ®h¿ägÏ–w_ÔØðîPá:£nU¹&׆ì<>Ï•²…ãüÐ+1Ýæ§Ú¶e޹óõÛÞžï‡Ü…)¨i4b_¶Em›±†¿KkŸ/ÃWâÿ7º-þ\¬s'Ì})ôªùDM!oØÕ Ãh‡æbåÞ y@¤ñ˹­@A7ܽ ”%’qpN¸PÙ¤ž¡žÝ ì{WYà–²Íç\#_úüžÈ«ÁgÎNCkpËæç¹}f7j:ä Ñ:+;Mè™r¹a¯â§ÛN³ŸýÑ7 æÚ«Ùö´OîoRõÜí^÷¼Ë]ê{×{!ïÞw;Qš¢…7üáŸxÅ/žñw”£ûLöhýÝï9•ü1)?øÀ'X홟'ß9Owσí‚ãè?y·ÃîïVœ-,mÞZ±d!ßV,YTyÌiÅö6šŠãW8M¦µtÁÌõØHš[i®K»¢8þyåm8‘ù¾•=:—„dûÒ7é»Nû]ì79ν01óÃp¢mR‹d¬Ô4Ѕ;pcÝ$ SLâ^Nû:¬l6'“Ë»T§ÌvJÞÎŒþ‹vO{‡åšd¢+xØ-AjðçÚj¿kþ °·#é~ƒ=Zþ„¹â<Ž€ÿ®ð ?¥©ì+ÞЇvÊð÷¤0{€þ…`Ã8DøNÃðO¹ÄÀ’Pkøn°Ÿœ¢ÛÒ-¼ È´²ëBäëP -q™êÐÆŒ­yþP ÷P°p²¦ ¥ëãÄÉ61‹Îc +hÁÌEö"ÏêˆÁâìgÆÛQß0èXÍ÷¯©¢ I«8Ñ=I“i¼„j »°èL@ŽWo›ÞÎî÷‰OÏ™¾Îìȱž¾1œPô@OìØîqªôB¯òèíQ;/Yo ý± RÏÑfÒûâ1 5ï²K2­ó è°L¬ˆF3owE" ÒGÜì£ãL# ¤82 
<²ø"ñÛ²ñq&‘ßìŠÝÖ.%±äò’d]’ç`r*ŒFhR#Ò/öKÀ,¤=ëÎ&§'5j¦Œçboˆnnkr(M¯([¢û Ò’£ˆòœ†ª[˜ÎóÂJ ²×r" êȲ-Ýr!¿2.å’%Uo,Sï..ár/ù²ëü2-W20Mð- S/éÉñ³1ó1!32%3ñ ;tendra-doc-4.1.2.orig/doc/COPYRIGHT100644 1750 1750 2253 6466641217 16150 0ustar brooniebroonie WELCOME TO THE TENDRA 4.1.1 RELEASE =================================== Crown Copyright (c) 1997, 1998 This TenDRA(r) Computer Program is subject to Copyright owned by the United Kingdom Secretary of State for Defence acting through the Defence Evaluation and Research Agency (DERA). It is made available to Recipients with a royalty-free licence for its use, reproduction, transfer to other parties and amendment for any purpose not excluding product development provided that any such use et cetera shall be deemed to be acceptance of the following conditions: (1) Its Recipients shall ensure that this Notice is reproduced upon any copies or amended versions of it; (2) Any amended version of it shall be clearly marked to show both the nature of and the organisation responsible for the relevant amendment or amendments; (3) Its onward transfer from a recipient to another party shall be deemed to be that party's acceptance of these conditions; (4) DERA gives no warranty or assurance as to its quality or suitability for any purpose and DERA accepts no liability whatsoever in relation to any use to which it may be put. tendra-doc-4.1.2.orig/doc/FAQ100644 1750 1750 5414 6506672667 15221 0ustar brooniebroonieTenDRA Frequently Asked Questions --------------------------------- I am preparing this list of frequently asked questions from the questions people actually ask me, so it is far from complete. 1. QUESTION: I try to compile the following simple C++ program: #include int main () { cout << "hello world\n" ; return 0 ; } and the compiler is giving me errors. ANSWER: This release only contains the bare minimum language support library, not the fully standard C++ library. See the C++ producer documentation for more details. 2. QUESTION: I try to build the release, but I am having problems in the API library building phase. ANSWER: Unfortunately this area is _very_ operating system dependent. I've set it up so that it works for the operating systems listed under supported platforms, but this is not a cast iron guarantee that it will work for other versions of the same operating system. Some understanding of how the system works is useful in trying to work round problems. The start-up files describing the macros needed to nagivate the system headers for a particular API are found in: src/lib/machines///startup/.h where is the operating system name, is the CPU type, and is the API name. A set of replacement system headers, which are checked before the real system headers, are found in: src/lib/machines///include These are also used with the -Ysystem option to tcc, modifications which are specific to library building, should be enclosed in: #ifdef __BUILDING_LIBS ..... #endif Good places to look for inspiration on how to customise these files for your particular system include looking to see how I've done things in similar circumstances. Often a problem crops up on more than one machine; I may have a workround which works on another platform which you can steal. If you don't intend to re-distribute the TenDRA source code you also have an option which, for copyright reasons, is not available to us. You can copy the system header into the include directory above and make minor corrections directly. If all else fails you can tell the library building to ignore the header. 
   Find the source file which is failing to compile. This should contain
   lines like:

	#define __BUILDING_TDF__
	#ifndef __WRONG__
	....
	#endif /* __WRONG__ */

   If you insert the line:

	#define __WRONG__

   in the corresponding API start-up file:

	src/lib/machines///startup/.h

   then the library builder will ignore this header. You will get a
   compile-time error ("token not defined") if you subsequently try to
   use one of the features from this header.

tendra-doc-4.1.2.orig/doc/index.html

TenDRA Home Page

TenDRA

Welcome to the TenDRA Home Page. TenDRA is a free, public domain C/C++ compiler and checker technology, developed by the Open Software Systems Group (OSSG) at DERA around its TDF/ANDF compiler intermediate format.

TenDRA® is a registered trademark of the UK Defence Evaluation and Research Agency. It is pronounced as one word, tendra, rather than ten-D-R-A.


Downloading the TenDRA Software

All the TenDRA software is subject to the following copyright notice. Please read it carefully before downloading the TenDRA software.

Crown Copyright © 1997, 1998
This TenDRA® Computer Program is subject to Copyright owned by the United Kingdom Secretary of State for Defence acting through the Defence Evaluation and Research Agency (DERA). It is made available to Recipients with a royalty-free licence for its use, reproduction, transfer to other parties and amendment for any purpose not excluding product development provided that any such use et cetera shall be deemed to be acceptance of the following conditions:
  1. Recipients shall ensure that this Notice is reproduced upon any copies or amended versions of it;

  2. Any amended version of it shall be clearly marked to show both the nature of and the organisation responsible for the relevant amendment or amendments;

  3. Its onward transfer from a recipient to another party shall be deemed to be that party's acceptance of these conditions;

  4. DERA gives no warranty or assurance as to its quality or suitability for any purpose and DERA accepts no liability whatsoever in relation to any use to which it may be put.

A small number of components are also subject to other companies' copyright conditions, which are similar in intent to the DERA notice above. The Power installer was written under license for the Open Software Foundation (based on the existing DERA SPARC installer). The Motif 1.2 API description was written by SCO UK (based on an earlier DERA Motif 1.1 description).

The source for the TenDRA 4.1.2 release can be downloaded from:

ftp://alph.dera.gov.uk/pub/TenDRA/TenDRA-4.1.2.tar.gz (3888989 bytes).
In addition the release documentation (consisting of a copy of the web pages accessible from this site) can be downloaded from:
ftp://alph.dera.gov.uk/pub/TenDRA/TenDRA-4.1.2-doc.tar.gz (765752 bytes).


Installing the TenDRA Software

The main source archive, TenDRA-4.1.2.tar.gz, can be extracted using:

	gzip -d TenDRA-4.1.2.tar.gz
	tar xvf TenDRA-4.1.2.tar
to give a directory, TenDRA-4.1.2, containing the release source. If you also want to install the release documentation, you will need to download TenDRA-4.1.2-doc.tar.gz and extract it as above. The documentation is extracted into the subdirectory TenDRA-4.1.2/doc.
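For example, the documentation archive is unpacked with the same pair of commands (a minimal sketch; only the archive name differs from the commands above):

	gzip -d TenDRA-4.1.2-doc.tar.gz
	tar xvf TenDRA-4.1.2-doc.tar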

The release is installed by running the shell script INSTALL found in the main source directory. The default configuration installs the public executables into /usr/local/bin, the private executables, libraries, configuration files etc. into /usr/local/lib/TenDRA, and the manual pages into /usr/local/man. It also assumes that the source has been installed in /usr/local/src/TenDRA-4.1.2. These locations may be changed by editing the INSTALL script (which is fully commented).

Other installation details, such as which compiler to use, can be specified using command-line options to INSTALL, or by editing the script. For example:

	INSTALL -gcc
will install the release using gcc as the compiler. After this the work directory can be removed, and:
	INSTALL -tcc
run to bootstrap the system.
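Putting these steps together, a typical first installation and bootstrap might look like the following sketch. It assumes the default source location of /usr/local/src mentioned above, and that the INSTALL shell script is run from the main source directory; adjust paths and options to suit your system:

	cd /usr/local/src
	gzip -d TenDRA-4.1.2.tar.gz
	tar xvf TenDRA-4.1.2.tar
	cd TenDRA-4.1.2
	sh INSTALL -gcc
	# remove the work directory, then bootstrap with the newly installed tcc:
	sh INSTALL -tcc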

See the README in the top directory of the source code for more details. Also see the Frequently Asked Questions.


Supported Platforms

The following table gives the list of platforms on which the current release has been compiled and tested:

Operating System    Version             CPU
AIX                 3.2                 Power
HP-UX               A.09.05             HP-PA
Irix                5.2                 Mips
Linux               1.2.8 and 2.0.27    Intel
OSF1                V3.2                Alpha
SCO                 4.2                 Intel
Solaris             2.3 and 2.4         SPARC
Solaris             2.4                 Intel
SunOS               4.1                 680x0
SunOS               4.1.4               SPARC
Ultrix              4.4                 Mips
Unixware            1.1.2               Intel

It should compile on other versions of these operating system/processor pairs, the only danger area being TDF API library building.

For comments on the reliability of the software on these various platforms, see the section on TDF installers below.


About the TenDRA Documentation

A number of documents on the TenDRA compiler technology are accessible from this page. These consist of documents written and added to by different people at different times during the technology's development. The information may therefore not be totally up-to-date, be presented from a unified viewpoint, or reflect the current thinking of members of OSSG on the given subjects. Time has not been available for the necessary thorough review of the documentation as a whole.

Most of the documents were originally written in FrameMaker, and converted to HTML using a very old version of WebMaker, numerous sed and perl scripts, and some specially knocked up C programs.

The various documents are described below but here, for reference, is a complete list:

  1. TDF Issue 4.0 specification;
  2. TDF Diagnostic Extension Issue 3.0;
  3. TDF token register;
  4. Guide to the TDF specification;
  5. TDF and portability;
  6. tcc Users' Guide;
  7. C Checker Reference Manual;
  8. C++ producer guide;
  9. pl users' guide;
  10. tspec users' guide;
  11. tld users' guide;
  12. tnc users' guide;
  13. calculus users' guide;
  14. sid users' guide.


What is TDF?

TDF (standing for TenDRA Distribution Format) is the compiler intermediate language, which lies at the heart of the TenDRA technology. Unlike most intermediate languages, which tend to be abstractions of assembler languages, TDF is an abstraction of high level languages. The current release is based on TDF Issue 4.0, with experimental extensions to handle debugging in languages such as C++ and Ada (these extensions are not used by default).

The TDF Issue 4.0 specification gives a technical description of the TDF language. This is supplemented by the TDF Diagnostic Extension Issue 3.0 specification. This is an extension to the core TDF specification, which describes how information sufficient to allow for the debugging of C programs can be embedded into a TDF capsule (it is this that the experimental extensions mentioned above are intended to replace).

The companion document, the TDF token register, describes the globally reserved, `standard tokens'.

The Guide to the TDF specification gives an overview and commentary on the TDF language, explaining some of the more difficult concepts.

For those who know a bit of history, TDF was the technology adopted by OSF as their ANDF (Architecture Neutral Distribution Format), and TDF Issue 4.0 (Revision 1) is the base document for The Open Group XANDF standard. Thus the terms TDF, ANDF and XANDF are largely synonymous; TDF is used in documentation since it is the term closest to our hearts.


What is TenDRA?

TenDRA is the name of the compiler technology built around the TDF intermediate language. The design and intended uses of TDF have affected how the TenDRA technology has developed. For example, the original emphasis of OSF's ANDF concept was on distribution, but this raised the question of program portability. The current TenDRA technology is far more about portability than it is about distribution, although TDF could still be used as a distribution format.

The rigid enforcement of an interface level between the compiler front-ends and the compiler back-ends, and the goal of producing target independent TDF (suitable for distribution) have produced a flexible, clean compiler technology. It has pulled many of the questions about program portability into sharp focus in a way that a more conventional compiler could not.


Using the TenDRA Compiler

The main user interface to the TenDRA compiler, tcc, can be used as a direct replacement for the system compiler, cc. This is described in the tcc Users' Guide.

There is an alternative user interface, tchk, which just applies the static program checks and disables code generation. Thus tchk corresponds to lint in the same way that tcc corresponds to cc.

The chief difference between tcc and other compilers is the degree of precision it requires when specifying the compilation environment. This environment consists of two, largely orthogonal, components: the language checks to be applied, and the API to be checked against. For example, the -Xc option specifies ISO C with no extensions and no extra checks, the -Xa option specifies ISO C with common extensions, and -Xs specifies ISO C with no extensions and lots of extra checks. Similarly -Yansi specifies the ISO C API (excluding Amendment 1), -Yposix specifies the POSIX 1003.1 API etc. It is also possible to make tcc use the system headers on the host machine by the use of the -Ysystem option. The -Yc++ option is required to enable the C++ facilities. The default mode is equivalent to -Xc -Yansi.
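As an illustration of how the two sets of options combine (a sketch only, using the flags described above; the file names are hypothetical, and the cc-style -c and -o arguments rely on tcc being a drop-in replacement for cc):

	tcc -Xc -Yansi -c hello.c        # the default mode, written out explicitly
	tcc -Xa -Yposix -o prog prog.c   # common extensions, checked against the POSIX 1003.1 API
	tcc -Xa -Ysystem -c local.c      # common extensions, using the host's own system headers
	tchk -Xs -Yansi prog.c           # maximum static checking, with code generation disabled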

How to configure the C compiler checks is described in more detail in the C Checker Reference Manual. The extra checks available in C++ are described in the C++ producer guide.


TDF Producers

A tool which compiles a high-level language to TDF is called a producer. The TenDRA software contains producers for the C and C++ languages. The original TenDRA C producer (tdfc) has now been superseded by a new C producer (tdfc2) based on the C++ producer (tcpplus).

The design of both producers has been guided by the goal of trying to ensure program portability by means of static program analysis. Some thoughts on this subject are set out in the document TDF and portability.

The first component of this is ensuring that the language implemented by the producer accurately reflects the corresponding language standard (ISO C, including Amendment 1, or the draft ISO C++ standard). The producers both include references to the standards documents within their error messages, so that a specific error can be tied to a specific clause within the standard. The producers have been tested using both the Plum Hall and Perennial C and C++ compiler validation suites.

The C++ producer implements most of the language sections of the November 1997 draft ISO C++ standard. The known problem areas are:

  • Automatic inter-module instantiation of templates is not yet fully implemented.

  • The current implementation of exception handling is not optimal with respect to performance.

  • Temporaries are not always destroyed in precisely the right place.

  • Partially constructed objects are not destroyed properly.

  • The visibility of friend functions is not right.

Also, only the language portions and the language-support library (<new>, <typeinfo> and <exception>) have been implemented. If a complete implementation of the standard C++ library is required, it must be obtained from elsewhere. See the C++ producer guide for more details.


TDF Installers

A tool which compiles TDF to a machine language is called an installer. TDF installers for a number of Unix systems and processors are included within the release (see the list of supported platforms above). Each installer consists of code from three levels:

  1. Code which is common to all installers. A large portion of each installer is derived from a common section, which reads the input TDF capsule and applies various TDF -> TDF transformations to optimise the code. Each installer has a configuration file which indicates which of these transformations are appropriate to its particular processor.

  2. Code which is specific to a particular processor. Each installer also has some processor-specific code, which applies optimisations and other transformations, which are too tied to a particular processor to warrant inclusion in the common section. This section also includes register allocation.

  3. Code which is specific to a particular processor/operating system pair. Even within the installers for a single processor, there may be differences between different operating systems. These differences are usually cosmetic, such as the precise assembler format etc.

The various installers within the release are of differing levels of reliability and performance tuning, due to the differing priorities in building up an installer base. The Intel and SPARC installers are the most reliable and have been subject to the most performance tuning.

All the installers fully support the C subset of TDF (i.e. code generated by the C producer). The Mips/Ultrix installer does not support the initial_value construct (used in dynamic initialisation), but otherwise all the installers fully support the C++ subset. The Intel and SPARC installers fully support the entire TDF specification, as checked by the OSF AVS (ANDF Validation Suite).


TDF Interface Tools

The API checking facilities of the TenDRA compiler are implemented by means of abstract interface specifications generated using the tspec tool. This tool and specifications for a number of common APIs are included with the release. Part of the installation process consists of pre-compiling the implementations of those APIs implemented on the target machine into TDF libraries. This is performed automatically using tcc to combine the tspec specification with the implementation given in the system headers.


Other TDF Tools

There are various tools included within the software for viewing, generating and transforming TDF. The use of these components is integrated into the user interface, tcc, but they may also be called directly.

tld is the TDF linker. It combines a number of TDF capsules into a single capsule. It can also be used to create and manipulate libraries of TDF capsules.

disp is the TDF pretty printer. It translates the bitstream comprising a TDF capsule into a human readable form.

tnc is the TDF notation compiler. It acts as a sort of TDF `assembler', and can translate TDF capsules to and from a human readable form.

pl is the PL_TDF compiler. It is a TDF `structured assembler' in the lineage of PL360. pl provides a more user-friendly way of generating TDF capsules from scratch than that offered by tnc.


Compiler Writing Tools

A number of compiler writing tools, which were used in the development of the TenDRA compiler technology are also bundled with the TenDRA software release. These include the following:

sid is an LL(1) parser generator with a long history (the original version dates back to the mid-sixties!). As well as the normal rule transformations it provides powerful techniques for call-outs in circumstances where a non-trivial look-ahead is required (essential for languages like C++), and for error recovery.

calculus is a tool for managing complex C type systems. It uses the TenDRA interface checking techniques to enforce strong type checking and type encapsulation, and provides generic container types for lists, vectors etc.

make_tdf is a tool for generating TDF decoders and encoders. It takes a compact description of the TDF specification and a template file, and generates code to read, write or transform a TDF capsule.


Related Sites


Send enquiries about TenDRA to R.Andrews@eris.dera.gov.uk (Rob Andrews).

Part of the TenDRA Web.
Crown Copyright © 1998.

[binary GIF image data omitted: doc/images/blank.gif, calc.gif, capsule1.gif, capsule2.gif, capsule3.gif, class.gif, classA.gif, classB.gif, classC.gif, classD.gif, compile.gif, diamond.gif, eg_scheme.gif]
@,€ì›@>ÏN„šHzÑÌG‹³Z³S*§b’Ïè´º¥J¯*OžI£‹8ÎÌ€‚ƒI9|+Nz"\"‰#XwŠ‹_[pSaBj‡šž7`¡¢y¥`U¤—ª¬w-ˆ~„²J/œ@Snvpxl]Jr¼UÉzu1bž·ŸÍͦ«ÑtyvÐÒÕÖ§ÓÚ·AB³³µ,ÌΈš_5å-™çéódŠÑ©’ÛŽo÷öûÞ6Á7Ž»24ªÿ ˜#XÃÉ–Ñ›Ø)ξk“,²ºBÊ/|N  €É“(S*|E±ÝÊ/ÕDŒ×²æ¼˜6‘ ’²'Jœäåêl¦&¡D“²ª”¥Ž’>}EÚ´ª £2­jÝšÔ¨* q{kªdÓª ºóØžS×ʵïèÜ»x½|ûS,Þ¿AëfL˜«^ž|OÆ-<×,´Œ#7;ì6±ÉÅ’Ó:.›¹³MÊ–ûõlXðYÒ¨ç]èjê­›É@~MÛ\[Ö­!Ö®{ôîߺIîµùóèÓ‡—¨zÿºû÷ðãË—üéðÄܨßÏ_¼ùþÈ^WØ™vÄ¡à‚ûýÇàƒä XTCxkùåE!BªmøYuÚ‡X‚×Í–š‰axXSoß\¸v(’£N*RÄâH"VF"p3vÖ£-5Ú¢ .îøÛ’!¹‚’A.9¤S#|…ŒM²Õa•]=Ù¢p#NYb•L‰eQZâå}|exW˜„±¹å˜ŸÜÈ‚^ò&=nŽ)§•9â¦fcw^ ç2eÎYdGJà „Ž±Ä¡iRÙdž†2O¡|žÙe¤_NЧ¥—:úqföæ[‚²Ñ¬Æ ¥k~ºTªhÔÁηâb‚ÿ`ɽ©©Žˆ:ƒŠ(:´êjˆ5ÂzN~©ÚJk®º~“ s¾šº)ªô„4+®Èª¨ìRµB›ÐâN´Ã ÏU;…”œ¦3Œ%Op!‰C=Ìgï½øæ»¸¡@1J%Ô¸q«³Áè2oÁÄ`³Ê&¸)¶îë G©¢-”ÞʺF.uü†å3GFé“F¶ê³ðœÓîë*» +±Ê•pCl@õ*:! ÿÛF‰Ì1#SâÈÐ×¼¯ÈÉε`Ñj¬=Ú ¬r"©»á·<Ïá×"yI×Dÿìõ»HÿBÂÒ¢nBj0G}Ç(W“E?cì!×_6@"ß ÉXóƒvÿF©Mg»ù¡o­õÄצœñïÃÉò 5ÖúísŸöô—ù­ëTõ£Íý —?.Á€ð__xü ÐÈ€`$(š#UЂ;à4ø8õyðm 슾VÈÂ6‡„¼ >˜Bz@è†8ÌaÒPÂïp|k¢~tHÄ"ñt¸ŠanÖÿW¹tñ‰P\•ø;Ñ|"¬M_4Ã&f±3[4Òc5FÔ„1X>p¡ט¯–ŒTô͵vš9ª“£3E“=J挑K‘Æ?=/€¼ž ¹'’1‰Là"ÙH=RrqtÍ#+ù¨Kb¹Ó$#5HGO¦%’y%%IY*S®•¡¬Ï* ©JW¾2“²ô#'d˹ÀÒyµÜã._×Ë[‚˜¹&++ULµüòÉ´ã0—ÔLcB-•Ñœã4»UÍ­ûùÏ€´ MèBúЈN´¢ÍèF;úÑŽ´¤'MéJ[úҘδ¦7ÍéN{úÓ µ¨GMêR›úÔ‰N;tendra-doc-4.1.2.orig/doc/images/graph.gif100644 1750 1750 1441 6466607532 17712 0ustar brooniebroonieGIF89aãÓ€ÿÿ!ù,ãÓÿ„©Ëí£œ´Ú‹³Þ¼û†âH–扦êʶî ÇòL×öçúÎ÷þ ‡Ä¢ñˆL*—̦ó J§ÔªõŠÍj·Ü®÷»ˆÇd°Ù$NŒÏlPZýnË3qøün©#ôø~ƒè7h `HØgˆ˜x·ÐyÉÀ(Éöx™˜©9(hÙ öIªHZꥷ¦ŠG{ê:K[k{‹Ë#‹–;´{òÛ› ,ÜCŒ‚l £œ¼|Óœý\l3M=r­¢íÁ½Ý½ò-îì3^.Þ²žþçëþÑþ2xTŸžÏl±óoY@€ýÔ=x á …³.,¦ŠCMkTltÑ"ÄŒÿÖúq„ï#‘fHŽ gr¶”:X^qÙÒÌ3¡Ô¤©à湜›ÔD’¥óØžŸC%‘ ª+P'JHqÞ ©iL©AR•¤J3VhZ!zý 6¬Ø±dËš=‹6­Úµl™Á*tP«Õ†uÊØ:* ¼@øð½„h+†½‚‘ŽBXfa‰…ýUâw±‘ÇIXQ£lXî3ÌøvµêÅyò¯Ð¦ºŒRÌå´cd’)¨^=ZJcŸ?Ç…CzÓ[Ärs·ý <¸ðáÄ‹?޼Ck›[–l^ÐùéM¨Ó‰Ýcv{Öc×ÒòöãÝ…Ïó=ËyÃåõµ/·Þuz,ñ'¿Gy¿[}ƒÚÁÏøùŸû1  q  ö•ßJ ^öàf 4¡L:v!hæ²`%Jô!îä_ê…8ňB•È݆¸¨è“‰ô¡X]QpI¨TŽÌÝø„¥bT=âgäK¾¶$‰-&e”RNIe•àÈ(`“?i™ŒÃp9•— Ò$æãQbÒ ¦Dm¢ôfŠ{¬)Ã)d²i‡~yò¶çfx¥¨ÙJ".Œvàl¨å”h%†²Ìôüñè•…Þæ(m”V·*v%Ô)šŸZIj©¦žŠjªª®Êj«®¾ k¬²ÎJk­¶ÞŠk®ºîÊk¯¾þ l°ÂbS;tendra-doc-4.1.2.orig/doc/images/guide1.gif100644 1750 1750 11301 6466641220 17773 0ustar brooniebroonieGIF89a ¡€ÿÿÿ!ù, ¡ÿ„©Ëí£œ´Ú‹³Þ¼û†âè扦êʶî ÇòL×öçúÎ÷þ ‡;ñˆL*—̦ó J{Æ©õŠÍj·Ü®·øŠÇä²ùŒÞVÓì¶û ×òºýŽÏëSô½ÿ(ÈÔ7hxˆ˜¨P¸èøIÖ(Yiy‰ID™ÉÙéù©² :JZª(jšªºZ‡Êú ëå*[k{»D‹»ËÛ‹£ë,<|ìH‚œ|@Ì<úŒÝLe¼8m™]Í u}ʹÝ=ž+.²ö½\|cžäN/ô˜n6 ß²¯)ÿÿ®_${ôLlŠO.€ ‡̇/Äpˆõ¤øOdÇÜôš¼3åf–iÎóyó°Î’‹ÆKÚOè;«O÷J9qWïæüÜÚõ.Øz$ú•aó7pà¸aÐ-ò×îŒÂ¿™_<¾²¶sÆ… Å—; oСš&‰ŽEç~JyDV¬ nøŸ5oê^‰Àøß±ÎŽçhwýÏÙgÿLŠM>×\ÐÕÇu5àgÜ|ïÍæ^Ul‰‡à;e½@ƒNl—VV©ÄUxí!…ÀuêHÔˆ.gÑtýøâ‹%˜ÕŒqIwcZ‰è_LëU¨ã È”„_Å8ŒnIÓ[ô¦a+ßÝcb‹E©µŸ‰J¨åIý˜`„µÇVp[Ê߃5~x„"Êåd€?¡ÔHŠbÊi¤Zâ¤i&W)þ©`²Vß[cæ©ÐˆfF—¤¢'ziSò9¦•‰ipeâÉ%£HîØäƒÝµ Ÿ`>¦é£—`QRžV¥Õ'ƒ61«”s¾·Ö‘]i:d§KÊ¤Šæ°x'ˆ$ö ë¨bÿ=J&°nêxfs-î älǪª"¤¦f[▆Πå`™«m[e†:`[¯j¹&`F¥^ª‡F7£³Ú&F$Œ[¡[ _ý é¾ G­„™¶¢O%­³€Jð§Ö%ä5ìï—TL¼·ó±¼áZ\±6ºÅq_bLñÉP¥3—+<ܸéS³þóºÿ¾úà+qúõÿùÓ¿?aôVÿ;Gx?ùT €|M5Ñ“²¯¾`ÇÚf@ ò‚Yþ4h *pAÊ(¡@¸A 2L…(¬Ÿ©ÓÂòŽ…“¡ FCóÜp‡þË!N|ÈÃʽðjA,"þ€7#*qHÏŸ8’&zŠT41XÅ,ÿ‚ì2Zì"v<ãÅ0.çŠó£M›3ªqtªY£“Ô›7Êb¶šã‡˜;Þ‘ŒnÓãñ˜D?þ‘o¤éDCš‘STä" ‰;Gf‘C¶QŽ$ Ã~ñ’b¤d;ÉÉ.n§_銚˜$#F†òeÙ!‘{.ŽÉÉ ’«ŒÇ(…‘dQ(+"Se-}–WÍRLâ&P:õË"òæ@ë÷ŠäM%Óˆ¾ìã4«øÂ¤ÁñšØä#-¹i92VœO§éȹDsFAT§ôØÙÎ+~žÝpç<éY>Aᓊê¼ç>á×DþÓ~lè@‰aNƒ-[h:¨P‡VÐÿ‡•¨ÑPQ‹î&‡Õ(.|ÙQ††!i-TYR“Æ‚‘)U),XêÒxN¢¥1]"iZSUÒ9hÌzºC<ò¨Qo8Ä¡fÆ„LÅ)jèÔœq‡›ÙŒªQ;DÕûYõª„Ê*Ò¶ú0¬^S„` kWÇ ¾²"ÍcxËåö,£Ö¦q!®sÕá­Z·:Pþ’ƒtíË9Ƕ¬‰¼³‘Ó„MAúê×UDgkÉèKØd6moÕÞ`ÿAÈI“T­ê[ Ú‡VòEgÓ3›lb´YàSOM¤dbYÚÈ 5µs«éð:-Ö~Ò«“jelÍHÑJå¾ôZËMüÖŠˆÿM,†>Ë;p²Jhìœù¦&Téîéhý³®OÄZبÉ/ºJÉTC»Mílí%¤¬ÇyûΠa •¥å"x_ ]ÿ±O:ß=+|Y'°ÅˆH±uñtëÞûîuaúV—‚G.zås ö/~§7[¾ ÏS€•Û{/üÌÑN]Xt l?¼`éeØqÐÊl‰7JÚ¿JL»t¬T„wZК¸k…´0qhÄ` «Œ„8Ñ·n´Ü¥ª˜s­%ñù÷÷X¶r9&4[ÉT=®‘ÛmÐõr;”W:5DÊ:æ6<¿¦ªyÍrŽ#;¼eåµZÐ[q‹éekú`»õ.DЇ¯ÿÌRÀé©Þÿá¦lÃJ°õÍo!딯TN…xùZ™p ¬I£+bA»í¥CõŸ^¡’Ðò òšÅ,GÛ+åÊ}·O9zJÆÓ+ U®-; 
PǵҵÌå¡wùæÀå3›ºuk é:º†Öñ®çb)שvÂxûlXL©Kѵ6YŠªö®Z¬§ÙòœÅ+ÙëŒ|©RÞXy刯„ó2ªÎ ™ ‚C–åßQåTæv–tlÙÙ [ÈB³œ¥5nâN.R¹ž°«R»Âôª×gñþ7ÀîUª0ƒ™—íq¼÷«]ÅŽ—ÁUl]ªý*u-Ö/¶…—äþ·MA“o³b4Ë™¥‚šÞ ×¥q!ÿÞhD‹†¬]öª™íã²}hiœ~Q†â䆸Ýd¹çÒÉŠ$cçêÖˆú*i¡u¥sm^¥1ž%ÇXÆ[×&e)³vÞ2W›¢­‚¥Nv»'ݹ@ž7¡¡M/8݉ÝLÜ¥zZ;ëÖ±Ê º¼±ŸºìyopzÿåJ¡CòZüxí°^JC–U춂꽟~ZÓÌ!5j9_ù—Q¾ÔÝÎôœ}õw¿eØ’ããåÊâ2½ózïð/Ç¥k/ô± ŸÿmÍ Fõ¼T™F½jˆ:Ïó_~®aîzpQ̷ᶯ’ÏKoïUO_àÜZ”3©Í^^Á>öŸ»o»íÿt×ÝèµglÏ­Ì&­ær÷lýG:/W/ÓqOÃw:q<¶sÀ&b¡r÷NwfÄ}’1Þ5`£~ý8tMAxÖ³á“vE'#Ó~¸Re³fH|uÁVtÿ‚ç§+ÉÇvÄ'{©1H¦/"{ƒK8ƒÝ“‚d^ ¸g=Hð†]ð@w™ta hZw÷uiõƒ{gv\ˆsZø…k†^ÈIÙT† …>耳'IhXfxIp؆bx†P‡j(wuU‡rø†x؇zˆ^d(…~èHÙUˆ‚ø_fe6l– kˆ…GEAˆ(‰-´S•h‰´„˜¨šÈ‰ t‰Oø‰Cÿ§BÅ1Š›qSxŠp—:­w`ÑÇL«è€»ópÉæ~<(f@G‹Øu°dLèxÂÕ‹¬d8ž¢p®sd”Æ_­ÿ†#Ø’ƒŠ§+¢¨`ÙŠùw¢6]—6\­'·1©Mš=‰«#@¨R–¨Ù×`ÀX›ó‚©± giŠª`Z˜âç}Eˆc€É9,ˆª`g¸*ÜycÊ ™[¤g•É#ÛI¬›ã Zf–=J®£µ­7uyF­}ÙKW6«Ó 8—‡yFèZé:qÝÊFíª¿:LŠF}ç&ØæœHНFW¥hjJx™Ö©ã6°;›úz°ïF±g¶œTc[vQó¬Ø(ÌY±3Ä®Hs³Š²){¤<|åf£‚G«#{Œ%›B2;³뤶ÚCÒj³¥&ZÛˆe†JŽ“%´{v ÚÚ³ûÚ¦¨ÿ¦³›´5´1E[¨«!µ§hX\º¥OÊ¢º…G¤µ1ƒµ^û´,ª^M[¶fÛAcKB*Jd:ȵI(1àUSv[x+Rzk_Þ™´| F§¶€ ^|¦¶àª‘èmOK¸z8—hj³+%™&~qÛMºÑ§Y'¹)æ¢÷˜‹»­›K2ø*ºÒ`¹Auºq€´Yº°ºýغŸ»{ôPO9»F“º.”»È2¹ê»àZd0¸»çE5gSÁF¯[¡KC‘‡dXÉû¼n$™:¥¼Ô;d/&.©G£J¥Ðp½s4¯öG ÅE‡¶B’Ïi^ùµ';pÔg«æk´ÿ˜¾1Š–!»oÚ›Òd—áGakŒè«¥–i}Ûœ?*†ËNH¿P+žu¥UÒ¯ÊYë†/£—1”ù½U÷À4eÄ ƒ¯i~C˜<üé‚·#lÚûT 5—•êxßyžLk+ÜÁ(óÁ¯à;lù{ fj]66„·›†äJ¶#Ľ<€&€:H¡zÄ;Ì Ä¾ì„ÅÓ»›Ò˽û´Å_ÜÅÖËÅ¿éÅU|›gœ¸iLÆblÆmŒÆ`¬ÆÖiÅ6õ»w<že|¸ÞƒÄ{\·~œ>} ÈJ6ÈÔ#È…œuŒÈ»Èë¨È|D,D,ÉGpÈ•<·˜\6—¬Éó×ÉþÿøÉôHÉ¡\¿¤£lÊp™Ê‹Ê«l˜®|œ Ë@3Ë YËY|ˆ×ʹŒ³¼LǾÇÀ¬ËÂüËÄÜÂÆìÁÈœÌÊ<À̼«Îœ¢Ð|ªÒì‹ÔÌ©Ö\Í7ƒrÁ¥|¨çz¸£1†#ø¶ªææ»•éì.±8s÷{¹plf‰/ˆ&höBœ‹º'˜›•YüEÑ÷X‰ åw¼Üç½l'i§nDè±ÊòÏ ­Â‹ r‰ê-ø ѽË6é¢p‚ga)Ñì±÷ü—†ò€JKsÛrqü¹Cø®Ø«S i˜ÁŠe}ë‚™ ë.lÒ¨5u˜Ó x-M»¦MyE¢*w§˜‰˜ 8™LÎôjÓÏú(ó¼Ò.+¸­Íñ, ½jcIÝ9&JÑæ:›P¼#å®a]cEÈr{ØÑœØ¸ØŒÝØXGØÌlØm¼”]O‘­Ì“mټپÙÙúÙ Ú††Ç¥mÚ§Ú©­Ú«ÍÚP;tendra-doc-4.1.2.orig/doc/images/guide2.gif100644 1750 1750 12132 6466641220 17777 0ustar brooniebroonieGIF89añ²€ÿÿÿ!ù,ñ²ÿ„©Ëí£œ´Ú‹³Þ¼û†âh扦êʶî ÇòL×öçúÎ÷þ …¥¡ñˆL*—Ì¦ó ÆŠÒªõŠÍj·Ü.‹ê ‹Çä²ùl£×ì¶û WªãôºýŽ?Ïóü¾ÿȳHXhxx7ˆ¸ÈØèx¥ø(9IY™i™©¹Y‰Éù êç)Zjzª€ºÊÚ:Fê+; K{‹›ûb«Ûë{ËûKCr!lì| £ŠÅ¬üœæ ½#]U=M}]³ åÍ.“,nŽ\ž>|®¾ÂÎôÞ®N^/'ÿe¯ºâo.À5€ù¦Ñ ‡àÀ :4G°Þ€3öSñ¡±ƒÿOLœ±H"zʨñGnÎL–¦Ì‹0í¡LÙke¶–ÌFÒìI1(Éw7qæÒ‰gM¥1{^d¸t—QyH zTÒ)È­P‰NmWšW¡µÖeÝ*µ"®1Û~LТè%¹ÀèZÝ•pˆ?iN=¦'S(ÓÀ@kƵÆÏ®Ø,{ÿ‘ý;£¾†3_%žÊ—ñ°`JrÕšâsèÉ’ Ý/ïUÏ!I„{ú±¸°9;ÞìnÍÑ9Wëk²2j×·eG5í·¸ßàŽ_ OÝ”¬bY´v‰[òïÝÈ—e¶íêLð»›&OûkèÍ“‹ ï¾­ütÕiÁÎjü‹~wÊûÿvßRSе{m`rñ©1àvç¥÷Ó€ÄùÇÒtöÝÔ[<±Öš†‘õÖZxÀ¥öQqb¤aœ]õÚpÌ­æVhžDaÅ›ˆ/—^€8 {Ùý–…®0Pq>=']ˆyGIŠ?Æed+õ±‚d’] &^—ë-óÕ{$­þËÊsÞ~7ñ‹œýùÑX»û0ð+^ÿds&¥™Ì{ÛÕ²;MdF~Œ"’8…ýJ‚9òŽæÀ< fðä»ÇëD%du(ä_¸ðÔAœùNhË ¯6¸²ÆF/½Ò`8h<›l†Ò«` ˜ˆÌï7OâO›|¸Àf%B¤°…ʆ#@ãÿpì†ÍÃ"ÕŒH/(–u Ä¡ÿ^™Q0 ÒMÒœ$ ÁÅ3o øÒG"Õ¨‡iÍòáÉxFÑ4Ë z:È» ½HDM²YÓ ¾Œ]‘g.Ê##Ö!­Ë‹ˆ@#œŽUf‘TR¤Ë(4$&ˆ‰¨á$YÊF–Çgo ÚaHˆ?Ð\((w£IŽÕЗYôrF´¢÷$óK[ò˜%‰HTU³LšÔ}°¡6½éŠà5°š0† Ì ¯øCd¤h‹2ŽZ)TÆè¨‹jãœ8ÌüŒAn)jÕhh"ZΩˆ:Z¾`Gº–~£45¦Q:?kLp>È,'"µÚ…©æ´llS#÷îç!4}rÕÏX0z8êÐ íøF½žµm†ó«oˆ>¶4&±R„*ºÅÒšõ\å9áùHÁLÖøR+,›<:Õ0Zãjg©·¤Üpö®ÿ¬¦*oÖ¡jS¢ñrmÞ ºÌþo0ƒÿ%lÝʃYtR†·—¼$0Ÿbݨ4s®A]mgG{Ì Újº—=c=JÞ¸±ˆeäY2 _ø& s$;#ëÝY¸V¹ª-]’ÊâÜS2Ô‘lœ*ÚŸ>wÀ—ý¦qÿKÝ0Î%¿H€@éԄǭňzøY“o£º—úTÁ>èfŠভDÊgYW,3/Õ‡…#&ñ\flUÒÌ‘<š[µ Ç×Ïz"®qVÕ©»0fó­®{uëäùöøÈDö—‘Ë¿s=Ø[ƒM„j¹ÃŸv‰šJÚ0‹\ãã±KÉ}bp¦®Dã+[±µ{ù`ø¶Œä*enÏW¨óJD6÷YW'‘áÿf:è¾õ“ ˆN4u¾é†”:Úyi¤[£'7ñ.ª ŽMd ™`L·Ë³L9¤}íŠê§PÔ-û1zÄN’w›¬×±¼ÞòÇnÆi­míjc^‡Ž²æÞf-/R›wÓQä5^¡ŒìQ§q¬ª>§‚Ô›Ô‡FØä|gØTaîˆM’Þ6>\6s'ÛÁŒ.·º¯õ=¾¾{Ý#lÛ¼é]@@ßÛ^jÖ‚ ÷M«dà9xÀoòïÑõÛà®BJÁmˆðK3<ŸÆÚ'•,<‰O|¥ðè¥Rݽq2h| ¹¾+5r“·› W9³vá’»\©G;ÖÃg6CKªå8'ѺÿuóžGjï ºÐcÒzýèQ Ûš¡š™3ÝÏÇm”!§=u”¯ü.eë¥>uò}¯ºØgÛ+_¤)‰Øöüøžßž¹‚•ÒKÙÙOæV¥à¨gþ®•äné£Ò°ÓzòÒhv¦” v­o5Jc#º¢fºÝÑYJÀL5~_|5 ߺÅdö²ŒLòäaØ@ÝdZ7&惀â‘n¨=7a£³¿Ç¡t¿tí 
ržqZ^úbýmh›Ùšw&"•éiáóÛqÃã9€×Þi,¯iË„*¯kÊí°Êñh†—–ã3JV†×'m¬‰AûÐâüþŠŸõ¹ÎÌÿ\ÛXþ¡Ïß_VþÇ\Zó3ºáU\æ:i³uåWRI†x¸Ö1–Jß|l”\ªÖwž¢%¢ECæBû§mXµ3UP÷÷#|fXÁ„‘JÈÔ‚‘w31¦CÖùvRfák#hƒòÆZ)…‚<˜IDL•4#ÝÃ}_çTŸ1làd1±uGq„bAXo¬$Ø6'uÕÆ_Ò´ %bMÿ—±4O&‚ZN6Kåƒ%Ç~\„%‹FXôfå¤~Áu†çã[²TDï´‡Ïö–r u{è6EE8NH âÖ‡Øe†¯4j(Vóôa.Xƒº“ƒHˆ7{ÍõI¬—]þ”ˆþÿ3#åL•¨‚ú×F:R‰m§¹v&æ‰3Q <[(‹µ9!Hh·è†¦‹0DTÅr(jÂ8|$… BÉHQ̘ƒ…x¹´Œ³èA‚äX¬'eŠèz4UG=ÈÆÆ]¦@-…THa2 Ö‚òg‡áUf´EƒÇ‘jÇÈŽÛcj„GmÁ_árPÐ)¦?±Çda¥χEKØŒ¯È;°ô;Ž–Á†Ô¨_ø¸dQÛ[“·‘§h‰é}¶UK%lº6NÎFPÖeeÇ„Kxz‚Ä5ûÍ4hayæaweW^ûÇT 44ÅD OvdÁÄ„Åu”¨Y*æJiX“Hÿˆ29“pUfEY3ßz¸†wô 8x(Y;¯tuã((9CØ'IlTõ%\Àô”¼6HOâæE \c™cew>2•¶Åu»¶•Ö|æޱ4ṅŠQ¶”B‰‰×ò–O|¿néã4‘·vž&Lî†CqŠémc©wâNuEZ0HH§¦’+(Š%&FGšs0šˆ\Ò¸yËøš'8›ƒ2/D&­éŒ•É·B¹©›´x›ÒFB¸Yœè›·‹ÉiV;(A嶜҇œŒv€=Aȸ›þE½FBî6Ë㜠×cbbKIC×t͇)ömJ9šúœëi@2žâÿ©{·™x™+$sÉ…ä©zŒ¥‰=YRQmg!˜W˜ÙL-âŽþ©uöiˆ©{9W¢ŠvõYÊc$Iiv {(3ŸWH„@dˆu–¤gíAT iGiŠ‚¢5j™PF•Úç{!¹cungM†RQž«F„˜¤k=kf²CJØŸïلɵ{ÃØ ,Ç‘‰yh8—véiJÖÆ¥&œ'j@ù3“¹¥ÐW—_z’b*¤ÞR¤UX¥ï5z§Ùe7j‰•i]\ò‚“(cJidÚ—âó›™[èA¿4ˆ‡Ú†j>íÙ›ÅX§’:}ìÙ+¼y©˜Ê-šŠo•s…pÿ'ªÄ ¨šup#›§úrã¨ªŠƒ‡èªS±z!>G«²ê›f¯çŠ·º*©Â§|Wž¾ªh/´J‹ø‘TG¬ R]Ê|¢GeËʬÈò‚ N (­ÓÚIÑŠˆÙªž¹*+Êê­&Jãú­åj®¿ª_­z«sJpì«îJpéŠtÕH¯Š«÷Š £¯äʯýªO|°*”K°k°ç&° K{˰¯ø°æ‡6Ëqk±ˆú*+S›;qòÚquȱU*èX²·u²(›²&5ÆØ²©ð²…³a ²³³5koÏ€q:»³ëo ë>,+´©37¯Dÿ{9LûnJë9NknP‹{G+TkµõšµWƒµ[‹¯^›3]Û'£*`+‡ÚiefËœ¦š¶j „:£nû¶‘š€ E™æyR+7bkÁ¸>ê{ý¨¶|û_ãW·ÞV³Æ?æø™¯·F‹“BXa!Fk”Êž/ Zˆ+Jv)PyL¦5J½¡gw(£ið®Ñ¨®È(ºš~£[¸RŠ—Õ¶G•¤—èîŽ5%’ÿA¸ª9ESv" æ~Iõ§ ˆûøW‘« ŸÆ’SØ„¨5[ ´£9[JؤRÈ«!Ú¸¿Ë­ÅÓbnÚ•w˜{Ú¥qjDÂ6ŽÊ†hù‡³I¥53Šÿò¬igK½g¿dõy–+Œe™ÃbÆ&fp¸:¦Eœ{mäшÏ[•Y†*梿Púé ¦%= Ú|Š9¨œêDslY5¥Án}(Nj¨2Ï$¿yO¥5¼•ħ'ŠüÈÁ_˶W'‘Él{~£XC_ÖŠ¨Y¿äx_ª¸:)G¢˜¹iv0ì½Þ›»9|ej¡%Å œVˆÀ±’p´´cC¤‰ ·s[©è¶Pœ¼Ï9Ƹ6*b¼Æll£c(øIÇõÇ/°*Çšw¸UÜSÆZŒz#ëÆƸ+r_E¼§G’ÜYçØ=Lå’Û Ã ·ÈMzÇüx åº^’•l ˜¢ÿ‰™“–S6¿ª"“tÉh›Ç¨K2zIÃÇÚˆi3lE¿'SÊ/lšO:/ª\¹¶¬„³v"Mc¶†Î„¼ ü"«¾cXy”g¦ô¨·m‰ÉK[±ûb©¸óZ­v” ¢>ÉŒ šKë˜4ÀW ´‡Âɇ[µu!…Îø¸~GÈl–Þ™H¨ÀAj‘byµÆG½‹“½Qî¼­¼Ìö«jÉøxº^[Èy©x0FÔu¼”Ï1Ñpɦ’GмaX¢Y»Ñ¬ÉÜ\Í‹Gzp*Ð[[Ò·Ä&ôüÆ/Ýfýdso Çp‹08-®™ È3mÏf<¸Aݶ@Ç™ŒÓ4ÿ]ÔkÑdK<ý½P­,…,ÕZÕ›zÕÊÍYýÏ\­Õ^Ím`-T-ÖµWÖÔBÖg]‚[­ÖÍÀÖm½sp],i-×"÷Öuý w×4·×¶Ø×~ý× B×QzMØf؇´Ší&ƒÍØqýر“Ø‘-Í”-<–&ŽÙ|½Ùã3Ù{ Ê¢]Ϥ=Ú¦½Ë¨Úª­R¬=R®ýÚ°-&²ÝÚ´m0¶5¸ý5º½0¼½)¾ÝÛÀÎÂM8Ä8ÆM9ÈÜʱ̽ªÎ-aЪÒÅÔM›Ö]ÝØ½®Ú 8ܯÞsà]«âýhä]Þæ}$ŸmÜšÞìmÞîMÞð][hÿ·wr"~šû†Ü΋ŗ\v‡I {lwy#ß³¡]Õ£á’É| ᜠ]XJñ¤ð´À ¤Þ¯3ÙØSãì¥1ødn¤àŠÄ¿Ê†–îÎ}{›áíW–°å£©ùá Þà¾ß±É£~8€¤ˆ­\»âLdgΣõɨ9 üã¼¼ïUľFÐÆ Æé9zܧä‰â2P…ÇiЙ·>Žv:nÈ‹³±Æ,[¦CxZ~ÑíY˸Úw•—9ƒ{SàÝæŽ‡&Î÷|ZÓ›§{~”$‹ç;i–_‡}‘9àr~áÂ=çÜèÚ½èØÝèÖýèÔéÒ=éÐ]éÎ}éÌéʽéÈ1Ýéë}èÀýéÄ=êˆê¾]ê¢~ê¼ê¨¾êºÝê¬îÔ³Nëµn뷎빮뻮;tendra-doc-4.1.2.orig/doc/images/guide3.gif100644 1750 1750 16541 6466641221 20011 0ustar brooniebroonieGIF89a’¡ÿÿÿ³³³!ù,’ÿ„©Ëí£œ´Ú‹³Þ¼û†âH*Á‰¦êʶî ÇòL×öçúÎ÷þ Æ¢1g8*—ÌæàŒJ§T±ŠµB³Ü®WvýŠÇä[¸ŒF%Óì¶ðìŽË©ðywmÏëYõ½ÿo†H·5hØÖw¨¨˜¸h$è™Õ(Y)Giٙɩ„Ù êõj³Izº3ŠºÊ¤Êúbú* ã:kËS{«V¨Û«’ëL+¼B\ü Œ¼|¢|{Ì|ê-<= M j­»Ì˽ê.;.NNjžŽº.ÎÞé¯lOψŸß½ìÏïϼ€•Ö#( Âsð²)\8Ç ÄC9=œè¦"Fÿmu܈F#H=" 6¹§$ÊKË\iG%LD-¿Í$ófÈš¬xê$ôò§EŸí‚ Ýiô¨¤œ‹ˆ*måôiƨ¡¨J-Âô*–¬“jåÂõ«”°8½Š­Bö,T³ÿت’ö푸~¬ÊÕA÷n¼$ •øû¡XNÀ„Ùñ̓ÇíRÁvùRì±q¦ÄÏh˜h–8ɚ˛MAôGè/£{%<¢”ßÊ3 lyízu^²g«iQºµä_ Ÿ®«úö.)h/~ÜÄe:¥ ×Ìû×fÜN÷N·.7ó”ŸÛþÜ/lð®›}ߎŽ îêËß>æ kÖqb½çwÊàæ½Kÿ/ŽY€äE§ÜѶ fàg^¯5èž{©ì }{÷„(ú‘Ćà5XagÔ96‰1 Øßl®—âw@§¢fÆáBá‡Ò±`rò‰cz;²x¡‡ú(ä%³ ŠÃ9‡â‰EÂÅãÇ^‚!6¹ÚGõ™ˆ¢°M×bm‚Ù¢h22˜€’V~©`lm:—™›'Îבè$B¤eZ&æ‡iBheŠ åÙœn7Š' dW§‚¶•¶%“a&ãƒeÚèᢣYjé£rò¦i€+æèbÐe)a2y'¡`ìiè¥Ö6å¬˜Ê e55ÊKª.º k°^Þ‰§¦Ò™©„”ÿÁøèã={,ªZ!i©jNà˜f–àŽ‰"Êm‰ººê’ƒ^[雸¶g¶¢ ª&·¤Žª$³›ÀÈi„MÆ»!º±ýÛ¨“ŽÞX)›·ÂÒgOŒ‘;â&âÈG—ê®1¦,ªx7߯O{lÁÖjÜìÀÔêiÆ#ãì›2Ö+ì0K³° ` ‰‡¹'¢G7àÌf`‹®'ý²("³ëÚˆ,¾1#,tUšüÜ0ÕV?ÐÍZÊ¥´áÖ«ñzÀ¾ürÔº1ÌrÙ]6ÙúFì/ØrW †ÄíL ´Î Vµ[Zÿ­§…ä¬ '›m—½f²n‹Œk¹b"û-Žiþ©œÏÿ\ë™^x̹`eýwèC\7x¿1É6­Ê6%ëª'J^– æ µŽßÛ.3"¾Bè¾ÿ|ðÂ_.éáΡ¶e’œw­?lu§é]©7-0S)*€:ßí o½àÇ›ê=ßÛóÏ‹šì'¯žñ]»ê¼ã+Ê»šËï‚¶Iv§¼L½‡R‚Š—EÀ_!XÕ‹ß·®ç µ,iXùÞ»ðS4 ôo)ä“™ÙÞ`¾  ^´óR—²…¤.=É,¢ 
¿t4¶Yå«pÆŸ†‹Ùap|lÙZaa+¢€[ÝÕLÈ3ØQ-8b~nøA¼´æ‚9„ßèJ”ÿè/ŠCO¬’EB•u«Vb|Ü'è'Ï}Š6‹ãf°iŽy…vÎå®Q‹Â£ÇØs¦¹„°5Ë•Îȶ3c­lM-+ÞŽøª o’}4%/‰³‡QJv%äYÓ6>IŽ+¢"ÍFG+lÄAÜUõðq‹~ÔžA$>qÑ¢d‰(á“.D^JšQ6u1§ 3^ËË^ià;,6ÒZ)+Uòh7šËÔØ’µ8öÕ/)¡¡ßt„i;b)m,þðaô¼9ÆÈu„ŸôH!«S·4”3dïq¦å+¸t0ØÜeAîÙrörŠH€áÜäÉ8¢°ÿqúß~êy „¶Âx¹tò3H½Qî}£TåٲЩü’£ZHé>z:ŒòíyâC …R¾”¥Ô€i¡²¹*]R4nÌôQ,¥œnÓñEsFÆG„toá¡èH™÷ʤB§žØ_Ú*Š —rä©TTáóiÕk¡ÉùBg"é‹H…¯xš&$£Óö¼ˆ_?Â×¾Ÿµb1žìZdx¹T|Þ4}š°ë`™ñQé­sdŸ’áC Ú¸×5qqô¢ßÊ:"V}îj…Ég Uõ€®%´)9Z7Õ»ZM¦’’–Yã‰Zݬ”RLlBYÆÌVº®¨ÂôÑjªA89Ì–œÿ­)d0[R©ùv£p£,±w¿€âƒµ}!+)ÓECð~W`‘jtµÁUOªD®,£ÊZµXnøt‰¤™ª8Ò+Õˆ3Ëzàü¶‹Ü ¾q´&Eëyåß³ÚOŸ>YòÌË]°$˜ž\"hƒáSƒ2®£poKçÝòÑVaÓÝ©øä;ÐŒØ÷n%îꉒbD¬X›¡ä&ÿ.\Åvý jJ€ã­^TÃöl±^øc6@×Ã[™ð‘“dñÇT¦€“Ÿ‘ÓTÙÂò$ìå 29&c¾ËaÂl¦2³Í_.³šùqfƒ²“Í[3Ré<;_Q´x~f}ûœç7¿ù¶ F3ÿ_8õsŠís^h=Ïu'Æ3] iAÓ¤ÒtŽ ¦3-e>ÿÐiù4¨C²éQ;ZÒ¦>u­6@ã’©²>[¨)­jKÛ¹Õ©È㸭½þP œfsVxÝëÆlRz·ÒcµöEd÷˜ÀÒvuCÄl^}Ñ{Å6t‡k½JA¢0ÛEš—pûª.Ý7×ÞUµ“Ͱ3vGšs[ØüyË SwÕÑwEû6ïf7PNÓŒ5gMh ëú†ïFˆA4+ð#¼gìžccMêÝ4ÜáT ׋œM؉ÛûK$ hÂ;ÚîþmœãšvñŸ­Þ…»ÜßÿFµÑrq~Çœæá¶¹¨÷ÿ-óBó4·~u¿3Ž¿¡·´èdÈyГ«ôƶüç&Ö9Ô£N£©ãè)¿:ÖkþêTsÝØ>YùHýr´Ýê9þzÖ™>§w=rvûWã®Õ¦¯ýéÊ3ûJÎÌ£w;ÌÀ¸œÝ¥ntòægð^.¼âOt½?þËŒ²ã ù†ºó¤Ù»¥AÃùÌhó“¯så±xÑ÷\ò‚ö<ä—£zÍ#¥õ§W:ìcŸæÙSÝ{·ÿzé!bS2«ù÷j¿yè±ü{â;ó©§ íwÏ{´sØðy_p<šzå'TÒPƒ>´µ“ÞZúYÞࣹï8%«üøJqùE»ÐR£¿.ëÿÙíò~ÿ1Èߥùßú\ú? 7ð'yØ2€}±~G½€Õq€Ð³}¦Æ~•€OA ødV{ìT}˜ãcxø€^”aõ0 öGwPÑ~agH€b–Æ—Z1‚GÅK+Wƒˆws×öœ±ZˆØC¹ÃBBøXǤ‚]áQ ¨e:¨A¸xÍa/—v68pëvgX³„Áj<Z…„ð.÷@Uh…¥omd4MÔe”G#‘ÇyB}‡roÞ2âg^fâ0.ô†YÈWsTGø§€ËÅ9n¸‡QX,ÔfÄFTØF\Œ%‰)¸ˆ¬Br _Ù5-•_¡`w4|²mÿ‡i$%‘x…¥(Š–èL|¸gU5?Ôƒr“!{ÆuXi¦9jõ#ºeSaõöƒZ‚G§øˆ%” Ø‡QUVjø,ðt*&×v>؃%u’Bm"69‚‚r#‹o3zü¤(ÉDBbc`ŒhEx‡‰Ž‘÷¶Bšôa`w9Þ PÜÈY†³VYÜ,€¤‹ “°wZQÒLØó‹EA`"Yú¶ Ñ9ŸvYæ1Õ¥Fu‡†2Ö B*Š3=¬b}㔑äx†ëÕT^—‘Íþ ăX(;D"tˆ8iCöb*Ô‹ëhT ’'DC^qNQ75'Ú˜@hr*UŽ0gaÁÿá5>SFEÉ/!yW"ÆSE#0#9_³‘'T1¥Åt#dˆ1ÙOШ†T•y_7F.Q•9›È.\)–“ãgþØ>“H–€b†ÐU‘Þ7eQ€™h?±¨VAC%4Å•S‰[L¨xà2#¸å•ôxrÃæ–Ë3„…Å(¤‹“™,v“mö …U­C&‰¦ lâ•[f˜uÙT‰81cÓHËáJüh@†9šu„4yPsxï†Rð%”ɘɉ;±S@Té|5 @úµe–“1©ÕpEI‡ÓÌÙÖG’Á×ZÙ–xGÉKX}§Ò4„Õb9PIhEÙ’ª„fÿ7f p–Òõv64]ù]‘3'ÒIr )›Ãb‡(Ÿ…I1–UŸ|ržøÉzÀHbNøF€¨GóUÆ©¡€ˆ–ÂB† éq–cDexU×ù‹Ö”ú§’ÚŸm‡D³–m=‰]¯˜gÕ%N¶…”z¥“©é2]ñ›ìÉ–¶8~lT æ@7‰©™5Å”¦’Y‚\øxÉ™xxŸú¥@›S7¹™é¦(ëHRëÆJ¨y£r·zÑH_&YoPÙ.ú¡ÀÙtKˆ„8º:®cWð•›1Ú}b--¤—5z„-”9ä&’Ug¡«£dT›H¨–y™iz-˜†ÕØ5X¢Èy†u¹l§ ?ÇÔ òÿi¦2†OøTš—š—ç,ÙjЍ ö¦úMó©]KÙO(lW^°ä“êr¤cv•aä-_ó”±´’?8ÌÈ@|<+ Ƀ,É8LɱG—ŒÉÿ+¿kÈ€lÉžlŸ‰Ê¢kÁXÛɦ|ʨü€î¦ÃOäÊŠ ʱŒ¤×׸µüÊ*³¹,Ç»ÌËû Ë¿¼ÊŽìa'ÌÀ- ¾=xÌŒažÃL̨lÄ$¼¶–+Ím¤{[ŒÇ‹ÍÙ<¿ÔÌÍ YHΦ~1\ÅÞü¶çì¹é\ÍZÁÌfìm¬»yˬ~Q¹ô,†ûüœîÎb›š(š±™ÏиHаc½ o­½¼É–$ëš‹ |üÜx3[žƒŒÑà )–t´nêPXiº>ªÏ¥ ѵʛñ¦’¹«ØèÒ¶ÙÐK³\¦¹ª˜Y°óÄÂ+-Ï]UI”®‡DF]£@ÿYDü*9 ²ˆ©¢£·Ð¦-Í>e—ZJ«… ÕÚf`KêÓMm™Ü:Sä<ѶÜÒÛ0n£WêHŒ|E—B ôÔ£D›wº>vÖ½lÕW=[WG‚>bl~Z®{ù–:ÝÕ_úˆL¤×Ö­ÖÎõ4YMØÞ rQ-×ô…˜Ý:“K-@³è×QýÌÚÜ×£ý’SCšÕ­çÓEÕ™©sƒ±Í²&]†%Úy½·UmÕ“=ˆ4ÝÙÞÙòÒŒq TF2‹#äXЙڰн-Ô][WùŽ¿šz»¸$ºEqéÜÝ×¾-ݱfÝ"—J`ƒo³Ü¤©§ Þ{}ØÜkÿ]\Lc0œ™êÁÛiÝð…òíu˜-rÊâ,)/6XNmyÑ}¢Ú˜ oÉÓOúrH­c^"Ó(XY9Àªnç8¾§Ú^¥˜$zÛš†,yÓmáÆÞ-¾ÕüÒC­Ø|9Ž'I°û¯á­”¢¹’û-Ùý=â>Ù+BŽ1“âžË‹ÞdegÖ*þ ®¿"Þ°BnÒêDf9áÇíŽ>]Ý=Þ|ýÝ@Þ¦ ÚØêŠ4^œN"`-XtØäÇEÞËøä þÞ ¤¡ ÉO‚C8žÓ‘ú×s]!6èbÝ1î•:=¢¿ÝèªÝÝlnRN2¦+æÌ-јžà@¸ýÿíäÿ èê˜ÙBàG¼hZè–[ŽžÞè0æñêÇ)§½j隨Læ]M•5CâYÑ!Ðïýé8Zœ?)ªpžÛï“Áõ“Ë.ÛŽÎP.ì±ê:ÞÜ!ƒNÖdÞÍ5L'èï,ížNíÄNÝ™>•‚HJq"ÛèfoßþÜ0ë6Ù}¿®MžŽ$Xçî-îò.ëÖîUªèD‘ÐÓÎïÕîߤ^ïŠ~ð9Þðæ>ìnçû^áýþðtܘy@ÿ2”˜C˜*;ñ ñ9=Õ ÛŒª>}ÞÚlóñ¿ð¯Yò0¯ìlæ\M™íØêµ.­cÎéÎŒWèšo3Žó ŠâŸüêAÿØS@Ÿ2ŒÂôCßô…îã¯à˜ôîšØd*¯;¯ó/þã=ïˆå=©DD·•Ù˽éPåÈ\õÇÉ®áE}w”Юð<È`/ÒˆïHÿóCjôy¹vŸðàòuïÈ/û³ƒ¯÷€ïïíM÷‰_ø|ø/ïø, ùìS¿}køïÞõ„OùwŸïÿ÷úÓ,ù£¿¶1oò‘/ú ÉÊ,Ř/Êt ûûìõÄzɵŸ¹²¯Ê¹Ïû(üû1øúÁû·ð‚¬ûAmüŠšÈÉßéËï³ÈOü¶ïù{ìü¸ ý÷ÞüÓ¿ûÙïŠÃïýñËýâ þh0MÒç[þá~´Œ*Àê¿ýáÿÀ`Ö“õ3ÎJ|ýHÿù-GñLûãOñ1u¹ýa”“Kñ²àlà ÌHè*ÑT]ÙÖm±éTE¯y¶™Þ•ZŸé笵Œ¡Ý«òÍÛ0tçÿ T¸P AmxHõbï‰<-8$Ô·IÚ6~y¨Í›¸Ê¡Z þB™’eK+ß© èd]Àa³2†ó´ZTÔxá‹ÉSè ‡./uCº”i©HlRí“«ž$45aÁ‰‘”W]ovõZê—£M)E»véYš[L¹ñS–ÓGyp§à²†w§V½íˆºìã·`²=©˜1ÛÃ!+~ùŠZÊ—U&Ƽ9ådÎ/?‡æYtéÌ–M#F¼n ¹±CÊe}gMa»·móUH:uoHG§v ;Jmã­—Ý 
û”¤XŒ\ãÐBÅúb¿x­:–ÔX{!oßߢdeÕT6ìܦ›ÜÿêÛJM_ÿA?ø9õt ‰†z­µwp¨•èK¯Ùš N8Dð9å¦úãªØ°ƒP*~ŠhP3¹Òˆ6q¶¢¨§íš¹¨=Ô41F¤?”IÐ+ä0|(¦RE Òl¤S±o>\š« ¯¢šhŒÆ*qÁi:Êʹ €óé™g’‰¤«Å ª{`6¯¼'"‹:ÙÏž•Ó‡Àò¬dh²¥2§¨ÞK+ùØSo&¥ ¤/ù„Hµls‡Ì·Ì,nÌ4ÃTJ<¸~ª¨>qè{òÎ!Ÿ|ÐѡʴN£ƒÃò.‹–pð•YÔÁë‘4º¯ƒRUŽd3*¡AõäT3ÿÅoÓ;‡ËG^ƒÛQÔQûT0Èü0¥ÅÓݳëÓ|}ͦ$XôT1'lTØY†º²@_Ù·ís?'õC÷H--ã‘”¤ Ÿ ÷„GZZ¿ú‘º˜^()µÕEó 2h$J×-µ FÑ1_xý{Ì× Ù_w9’’߉ÙUG'{Z4,.ظºˆI¦©dÝôpô[s:§T"éüÐ5‡‚…˜fmÊ­•AŠ“ùß§ºÑ)¯f¬d9WယK@Á‹Ú…ÃÚã¦^akíùÞ ã•v½}΃Qí­†3Õp¥&íµ "qÄ“-uVä(žU9䮕 n~lÙir,ÎßûØÿV­ðÃ=ñÅgjÇ!ç8òÉѦœXËóÍ1¿óÏ4ÿ\tÆ=ý²ÐMO=íÒUo õÖaOõØi¯ÝvfoÏ]÷Ý[š÷ßžß…/ÞøãD^ù噺ùç¡ßøè©¯Þñé­Ï^û¨±ßÞûïAWüñÉç¬ûòÑO¿wñÕoß}–Î_þù)‰ŸþûñÁþüùïŸIöý@€4à}…@.4 tà;A N0y´ ÷wA Z/ƒôàó:øA"/„#4aðJxBê.…+t!íZøB¦.†3´!çjxCN.‡;ôaæ øC!š®‡C4"xŠxÿD%š&‰H¤µä67œðE7¡ Prü²$».:ÁÙ[º¦½$ÊicßÀe¡¿a,E0 âçf$ “ð*Ç‘cŒ¸$“V Na—ëb£PU08íj0~â裮wÉha\3Û唵Ü5ñ;üã¯Ò³*ô\òX?ñRvðžmeXª’Žä4¯O.2‹<;Û'´ã5FÞŽ’NåˆQÇ ’, äÛýˆ#KécìÊÛ&ù¤È,*MgZöhùFÑÅR˜CB™GIÍ^ýRW¢,F¥RÙÍ ®DÙªÛ½Ài'²McÒŒ\-+9¬:nŠB£‹6ÆÍty“TÇa'žRuAÿêLo¹K9 :/ª¨ž”iho¨¹±ia  d/9)¯ùS“XÓ‘F%êyIt•j“Ù.yÐR‚T‹°{¨ì õ5‹ñíL MÎLÏ9R*fgs1ä¶Â9§2Æ'B]¬˜’ªé®¿ô¨/u;—ˆÈ¨Æsªµ¨*U¯šUBU«]-S½VNqU¬e XÍšV_ U­m[Ý×3U®uí]íšW˜áU¯}ÍŒ_{V¾–°e€kaÛ«Ä.ö®Œu¬šûXÉ6p²•Æa- XÌf¶¯›ål^=ûÙº†V´q%miÛzZÔ¦Uµ«-kk]VØÆ¶«³¥mVm{ÛªæV2·Qåmo•ø[à±BÅ5îq‘›\å.—¹Íuîs¡]éN—ºÕµîu±›]ín—»ÝÅn;tendra-doc-4.1.2.orig/doc/images/home.gif100644 1750 1750 365 6466607532 17525 0ustar brooniebroonieGIF89aH¡ÿÿÿÌÌÌ!ù,HÆ„©Ëí¿‚œ´Ú‹³Þóð† "A扦¥À¶î ÇòLÏÉZçú®ßÀ ƒ>àðˆ„“ÌäÒ–(šÃ'ÍðÂR«¤_NÛoyVhÖ8&wÑf—XzpÃñpýºþ2Üt ‰…ö¦”Wó&†uè7…çáUÈgÇ7 ¸h¹S&cˆI©øÉ&9ø8:xFš:Y'xVª‡šªzyy›×8×—ÈÛêJj€“v³‰¼¬¼|Üì<MU”rý“ÍÝíý .>®P;tendra-doc-4.1.2.orig/doc/images/index.gif100644 1750 1750 366 6466607531 17704 0ustar brooniebroonieGIF89aH¡ÿÿÿÌÌÌ!ù,HÇ„©Ëí¿‚œ´Ú‹³Þóð† "A扦¥À¶î ÇòLÏÉZçú®ßÀ ƒ>àðˆ„kŠÝ!i#ýzFž*[æ®Ä*Ö¥ez©_¥tœõŸ­& ñ~²ÅžéÖȈ·ñ{ýÛyFæ'pÅ¥·Ð‡U§hÖw¨¶Ö8GÅ(HXU‰ fxgy÷È9§‰—‰Ùh*6Ê…˜j˜ù¹h€SÛ—ºËËK‰{[&L6l¼jkw¼¼Çʼ\ül--\”‚ý£Ýíý .>N®P;tendra-doc-4.1.2.orig/doc/images/mips_files.gif100644 1750 1750 6352 6466607531 20750 0ustar brooniebroonieGIF89a¥ëÂÿÿÿÿÿ³³³!ù,¥ëÿºÜþ0ÊI«½8ëÍ»ÿ`(Ždižhª®lë¾p,Ïtmßx®ï|ïÿÀ pH,ȤrÉl:ŸÐ¨tJ­Z¯Ø¬vËíz¿à°xL.›Ïè´zÍn»ßð¸|N¯Ûïõ€~Ïïûÿ€‚ƒ„…†‡ˆ‰Š‹ŒŽ‘’“˜™š›œžŸ ¡¢£¤¥¦§¨©ª«¬­®¯°±©–²¶·¸¹º»¼½¾¿»Ã—ÀÇÈÉÊËÌÍΟÂÄÆÏÕÖרÙÚ¿ÑÅÛàáâãäØÝßåéêëìí¥çÓîòóôõæÃÞñöûüýþ°ð2PûG° Aƒ1 <Ȱ¡Cu /,|H±¢Åf-L¼hÿ’lj8ŠËXa£Hg&ObJ©#>t-«±<93f2’jÚô¥ób·ÏpNè TÑŠ?‹2*á¨R[N&}zó¥>ªÈ¢:œŠÓZ»² P+@+ýdú³ŸØc_!}‹jà\h Ì¶2¦—Z_½óuK—§U…y®\6%cU|ù.žœ–×Ĺâ>¸‹Y”ß¿#ï Íg±ÙÑšÔ¦ý[:ÔãДaÛ%Ü9óa…µÆ>ý9°ìÞ¾_ƒ†í›´ñ¼f|9w,Í8;ïÜôîKÊG ¼z8qhÉ«ŸNM{úóÛÍCMºz{öÛ+—?Ÿzøë–Ë«?‹^ãþçÿ¦a'àu 1‡š|Þ­fÜkôQv púý· t H'ár–ö`¾—‡ªÆ`kÝøXsÎÒ_I)ž5!ˆPYh Š-θbN5ŽõbƒÏÉøN„9þ( bAÎ2Ö1ú8 EºvãPMÖE“QzB!Jš—¥=TV™ß¸yIÊ–ÈÍXÕ0bŽùdSi.™Õ˜ked››\¹™¹áIݘ}ɉ&y­  yÁ5Ùv~Á5'¡ *£VzÕ[|qzµ(¡v* gg›~X'¥+)ú'¤u:º©–Iú!¨Åqs) ™Ðib³ÒÇ‚¨ÕÚÖ¨¨î𑽆êU€·¾Ç Q¯Ò«®ÿo1{l¡¢ìZ¦F7­°Ãšé§´×.{m«Eu©lµ~Ël:â¶ém·S&«.¹XšÛ.¯½®;í¹å¤›¦½Á‚äï¿€¸»/¼w~› >'¬ð 7¼p7G,ñÄ ›K°¦cBñÆ# qÇ ol1˜éeòÉŒòÊ ük˜ó5üμ ¾6Sw±¬9cÔ3J;ãü3¸Co SÑÙ"môUJ+Öt´G?ŒÔ–’ìÕ¼]tÍXߢõÐ\›ðØdÿKÀÙh§­öÚl·íöÛpÇ-÷Üt×69aÓ é×'€wÐ#í ©ßãäÝ‘àŒ>àáð-’ã)Þ8ãÙH†m“ÿ‰íÚ~’ƒc8FÚ=›b袧šÛÙ˜ ½¸Õ,n£š—¯Ëù5l¯þrÉÚÄ¥î¿e×Z®ZW,’þ¬m{Ô4ÃX%ï²µÊå =v—¤vá”W®üîÛ7ï|ô¶b;ž÷õ÷ñLk_z$zjù­ã _éó¡®/Ïõè›;ñËó_Üû’Ìó0D½æÍ.ç›ëpä9ÿaΛsN=—=õ°!í{ÇtT‡½B©qÜJ‚ºÞîj ÄÔRÂüÁÌu+<ˆýrÔBž°u]Ë¡<ÂV¶z$> "$€(Ä"2¬ÞNôCŸéfJHô ›L§&Bñ‰OqLÿ5g}¢‰KرÒ΋[ëÉ¢µ¥Fz5JŠƒ¢b¸Ö¨Á6¢QvdŒíˆ 0ÞŒŽp:ã—nÈÀ:Yé>¤›za¿zêSQ¤gà‚Hšéq‹&aB$Á¬édDe‚¤ŠÜø gU²#—„ã£äøHŽx‘@\RxsJ‘ÒEª¥%ݸGË€ˆ5À#NüÐâKçS˜¿P©u˽x¦€Ú)‰^—L_Pš¶‚ yš©£Ê€†~!"f{†MHІ;ÈÜPïÃKLr@éD¦<ƒW,sâJžÞ\P®¬ÄIh•iR ªgèî ªáAKê'^¼ ÎaNO™ËÁ'ù5ÿOtztïK%!?hHû&êNEíCNx*>ËôUŸyRÂ3žß¬å1:΄rsBŸ:§HyM\½t|«2©PQTvÞÑWÈã"L]Ú¡2ôµèIÙ£³›BF–ªƒʪjê“GN²ª‘šzœ¯>R ¢ùŽizШTíTå©X)<¦Žôwí(S©™ÕòÁõ¨™$¸¦ŠÖ¡þÆšuªDm ØRê}à;QOÍúVÓ •¯«¢jÝIRà˜¼ -éKVµ )™Çze‘SÙqϵM%ž†8$"`2v¥ØD­NaëYÏRU¤â­dmûFÀörŽb•ÒX 
¢Ú@ÿš#(Mª•ÑÜEšiºRh¤ôæ¸ÍÊÕZtí¢v÷ôEìf7¹cLžL¢›>%н°JuËÇ¿JW^Ãoë»ÍN±£Œü¤òj5ßNòd&Ä$ðxÃËôV͸œe$Vy4«ûÓiÐê“{½¶`sãªä}%,ËÔxYð]-‰#¨a¥B¥Ã&Nšr]WÿÆñÄ™ylN l›;—Äõ,&ƒeãJ}ø©ö42#Ù«?%gfR*]§,¼Ð#Ù£C>­kå+SÔÉÁ`ò Áìµ(›¸d~ñ–éëÉÌ’¼a^3lœÚ%Äöm¯‹{T!—oÊEÞp™Éêçǹÿ±¸ûÔç'¸“\îY§{mÀnv¶£ªÇqdíŽóä1üð>\(ôŒÝ,Â+]‡¨.¦xÔÃ÷§ òù%¯_êœÊW¼˜ï¶©gÑÎ;…¡wõâÅäùµ§þ¢» ÿúºtv§Çaí!sûåå¾»ŸEïw÷{Žßö)õýŽËJݾŸ¥^ßGÀ4ËcìùÂüèÜ÷Û¥Ýû®Ñ&æb¼y—XŸgà/¯ö/TÔàTÿó¦NÿŒ¢/ýâÿWþ|"{ÿîœÿBç}ö¨×9Ózq×uˆw€Ha7Ö/W$ øn 8xA€ß÷wì¶hup¨vhçæox“ ·ˆA$&ˆe­”KºTJÅV9)u÷~þ'l-8gƒÔ¨{õbƒ2qâ·o#è({ï4S<•N¬ÁEº5iÕ4|*…‚FXxmU%uY•O¢ZjRú´>DX83xy5JlUzêÄ}](T.ES`µƒT؃À÷ƒh(i¢Y#ÒSJØ‚bƒÏU…´§™URZV{ÈP?¥†¾("H†‚€¨‚`<õYçä…/¸h˜\³%:4"~èR†×w†Kÿq>rR¸h… h(s‘|‘ô'âT«8ˆW˜œ±~au€ÿ’/ü÷È5°ò‹ê‚_³Èzĸ/¸èv(vT©˜#aŒÑfmÓFRãr~ôÇ\†2lÎ&oï‚òbkœGhˆ{—Œ¬ç‚AvOù¦eÞ80àÈ.Üpfm˜f³æ2®‡„Ÿ†f[eDöˆÿÅ8:†OÒlÝ‚Ž¸'޳UZåû!Êè‚3Öøøú¸bÔX÷¢ÊÇ ¤x‚)’+¸DæÈ)I|ã¶7+9~-Ùu/Y$¼ˆx)‰I'G·“!s‘GÈ(<”C×/3%By”0’ðGƒØ”³¯–N•_T”RY•Ì´”fh•Z9•&¹•^éTù•M™“b©•dY–Uy–h•j¹–c–n ~m—ø7—t™~vy—Ø——zy||Ù—»·,59˜„Y˜†y˜‡¹3 €”ŒÙ˜Žù˜™Gç’Y™–y™˜™™‘I™šÙ™žù™ šƒ¤Yš¦yš¨™šª¹š¬Ùš®ùš°›²9›´Y›¶y›¸™›º¹›¼Ù›¾ù›À¹ ;tendra-doc-4.1.2.orig/doc/images/next.gif100644 1750 1750 364 6466607531 17551 0ustar brooniebroonieGIF89aH¡ÿÿÿÌÌÌ!ù,HÅ„©Ëí¿‚œ´Ú‹³Þóð† "A扦¥À¶î ÇòLÏÉZçú®ßÀ ƒ>àðˆ„yW"É$ý„‡—º[ Öäµ'ýöÜ–˜¸õLÑäë·ÙæVŸ`õ¹æ6ºÍú=«œFÅÔG×Fætg#ÈöW…É—f€ÃdU˜H¨X p™¥døhG¹8j7˜I˜g˜š¹8ù§ûHZk¹†õE <Œ×H|,iŒ¼¬Û+ËŒ¥½\”b}}ýƒ½ÍÝíý .®P;tendra-doc-4.1.2.orig/doc/images/no_index.gif100644 1750 1750 371 6466607531 20374 0ustar brooniebroonieGIF89aH¡ÿÿÿÌÌÌ!ù,HÊ„©Ëí¿‚œ´Ú‹³Þóð† "A扦¥À¶î ÇòLÏÉZçú®ßÀ ƒ>àðˆ„k–Ý$i#ýta[æ®<.Ö¥e¬ãï+LóŽŸ­&‹ò~²kèÙo—Ë‚;ŸO'e$¦W¨¶æÖ·ÆS'ÃõˆèRå%Iè1µõ·©XÙÙV xi€C˜—77é:7 PšÆyz—ª'*Óø¢¡è‡¸ Œ›%hfÜS|¬œ›¼ìÜ¢ûl-ýE] U”²ÍÍýÓ .>N^n~®P;tendra-doc-4.1.2.orig/doc/images/no_next.gif100644 1750 1750 361 6466607531 20242 0ustar brooniebroonieGIF89aH¡ÿÿÿÌÌÌ!ù,H„©Ëí¿‚œ´Ú‹³Þóð† "A扦¥À¶î ÇòLÏÉZçú®ßÀ ƒ>àðˆ„yW%É['¯”Hú ©Î¤5š52’.ò»“ŽÉh4»ü¶RŸ½ð¶oã¿pòë¡5µ'ÇrñçÖW(¶“È¥èY£æX•—(ùh€³&i˜‰÷†˜f7ˆIØ×–I)cYCGJX[z»è)xÅ ÆØ œ ð\¬„jœÜ«\ÌÜ \”2MMýS­½ÍÝíý­P;tendra-doc-4.1.2.orig/doc/images/no_prev.gif100644 1750 1750 363 6466607531 20242 0ustar brooniebroonieGIF89aH¡ÿÿÿÌÌÌ!ù,HÄ„©Ëí¿‚œ´Ú‹³Þóð† "A扦¥À¶î ÇòLÏÉZçú®ßÀ ƒ>àðˆ„u™ã¹ÜI\Oáôe^×`W˜Í}c-”ôã}«dv»{¥ìµ uÛfÅñ{"ÚhÖÇ·U–gHCG3¶öd(ö§†'HI˜'¨WH9')ʼnÙW%Š(£8ÓXYñ6¨ šèY‡q·©ªºYúršÄÛé‘Öì÷ (l K|¬l»ì̲û,\”Rmmýs­½ÍÝíý ®P;tendra-doc-4.1.2.orig/doc/images/no_top.gif100644 1750 1750 343 6466607531 20066 0ustar brooniebroonieGIF89aH¡ÿÿÿÌÌÌ!ù,H´„©Ëí¿‚œ´Ú‹³Þóð† "A扦¥À¶î ÇòLÏÉZçú®ßÀ ƒ> P3‡–Ä™.Î&ËÝM£Ï¬’ôóv_WsÝ&ÇÕëù’T3±nó»JׂÜ<ûù—׳'æGx(˜#wdh'øå–ÅVi5f©7É×Xf§Y‘Fèų)i€cªƒ*UÊšãJ;šÅh»û¢Ëûëû»[”Rllüs¬¼ÌÜìü ­P;tendra-doc-4.1.2.orig/doc/images/prev.gif100644 1750 1750 362 6466607531 17545 0ustar brooniebroonieGIF89aH¡ÿÿÿÌÌÌ!ù,HÄ©Ëí¿‚œ´Ú‹³Þóð† "A扦¥À¶î ÇòLÏÉZçú®ßÀ ƒ>àðˆ„{Œã¹Ü\OáÔIúñ®-îÖ8Œ2_^)Øš=×¼U‚Ú¾¾™­9gÉ{½zö•çGXVøX‡(SÆödèÖö¥(8èvH…IhFÆ÷ ºIÙÙ¶ʵ·#FÓÄZ;5—›´‹fÊû;i€Lœ(lWœüʩ܌êëÜ\”B]]ýc­½ÍÝíý ®P;tendra-doc-4.1.2.orig/doc/images/rttiD.gif100644 1750 1750 5122 6466607532 17677 0ustar brooniebroonieGIF89aÒ¡ÿÿÿÿÿÌÌÌ!ù,Òÿ„©Ëí£œ´Ú‹³Þ¼û†âH–扦êʶî ÇòL×öçúÎ÷þ ‡Ä¢ñˆL*—̦ó J§ÔªõŠÍj·Ü®÷ ‹Çä²ùŒN«×lKà ËçôºýŽÏÃÛü¾¿ 8HXHh˜¨¸¨ˆÈ˜ð'9IYáøXx‰¹É¨ÉY*:ê¹Yʉ:xú:êúÊ·Ú™Jk(»Ø «»KvÛX ,è É[lŒó†°w ‡q ÷;8w(p;l{¬½ý’¬œ ìm ©Z.-¬ê8Í Î Ïðîüî=>q}Žknž®¯=y¹Í wÐÀ²qq0è@Ö³Tš°ù£± Æbÿâ@Eêˆ0ä½à~k¸àÙ›hŸ2Qä×h£Lƒ ?ÖDR¤G¹ðIèUÅ—,³Í<Êkd¸8MêtØÊgBžíl½Ø²h&¤\a‰ÃùôÓœÌzžèÑ™ášN ;µeËÁVÙŽû´žÝ¼{ûžš;¸ðg‡?^¹ò剙;Þ¢8ôéÔU¿Ž}ƒôìÜ»/õ>¼2ñäÁo/Þøùôìq¯o_wüùÈßÓ¿ßÕ>þý2õóÿÿOÓwþHà.6 X`‚¯€¦`ƒ!è`„jüFa…^hánÈa‡J˜Ïj¬™"âUiX∳&Ð.¾cŒ0Bàl*’È"i¡˜ã¥e…‘ŒBÊHã6ú¸¢ŽJž(Å‘Hj†cCN9@‘ü9ù¤U=¶ÆdXfYÍ–ŸP9¥•û} æb@š(K¦©f”‘9¤™ø¡©N¸¥5+¹–gŸ¬¸ÉfžÐt²’mü©e&ŒÚB§vÞ‡ç?~öh˜Ž^Úè!Ó¨åÎQ¥¨i¦ž¶õ㥦^†êªÖDJ$ˆ?‘ª)«µþ”­º®¨)PxD+VŠZÚ©¥ æú ¬1ÿNJ_¥‹!íE®»kZ½bÂì 
Ëj´ýXD-µÆ6¢ìŒ²F0ìµÏ.zh¯¼+­ºÎK/©Æ¹.¼©¾›ë¸”ûb¶ñ¥«/±º^†k©îŒï ¿†sïÁݪ«/¿µÆû/ÀUž ÁývJpÂþ~Œ1É¾ŠªPÄ!;æqÉ¿•¬ÆÃ×2Ã¥¦;-$Ÿº»sÁ71ª 8Ûm͉ŠÛjÌÏÜ^Í SóϧzöÙ®É~Šü ¡V µ?ëT]0Ø6‹­ŠÆsŒ–Ðpnd¡'{©òÚ/Ë)”ÙL³ç´ÄO:Ûv›MÆ-wÃ})3Úi˸àþö¯€Î7.vÞ@Þ´ÿqË8¶¡þ­vâ‘“[8å)=.÷çã¢õÜŠûh:¤¡‹U½²ÏÞ¡†´ßŽûo°÷•àÝ»{á»rÁÿ¾ÅðÇO<È·|òU4ôÎ7ÙûôºHïžõ^U¯½+ØÇö}÷K„/›ø¤po~%䣶~úE´ßüî !dõÏÿÃý†é;}G)…°l8‘–Â?&†/”b Ñ@¹Dð+Èëà• jAÔ@ëF4¯`I!ì >Èšzo„¬Ûœ =È©1TÑ _xŽæ†t©!]¨=|08óa H!ò¥³Sšxè<©”1èb— Ì!µvÿ±#s?{Þ¡r“&Ú"güøÚ·XŒDÈJK,Kˆ…´rx‹Sh¼"êèç)2£/ÖòÙ·PEG½Ù [ :îF»|#‘Udœ/Φ:ž.&@0äT>s—/æ0ªˆ°<©¹’$‰)ó"Eb÷rË×ÜFÈ ä„”§ )Ý8&*#sØ¸Ü »X TÞ¤!J )SDaSãÒÖ&¸G‘ sÅü.7©L;þÒ™„f~„9ÐTó˜”Y%P:Y”µXN–àÄ‹SÈÂÈžè’œÜ&:×ôÌQ"°–bç"‘™Ì}-I*cç>iy˄ޑ¬\=WG,ŬSÿŸ¶|^¦™É†:Ž“DA…F?Ä ,$…§"ùÈQ?nŠS°”˜7÷ÇHøE›žj&>»i*Æ`¦5·nã ¯Ô:ÄÌD ªS™q‡ j8ÃWTƵ’OÕÖRÚT:ãªRÍ* ¦©j•`Í’X{XÖ½yu¬(}œÎª>®Öf­lýÉ7˜Á»^°®\xéLüÊWv#ƒ ì [ÄŠ•GcÛÇÂC²õ ú*ÛÊnC³˜uÃe;«ÎC´ ýÉgK{ÒrµÅ;-k“èÚ×R/˜²U^lkË„ÿ©ö|­—xÀÉøöW¥Yq‡‹„ݹIÿЫssg»çJ·^Ó‹ê\Ó*¹×9»-änÆ–V]¹ru6“ãœ[Ÿ8Þò&Ϻãu‡z·{Þ°¦W»¿c¯wU :ðš·¥j½o&Þ[_ñú÷žù-—rí߇šÀ»«T×'²=-»ú…v¡qÎkIØr †³^ÉE†j¶épn1÷'WÙS-=³šŠ`bÊ}’„—ÅæVã²Ñ× |»„Dô®…•xÇþ©:¼V ìsß5pê覺9þQ VYŒ g¹=IØe6$‡‰|b&ãwd›•Á,cÒõ‹h[¶Õ9ä Ã×gyã–ƒLg ;9Šj¾¦÷æ XYî³ÿŸYb43 íÊhË2¢EÖ²#nа*´˜ÇÈb-SÒÏÔ3¡Ã{iC1*ZZ$££œ¶aF'×Ð5}—Ô6SŸšÒ‘R.W ¹ùÊYT¬6«®÷¼Þ^÷7ÁPõp]º_šÏ¡Fï€u¼k ›„Ê®4¨'ëg ÃØ9vw‰íik3ÛÉÖö«Ð¬d¯-ª0Fw£§ oE7ÞôÎþl½=æþ–¶ú~¿ûÍãÛ<§8ð½ ƒ­à —%Þ¿‡C¼õ®¸Åã=ñæf¼áß8ÞsK)Ú@§NÏ Gô¢;Vžôp9UÊ*É¡ £ÝIWúd5ylÚYmCIò‚nõ å±éNwH;íc„ rÛ`»,©KLÖd‰Âm+”JªÛí˜eXŠQ‡ 0£ùqO—Ü¢¶ó½Üæ,Ê“‘Ú½¦4mK›Ã]ëÅ÷½sçBØJÒÓ‘­!¯â5†¯Pðû |4‡лãS¢ÜF½ê2ÒÏ 2*)ûè‘ÕÑR»²ö¶g<ì‰jÅçøÅÿ‚P}/X¡ç¼êÍ/ƒî×wô¯S¿ú°É>¶˜ÏýýIßçÛ¿nÆô½›?74—÷úßÿøËþô¯¿ýïÿ#üëÿüï¿ÿÿ€(€H€h€ˆ€ ¨€ È€ è€`P;tendra-doc-4.1.2.orig/doc/images/tcc_files.gif100644 1750 1750 16625 6466607531 20575 0ustar brooniebroonieGIF89a¢öÂÿÿÿÿÿ³³³!ù,¢öÿºÜþ0ÊI«½8ëÍ»ÿ`(Ždižhª®lë¾p,Ïtmßx®ï|ïÿÀ pH,ȤrÉl:ŸÐ¨tJ­Z¯Øì)Àíz¿à°xL.›Ïè´zÍn»ßð¸|N¯¿7~Ïïûÿ€‚ƒ„…†‡ˆ‰Š‹ŒŽ‘’“‹x”˜™š›œžŸ ¡–y¢§¨©ª«¬­®²¦¯¶·¸¹º»¡±³µ¼ÂÃÄÅÆ¶¾´ÇÌÍÎÏЈÉËÑÕÖר¸ÓÀÙÝÞßà“ÛÁáæçèçãåéîïðÌëíñö÷ø¬óõùþÿ)í“ÐÎŽÁƒlF!\È0ŒÂ†Â’¥Œ[Ÿ~õ`Œ´ÿ`ÇGÿa¡`Ɖ²4…Ì—LåI@±`R¤Æg%KŠ0k&²y¯e&ž=ÈüEÎP{>îÉÃó(¼¤_þ‰9u¦ER/âÔˆUÀQ§ï ŠËªuhÅ¢ÉÖÜÊEc[/^ßÂýâU­Æ­˜À†Z•(;¥v½²åZ·°©ÃLÝÖÕ[¬ËTºZSþ”æÐáR0 2^ÊרU´]%9ÞÜÊ'ÓÄ\Ûb=œ:ꚪ_s–œWšbȃ½&×(oªžýÒ,ú4ïU-ß^¬ûtoåèdß^>»sÔCˆ ïnž÷xU³4— ¬å=UrãrÝæ^Os\ÇêÑdžûž¼óúϳ¯/>Jëÿc)ÒŸ}°vÑ}¥çÞ|¨á'p‘õWZÅ=HWv ²GßOxM•Sh¾'b~½1gœ~%Êå\l$*ÇbŠåøg–KÆHàeÒéGà‰*ÂxnÔ}×—„ÃQ˜™…"*h‚Ï•WL9c·ÉváÛéxàhðÍEß|["–h•Øøã#e9]•Ù¡ÈŸ˜V^ŽD‰c“x.gå†uàt`ž h›>¦z%ž™£ONVã‚&ú˜š’ú˜¨Œ–5äœ!B²[Z^æiàž£øI¨Š Û‰’îɪš=FÊša•J%crT&¤#® )¥2Jª¯w•xã-ŠÛzÿ¹Ý‰¨J¦žŠjªâØå€*^ÛÜvܢ߷ñEÉg£µí„)dÖBéå¶Ì6+¥ŒBG$?Ä©•ë ÓæK€‘•’é3÷‚˜`Æ‚†¬]¢ïÂí9õ¯3 7â¤7®%½ü’µðÆwúñt„búÈBò²ÀŸ"ùÇ;2²äã#'II4 ’„¬¤&7™HªqÒŽüä+ÔQšÒ—\Í¡Hh¬Ò<¯ŒNöú6°+F‡•±”åyXK¼å’„àhe-WÇË^V•Áü¥.(Cc")˜ßf/‰EgúšÝæ4gI9kÚ’bÙT¦;¨9HoîœØÐ¦1ɉIsž3+Ñ6Ĺ—]VÓ!2ž5â)z"Ÿxã¦ðÌIžN©ó#éŠ?ÿ½ Ù³œë4R?úÿЉ6ë€â¤èBÁNU®sIø ÖFáV­Y4…²c EŸ"Ð#š Y&}W4&J" ©ÂNÚzÖ¸‚øÐvNÓMÌù&ÄC+íd”¦65iƒ|ÈÌbš,¦©ÙÈJÇwqÉYüL¹3Õe†1+B£†• 1¼3«cÕÓš ãЦÞ M¹‘.Üc¡\ŽíŠWÕ)±ÀÚÒfT˜ Ú]ç9+€Ö©¯N5lòVQt¶«pEì[k®ºb°Èp,e$YˆnÖ_–)+c/‹ÙÏú«³>5­/C‹ÐÑꃵªÝj=Ûý•–´‡fkОҖ·«uíM… KË÷?_=î–rKÊÇÿ*W³UÀH=B\§MwzÏ=­o¥›]Њö»ÝÅNtpÝ€0›áïvÉë/º7!{¯|ÑðùÚw ÈEayËú6„õ׿©äî ÍûßœØÀfï€=²ÅjÝ0ŽNla‚{ø`òµ‰$•ð f=cøLµ0_'ÜÀ“Ô0„<…p†u¸aíYϬüéVu›{Øw–¶MfXËã—8\älÈ0êwˆP , "ܳ®ž8}ÛË!ì„Ú?¥>kÉäŠd•zåÌݸÉÇI`Gü½6É•RuëyŸ|ÖSF¥ MÑ ¨g1ƒ¸ir¾w@¢æ‰‰y¼~æ­²dÑûå4ÿOךÞm³ü§D‡ÈßÊ[XÛuÍAs5€v;¦¥ Tºbôp‰×¢9Câ X£Í+˜M(añHª>–hé c?Öˆë¢f›>]UX#±Vr­æ\­gKYªGO32‡Qz;ÄJl:œ³`å¹½2z„Ø’•XóÊ+b÷«Ù¦»t„D íM£‰¨Ç®ªN±Tío¯¥Ô¿^Ô™¥ški;ˆÚ †µ9ÇkØÎÑNw¥Ã ð¡‘ÝŸ²¢êÝïbŠP2w”Á½ð`7›Ø¡ÊÀm5¤Ê1¤Uµ¶ %pc3TÉÖ>ñÇ–hÛn“õâ ¿ªÆ1ñxEHk9fWùحH_)Vÿî“w«§ {/Æ>϶§¯ÄE.·ã‰à™ƒëÊÎî1¸«½Å´´;>¢÷¦SÌc€|ß‘¢l³Ý¼n‡í~Jw†¥}ï…Ïûá=lçö┉•ïƒÔŒøµÿR±<1q?¯ëÁ_/ìzX9žÕG»óyÞiLV\æ BMO>/¾ž8ïìz¼wÓô+þ²ì¾{ÚŸ‡'µaí…o¬˜¢rºîó·B«.ú²ò§?õÚ­o¹VêEäûæ–Q˜ ]øBÕe®¥ÔÞÙ×Üöƒ4øû©{}ÿV{•ðò¿oýnRX«W;Û‚pþWdëçj‹z6²kɇ<úgAÂ6gÀ·¦²"ÖÖ}õ7k¥}U£²TÇgÅãlØ|7÷|TkèwWÃÇi¨¡H 
(mgWÂVX’83˜oIâ‚(Xg8};FiØ'‹Ckîòq ä?(~_ce&¢n#xACˆtE¸7¾7P¸+7…S“{[ø(÷y7È~‰QvvUzœ‡…ηlŸ×r¦–:¨‡bËçwÊ4t/t}‚†ù¢åOl–‡à…n'ˆ„Uã§…se–iAä‡)F|ˆ×ˆgÅbmhx‹Çx–hˆ6{ʉYèR@ÿv_¦G(pЬØŠT׊¦˜ˆ¤¸§t‹…D Ô‚‹hJ¼H¯Ø ¿8Œz4Š~ÅĘŒl4 L£ŒväŒc´/œÔˆ|€Õ˜ ¨ŠÒ(FÐè‹ÉÈ4˜ Î¸vé•zÝ8W¼ Žç˜‰’wŽìÈ_똎í„(”^ñ(óXîH~á•ú¸üˆ2ËÖ]9c¦`ðH°‰;ãéµi©ŽÆp‘ÊE‘ð8TÇ ‘ÀÅ‘ÿ0Í’µ%’Ïe’É *iZ(y\-‰)‘Çõ’¼u‘ò¤ ¥y4É[6[™“µ“<^?iZù€ûU‘ “‘ÿµ^L I™kKùZîd™]GIYyRXQÈÁe´ÂV|à”u\Y–=9*55”8æ•¥S”{`–˜—\ÖD-8‰!a‰w—禊s¹–g •\™*AÉ>r{¤Ãm£q…Ò¨z”µ•w ™Ùô ›™h P—Y™Þàx˜I˜š¹1ÑÔ h÷™nåYîTšðDšªù”§™Zæ$šè• ²©X’90ž9›¬™/¦i—±™£À9˜¯ù[“I—Í•ŽÀ›†u›vœ‰ÐÙv 9™ñ œ0”§5²ÈŠHqÝyŠùå†/uæ93㩉åùç `ƘX{©`¸’ãCï9YñÿÉJó)“ê`Ÿý¨ˆØySû©“ìùD÷‰šù˜œí‰‰)Hž¯•cx• K‡. * \ë"d£F'1´xŒæC5Oq–dIxWz *eH…"Ҷ§pœ%‰n†u¢……jPçú¢bÙ*N’˜ks °ù.íF¡AÇlt#o#ê–aÁ…ý×–ß÷&x„I‚]g¤Æ)S4ƒbUè¦j5g€M8vSŠ ? ‡WjY§ã"JR¦ø•ža£ ¨m0ÂmTâmOÚ<ÏVgUZ æb‚œ†WWZr-RvEúŸµ¨OÕcukU¢Çqåfnß¶¢DY¨‡ª©î.ÿ·pƒbqŦd:l1ê¥%çmÚ§ Ç©ñ¦](«4J> EªBš«Xó êY‚C¨s/FWuR(ªV¨¬WŸ\Ó¢î—sÒçªN'R\×+ ˜6! Ÿ‹ÈsÐú m –Ïz ߪ9’È…D*B *Q^H ©"ú7ÚázYš„3åŸÀ*yåê„ÛèJ ¥䪯ö¡!æ¬jE–ùj ðÊ­¤‡wX¯¥AªxJ’gc}ˆª )£#d{d‹,J±ŽÚ©w£„ˆ{çz`tx|ÿj±žZ‡êÂr¤±&;YúO6²$ë£B(…Ž&”’C³k¨[7K=9۲щ°X´Åÿ~3Û°øÙ­³§w´/˳È¤ÙÆ±5*´qx$ÔÚ*xµ·„VÆÊ.ízZ)KdˆÆªc«´X»´<¸ªi+[k[|U؄ꇴ j¨µW…Š©(ëµu+†c¨®ºY¶ûÇ{A µ*µŠiy ª^|‹¡~ ³­×¸ûJ~™®²¥³µz¹#F¸\Kx K ýÊ|0s·ÔY¹!+º«Kº“M«¶jbÛµáyе–O±»÷•»ëäHŒpD¶îP¼ÉhŽ’¤¼Ãè¹nzÎ;ŒÌ;½Æ›º+›¼Ö‹‹ÌûQq;ôZø„?S‚ºà;2â;¾hóµ¾{¾á[œ© W'ʾ7å¾ÿ•™¶i©Üºö; ÎiMÊ– }ýË ÿ Àå«—ü[À‚…¿ŸÕ=³K‰ ŒˆéûÀù6¯õ;Á \ÁŒ½¥«Á,ë›ÍéÁ ÂÏ3^—Â*¼Â,ÜÂ.üÂÓÂ0<Ã4¼0PÃ8œÃ:¼Ã<ÜÃ,¬@ð»±>‰½$Ü]œ¿ kœÄEŒºýÚÄÏuÄ‘ ^©ÄåÀ‰XiÅÌÄçˆf½\Ü[^üÅišY#;ÆÈÃÁZ™ÆÅ…Ájl„eÜŽD‡[pÇr8ÇF9d%Æxܽ! cÎõÇIÃÆQÜ¡ÃåÇ„¼¹Š8‘uŒw¼È5¦ÇõøÈ,Éd#Âm¬ÈÞ…ÉòGÉàÿËÉpêÉ!| ŤÌY†|ÊÀ+_õÕÊïx¾¨¬e¶ü#Åî[Ë´G&»öËËy,Á Èü(ÌèºÑ»Ž{¤ÁŒyÄœDÆ\ÉÐÌ—ÌÌȵ¸ËVé)–ZÞ<Ÿºæ?ŠÈDû>ÓLÇì‰ ÕA(ze* ÎyØÎØz‚Æ'k×sÎ_Ü~Üœ&f#Ï£J¨ÖL!þ ÆSÓglˆÍ"ªÍ#RkÇš.gkh׺‡k*‚´:´§6o(U¡W¹m¥J„6W°†§Ð¬vnsjkŸ#,-˜6‚Ê<ãÐj:ªÆ…¦cjl4—¬¤Ì"MµM[uý¢€¶ò4+ý—ׯB£<Ñ]oØjÿwÆŠÐÜšÓ%X©h¥&÷Ï/Í£˜æË Óôv©êöo^]Õï†Ó´ìs\-©pw⧇râ’<”iÑ=¢]½ÔírÑ&¸7ÝÌaúÌ ·.Óº¡à·\útÓÊO6œåÇ¡Œ­cXW¬‚­-§dÊíÆ†û¶ˆÐšÈ\dø<‘}…¡]Èÿ*x%ìÂëÌ ]&DÉuâÃÔÒÙ÷\Ù YÍ›]›œM©Ý׫=‰QÇœr yŸm”ÖŠ{ûÛ‚ù°™ûƒÒe­‡2{Úª¬Ì'«²Ù{ÂÃmÄЭ¨¹ ®ó×­v¹ØMÛÇ|}çÇVë`_¡‡ç7X¯ôg«\ÞT•¬ýÿ¡Ùô{ßSË´÷×Ý”Í×ñë×(iåë¯íeUZÜ$ˆ9šúÞ»ýß½ý~ÌZ+ö]T¨°él¥,â—ÝZÉz%=©öƒX†T+X¦û-ÝjKÞÔ LLÖS(r×N…L·t®âèÌzƽÆÖQãËg‹«(žâþ-Ä^²ŽøÉËßâÝÖƒÃáQlÛ×]¸–³Pð Ê®BÔ=ã×ÙàFžå¸ Ï ëÔQ æ¸{ºL¾PžÅ°ì^àÜæïåå² ÔŽÄ(ŒÛ{‹k®\u®xžç¦´çÇÕçÙøç€þI‚žÊ—¨è'ߌÞäžÌšéKN駇å–é™þ…޾éSÿîé• Ë ^é£n^êÑ꜎骾è­nê¬þê©.ë”;é´~á¯.Å·žÑ»žè¥n© }×®­Äº~ë¦(\V±Œ^ì´~ìÉT¢l”¢ÞëÇ36VfÒ™Îì¨^Å^ݘù1ç7qê²¾ÅÔîݱNíä\îÚ¾í–\îCcëãÞîî¾î¿.ïê>íó>ìªNïõí‹ÌÙèŽïê®ïÛNðïîŸ{î ¯ðp"îÆî›ð?ïOÇpîEÄ~ñùìžLñ±%ñªÛ¿ ¯Z"?ÝÏñ }° \ò.)å$¯òÄÍòìò¼¸þþð[î’2¯Ý8¿xÍóïóYª„ˆæÿÜóÞ¶@B؂̥ޞ¨÷F•#ý%½»MC¿ô,¨—/"s õÓ"øíIº#)mRW?LJåÓ&ÙZºjìrao†vÚ«õÓ¬-Ÿö‰]Xl3݃X&n¯·/Ä)Ÿõjo?T÷ªgs¿©fÚ$†¦àµrömï>?¬NÏu’íÖ%ªqöЄMùuiÊ–ÍŸˆÊx_“:9RAŸô†Ÿ÷­… ¢ÏT–¿ôx÷¯¯ú4ó¹’Žã/ˆû Ÿãüéà³ó®?üäÀßèÈ?ñ©ïû,NßKz³ÿ`µ¯öÒsÞdiýHý½ï“:Î¤ÂæýÏß›¤_Û$§N{Ülóïápÿ_TüÕ5ýâ?ýÿù/.üÙû ºÜþP…+­8»0ºþ`(Žd©q ®l \f,Ït-Ù êx„½ p(û¹Ž+qÉlÞœÐÚ/J­ÒŒÈ£ÒÊíZ¼`Ì4L.c³­myU³¹ã·zF'çø¦;ïŒóÿ6uv/€…R†U~ˆ‹!‚v{Œ‘”•–—˜™š›œžž@’£b¡ƒ,¤£¡¬­®¯°±±?²µ¶·­ªº¦§*©»‹¸ÃÄ®´ÅÈÃÁËŽhÀË…ÉÒ¶ÇÓ֮лÍYÏÙÞCÝßâ=ÛHáãè1çéì$åZíñAëòõ½¾ôöû úüÿï\øûWo Á‚øNÿ<Èn!Æ 9|(n"ÅŠ]ÜÈÀ"GhÓ|äèq¤¶ŒÎL^tCI•èB¢‚ùPÍ…›4c¢ä–óà–—0^öô&óÎP~?ŸPz4XÑ_M÷ý¬$`iTf;Í]mØïÉVˆ)|ú:®%U²éžŽEûíÛ´Yá½%jinÙ¸ífs«—(^‘}Ö òïLÂÚZ"Æ6ßb]gC6lT²¤È–G©-™¹âΗ)C͈)éҢמ.dzµ¡Í®_sŽM6íÛ}mãîø©·ïßÀƒcÞÝ1õl²Ç! 
{yrUÏ‘_ÞÕ¤¢Ý͉G׊º„éÞ·‡ö‚ß–k'™‹*öØ#w.KJ‰Îxä¦3çCŠ©7‚zëm~iÛú~ºÏ°QxÒ²í·ûŽž¹kªvñêXˆ<¦¯·Î×òÌýåÊOºôÖïP}ö945Ü/‰}ødJ~)ãŸÏ ð:ûÃÛª~UÞ?ÿõ§.ÿü˜Ÿ¿¿úý»ŸÑç¾±Á)9K x@‚0e ÛkÁ®MP2 ì‚Ìj–?¶dPpkUúê³qp„•*au…BO•\R0!;ˆö5†ÆÅ ¸µÊCxàó‘’%ÃÇ\Ј¾ÊÓ$ÿ”¢#EiJƒâÿ¥C0ñPk>Œ‡Äp® ‰è‰N¬Ö›Úä Ïaív¡;;üô'6]IˆYÒ–¶'¾*êŠüÊb;¶¨&8V)ŒnBS”~t¹".戇\ã$®G€ª^@ Ø@68ÃÒŠz-R" Ù»(zr“…\!²,9,Ú‘Tx$Û˜J&Êh‘RY¬ºrܰÉ/ß}=vã„‚³V5|?ãöjþüzÙÿÕ5÷Nv%m×4ßMvß*Ü=¦tåAÈ^„½5çÞ{ îa pô d_† fu¡ „È|àu¸Ð‡&2Õ ƒå¦ e-â±á|ÞÕ^.ŠX ݘbŽîÈ#U/*$GR†¢=*NÄb%³…7 rt&Kb$$”D®h$•µ•R’amùcd°ÅFžŒ‹@8ØqJ>éW”M9I„P–š´´)¨~É™™Yt†®b~}yg˜R޹'G[Xhn] @Zœvª–{pN¸(;J%Š_"Ž’ˆçHzFb*¡U–™œnoÊJŠ\cŠÎ¨ºv©#± `¢RW)­Êù&"ÿž‚3ÅåB)n²R+(…¡ròXºJr‰™¡jÁ+{…ŠZëyò–î²Ö.{Hª6ö´êJ­ºZaz±k…ª`©%!\Ž*.¡¾Æ¶h™ùüëƒ'[mº†À{‡³óÎT¯½‰ˆ)±½öš­¶I¢ð¬Ï:îÃè\®»¼5l^»% "±oIU¼ÓÅ&b)ÏÈ–zlèÏn<ì„2 ‹0Éì¶÷®yP iž’©³Ì<[É”ÕT¢Êt¼5?ÍjÔ¸ŒÒÁ„,! +ˆB$ï(þ`¡2\H ®ÈχÑ]‚ê¡Ãbði‚Ñu8ǦN4Mˆz#¢—‚ô•¤8"3}àÉ@ÿ8ÄL±#ÜÖ1Ñ”-B±pR¬È‘è%í\>{ØÆ.…²Ƀ]ôà½TE•ñ§\ÒQØËi!rÙ‘‹Qô¢ ô¯‚õ‡cúúO›þ¨žþòŒ›K#DÖÈÆfT¨d{ǺPöÈAŽK{6¤ß×ôQÄ2ˆM¦‚æ²–™‹O€¼ÛüT˜’=‚‘“M¼"ѨÈMjÇYdÄâwÙÁú’ôf&‹ÙNïšØ4Cíð˜H=.2k½Ûf ›ÙËoæBvM‘f÷4YÍdö\'5bMt“™yt¦9ÑöNµõAõc'=ÝI·hÔ17”JŸé*{úÓ)øì¦> ÿN`´’÷ä&1½IÑsâ¬1bÈE“Ñq泜å§=Âó*‘Ú ¿Óè49ÚΊú…Zµ e(;FGœæT] {¨†âÐy¤žàQùSG’ÒaêÙLÿIÔU ô¨½)Ãþ£±c f=¥TKšJ^ªc¤¿@êJu/Ÿì«OŠ8UˆÊTž4¨MIäÓ£ÙYûqP5Ø–è ¨Kh[nñA´úB­™°–& »'²"T•n›EcºmA’ŒŽMlõPyY³ÞJ¥5õhA-+ZÓ®Th„%lT;,+î“íM*ÑCõìŠ*¢luµlW Ï‘^n;;ÅÿnÆ£sMY²xÅ1Š„ö•k›‹YÈÙ À®ÉÀ¥¨Û6´Ÿº¨UQ]9=¨¯±EVySŠ[ô2÷Y ØìK9Ë^©æºCJoU3ÛÙýš·À‘ý¯3ê:Ôˆnt¢©E-™„ÖTY ‹­šÚ.~àQtØY%Ó¯Ž%Y˜‰÷V¶3Âß8ù7¬nÍi€â‹b¡òcÀÞ½jAB,­…Ù²Äû‰ e©”bçg¡îïd‰k´Á6=©`ð+ª÷ÑëÇþšÐ9ös‹;M\ËzÓËþG±¿álMññ˜6vMæ*Ë.Ú¶E¸5¡ÞBgÑè6îûÀÈí²’Ó‰áô–­ëQîdÓɾs™w³ÝæSá{ã1½«½XÞÉY‚x>G½s l{0~T R»Kûn€,7ß8¿ÍíïƒcœÝG6Ãïíq€ë»Ö·wÇõ¬fÕ|ß"vÃ/nòŒÃ¼ÛuÎ"Í•tr^ü¹ÿg¹n#þr”ÇÜÚ ý7Ïmntœ[çJÓ}žò‘¯\ª n/œ ­ª–‹3ÉæX¸ÌI^¶¨Qv×±¤d ¦¡Û5ÚÞ;Òg®2ÖLíWüÄfû(·7¸èT?zÁcg0­ë´e[WÖáÿÙ«…”_Ñò禾¹>épÅ–!ïìÔ[¾L¬=åÞ‡ˆ¿ì&ùù‘ƒŽ´{1•ÓI+ìøÖZ2ÀtàÞo¨g¾­s<™`üÕSê]®q2ýßáîÕ›šõM¦ðRÁ5|H6Ùdñ=4éIêuå1?–:ᵊ÷µ^IùS|åŸ_&v¢ñ=>÷­§ûõW}ìWtÿî#½{wŸÞ%·têD÷7wd§]óÇ\üg‚'~˜'tûÇlMç½€øq߇ á·zãw57€ Ç~æ~"y.—6!‚0'Qç7”G€ Ȉ€~—~(H ø|Hƒ(m×Z7_(€7ø 9Ø~7•nJ(=è7f¨G8‚ˆàjüã@Tx?=˜j?È]޶Wˆ?Vø…꓅Ķ…¶‚‰ †ì†jX>d¸…a§‚‡uO¨…!Ç{gfõ—pqX€—gTX‡Í燈>ø‚!Hˆ2hˆv‚}ƒ:¨c‡èˆ©§ˆ‘Ȉeˆˆˆ‡P@žø‰ Š¢8ФÿXЦxŠ x†s8YEÈ€Hqÿçf­…—P°ƒœhuz¸€Ò–D®HevV³H‹môŠ@‹11Œq×Gý‹(‹‚hÌø‹ÎŒÐH_G5¸ŒÞ&ŒÑ(·HÇøŒÉø¼ÖHÍ8ŽÖXŽØ˜á¸ÕØ×ȇ©†ŽâÈ…A(«x § ïê(ìHõh†™ö‡^ˆÕÔÀ9çíømà&‡H‘ÉÉbæ=KØ‘?ñuñøó¸‡JVP éO‘ùQ%x’øˆŒú¨ŒׂN’)9’¼X’C¸|Ü(’I’.F“k–‹ø·‹ ™g<SDi€V–“Aÿ€ùh’º¦’Ö†žzB`ú¨€ÿ¨å·‰•ð©äèÊ>°Ö…xÚ Æ( «jA­6«ßÓª¥ùª¥Z‰¿t‡‡ú–n«éèŽ )o7Jž×¦£.a‰ˆ¦È m*êÌjª¢Š¨Áª¬à7­¼ ¡—x‘¨P¬Â©Ôú¦º –z®h°n›$®Ûú«“™p£® YŒ÷¨Š$ø˜òJOì …úɘú†çh‹Ú:ˆÎÊ‚W™8æT‹Ãj¯«¹¤ë«é£[j[;yšÈ¯Çj°fê’z ~ýº¢ë}ØŠ 㩱mаw«™ÿŠ#»m{¯÷ѧ÷EHúU^¢$ J®sS°2˧M8¥’THej[9š‹É³¹Ç°;ÿX¨D(0u'ZÄž§´.[­ÀjšN+u±$Gî[ #¢¶"X`¶ÑIZÍÊ­G ²'k±Pk){'–XÁµ›•Y·‚bµ¨¶iʶºJab»«™ó?¸4¶\µ/S|j'·ˆ»¸A†¶ã*±wJ±;z1»úŸ$ŠšmG¸Ûy{ov!Ô·ZHíR{å¡·O‹µïê{¡Ç^ q¹ØÇ2|ç œ‹ o4¡@“¸@dù2Jm…º\Ë·Ïê·ÀRaÑ[¸ ¸æ Ÿ°²A{¦/& Lf›±Ê´þêTÚUÒiŸ†Ûºª¨]Wƒ5ÙX‚›T³ ¼.è³ Ûœ#úzkg»½šÅ yλ—†ÿ­;´²ðнÀÇ2±[G “1ÊŸõ+¾BÛ¥*³ë+[½©Te¸ü¾êù™øiÀÐû²â¦¿¬+%ZžÊ@³¡ŠÀ+¼&kwÂÚ¡ËÂ÷+«ÀMÛ²{‹Á³P²?k*<˜,¬¬µ0œº9œ‚.|½Èc¿8<§:ÄÄë°WûÃ8¸Ã3Y±'(ç¾/¬Ä1l ¨˜ÅZL.i½I<Ä™Æj9]|«ËÃVìÃKÜ"ÍãÄ•ÛÃÁ;ƬS¤$\ÿqÃ=zÅ5ÒÆHl˜èúÇbð¼bܳsì¨ä:ª}`«âÆŠÆÔ›¯{Œ©ZÉ|>•Ü>k¬Çl,É~É¥FÅÁÿCÇî¡p9¨™ "|LÅjº¨qʆ,¹™ ¤RÌ—®Œ«Ë§ø»Â›,ʪKÊFiÊšl"©\ÇÉÄ"[Ës‹4\̟̱ýYÊò€"a¹^ g¤ÔÖH ¹‰ g%’+~ÚËÝÚ/QÚ±nKµïçÁ|µÀá'\üÈ£\Î%|¥éü{·$.†×xü,}»ëÁ§5âÆŒB[`åV ¼”‹Î ì2_‹Sð»½qÕ^ž'_çͨ!ÎÔlƾœt…kJ^Œ»˜šËÊé ŸC#_MÑDG&6[·›6h2ËÜšs«|⸻Yx'¹š\IqkÑ{|ïkÑÜþÿ»4b†Ž7ÝÍŸ[J¥+ºˆÛ™à{srüœ×¡Ôq4ÑþÁ‹‡¼3Μì¤(Œ-ª½ÿŒÐ¡wÕ±ÌÖ5¦2SÖpš›ŠÒ(ÛÐØ‡ËG×媩n=¬ÁŒÊ~Èv]¾#œÕȬËÕ,lˆÝÂ˃=¼|ÈÖJªØ¼ !„¾8XØ•©}Ä‘]ȶü£š:Ëšm„œµvfÚ²3Ìó¼¶+tÙ˜ :‘<Ù°<ÉÞJn´½Ë¯ÜØ ÈÀý|ÊâÚä Ûû›3Ã=ØÅÚgüÄ!K·ÅÔ]ÝÆMÙªýÆiÜÊ“pÉÞí†!|Û¬×pœÒ§úÝè >£=±JÔ¶-«éß½íÑôÿŒÆ`ü°¹گݷ,»ÝæßÂiÐäíßy àÞÇÉ "xì¦Î OýÅ .ÈòÜà٬ʞ¥DœÇ¾Û¾mßžá ¾áNßȽÁ[ûß"ž Þß÷MÜ)NÈû=¼,þáŽ<Î/®ÍÝýœ²=¸.~ãAMâüàÒËã &á6äŠÝÏ­Ý·bä)Œä«äK+ä2NäÄ€k}&[ú^µÔå?5?øMåÇÌä­ã 
ªå:Wl¶ÍË›=zm^G¯+å´læU^j%îá[ex…E¶3J¿n2çÈ'|¨¶àt©çAÎçCnâŵÒ, Wš0<õçdKÀÅÉ ¾èŒÿÎÝâmÖÞ¹¤›2—½Ôœ%ÝW–Icg‡ç§ÍÆÞÞ‘þ])[®ž‘æl%ѼþWÛSæŸ^Ä9îä%5>åÞÏh^ßj~%ϬY°¾ì0~ÜîçŒÕãÜ%ìÔ~À¡^×Ü ÅÎÜíôWìáÝ Hîånå÷LàÌ®îgþí=ê4ÎíðÞÑŽ~åîîï}ïámîôŽáÉžçþþïìnÇŒ¡èÞ^ðÄ~ðõ<³ÓÎðeœïíÞâÌ-ñlíúþ~ÁÝñ[ñ¿Þ“«ÞÍÈ•lï÷¾âPòò ò!ÏØ‰“úò ðxš²4ï±ÖÙ|Ë)?ëØ íºó Oñ9J‰DÿOð6ô8Ÿó*—øüòOÿ¤QòSoÍêlì3detZWwõæ Ï¾™Ùõ²`¯)G;ã[Ÿ¬túÅw7áÍÞç,úd½þWDÍÐ:[õ²ŒaýÜ0^›¼r%è½û»@³C+]Õ‘Óx»×ÑÎ÷¥Í¸c­1¦Ž¾YJ6;´'Ö i¿Î­þS˜/\xÿåŽÏ©¡Ô”ŸùH.SMÕ¤{ùÁÔù39\ožúi½ö½þÓ@=QS]}ãEù„.ú²'ùµ'ûæZ%zš¶¯ë í&Ÿ ÄPçnî»™O+£»Öpô[ƒ<÷×~غ¿û!ô müÏÀuÿhlï %‚à–ýüMœô5ó±ÿíþïÿÞ¾ôÔØo ÿä_þßþ ºÜþ0ÊI«½8³ù@(Ždhhš§ê¾*Ïtmßìx`éû-œÐÂ_º£rÉDv>„Ÿtl.‹ÖlNËír“ ^t:­zÏh8Ín£Ö 1¹ŒbÙïø¼~Ïïûÿ€‚7pn†‡[rs?fDˆŽN‘••…‹Œ&u–q„ž¡m˜Pš@œ¢‘b4…©®J¤=¦T¨¯†«3­¶»jO²³!“¼\¸0ºÅÊ1¾cÏÐÑÒÑÄ’ËiÇ”×ÛÚ ™ÓàÒÕãÜFÙ.Éæë‰;¥áðÐåŠìZèÌõù±Îñáó;ô5¹—B@vüüÅÿèíà(¬J\Pá¿Zq|“QbE‹Ó†é(d£ƒ$‹}I cÊ&3 |ik%Ëg"ä¤) b.žõlÞÜ)€(Pw"5'”%Q£ô졉‰aæROMA>­c˯:çAí™ôçÕmY-nÕ–IÛ oaúDvm³›â\¹çž¾Xø;°×Áv û‚ê«u#¥U¸6⾋3/>¸íå˘Ö|ñÈ‘MöWÃèÌ 7ë ãÙ¯íO¡óÐ.9·[êšwñÊÓ[ôç|qcöv;¶ó*Œ÷N÷[epáZÃM~ܰmèŠ:'^>¾|t¦+ ®>êÿºpí^ŒžW’~û]«Ï2¿Cÿ#õM°Þ}hä|*2> ŠbàEl58A€ (¡‚ƒà…q,X‡X¹‡×†R… 2‘aHû•è!G)^"âP-^hb;1ª6£S5Jxã(æhÄŠyE(d‡eÑuä!D¶dä‘?:ä’ (øËˆ=6eSRYC“Ã=)ä–xé˜8q%Èšl¶é曼x’™íYÙ–*@¡çž|öé矀î)F „j¨ŸJi碌6êèxí¨• ‡Vj韃^ªé¦‰–FeNhfG馤ši©¨Ú© c]ª¤jéÓê+]F¥œ\ÂJ™ÿ¬$ÕÐ’é…:«*½~õ)®eÚ‰]šù ›Š¯ ‹,Eº²ÆkGÐŽ$m’V«_³Å–t,·z{àµe;Û¶œf.„õ8렱캫ì²ò’oˆâÖ«Þ»¢;‘ºÊ9­ ì½·’{"À,‚‹-Ã;Œcï`·¯pQã¶ûï½+ìÐÇw,ÄENUp´p…«"ΦŒ±Åäœ,RwÒˆÍËÚÆ9gŽ&Wá¨;Ͳ“’aÞlw39n…¥Í]2{Úºëâ‘§[埫ý2¤¼ÿý]Ñ¥nÐ7x’CZaÝqθä“Ëža¸=³5¡b\ñÀ/|X‡Q?’à2-öàWúìeøÐ_衇ÿ¨·ým¯a"K"cEÂ&žì„ ‚á²DUE¶äˆP¼a s¨C+~‹.²˜k”HÆ2q`¬XWÀF*v±„l@™ÖöÈGGŒ,c´£ñ˜†- UˆL¤"-%Æ-Ò-râIÉJZrŽu¨ã®âõÈ)á’  %§€¨Ikq’†tºE)¿ÅŽ7Â1••ba‰!ZBB–‚¤ZwÉË@TÉ–ªœâ&-ó©_³ ¸¦kŠÙ‹crl•çSŽÈD-g"šðZ&}cÍø`3`Ò|áíÓMl|3bÄÜ&uÊ9•s¶L›a4 ;½)LS¦3žJšg’iOxÊqUú´?YéÏÿi޳[5†;YWPqªqN µÇBÃtÏÊ3¢™(³*jЇÊ£üÑ(9êPL2¤©'A…†On¢ô+"•Úϳ…z´*/uKL³a¦“E9h¹rJŸ†=´^¿ç9ã‰îƒÔDQ‹ªÒh’©n«S;ç·è77ÔTÏaÔ«®àn£{W7'<¾i0ª0äXjÕ†b•|S{ܺê¸üH¨+L«šM–r-o£Ã«Þˆˆ·£¥ª&ý`ÁHe¶£‘…ÑdKSV»f°Û¬\ N³¦1³mÔ:{Ù’Tµ¤$-:={Ú×Â–Ž²}gkQZ Ýÿy¹eèn?V{ý¶‘øâ)à*שҵ°lé:ÛÊös¸µ½(u3\ŠÒÖF½•Òv«ëHåÞ‡¹ã•¢uW]‹æ3½–am{1k[ø¦¯»õ`/÷Ëß<„÷pö]#~GŠQøÀªÒn€Ù·ÞºjÁŽpqG¶`"ÈwÁÍ%êsK[a©v˜ª-3œÓ ϶Ã$~©‰u[á£tÅÂmñ‡AÜ`èbxÆ ./Ž] R{Å8¾o9 ä “ãÂæ1F}œß"qtò“Éâ +9¢L&°”§¼´÷÷Ë`3ÖÄLæ2›Y¤Ñ°š×Ìæ6»ùÍÞbšáLç:ÛùÎ[xV³œ¡‘ç>ûùÏ€tŸöLèBúЈN´¢ÍèF;úÑŽ´¤'MéJ[úҘδ¦7ÍéN{úÓ µ¨GMêR›úÔ¨NµªWÍêV»úհ޵¬gMëZÛúÖ I;tendra-doc-4.1.2.orig/doc/images/tdf.gif100644 1750 1750 2362 6466607532 17371 0ustar brooniebroonieGIF89aÄyÂÿÿÌÌÌÿÿÿÿ!ù,ÄyÿºÜþ0ÊI«½8ëÍ»ÿ`(Ždižhª®lë¾p,Ïtmßøì|ïÿÀ ðH_1Él*AtJ­Z¯Ø,5Px¿à°xL.ƒ¹¨vÍnKѵ{NwÍø¼Þ wÈë€k}†Vƒ {ŒgE‡‘ˆ!…’‡‰ ‹Ž›z˜ •–ž ¡u£š{;©œb¨¦’£¥m<¶n¨Œ;|®¯is;ˆÅ‘´ºZÇÇ—wy¾_ÀÁ|» Î†ÉËXÚÞ‚ÐxÒ°aÔÑ×l=Ƴ”¢áÄãfåçöޱ·¹ìûÚ·ïýs—F5­x°ê¥ŽYgÅ"ªÈ,à©x»æ•©wæÿ^£|‚¦4{(§_F(ðdåÒ¸1•K ‡ úñ†ä¾št¸UÀX"ÁèZXXí\Ão[$Þ™Ó¢@\?ýjMf¨ý”â¢øÍ©ÊS,©â;úUʲÊ mÚ³nÍ][ d\€pï®SK7U½âò¾Ê·/=²ƒ'Aq˜ɒƇ<ŽLùI9tdö£sóÎÎ4@ÃíùséL¥1Ÿ.¡Ú3i­W›ˆã5 Û²Iáž]{wnݵoøþ½vŒáŒo¡œyèå6šÃ–!ú ä¤Fc·NiûÎëÜS{—0~IøÕÕ)±(ž5{Ô)Ò·ÏAÛ7šûP¼ŸÎp.ÿª÷É~üÅ`òu@‚¢‡w 6èZe:&á|AFÎ…f¨áF^èá‡c¢ #’hÔ‰¦¨â4,¶H õ0ÍÄÈ߈šHÓã7þ’£Žíñ(d+G&”¤‹&y›A縤‹U9É••ã#lY –Àh9e—^f”A*ùQ™ÖyIªÂæ—pù‹wRY圹é©b“|Bñ¢œö9è…¢w¨_‰žæ'‰€6ªA…”F*i7ÈAhé¥ilWÞ¦œf2{†š©ý™ßmÓ©JB©ÙÁª•°NꜫÔZгv¸)¨½68ë°Áž§k3˦²ÿÈ>‡ë€ÑEW¬xÁ gê´úчíhŠl»à'àZ.‘÷y‹Ú½}ë$Ž2{‚»¿Y–™¹¢ÂÛ_¥”M†ocúîë„W‰)ÖVÀ„ýEpOl‡Á / ÊZOÄÔ~0£4MN—ܳÃÉMÒVG_ý2×]‹ÔvÑ=‹¬óÖKÙ¤TÛ<«5å€k5Úúmá8ËdÇÍ7Ø­ól3ä¤/mzÚú¨þø[o{]:à0[½t!vk>„å½;!@3oøÐXguøHͯíõÉM;$Ëä[ÌøÈ9Ïô½ßÛýñøj¯þ|®á³®Hüâ>ø«°þã_1ÿéë^bä/Èô«€I8 ð¬:ðŒ 'HÁ Zð‚Ì 7ÈÁzðƒ ¡G8Á;tendra-doc-4.1.2.orig/doc/images/tdf_link.gif100644 1750 1750 7116 6466607532 20410 0ustar brooniebroonieGIF89a…¿Âÿÿÿÿÿ³³³!ù,…¿ÿºÜþ0ÊI«½8ëÍ»ÿ`(Ždižhª®lë¾p,Ïtmßx®ï|ïÿÀ 
pH,ȤrÉl:ŸÐ¨tJ­Z]¬vËíz¿à°xL.›Ïè´zÍn»ßð8ºØïø¼~Ïïûÿ€‚ƒ„…†‡ˆ‰Š‹ŒŽƒt“”•–—˜™š›œ‚‘u¢£¤¥¦§¨˜¬­¡©±²³´µ¶‰«­¬¯·½¾¿ÀÁªº»°ÂÈÉÊËÁ¹º¼ÌÑÒÓԚήÆÕÚÛÜÝ×ÅÇÞäåæÈàÐçrîïðhìíÄêÙóœãø¦úû”éëü]ê'^AKï¬Dpa¦U'%ñ_EQ/2šH¡ÿ¡ÆB?:Ê(Ç !KJ©×€–‡NJ` sÍš„Hâ $3ÂÍv~ÎÊ¢L'P?=!ݹ¦ü^]Y/àT¤‰þƒZÊèU=IpmÙtKP¢¡´Ü1Ëå¬YhÓŽër.,0xØÖ)+õ+Ÿ°ƪ\ªõ®V»kÓÖ•»W.âlj/6,ùpaÄ|ýöÜ@pIÂoíŽ\øXé¼§Qלx2鯇söÕœ‡3Ï"AÛ½ºµoËj!¯Üuo·1gÓ^[UáòÚ†b»e,YõoÇ©¯×wÐïÑ•?·½÷DzÁã¦.õ[ºÁ§—~ß6¾^éžÄ/'¯À¼Fÿÿ@yµ_s=– éGX‘ƒ8 ¸ !H攡†N%gaP¢ôáZ´õ‰H¡¸™Š+†ö‹<)¨ƒ:D“tá˜â…/à£!éø"ùHãˆà­tàfKþÕ¤M?þXi%\ âLHbÉ“ML>y¥—PF d•c’™—lZc‚,᧦uiÊIPfž9¤˜Ç‘‰Œ~‰ä`5¤œIzVžˆ&Ú§ŸtÆØæ3ÎYHhšŽÎyeŽ˜$Ÿ†*釂Y"r;¶—j|=Àêž¯ÖÆX–¤néS—BÅÚb‰7éº+O"èW¥’j•… ˆÿÇ&Ûlx“bS ©ÎVû³Ï¢ì ÛY«›•™­¶~`{H· »,´ßR:­±Ö’»‡¹ˆ¨»®¶ön6ìUÅZ:.·èÒ[ï¹ÿRm8š†«j0Œ§Ãn@|†ÄcyƾSõûa=ëB@Ç ·òñ?†lr=#ŸŒ²ÊĤÌ2ÆGiláˬ¸L3É>Þ|²Í4ól²Ï&lÏ»]r[o¾Š }mÁEÿe«RMÇ p§•(]®Õy`Ý´ÌQ÷¡õæ~mt"bÛQv»Ò&ÜuºSã!0#g»]vÜp>-ÖÚ‚hý6Õ“Ð 6ÓxsýY»I«ÇÜúm¯W5ݧó <çɱ’»»jG;ØÂß»HófÿžôÆë^}M½w¿ýéÜ›-õŸàx|yÉSrù󅮸y°y{/[ûL½ß_ü}KßÞudá/oë‹ÿ"ä¿°vÙ#H`a@ÜDÐt߃كè¹m[bS vê ±áT‡.Ñ‹¦ø ¿ü,|| Ù”U6´‡a^:Í|¸ã,Ùm Ö yÃA0kTSpp¨¨ôÌj>ê¹I ©2D²‘)ç ØÿMc.’Æ„'œË7×6j0" ;OÒìu¶âG;;LMl3¶:*ðŒ6º¢õÎe¹è¸G„sŒ¢ g%šõÈ«ŒTÄãBÒø>bNy´ÄõUÅÁè‘|É"Ýõ" I;~£xŒ˜?Oö“Dà§—¶Qb‘'!Üкr%ËeýÄU« õ\¹ÇùêDaŠ\ô"bîÆ˜…B¦¬”ÉœJ~æ’¼K•¨d…)á¬PQ×<‹§¤™©HeÓš9eÿDÉ%R:©š‡BÓ4ׄNF©³›Óä:ÅÉ@rÞÊœ`U7 •Ï~²³LÛ<'¦úÁO}Â…ž0´'ÔðIM}ÚiOu"h«Zÿ?‡Jt7þ¬ B‰¨Ð»1tSõæ?±9ÒG­…Q!%i$‘é‹ä4îd.…™+;7«Y~IëÄœKŽ8ëÙÎhå/ ïŒgTr"ÿËÎoŒå<觵ф.4ôLè5†9‹î\-•Ä¡NŸ—žþR¨QA˃ºC¨–H¦££d&U>)æ0_f¾&Ö­^&­U‘k`âÚÖ°+°3XWÊJd.ܤ)²WDQˆÂ“×è\¶³•]M^7»¢<íSŽ¤Ñ›ª®ÇðcˆA5%OoŸÔSH«)FÆ­îŒÒtܪЩ@³MnŸÚû~þ†»×ÉÒgOIÞûî`Àwší‹œÿ¾¶·ûMƒƒÊ?ù6jD÷mMx>Lá*xC'nQŠ?tÒ.Òk~ÛæŽ^l-l‘ó‰€(¯ J}ªq¼â+½yXñ=óy·”Üýÿ†¹ËqßÛçïÆøMkŽS£;éL_”Ò3Unz·ûåª÷ÿžòÕn¿:ªÌvX»žB"ªfjÔ µUµ7õëi÷úÎYìÑ>¥«xçUN澕¼gÕï©Ø5ÙÅU½ÿcÕ‹t¹RÏzÙ×þð¨×8í]o{Þöp-}OÆÍÛ÷+'±è„ïûã;ŸÒÄ_kïýù>_óYO¾—?ýà3?öž¯þî©ÿ}ï_{ȯ»l7ÙýâŸûÐ/¿ûÑ~ùKÿý–O?ÿv +Iñ3¾ùræöÇ}ø×y×—Ù§~,Ç~hzíw¿7€ãçdø€Ö‡o˜÷àw€Hñge Xú74ãÓ9µ•‚lÐ*Ø‚òÀ‚.ƒb°xsÆ‚T5áE—rp&\68f@„æ !ì£}ᆃT5„÷ƒSUmÞ`„f¤€Ê·ØgZèRH¸uJ¸ƒLo\Ø…Ïö9f؃ôV„c…Q؆n¨hDWƒHw`p¸†eh…!…wD…Û§‡b˜†YˆV–†tØ„Nˆ…Hˆv8‡¹‡oW¨i‹¨ˆþÀ‡t·ÆF‰{x‡gˆ†føM\èˆ_˜zaÿ¸‰‚8‰’X‰†øˆ:¸Mõwލ‰#$?!xcňDG‹{'–øI¬XгÈo¹Â‹œŠ1qÝv*.ñ‰bɘUs(Š~˜„¸xl‡2Œ±˜Šwsc·ŒÕƒ×˜Dª#˜hw²xvÁÆC×ôюƘÅx#=„B`wk_THqÄ>ßhP$Gø!LíèD{Á&ß6ŠêÈC&H&¤‹±?çøTÈáÇäÝ‘BéBû¨íA‘Éd‘aÔµ¸Š ‰ÀÒ )w!™?¤‘Ú¸*ó“*©,™‘ ´‘(é‘vòD#Ô-9*ËX’­Ø:ÙC Řa”¶èR)w9ÿi”üx‘RIÙ”Qù” É“ÆA´”Ó†8i)YHË”’ÙsBd•ºV”ÓQ–y–;— aI2I–îa–P9w½(DB Œ³6Hù*ÙEì˜+ñÈ+€äŽÄ4˜ÄÑEUù’Å”˜i—bI™ƒÙ•Äó‹aù „q˜Ó`˜Ì( IŽ&˜]‰ï¨Q§È 6é’§ ­é-^YŽë§–œÉ«©dšºé!ù•¤¸™o˜›…È›ªHšo‚ ‘¨œ¨ÈœÇù›´¹€¶é‰Ó9œÂéA$ œ9ˆ¯™ˆÕ)‡ÐYšü×Þ霪hœ…ˆœábžçùІžžÉiËIŸÍiŸÏÿ9›â™‰äɆÄi‡ð9{)UÚi’÷éGŠ ù™™}yïéžØ B¨ž»ƒ 2x¡`º¡[PŒøÈ¡2˜ÑY…Ž^&z¢*Ã(º¢,J *Ú¢0z¢z‚/£6J]5z£:z39º£>4ò¹žx3¤¢™Dz¤b¤Hº¤×µŸæÈ¤PZ¥T*¥#ú‡Uš¥A¥¤ZÚ¥<Ö ^¦¦bZ¦zˆfš¦0Æ¥jÚ¦Ú †n§²@# Z§vz§xš§zƒ Ä?ú§€¨‚:¨„Z¨?º†š¨Šº¨ŒÚ¨ŽŠ¢ˆú¨’:©”Z©–º£W©šº©œÚ©žú© ª¢:ª¤Zª¦zª¨šªªºª¬Ú© ;tendra-doc-4.1.2.orig/doc/images/tdf_scheme.gif100644 1750 1750 15675 6466607531 20747 0ustar brooniebroonieGIF89amÈÂÿÿÿÿÿ³³³!ù,mÈÿºÜþ0ÊI«½8ëÍ»ÿ`(Ždižhª®lë¾p,Ïtmßx®ï|ïÿÀ pH,ȤrÉl:Ÿ€tJ­Z¯Ø¬vËíz¿à°xL.›Ïè´Y¸ßð¸|N¯Ûïø¼~Ïïë~‚ƒ„…†‡qlˆŒŽŒ€“”•–wŠm—œž|’Ÿ¢£¤u¨›¥¬­¡®±²§©«³¹ºw°»¾¿˜¨ªÀž½ÆÉ¿µÃ·ÊÏ­ÈÐÓ­ÌÄÔØœÒÙÜœÖÎÝáŽÛâå߸æë~äìïèêðõuîöùvòôúÿøþ |ÃO‚¿õ"üW0ÂÁ…ìB´×ÂÉå$b|W1ÿÊF{?šëèà¢Èl!O†#ÙÀä>50cÊÄI%B– \š²YHçžq.ð9‡(P9FyMšÉ´©DB$…3õ¨f†Rî©j•+Ra×âY',ëÒ±~¼& N,Z>X iÕ£–gÜAQx­kóî d|>é—¬­txßÂ-+7‘`CƒEN˱ È'·s|óFÍ‹ÏK¬˜.cLR®¦–2`õÕ›ª~]´´iT=+¾l;hªW _¼xìãzÆøûv3Ën{ï;M'µêà¬W!~=vuéÁpÊ»ütñ¦´К q÷³¹×6_ºá绣Ó_‹>}ÚÿµößqÚy'Rûee¢õCZ‚T-¸YqPAȹ5hЃ &a€ØY‡}¸•nò&ŠuT-g‰fÄø$.BÔœo(n¨¢#±HP”øãÍå&‰X›Œ°([?fˆáx9:TåTÀI u¯¹&'2©ä˜xöYgœv.ùä)]ZäYþ%yÝ¢—œ‰ZdR†‡_Šúµ8à€‹_|NŠ'…°Ùݦvvš¦¡zJ€¡¬¶êê«°Æøªinjkšsvúi œšùg¨¶ÆÉ§£‘ž7©Ž•ÖÚk¦ÿ¸ø`Ÿ¤ «ì§Ì.Ë(°Ñ’Ê%«_ò¸“ž—*‹kªžŠÊ«§®¡[뮼&«— jèæŽŠv·³y’iîvbV믴øšj­ðâaä”ßâ[=š*¸¸ò '£Ôªïš×'¯Eo’‹*µ‚Òúk¶ ‡,r"ž¬òÀÙí9ÕI|æÌ · ±½×òIñ½+gbÛPI¯¥ÏÒ&¢!+mҢʙtÑ 
?¼ÚÏ/7ù¶4#8\º·í+Ö=[lñWýÙt›-î,Ž>š2^Tçq0’Y^:g’^û©ëÝûŠü' /É4Ù+uqXiÅìKÛaÞÊfÙÌæ'ŒSSìÙÿ{¤6¶lW³ç'B>!˜”kIyÜn`~ì¼É&X¬Ùˆzò67—k¼zÚ­ï÷ú=±wXûám%N¢‡¢ƒØ»…¿Žx¦…ºá¤“˜ü…ËÃ5<ñ… "ùNÃOaæ%u¬ûó°Óx½÷%‚ß’øôíNÇöWKÿ¼êÕou>ù¼›ßýü¶×O×ýÅÅñ „¾« MsPrŠøü½o€®ãð 3aXð‚Ì 7ÈA 2£ƒ ¡7hýÉ/€@ë_ð1ºPƒ|¡ g˜½È-…8Ìá$¨¼‚†@ a ƒHD–ELb Ä&^°jT_NôѼYÀÏNt"ÿ³èD(bÏ#ÈëÀÅ nqU¨8cèE¡QDŒq‘2¢B Â£Š²ã4 ¿ÍqæˆÔó!;ð =Bƒóa|ù½Û!°„t…!ŸH­E¯&Ì$áÇ‚MÌc3Úž*V*¢ÐOëˆd4D2GÎxGSïqÒ¨„#”íá”vä';)#,ñ d´¬%…>Æ)f’R›\È$)é2àlÍ=6‹ØÀþ³ž`fì˜ÈJ&B–©ŒVÞ)U´Y¯HÙ´g¶ñ€áˇ*YÁÍn¶T•À¥&ï¸ËN¾¨òD&=•y’^š#ŸÙÜç6o¨Ã‚æÐxØd6ÂÍuêÒ˜=Ìå ë ÿ³î!4¢ó|è@Mè;R¢©¤¨Õö‡B8T¡e(ÇWÒ7œw HCXÃ&¦)•éJwºÙÑÈ£c9JŠ™ú±' £  øET†‘Ÿ… 褪–©B¨ésd:ßÕKRgÃbà–«²:mO«é^úÈœd§¨‘æ,Åæ8fõì_}SÓ9¥8*Š´BP›¸Ð%N,‘Œ”â´ªíöªÕõ©ó¯# ¬Ór5ÃÖ‹œ$Û™^åVÇŒN´«ÞJ ¦ƹ€©³˜¥bÉX¡Šƒ¨£0jZþ¸;Ými}ƒš(Ó%ÊË`5Š")¦bØŠB¶zRéÎñÛ¦ÿ‚´QÂûh+ê ‰ubµ’j­Sï)]æA2¥ÃngÄ»XÎÞ´­¢%؈©´°Y˶-¨€˜BSí>Wªæ4~IõaÓ™õ*›ëÆñô—NÿÕ”ž°óJjþQ¥QpKS×Y}Áþµe×F[M†}©• /çrÁÔ_¦·å”Únqö,Ê~ØÀ!Ž’qMA`t:Æ<}±"ÇÛ“—÷šµptuác|Âu¼û•Þò%¨ÿ†¸Æ|•Š_§;Ò2ÝõT‡½×¸`$(·¯¥2`ËdW]ý+É“}ò„ÀÖ­Æô­æÏð6纬Q^víJt|T+ù ¬©<ºæ6ßøÿÍú@.”;ÖÝ:z½Í+±…ËÕ‡ö„j•ƒ¡)è|(Z™NáyÝìÖDY:’޲^îØ` 8‚…®p@‡zW_º£±õ¡S©\8¿š¥¡6©¬Q:’^›zÑñMvSò¼]ÚÛÓ–X#‡(í Ú׳¯}6H¢]mR»Û3¼¶i-bj¼Sà~á·ÓíBqϺØåÞcf2lœò:Þ‡œ7<6 fVã›’ú~¿¥L‘"k#à©7zyÍçN‡cà«.xÃK-ˆïEÙ— &™Co]sÚ»EĹ—Ññ ¿»»¾ÛøD ®ò 6{r˜fäBX.r…“Úz$ ¶y9¾o›ïç1ÿoÌgJþQl)ç2¿ Ñ]~_º#_ÜÒ³ªgî=ý/_‡ÅwŒj®5ëÿôùÇŽkyåSnÕûˆ<¯kìtôx¿‡Üö5—çûÜA~uíÁ½âz'xîÚçvQã=ár<Ýe§ŸRUÅK«uV“>šbëCsY• ^þF«Y꙼ЯõÀG\?Ž?ñÅ;aÒÖ¸”JÚkt„}’°¼õÓˆ—<¸¨á6©–Uà /ì¡÷<ñ§²ÃÌ4Íj^VlZÎkã¦ú+¹ïëÆÏ;ò˳¶¢ÉuP›Ïr™ùU¹›mؚ؋=Cfob²›v°(^›øçOý¸ô?ÆzÿöorbÛï¬~óI”å5gögª¥§¥^ò{£‡vÇ×öö‚£$º¥{:JŒ~¼÷zI†[¿—~ h#ì7i{Gv³õÉU ×÷vûWzÛÇuóo³¥‚ÄçR÷p¦Ç}x‘z1æP+hx¨}¸p(Wwf=AÈE7nG×w6Ô‚a÷‚}?X|¤…CxsXvê7eB¸„'ׄ\‚37‚ª¦ƒcu7Ø ˜7xæQ…6…q—…?·…FØ…aö…LW{N—†úw…s†þW„ †‡•¦„{È„S’qŒ˜5HarxRHìæBëV‰!dˆdž9h˜8B—ø‰FD†Jÿ÷€‚} ŠB¤Š/¤‰‡ƒ“¸‰¦(‚³Hu/× –G Q—cµ¨v·H;§Qˆh‹M·gbVyføeŠ÷г'{½H{ŠøZÁøYzHŒ|hŒ–F‹Õè‹éð.ÞøàŽâ8ŽäXŽæxŽè˜Žê¸Žì#$¸ŒÂ¨S‘‹t8vÔhk•ç`2(I˜‡ø’ˆ‰†C5äÆƒò‹ÌøŒT¤mÜІ²¸ á(ч(6B‘Ø‘ )‘¹…d‘9Œù’ü¨ñèki’«„’÷ˆ‘+™‹xŠDøTÙ8s29“9ˆ6ù7‘“~ç’!uŒùŒí÷’*y='Y%ˆ”Çÿ¦”!)”= “çЈVYE“Z8”79f)QZY‡\é“‘å„6Å”ðè”ÐgÀænJ–/vc¿™¹×x cÊ€që¢ãD}…%{3Z¹R½Lz°ë´{ÿ[-Â×|¢{¤™"}ÓWŸ6š]ž{C²cgÚ+žhH½h•£aåzŠs¾„#¶+¬=^†—ö;ª/ºÀ6j~ü »+c¤ùW¶óX‘Aë·Dc3Û} Lóžå[N#L8Q[›@ë¶- _+&VÖ·5 %ã½¾éλ‹R§Áy›Àß9²3Hbø¼LËÄÛ[¸¾a]ËÉ6S¬Ä¼¨º•[¼—»ƒßª²/ÒÅ„qÀó)¼dû„k¸‘bËÁV{®ehŬ©Š‘ÄilÇk|¥m<:) Ç+ÌÆ|‹Â`«Â<̵TˆÆÈö±Ž³ˆK¸kÚu¼«°y²>»º’š´œ•—ì²™ÿÆHÛº5›¾±ZÊ©ÉÝÊÉ…ìÆ§ µt¼ÁnóÈ´ÌÈȺȅ*ɯÄËòÚ·‰ŠÊ¤©Ê€º¾, ¡¯9¬È”̪ºÌBÇLÉì®Ë|¿°\¿M¼˜”J·­|»¢Ìº¡*±xüòù‹¶Ê°ÆÛÍüÍ›Œ¸ú¶ÂüµÄ|ª“ªžh«Î–<ÏÄZ϶™¥ø\ªú¬·ü «ŸÌS2žÒLTœÈ¨ÚŒÜüºÌ£¼]6•IJLÊÖìÐʼn¼=½&­‡ í ¬ÌÌ¿úÐbâ¥'úK4Ag{ÃÀø½ª1¯b…Ï Ïð¬¼è<Æ)=.ò[º¥T7‡ÅÀx%¿&Q ßÇaÿ5Ó„¤`µ/#L¿¦\ÐEìÓ LÄírfy£Õ2̃qÒ‘¬µ\A~F¢6½Ï¤¶ÒÏÁ\2Ø4ÞÉ.ýâhZ~öQÑJÍ=º›¢ê‘¢Î¼Ç ÅmûÊ&ÊfÕhªœßWgƒµ¶¬µVþdÑÐ  »,§[º3m[¡yÏU;CÌ{áÓ@ì{&À',Ïì¼Ê<ê½iÖ€c¥û;8öŒÒ}Ó:ÊöJØ…MÕ#ÕÌ™ËbÝ«¼M†ý¿’‘Ûà¼ÛÀíʾݵÎ<ÜÄÝÜ€üܸÝMݼ©Ý¡ÜÖÐÍ­Åܰßz §ÖåìœÌºÓ,üÏ6-ݽªÞ„ÏÞL¼ºÿÞâŒÅy¬Üí̦´üÈ/ëÙלʫÞý†ÿmÛ× Þô\ÒísàÝÉù<à ^à¸|yÕ߬߬Î,eá•Í ®A ¾Ï ÞµŽD!žâNàN¨'> *ã#.Ð%^áÕ|±Bº×-nâ7Žã57È~ k/îã±¥ã<ÍãLä–@Ùr,aC®äžÀä5NÉOå;ÒSnÞ•áV¾äF¾ÞÞã]îÜÙ­©ð9 ê [ÐcO_ßPjÂ1çÞ¤G(àm> R^çRŠæý8Užç°ç;îÒHÚ[zNu“Øú¸fÌŽŒ8­ªó-Ë|θú¸>=ºø ¦ýÿ{«€.æáÜÃAâÉeŽÅÐh\ºÙÙ-*¾uçÃ\RèªMßË æ]ze/=€þ ×p-3¼ 7²~뙟Qšè‘Û Óï%Ò7MênZš>£¿îè+sg¨} Å>ßÎØ¡V³à.îzÖ¦¢ÐîÞú9+§µE“…œL棞>‘¾~ fÙ V¥C,½NÖj½»Î¸¾ßG¾"-;kÞÞ?Iº˜€™}¢¼Ææ¿á]ç»Ýßí³ÐyÕùNúê¥dºô@è_]¿ñ÷^ÅæöæAæIþÆà ó®ëä¢.è±œÍøMÎ Ÿî:?è6?¹8?óAo³–ÿnëZÎñ\~ô?Žå—Në?ÖNŸã@~óBžóU/ðIŸò†;õ ½õVõJÿõLOñb?Õ§>ÉKßòi_Ýkß±þ±`ïÅcŽòdâÓ,®n¯zsÿ±¦.á$Ž!¾÷ÚÚ÷lo¸_ñ,ŽEzãg¯ðþ¬;‹ßõç|߉/õÂ÷R›³“à³Îø>êï,íß½óW|‘µýàì ô‚Oãî¼Í¬/ß\ÏóªÏÖ~úØœú# ÆÞà¨LJºúãŒç®—ßÙÞÌ3ÎÝQ?¼¹ü»/ü·OÐ ú·nùê ß1ý•nû½ïù¿Ïü’ x¿ý˜¿î²`×€KìÙ£×Wc³ÿ‚õ&íûÑ_üs­Åêþ©Ã-Tÿú²Üþ06%(´Uê…·ÿ`( Ãvf¬é„²m¬Ä`ßxÉèO@€ pHDÉ4y­}¯~À¿¦¡Ë¸–¸Û¼å¾äŠŠËÊ´—Ék™ÍÏòiÒ©7×@ÙEæ” Ï2]‚ÿ@ÇhÛ%Ý¢!;¯X!z鹋ˆŠšŒiþþ !'ˆSÄ1&=±C¨°å9^Ùrƒˆæ“og>qÁÉe]¢Ÿù<Æ)ˆÌX]ª f ’L‚âxèæÈxÑbÛûÙ-§º¥:MM84åшYƒØ)ÛDç§µÿºNQ•tn 
´«ÞuYxºÉ¼+ÿ+Þú¦ñᲘ?7,ôuÈØÓ¹:Ï®á6oîYw‹UžMú`6à_OÏTÿüˆÞØÙ£gÿÀ»Ø-øóëßÏÿ òñ°YF1VÐ&¨à‚ 6èàƒF(á„Õ%àrÎW`SæPèᇠ†("ƒÔ]'`†æ}f_ #¶èâ‹/–¨ÖZ¢Fߊæ߉ôÙÈŽ9ÖH,AÉ¡‘Hö0ä‰õçä“PF©¡ @&ÉÝŽâfe$)V¹åsX–çÙ—é4â%™¿…I¤–hÎUa\m"©&“cÆ™½ØYäœÚÔ©§…yþ¹!Ÿ±)è{o¦pæ¡ -Ù§¡ŒÎ–¨m‘öèh¡ßUz¤#,ªéy—é§Jêt„Ššé©Àq*‚§¬¶•êH~ƪ$žKÀj«ÿW³^Uë®L¸‚®À¶*­£;©Å²Ö+ÉÊv% ]³©=Ûäg±L›b©¹b+í±¾FÛ×J¥[βኛ-¹Ð®ú‘MásˆJAã Yû±î&¤í¯÷iæ“)›\äŽI:á«Y¿¸rpi›{z>%¬Ý=.à«ð”Ô1;1cËû”3©[ Föæ´Ñ/þzðÈ0Á»-Ul$“s5Y´rRÇEL%Í^(¦ÅS AúÒÑÏ@qó3-µÑkšÜ,»[™ÍËõÐ^·6Ò¬ŽCÕeW‹5hŸª6œmëv¶ÖÊNÝ]Ýv¿ý(Þbë­ßp•,ÖÿÄs+J¸]†¿‡¸à,^øÝ‡œ8¥’—Õ¸[ ½væšSî¸åï:Y›ŸÐ¹È§K%:ç¤{NwëÆúiåî^Þ)íQ¥>›”À/¼¥×Ç{í&fŒÌ7ï|ƒ¬ï!B†ì³!q˜>m ï‡@ ¢‡HÄ"ñˆÏBø–‚ÈÄ&:ñ‰PŒ¢õf¤D)ZñŠXÌ¢·è!*‹` £ÇHFyñŒhL£×ÈÆ6ºñpŒ£çHÇ:ÚñŽxÌ£÷ÈÇ>úñ€ ¤ IÈBòˆL¤"ÉÈF:ò‘Œ¤$'IÉJZò’˜Ì¤&7ÉÉNzò“  ¥(GIÊRšò”nL;tendra-doc-4.1.2.orig/doc/images/token.gif100644 1750 1750 5006 6466607532 17732 0ustar brooniebroonieGIF89aˆÃÂÿÿÿÿÿ³³³!ù,ˆÃÿºÜþ0ÊI«½8ëÍ»ÿ`(Ždižhª®lë¾p,Ïtmßx®ï|ïÿÀ pH,ȤrÉl:ŸÐ¨tJ­Z¯Ø¬vËíz¿à°xLžÎè´zÍn»ßð¸|N¯Ûïø¼~Ïïûÿw„…†‡ˆ‰Š‹ŒŽ‘’“”•–—˜™š›œ’‚¡¢£¤¥¦§¨©ª‘Ÿƒ«°±²³´µ¶§­¯·¼½¾¿ÀÁ˜Å»ÂÉÊËÌÍ¤ÄÆÈÎÔÕÖ×ÍÐÇØÝÞßàªÚÜáåæçèãÒéíîïÓÎëñïö÷òjÖóõøÿ{9äO? *\(QB`/m\gq‚É•rnŽòyçèQ§S?ùœ,ÝÛsoç®–[àß•‡g>ÞãyŸæUöY|{‹x³ž¯¾þØËÿwÜÚ~©¼‡^¯á„‹Mbc | MÆŸt£Xaƒû¬¤tfW!ƒfÈaV¯È†K‚B-X‹…!îU⃆Á8ŠW©øU‹ÊX vË="Ž@!Kp xf!‹á i ’ÃÉ€‘6)É[yî0Éãz¡xg•S½'e–BšGcXNÓ›—\å׎•C޹և#î'S‹¥Ã¦˜ômÖrÎIÔƒ_^fzj¨8uҩᚸøÑ–›ˆÁÙ™n4ÊÞu9:Z‰X|9¹€`jöùVY¨éxé0–púhž¦• œ©m½hi6°žz‰Sªf ©pKÖúŒ¯Î%yVª†®ÚŸžcý˜×¬ÿž »i±º² Û’ÊÆÅ,+Îîm'ž*åŠU¹8háºíE»iã,×vEÀ»ð¾{¦°ç6› °¢¥ûäºiUo¼ó†›Fª\ªS0+§ªï§üéï¿òŽ;¥ŸŽ Y±™.R¼×ÂÞ6,ñ>Gì›–Ï:(Ÿ½˜ªs2¶Ç1ß–ùD!ÀÖeñ I¢Šð\±Î K« Á}›­@5ß,¨/‡" ô¢-æóÆB§h4ª%×ÛRÒ]‘5ÊA½³Ø=“ý³„÷¾ŒU®ÚÍ×nßšó×OgêØtç[uWËõ½‰wßɺ©á¢!Nœâèí7ÖmkýÒá+uNÿ&ºW·0?î9ص´Aèlî8áƒþ9@–—­¹Üžžúá´ÞÆä '>p0íûÞd.¾»æ™<<+pÛc±'ÏmŽéÀ¿É|Þhz1õo%ÏÊx§Ìté¿Û*ËÞwo½Êا}•O£Ÿ¾aà£}¤ª_hþù¾µïÛúvN=²þ7:ôÄ7­ù¯g|Ã?;™Í}o Tïê4|ô]õò·ÁŠ50~å ÷uš™íLv÷³_ í6Bu|°tŒsšOZç1Þ ®g(4૆R*“ô8/ìŸWŠGÂ×ÍÎ79DØ[GDƒÝH>:Žäp8?^¡ŠK™  ÿ¯HE ZíˆXü“j®®bkRçKfî°‡»8ž²;ï…¶/zm©t+«ÐRP³!éÚ …{ áJ4q“ d½:[ãÿæV…?©í ¢ûÜáî¶­×¥îlkb7M6³ÆUâH¨;]Þn×¼ÀLte{ÛõÆ6´ËÍ |ÓÛ[3!c·Ùµïsó«YürWwäïkÑ`õعåùaxÝûÝÅÈÀÓÅ-‚#ÜßýVxƒ´unYÚû¾÷ªáE°ˆC,Ý Ø-²½åÿ.ŒùÎKû-Q(àG˜Âãc×Úûaà¶×İ1†ßËc`ÅwÀ ¶/{,á SXÀP^ë’SÌ\ÿž'ÁÕòðšeçöº$‚Š{«e{yÈó.‡-›Ü3“È•‚-zFòò˜¾õ](rY¼e%‡ØÆêr’ù¬æé÷ÿÅ4˜ÍŒä1ƒ¸+‡%—ŽóFç/»ùÏ‹´„­]gzʾm³uqËàC¸È¾2yß\à5kUzÇý3yÚhR×:Ŷ½smëôø J¤%ly[öáf—l”a»z Åå²ñ3SÉ&æ¾ÚCí´Y X'®xOð4¨´·íV6{»Ö>.2~¹€õi›Üt}5ý†êio»_î`»Å]\x£TÞV¤·•=üc'ëÛÝþ¾©¹‰[ï{uÌ~¶'Ä„šðf/œÖþm+¦- ]3eûݧÀÝÈ<9ã›ÌxFrùö=ñÕ†¼JÉþÏ›µò—»ìâF´ySb®s(¹çËý9ÐÍó¡‡HèFÿ&Ò“Í¥3—N:*£.uKR½ê‚¼:Öá¨õ­‡±ë^÷yÑþ›NèìhO»Ú×Îö¶»ýíjÙ6B÷ºÛýîxÏ»Þ÷Î÷¾ûýàOøÂþðˆO¼â ÿÅ;þñ¼ä'OùÊ[þòo<æ7ÏùÎ{þó ½å5/úÒ›þô¨O½ê_†Ö»þõ°½ìgOûÚÛþö¸Ï½îwÏûÞûþ÷À¾ð‡OüâÿøÈO¾ò—Ïüæ;ÿùо;tendra-doc-4.1.2.orig/doc/images/token_args.gif100644 1750 1750 6103 6466607532 20745 0ustar brooniebroonieGIF89a„ÖÂÿÿÿÿÿ³³³!ù,„ÖÿºÜþ0ÊI«½8ëÍ»ÿ`(Ždižhª®lë¾p,Ïtmßx®ï|ïÿÀ pH,ȤrÉl:ŸÐè'@­Z¯Ø¬vËíz¿à°xL.›Ïè´zÍ&o‚¸|N¯Ûïø¼~Ïïûÿ€‚ƒ„…†‡ˆ‰oŠŽ‘’“”•–—~Œp˜žŸ ¡¢£”§œ¤¬­®¯°±…¦¨ª²·¸¹º»¥¨©«¼ÂÃÄż´§¶ÆËÌÍΖȿÁÏÕÖרwÑÊÙÝÞßÅÛÀàäåæ®âÓçëìí“éÔîóôõ{ðòöûüôøúú î_„€*tfÂ…±=ìÖðÁĈaYWÑÿÁÅŒ wQ¡óñYÇ%CªŒ%/e³“ \^’¹r¡>šá|qcµ±¦Ï8#ëà$sÁÐGAåýY¯åÒc:ǵjÉÔ'UsE–æ[K*,ŠÉAÎeWÖäÈÞ.ߪ̳JÍ– ´ì9$h–¢’.]®Í¢©®f]ÔiEœÓ–lvã·ºw»‡8¢Ü\]›>;”q³±-"åɼó¨ç†§^ýoïaÚÍrï÷ºïd޹Ž'ÿ]öYÑ=NoÏTÏN÷u…Ͼ¾>~jÿÏö^ ûáEŸ˜\5S^…ýæ—#‰y2 ]-È eñ¡„ÛŸä‡`)Vá„™dSr$ý—â‡ÐÄ• ‹ú™h”z+Zˆ‹-֘㠢Šh‰Õ†‘P9x˜Y!‚8ä^E8“”¡)9Úv$ç”i†s`ŒVª†%6*uŒSêxÌ—†)а• 'rRFÀx@œ‚ÂçyÒ<¸!‰uê—'ž{> ›¸HˆŸ‚wè·H£›É¨Õ|æI:i¢Y^ÚiŸš’Å©{DMªg™@y9j’> Jã«"©ª%Tn†i•±.‰[†~*a®ö­1ˆkè”J¤ÿf‚&~ºIଶ){ªsÌ­ÓŽY­›ÒAlŸ¨6wì¹Þ^gí"lÛ®uömû(ºâMµn&éâ[opñš+ÈPi5ç½}”p¾ã²ŠÎ¾€œ,¸ò-‚0ûû¢Å ãDlÁ÷z¥¾Ò6<1­xd2O]˜ò=+ÿÛq ¾r̰Ä3—É»"¨²šÚ謲<LñÈzT ïµ.ïh¦È<+J¦³Mã'ô=DWu¿,·$½¥8-sÔO/ ²ØÎMÝ3Ðÿj]²ÚY³5ÒÝ:¹uÑMY7ÕsK ±†,ƒ½¶ßJù¼ˆ° G²¨hÉ´eÑ0|·Þ‹É8à^·ŠwØÿn3ÜL IùÜñùuàêî}âå˜ONùãÅižpÍqç ÔÆvÏ^’“mŸÙ¡æý·ï¤¯ë»r;ïOJ1í=Ûþmäoªüس£^ýà‡ 
[binary GIF image data omitted: tendra-doc-4.1.2.orig/doc/images/top.gif, trad_scheme.gif, try.gif, virtualB.gif, virtualD.gif, warn.gif, work.gif]
tendra-doc-4.1.2.orig/doc/pl/pl1.html PL_TDF Definition

PL_TDF Definition

January 1998



1 - Introduction
2 - Notation
3 - The Language
4 - Example PL_TDF programs
5 - Use of the PL_TDF compiler

1 Introduction

PL_TDF is a language in the lineage of Wirth's PL360 and its later derivatives. The basic idea in PL360 was to give one an assembler in which one could express all of the order-code of the IBM 360 while still preserving the logical structure of the program using familiar programming constructs. If one had to produce a program at the code level, this approach was much preferable to writing "flat" assembly code using a traditional assembler, as anyone who has used both can testify.

In the TDF "machine" the problem is not lack of structure at its "assembly" level, but rather too much of it; one loses the sense of a TDF program because of its deeply nested structure. Also the naming conventions of TDF are designed to make them tractable to machine manipulation, rather than human reading and writing. However, the approach is basically the same. PL_TDF provides shorthand notations for the commonly occuring control structures and operations while still allowing one to use the standard TDF constructors which, in turn, may have shorthand notations for their parameters. The naming is always done by identifiers where the sort of the name is determined by its declaration, or by context.

The TDF derived from PL_TDF is guaranteed to be SORT correct; however, there is no SHAPE checking, so one can still make illegal TDF.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/pl/pl3.html100644 1750 1750 11173 6466607532 16677 0ustar brooniebroonie PL_TDF Definition

PL_TDF Definition

January 1998



2.1 - Syntax description
2.2 - Lexical Units
2.3 - Pre-processing

2 Notation


2.1. Syntax description

Words enclosed in angle brackets, < >, form non-terminal symbols. Other symbols and words stand for themselves as terminal symbols. An expansion of a non-terminal is indicated using ::= with its expansion given as a sequence (possibly empty) of terminals and non-terminals. For example:

	<Exp> ::= * <ident>
is a possible expansion of an EXP SORT. If the word for the non-terminal starts with a capital letter then it will be totally described by a set of such expansions; otherwise the expansion of the non-terminal will be given by other methods in the text.

The post-fix -Opt on a non-terminal is an abbreviation allowing an empty expansion. For example:

	<Access>-Opt
is equivalent to the use of another non-terminal <AccessOption> whose expansions are:

	<AccessOption> ::= 
	<AccessOption> ::= <Access>
The post-fix -List on a non-terminal is an abbreviation for lists of objects separated by the ,-symbol. For example:

	<Exp>-List
is equivalent to the use of another non-terminal <ExpList> whose expansions are:

	<ExpList> ::= <Exp>
	<ExpList> ::= <ExpList> , <Exp>
Both of these post-fix notations are also used with sequences of terminals and non-terminals within the angle brackets with the same kind of expansion. In these cases, the expansion within the angle brackets form an anonymous non-terminal.


2.2. Lexical Units

The terminal symbols ( ), [ ], and { } always occur as parenthetic pairs and never form part of other terminal symbols.

The terminal symbols , ; and : are similarly terminators for other terminal symbols.

White space is a terminator for other terminal symbols but is otherwise ignored except in strings.

All other terminal symbols are sequences of ASCII symbols not including the above. These are divided into seven classes: keywords, TDF constructors, operators, <integer_denotation>s, <floating_denotation>s, <string>s and <ident>s.

The keywords and operators are expressed directly in the syntax description. The TDF constructors are those given in the TDF specification which have first-class SORTs as parameters and results.

An <integer_denotation> allows one to express an integer in any base up to 16, with the default being 10.

	<integer_denotation> ::= <digit>
	<integer_denotation> ::= <integer_denotation> <digit>
	<integer_denotation> ::= <base> <integer_denotation>

	<base> ::= <integer_denotation> r
Examples are 31, 16r1f, 8r37, 2r11111 - all giving the same value.

A <floating_denotation> is an <integer_denotation> followed by the . symbol and a sequence of digits. The radix of the <floating_denotation> is given by the base of its component <integer_denotation>

A <string> is the same as a C string - any sequence of characters within " ". The same C conventions hold for \ within strings for single characters.

A <character> is a single string character within ` `. The same \ conventions hold.

An <ident> is any other sequence of characters. They will be used to form names for TAGs, TOKENs, AL_TAGs and LABELs.


2.3. Pre-processing

At the moment there is only one pre-processing directive. A line starting with #include will textually include the following file (named within string quotes), using the same path conventions as C.

Comments may be included in the text using the /* ... */ notation; this differs slightly from the C convention in that comments may be nested.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/pl/pl4.html100644 1750 1750 105744 6466607532 16730 0ustar brooniebroonie PL_TDF Definition

PL_TDF Definition

January 1998



3.1 - Program
3.1.1 - Tokdec
3.1.2 - Tokdef
3.1.3 - Tagdec
3.1.4 - Tagdef
3.1.5 - Altagdef
3.1.6 - Structdef
3.1.7 - Procdef
3.2 - First-class SORT expansions
3.2.1 - Access
3.2.2 - Al_tag
3.2.3 - Alignment
3.2.4 - Bitfield_variety
3.2.5 - Bool
3.2.6 - Error_treatment
3.2.7 - Exp
3.2.7.1 - ExpTerm
3.2.8 - Floating_variety
3.2.9 - Label
3.2.10 - Nat
3.2.11 - Ntest
3.2.12 - Rounding_mode
3.2.13 - Shape
3.2.14 - Signed_Nat
3.2.15 - String
3.2.16 - Tag
3.2.17 - Token
3.2.18 - Transfer_mode
3.2.19 - Variety
3.3 - Control structure and local declarations
3.3.1 - ConditionalExp and Assertion
3.3.2 - RepeatExp
3.3.3 - LabelledExp
3.3.4 - Local_Defn

3 The Language

The basic philosophy of PL_TDF is to provide the "glue" constructors of TDF automatically, while still allowing the programmer to use the significant constructors in their most general form. By "glue" constructors, I mean those like make_link, make_group etc. which are there to provide tedious, but vital, constructions concerned with linking and naming. The "significant" constructors really come in two groups, depending on their resulting SORTs. There are those SORTs like TOKDEC, whose SORTs are purely syntactic and can't be used as results of token applications or _cond constructions. On the other hand, the first-class SORTs, like EXP, can be used in those situations and generally have a much richer set of constructors. These first-class SORTs are precisely those which have SORTNAMEs. These SORTNAMEs appear in PL_TDF as expansions of <Sortname>:
	<Sortname> ::= ACCESS
	<Sortname> ::= AL_TAG
	<Sortname> ::= ALIGNMENT
	<Sortname> ::= BITFIELD_VARIETY
	<Sortname> ::= BOOL
	<Sortname> ::= ERROR_TREATMENT
	<Sortname> ::= EXP
	<Sortname> ::= FLOATING_VARIETY
	<Sortname> ::= LABEL
	<Sortname> ::= NAT
	<Sortname> ::= NTEST
	<Sortname> ::= ROUNDING_MODE
	<Sortname> ::= SHAPE
	<Sortname> ::= SIGNED_NAT
	<Sortname> ::= STRING
	<Sortname> ::= TAG
	<Sortname> ::= TRANSFER_MODE
	<Sortname> ::= VARIETY
All of the significant constructors are expanded by non-terminals with names related to their resulting SORT e.g. all EXPs are expanded by <Exp> and all TOKDECs are expanded by <Tokdec>. Any first-class SORT can be expanded by using the constructor names given in the TDF specification, provided that the parameter SORTs are also first-class. For example, the following are all valid expansions of <Exp> :
	make_top
	return(E)                  where E is an expansion of <Exp>
	goto(L)                    where L is an expansion of <Label>
	assign(E1, E2)             where E1 and E2 are expansions of <Exp>
Any such use of TDF constructors will be checked for the SORT-correctness of their parameters. I will denote such a constructor as an <exp_constructor>; similarly for all the other first-class sorts.

Any of the first-class sorts may also be expanded by a token application. Tokens in PL_TDF are given <ident> names by <Tokdef> or <Tokdec> which must occur before their use in applications. In applications, these names will be denoted by <exp_token>, <shape_token> etc. , depending on the result sort of their introduction.

The principle of "no use before declaration" also applies to <ident> names given to TAGs.


3.1. Program

The root expansion of a PL_TDF program is given by <Program>:
	<Program> ::= <ElementList> Keep ( <Item>-List-Opt )
	
	<ElementList> ::= <Element> ;
	<ElementList> ::= <Element> ; <ElementList>
	
	<Element> ::= <Tokdec>
	<Element> ::= <Tokdef>
	<Element> ::= <Tagdec>
	<Element> ::= <Tagdef>
	<Element> ::= <Altagdef>
	<Element> ::= <Structdef>
	<Element> ::= <Procdef>
	
	<Item> ::= <tag>
	<Item> ::= <token>
	<Item> ::= <altag>
A <Program> consists of a list of definitions and declarations giving meaning to various <ident>s, as TAGs, TOKENs and AL_TAGs. The <Item>-List-Opt indicates which of these names will be externally available via CAPSULE_LINKs; in addition any other names which are declared but not defined will also be linked externally.

A <Program> will produce a single TDF CAPSULE.

3.1.1. Tokdec

A <Tokdec> introduces an <ident> as a TOKEN:
	<Tokdec> ::= Tokdec <ident><Signature>: [ <TokDecPar>-List-Opt ] <ResultSort>
	
	<ResultSort> ::= <Sortname>
	<TokDecPar> ::= <Sortname>
	<TokDecPar> ::= TOKEN [ <TokDecPar>-List-Opt ] <ResultSort>
	<Signature> ::= <String>-Opt
This produces a TOKDEC in a tokdec UNIT of the CAPSULE. Further uses of the introduced <ident> will be treated as a <x-token> where x is given by the <ResultSort>.
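
For instance (a minimal sketch; the token name N is purely illustrative and mirrors the token defined in the Sieve example of section 4.1), a token taking one EXP parameter and delivering an EXP could be declared by:
	Tokdec N : [ EXP ] EXP;
Subsequent uses of N, such as N[* i], are then parsed as <exp_token> applications.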

3.1.2. Tokdef

A <Tokdef> defines an <ident> as a TOKEN; this <ident> may have previously been introduced by a <Tokdec>:
	<Tokdef> ::= Tokdef <ident><Signature> = <Tok_Defn>
	
	<Tok_Defn> ::= [ <TokDefPar>-List-Opt ] <ResultSort> <result_sort>
		<TokDefPar> ::= <ident> : <TokDecPar>
	<Signature> ::= <String>-Opt
This produces a TOKDEF in a tokdef UNIT of the CAPSULE. The expansion of <result_sort> depends on <ResultSort>, e.g. if <ResultSort> is EXP then <result_sort> ::= <Exp> and so on.

Each of the <ident>s in the <TokDefPar>s will be names for tokens whose scope is <result_sort>. A use of such a name within its scope will be expanded as a parameterless token application of the appropriate sort given by its <TokDecPar>. Note that this is still true if the <TokDecPar> is a TOKEN - if a <TokDefPar> is:

	x: TOKEN[ LABEL ]EXP
then x[L] is expanded as:
	exp_apply_token( token_apply_token(x, ()), L)
<Tok_defn> also occurs in an expansion of <Token>, as a parameter of a token application.

3.1.3. Tagdec

A <Tagdec> introduces an <ident> as a TAG:
	<Tagdec> ::= <DecType> <ident> <Signature> <Access>-Opt : <Shape>
	
	<DecType> ::= Vardec
	<DecType> ::= Iddec
	<DecType> ::= Commondec
	<Signature> ::= <String>-Opt
This produces a TAGDEC in a tagdec UNIT of the CAPSULE, using a make_id_tagdec for the Iddec option, a make_var_tagdec for the Vardec option and a common_tagdec for the Commondec option.

The <Shape>s in both <Tagdec>s and <Tagdef>s will produce SHAPE TOKENs in a tagdef UNIT; these may be applied in various shorthand operations on TAG <ident>s.
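
As a small illustration (the declaration of printf is taken from the examples of section 4; the name count is purely illustrative), tag declarations might read:
	Vardec count : Int;
	Iddec printf : proc;
The first introduces count as a variable TAG of SHAPE Int, the second introduces printf as an identity TAG of procedure SHAPE.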

3.1.4. Tagdef

A <Tagdef> defines an <ident> as a TAG. This <ident> may have previously been introduced by a <Tagdec>; if it has not, the < : <Shape> >-Opt below must not be empty and a TAGDEC will be produced for it.

	<Tagdef> ::= Var <ident><Signature> < : <Shape> >-Opt < = <Exp>>-Opt
Produces a make_var_tagdef.

	<Tagdef> ::= Common <ident> <Signature>< : <Shape> >-Opt < = <Exp> >-Opt
Produces a common_tagdef.

	<Tagdef> ::= Let <ident><Signature> < : <Shape> >-Opt = <Exp>
Produces a make_id_tagdef.
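
As a brief sketch (the first line is taken from the Sieve example of section 4.1; the name ten in the second is purely illustrative), outer-level tag definitions might read:
	Var n: nof(10000, Char);
	Let ten : Int = 10(Int);
Here n is a variable tag of the given SHAPE with no initialisation, and ten is an identity tag defined to be the integer 10.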

	<Tagdef> ::= String <ident> <Variety>-Opt =<string>
This is a shorthand for producing names which have the properties of C strings. The <Variety>-Opt gives the variety of the characters within the string, an empty option giving unsigned chars. The TDF produced is a make_var_tagdef initialised by a make_nof_int. This means that given a String definition:
	String format = "Result = %d\n"
the tag <ident>, format, could be used straightforwardly as the first parameter of printf - see
Section 4 (Example PL_TDF programs).

3.1.5. Altagdef

An <Altagdef> defines an <ident> as an AL_TAG:
	<Altagdef> ::= Al_tagdef <ident> = <Alignment>
This produces an AL_TAGDEF in an al_tagdef UNIT of the CAPSULE. The <ident> concerned can have been used previously as an expansion of <Alignment>.

3.1.6. Structdef

A <Structdef> defines a TOKEN for a structure SHAPE, together with two TOKENs for each field of the structure to allow easy access to the offsets and contents of the field:
	<Structdef> ::= Struct <Structname> ( <Field>-List )
	
	<Structname> ::= <ident>
	
	<Field> ::= <Fieldname> : <Shape>
	
	<Fieldname> ::= <ident>
This produces a TOKDEF in a tokdef UNIT defining <Structname> as a SHAPE token whose expansion is an EXP OFFSET(a1,a2) where the OFFSET is the size of the structure with standard TDF padding and offset addition of the component SHAPEs and sizes (note that this may not correspond precisely with C sizes).

Each <Fieldname> will produce two TOKENs. The first is named by <Fieldname> itself and is a [EXP]EXP which gives the value of the field of its structure parameter. The second is named by prefixing <Fieldname> by the .-symbol and is an [ ]EXP giving the OFFSET of the field from the start of the structure. Thus given:

	Struct Complex (re: Double, im: Double)
Complex is a TOKEN for a SHAPE defining two Doubles; re[E] and im[E] will extract the components of E where E is an EXP of shape Complex; .re and .im give EXP OFFSETs of the two fields from the start of the structure.

3.1.7. Procdef

A <Procdef> defines a TAG to be a procedure; it is simply an abbreviation of an Iddec <Tagdef>:
	<Procdef> ::= Proc <ident> = <Proc_Defn>
	
	<Proc_Defn> ::= <Simple_Proc>
	<Proc_Defn> ::= <General_Proc>
	
	<Simple_Proc> ::= <Shape> ( <TagShAcc>-List-Opt <VarIntro>-Opt ) <ClosedExp>
	
		<TagShAcc> ::= <Parametername> <Access>-Opt : <Shape>
	
		<Parametername> ::= <ident>
	
		<VarIntro> ::= Varpar <Varparname> : <Alignment>
	
		<Varparname> ::= <ident>

	<General_Proc> ::= General <Shape> ( <For_Callers>; <For_Callees>) <ProcProps>-Opt <ClosedExp>
	
		<For_Callers> ::= <TagShAcc>-List-Opt <...>-Opt
	
		<For_Callees> ::= <TagShAcc>-List-Opt <...>-Opt
	
		<ProcProps> ::= <untidy>-Opt <check_stack>-Opt
A <Procdef> produces a TAGDEF in a tagdef UNIT and, possibly, a TAGDEC in a tagdec UNIT.

A <Simple_Proc> produces a make_proc with the obvious operands. The scope of the tag names introduced by <Parametername> and <Varparname> is the <ClosedExp> (see section 3.3).

A <General_Proc> produces a make_general_proc with formal caller parameters given by <For_Callers> and the formal callee parameters given by <For_Callees>; in both cases the <...> option says that the procedure can be called with a variable number of parameters. The scope of the tag names is the same as for <Simple_Proc>.
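
A minimal sketch of a <Simple_Proc> (the name inc is purely illustrative; fuller procedures are given in section 4):
	Proc inc = Int (x: Int)
		{ return( (* x + 1(Int)) ) };
The result SHAPE is Int, the single caller parameter x is a variable TAG of SHAPE Int, and the body is a <ClosedExp> delivering the incremented value.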


3.2. First-class SORT expansions

All of the first-class sorts have similar expansions for native TDF constructions and for token applications. I shall take <Shape> as the paradigm sort and allow the reader to conjugate the following for the other sorts.

Those first-class sorts which include the _cond constructions denote them in the same way:

	<Shape> ::= SHAPE ? ( <Exp>, <Shape>, <Shape> )
This produces a shape_cond with the obvious parameters.

Each constructor for <Shape> with parameters which are first-class sorts can be expanded:

	<Shape> ::= <shape_constructor> < ( <constructor_param>-List ) >-Opt
Each <constructor_param> will be the first-class SORT expansion required by the <shape_constructor>, as in the TDF specification; e.g. the constructor pointer requires a <constructor_param> ::= <Alignment>.

Any <ident> which is declared to be a <shape_token> by a TOKDEF or TOKDEC can be expanded:

	<Shape> ::= <shape_token> < [ <token_param>-List ] >-Opt
This will produce a shape_apply_token with the appropriate parameters. Each <token_param> will be the first-class SORT expansion required by the SORT given by the <TokDecPar> of the TOKDEF or TOKDEC which introduced <shape_token>.

3.2.1. Access

	<Access> ::= ACCESS ? ( <Exp> , <Access> , <Access> )
	<Access> ::= <access_constructor> < ( <constructor_param>-List ) >-Opt
	<Access> ::= <access_token> < [ <token_param>-List ] >-Opt
There are no expansions of <Access> other than the standard ones.

3.2.2. Al_tag

	<Al_tag> ::= <al_tag_token> < [ <token_param>-List ] >-Opt
The standard token expansion.

	<Al_tag> ::= <ident>
Any <ident> found as an expansion of <Al_tag> will be declared as the name for an AL_TAG.

3.2.3. Alignment

	<Alignment> ::= ALIGNMENT ? ( <Exp> , <Alignment> , <Alignment> )
	<Alignment> ::= <alignment_constructor> < ( <constructor_param>-List ) >-Opt
	<Alignment> ::= <alignment_token> < [ <token_param>-List ] >-Opt
The standard expansions.

	<Alignment> ::= <Al_tag>
This results in an obtain_al_tag of the AL_TAG.

	<Alignment> ::= ( <Alignment>-List-Opt )
The <Alignment>s in the <Alignment>-List are united using unite_alignments. The empty option results in the top ALIGNMENT.

3.2.4. Bitfield_variety

	<Bitfield_variety> ::= BITFIELD_VARIETY ? ( <Exp> , <Bitfield_variety>, <Bitfield_variety>)
	<Bitfield_variety> ::= <bitfield_variety_constructor> < ( <constructor_param>-List ) >-Opt
	<Bitfield_variety> ::= <bitfield_variety__token> < [ <token_param>-List ] >-Opt
The standard expansions.

	<Bitfield_variety> ::= <BfSign>-Opt <Nat>
		<BfSign> ::= <Bool>
		<BfSign> ::= Signed
		<BfSign> ::= Unsigned
This expands to bfvar_bits. The empty default on the sign is Signed.

3.2.5. Bool

	<Bool> ::= BOOL ? ( <Exp> , <Bool>, <Bool>)
	<Bool> ::= <bool_constructor> < ( <constructor_param>-List ) >-Opt
	<Bool> ::= <bool_token> < [ <token_param>-List ] >-Opt
There are no expansions of <Bool> other than the standard ones.

3.2.6. Error_treatment

	<Error_treatment> ::= ERROR_TREATMENT ? 
	                                                   ( <Exp> , <Error_treatment>, <Error_treatment>)
	<Error_treatment> ::= <error_treatment_constructor> < ( <constructor_param>-List ) >-Opt
	<Error_treatment> ::= <error_treatment__token> < [ <token_param>-List ] >-Opt
The standard expansions.

	<Error_treatment> ::= <Label>
This gives an error_jump to the label.

	<Error_treatment> ::= [ <Error_code>-List]
		<Error_code> ::= overflow
		<Error_code> ::= nil_access
		<Error_code> ::= stack_overflow
Produces trap with the <Error_code>s as arguments.

3.2.7. Exp

	<Exp> ::= <ExpTerm>
	<Exp> ::= <ExpTerm> <BinaryOp> <ExpTerm>
The <BinaryOp>s include the arithmetic, offset, logical operators and assignment and are given in table 1. In this expansion, any error_treatments are taken to be wrap.

The names like *+. (i.e. add_to_ptr) do have a certain logic; the * indicates that the left operand must be a pointer expression and the . that the other is an offset.
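
A few lines taken from the examples of section 4 illustrate typical <BinaryOp> uses (the comments are explanatory only):
	i = (* i + 1(Int))			/* integer addition and assignment */
	n *+. (Sizeof(Char) .* ind)		/* offset scaling and addition of an offset to a pointer */
	re[l] F+ re[r]				/* floating-point addition */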

The further expansions of <Exp> are all <ExpTerm>s

3.2.7.1. ExpTerm

	<ExpTerm> ::= EXP ? ( <Exp> , <Exp>, <Exp>)
	<ExpTerm> ::= <exp_constructor> < ( <constructor_param>-List ) >-Opt
	<ExpTerm> ::= <exp_token> < [ <token_param>-List ] >-Opt
The standard expansions.

	<ExpTerm> ::= <ClosedExp>
For <ClosedExp>, see
section 3.3.

	<ExpTerm> ::= ( <Exp> )
	<ExpTerm> ::= - ( <Exp> )
The negate constructor.

	<ExpTerm> ::= Sizeof ( <Shape> )
This produces the EXP OFFSET for an index multiplier for arrays of <Shape>. It is the shape_offset of <Shape> padded up to its alignment.

	<ExpTerm> ::= <Tag>
This produces an obtain_tag.

	<ExpTerm> ::= * <ident>
The <ident> must have been declared as a variable TAG and the construction produces a contents operation with its declared SHAPE.

	<ExpTerm> ::= * ( <Shape> ) <ExpTerm>
This produces a contents operation with the given <Shape>.

	<ExpTerm> ::= <Assertion>
For <Assertion>, see section 3.3.1

	<ExpTerm> ::= Case <Exp> ( <RangeDest>-List )
		<RangeDest> ::= <Signed_Nat> < : <Signed_Nat> >-Opt -> <Label>
This produces a case operation.

	<ExpTerm> ::= Cons [ <Exp> ] ( < <Offset> : <Exp> >-List )
		<Offset> ::= <Exp>
This produces a make_compound with the [ <Exp> ] as the size and fields given by < <Offset> : <Exp> >-List.

	<ExpTerm> ::= [ <Variety> ] <ExpTerm>
This produces a change_variety with a wrap error_treatment.

	<ExpTerm> ::= <Signed_Nat> ( <Variety> )
This produces a make_int of the <Signed_Nat> with the given variety.

	<ExpTerm> ::= <floating_denotation> < E <Signed_Nat> >-Opt <Rounding_Mode>-Opt
	<ExpTerm> ::= - <floating_denotation> < E <Signed_Nat> >-Opt <Rounding_Mode>-Opt
Produces a make_floating.

	<ExpTerm> ::= <ProcVal> [ <Shape> ] ( <Exp>-List-Opt < Varpar <Exp> >-Opt)
	
	<ProcVal> ::= <Tag>
	<ProcVal> ::= ( <Exp> )
Produces an apply_proc with the given parameters returning the given <Shape>.

	<ExpTerm> ::= 		<ProcVal> [ <Shape> ]
				[ <Act_Callers>-Opt ; <Act_Callees>-Opt <; <Postlude>>-Opt ] 
				<ProcProps>-Opt
		<Act_Callers> ::= <<Exp> <: <ident>>-Opt>-List <...>-Opt
		<Act_Callees> ::= <Exp>-List <...>-Opt
	  	<Act_Callees> ::= Dynamic ( <Exp> , <Exp> ) <...>-Opt
	  	<Act_Callees> ::= Same
		<Postlude> ::= <Exp>
Produces an apply_general_proc with the actual caller parameters given by <Act_Callers> and the callee parameters given by <Act_Callees>; the <...> option indicates that the procedure is expecting a variable number of parameters. Any <ident>s introduced in <Act_Callers> are in scope in <Postlude>.

	<Exp> ::= <ProcVal> Tail_call [ <Act_Callees>-Opt ]
Produces a tail_call with the callee parameters given and same caller parameters as those of the calling procedure.

	<ExpTerm> ::= Proc <Proc_defn>
Produces a make_proc. For <Proc_defn>, see section 3.1.7

	<ExpTerm> ::= <String> ( <Variety> )
Produces a make_nof_int of the given variety.

	<ExpTerm> ::= # <String>
This produces a TDF fail_installer; this construction is useful for narrowing down SHAPE errors detected by the translator.

3.2.8. Floating_variety

	<Floating_variety> ::= FLOATING_VARIETY ? 
	                                              ( <Exp> , <Floating_variety>, <Floating_variety>)
	<Floating_variety> ::= <floating_variety_constructor> < ( <constructor_param>-List ) >-Opt
	<Floating_variety> ::= <floating_variety__token> < [ <token_param>-List ] >-Opt
The standard constructions.

	<Floating_variety> ::= Float
An IEEE 32 bit floating variety.

	<Floating_variety> ::= Double
An IEEE 64 bit floating variety.

3.2.9. Label

	<Label> ::= <label_token> < [ <token_param>-List ] >-Opt
The standard token application.

	<Label> ::= <ident>
The <ident> will be declared as a LABEL, whose scope is the current procedure.

3.2.10. Nat

	<Nat> ::= NAT ? ( <Exp> , <Nat>, <Nat>)
	<Nat> ::= <nat_constructor> < ( <constructor_param>-List ) >-Opt
	<Nat> ::= <nat_token> < [ <token_param>-List ] >-Opt
The standard expansions.

	<Nat> ::= <integer_denotation>
Produces a make_nat on the integer

	<Nat> ::= <character>
Produces a make_nat on the ASCII value of the character.

3.2.11. Ntest

	<Ntest> ::= NTEST ? ( <Exp> , <Ntest>, <Ntest>)
	<Ntest> ::= <ntest_constructor> < ( <constructor_param>-List ) >-Opt
	<Ntest> ::= <ntest_token> < [ <token_param>-List ] >-Opt
The standard expansions.

	<Ntest> ::= !<
Produces not_less_than.

	<Ntest> ::= !<=
Produces not_less_than_or_equal.

	<Ntest> ::= !=
Produces not_equal.

	<Ntest> ::= !>
Produces not_greater_than.

	<Ntest> ::= !>=
Produces not_greater_than_or_equal.

	<Ntest> ::= !Comparable
Produces not_comparable.

	<Ntest> ::= <
Produces less_than.

	<Ntest> ::= <=
Produces less_than_or_equal.

	<Ntest> ::= ==
Produces equal.

	<Ntest> ::= >
Produces greater_than.

	<Ntest> ::= >=
Produces greater_than_or_equal.

3.2.12. Rounding_mode

	<Rounding_mode> ::= ROUNDING_MODE? 
	                                               ( <Exp> , <Rounding_mode>, <Rounding_mode>)
	<Rounding_mode> ::= <rounding_mode_constructor> < ( <constructor_param>-List ) >-Opt
	<Rounding_mode> ::= <rounding_mode_token> < [ <token_param>-List ] >-Opt
There are no constructions for <Rounding_mode> other than the standard ones.

3.2.13. Shape

	<Shape> ::= SHAPE ? ( <Exp> , <Shape>, <Shape>)
	<Shape> ::= <shape_constructor> < ( <constructor_param>-List ) >-Opt
	<Shape> ::= <shape_token> < [ <token_param>-List ] >-Opt
The standard expansions.

	<Shape> ::= Float
The shape for an IEEE 32 bit float.

	<Shape> ::= Double
The shape for an IEEE 64 bit float.

	<Shape> ::= <Sign>-Opt Int
		<Sign> ::= Signed
		<Sign> ::= Unsigned
The shape for a 32 bit signed or unsigned integer. The default is signed.

	<Shape> ::= <Sign>-Opt Long
The shape for a 32 bit signed or unsigned integer.

	<Shape> ::= <Sign>-Opt Short
The shape for a 16 bit signed or unsigned integer.

	<Shape> ::= <Sign>-Opt Char
The shape for an 8 bit signed or unsigned integer.

	<Shape> ::= Ptr <Shape>
The SHAPE pointer(alignment(<Shape>)).

3.2.14. Signed_Nat

	<Signed_Nat> ::= SIGNED_NAT ? ( <Exp> , <Signed_Nat>, <Signed_Nat>)
	<Signed_Nat> ::= <signed_nat_constructor> < ( <constructor_param>-List ) >-Opt
	<Signed_Nat> ::= <signed_nat_token> < [ <token_param>-List ] >-Opt
The standard expansions.

	<Signed_Nat> ::= <integer_denotation>
	<Signed_Nat> ::= - <integer_denotation>
This produces a make_signed_nat on the integer value.

	<Signed_Nat> ::= <character>
	<Signed_Nat> ::= - <character>
This produces a make_signed_nat on the ASCII value of the character.

	<Signed_Nat> ::= LINE
This produces a make_signed_nat on the current line number of the file being compiled - useful for writing test programs.

	<Signed_Nat> ::= + <Nat>
	<Signed_Nat> ::= - <Nat>
This produces an appropriately signed <Signed_Nat> from a <Nat>.

3.2.15. String

	<String> ::= STRING? ( <Exp> , <String>, <String>)
	<String> ::= <string_constructor> < ( <constructor_param>-List ) >-Opt
	<String> ::= <string_token> < [ <token_param>-List ] >-Opt
The standard expansions

	<String> ::= <string>
Produces a make_string.

3.2.16. Tag

	<Tag> ::= <tag_token> < [ <token_param>-List ] >-Opt
The standard token application.

	<Tag> ::= <ident>
This gives an obtain_tag; the <ident> must have been declared as a TAG either globally or locally.

3.2.17. Token

TOKEN is rather a limited first-class sort. There is no explicit construction given for token_apply_token, since the only place where it can occur is in an expansion of a token parameter of another token; here it is produced implicitly. The only place where <Token> is expanded is in an actual TOKEN parameter of a token application; other uses (e.g. as in <shape_token>) are always <ident>s.

	<Token> ::= <ident>
The <ident> must have been declared by a <Tokdef> or <Tokdec>, or be a formal TOKEN parameter.

	<Token> ::= Use <Tok_Defn>
This produces a use_tokdef. For <Tok_Defn> see
section 3.1.2. The critical use of this construction is to provide an actual TOKEN parameter to a token application where the <Tok_Defn> contains uses of tags or labels local to a procedure.

3.2.18. Transfer_mode

	<Transfer_mode> ::= TRANSFER_MODE ? ( <Exp> , <Transfer_mode>, <Transfer_mode>)
	<Transfer_mode> ::= <transfer_mode_constructor> < ( <constructor_param>-List ) >-Opt
	<Transfer_mode> ::= <transfer_mode_token> < [ <token_param>-List ] >-Opt
There are no expansions for <Transfer_mode> other than the standard expansions.

3.2.19. Variety

	<Variety> ::= VARIETY ? ( <Exp> , <Variety>, <Variety>)
	<Variety> ::= <variety_constructor> < ( <constructor_param>-List ) >-Opt
	<Variety> ::= <variety_token> < [ <token_param>-List ] >-Opt
The standard expansions.

	<Variety> ::= <Signed_Nat> : <Signed_Nat>
This produces var_limits.

	<Variety> ::= <Sign>-Opt Int
	<Variety> ::= <Sign>-Opt Long
	<Variety> ::= <Sign>-Opt Short
	<Variety> ::= <Sign>-Opt Char
This produces the variety of the appropriate integer shape.


3.3. Control structure and local declarations

The control and declaration structure is given by <ClosedExp>:
	<ClosedExp> ::= { <ExpSeq> }
	
	<ExpSeq> ::= <Exp>-Opt
	<ExpSeq> ::= <ExpSeq> ; <Exp>-Opt
This produces a TDF sequence if there is more than one <Exp>-Opt; if there is only one it is simply the production for <Exp>-Opt; any empty <Exp>-Opt produces make_top.

	<ClosedExp> ::= <ConditionalExp>
	<ClosedExp> ::= <RepeatExp>
	<ClosedExp> ::= <LabelledExp>
	<ClosedExp> ::= <Local_Defn>
The effect of these, together with the expansion of <Assertion> is given below.

3.3.1. ConditionalExp and Assertion

	<ConditionalExp> ::= ? { <ExpSeq> | <LabelSetting>-Opt <ExpSeq> }
	
	<LabelSetting> ::= : <Label> :
This produces a TDF conditional. The scope of a LABEL <ident> which may be introduced by <Label> is the first <ExpSeq>. A branch to the second half of the conditional will usually be made by the failure of an <Assertion> (i.e. a TDF _test) in the first half.

	<Assertion> ::= <Query> ( <Exp> <Ntest> <Exp> <FailDest>-Opt )
	
	<Query> ::= ?
The assertion will be translated as an integer_test

	<Query> ::= F?
The assertion will be translated as a floating_test
with a wrap error_treatment.

	<Query> ::= *?
The assertion will be translated as a pointer_test.

	<Query> ::= .?
The assertion will be translated as an offset_test.

	<Query> ::= P?
The assertion will be translated as a proc_test.

	<FailDest> ::= | <Label>
	
The <Assertion> will produce the appropriate _test on its component <Exp>s. If the test fails, then control will pass to the <FailDest>-Opt. If <FailDest>-Opt is not empty, this is the <Label>. Otherwise, the <Assertion> must be in the immediate context of a <ConditionalExp> or <RepeatExp> with an empty <LabelSetting>-Opt; in which case this is treated as an anonymous label and control passes there. For example, the following <Conditional> delivers the maximum of two integers:
	?{ ?(a >= b); a | b }
This is equivalent to:
	?{ ?(a >= b | L ); a | :L: b }
without the hassle of having to invent the LABEL name, L.

3.3.2. RepeatExp

	<RepeatExp> ::= Rep <Starter>-Opt { <LabelSetting>-Opt <ExpSeq> }
	
	<Starter> ::= ( <ExpSeq> )
This produces a TDF repeat. The loop will usually repeat by an <Assertion> failing to the <LabelSetting>-Opt; an empty <LabelSetting>-Opt will follow the same conventions as one in a <Conditional>. An empty <Starter>-Opt will produce make_top.
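
A minimal sketch in the style of the loops in the Sieve example of section 4.1 (the variable name i is purely illustrative):
	Var i:Int = 0(Int)
		Rep { i = (* i + 1(Int)); ?(* i >= 10(Int)) }
While * i is less than 10 the assertion fails back to the anonymous label at the head of the loop body and the loop repeats; once it succeeds, control falls out of the repeat.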

3.3.3. LabelledExp

	<LabelledExp> ::= Labelled { <ExpSeq> <Places> }
	
	<Places> ::= <Place>
	<Places> ::= <Places> <Place>
	
	<Place> ::= | : <Label> : <ExpSeq>
This produces a TDF labelled with the obvious parameters. The scope of any LABEL <idents> introduced by the <Label>s is the <LabelledExp>.

3.3.4. Local_Defn

A <Local_Defn> introduces an <ident> as a TAG for the scope of the component <ClosedExp>. Any TAG introduced with a visible <Access> is also available globally - however it will only make sense in the constructor env_offset.

	<Local_Defn> ::= Var <ident> <Access>-Opt <VarInit> <ClosedExp>
	
	<VarInit> ::= = <Exp>
This <Local_Defn> produces a TDF variable with the obvious parameters.

	<Local_Defn> ::= Var <ident> <Access>-Opt : <Shape> <VarInit>-Opt <ClosedExp>
Also a TDF variable. An empty <VarInit>-Opt gives make_value(<Shape>) as the initialisation to the variable. Using this form of variable definition also has the advantage of allowing one to use the simple form of the contents operation ( * in
section 3.2.7 ).

	<Local_Defn> ::= Let <ident> <Access>-Opt = <Exp> <ClosedExp>
This produces a TDF identify with the obvious parameters.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/pl/pl5.html100644 1750 1750 13273 6466607532 16704 0ustar brooniebroonie Example PL_TDF programs

PL_TDF Definition

January 1998



4.1 - Sieve of Eratosthenes
4.2 - Example with structures
4.3 - Test for case
4.4 - Example of use of high-order TOKENs
4.5 - A test for long jumps

4 Example PL_TDF programs


4.1. Sieve of Eratosthenes

	/* Print out the primes less than 10000 */
	String s1 = "%d\t";					/* good strings for printf */
	String s2 = "\n";
	
	Var n: nof(10000, Char);					/* will contain 1 for prime; 0 for composite */
	
	Tokdef N = [ind:EXP]EXP n *+. (Sizeof(Char) .* ind);
					/* Token delivering pointer to element of n */
	
	Iddec printf : proc;				/* definition provided by ansi library */
	
	Proc main = top ()
		Var i:Int
		Var j:Int
		{ Rep (i = 2(Int))
			{ 	/* set i-th element of n to 1 */
			 N[* i] = 1(Char);
			 i = (* i + 1(Int));
			 ?(* i >= 10000(Int))			/* NB assertion fails to continue loop */
			}
		Rep (i = 2(Int) )
		 	{ 
		 	 ?{ 	?( *(Char)N[* i] == 1(Char));
				/* if its a prime ... */
		 	 	Rep ( j = (* i + * i) )
		 	 	{ /*... wipe out composites */
		 	 	N[* j] = 0(Char);
		 	 	j = (* j + * i);
		 	 	?(* j >= 10000(Int))
		 	 }
		 	 | make_top
		 	 };
		 	 i = (* i + 1(Int));
		 	 ?(* i >= 100(Int)) 
		 	 };
		 Rep (i = 2(Int); j = 0(Int) )
		 	{ 	?{ 	?( *(Char)N[* i] == 1(Char));
					/* if it's a prime, print it */
		 	 		printf[top](s1, * i);
		 			 j = (* j + 1(Int));
		 	 		?{ 	?( * j == 5(Int));
						/* print new line */
		 	 			printf[top](s2);
		 	 			j = 0(Int)
		 	 		 | make_top
		 	 		}
		 	 	| make_top
		 	 	};
		 	 	i = (* i + 1(Int));
		 	 	?(* i >= 10000(Int))
		 	 }; 
		 return(make_top)
		 };
	
	Keep (main)			/* main will be an external name; so will printf since it is not defined */

4.2. Example with structures

	Struct C (re:Double, im:Double);
			/* define TOKENs : C as a SHAPE for complex, with field offsets .re and .im
				and selectors re and im */
	
	Iddec printf:proc;
	
	Proc addC = C (lv:C, rv:C) 					/* add two complex numbers */
		Let l = * lv
		Let r = * rv
		{ return( Cons[shape_offset(C)] ( .re: re[l] F+ re[r], .im: im[l] F+ im[r]) ) } ;
		
	String s1 = "Ans = (%g, %g)\n";
	
	Proc main = top()
		Let x = Cons[shape_offset(C)] (.re: 1.0(Double), .im:2.0(Double)) 
		Let y = Cons[shape_offset(C)] (.re: 3.0(Double), .im:4.0(Double))
		Let z = addC[C](x,y)
		{	printf[top](s1, re[z], im[z]);
				/* prints out "Ans = (4, 6)" */
			return(make_top)
		};
	
	Keep(main)

4.3. Test for case

	Iddec printf:proc;
	
	String s1 = "%d is not in [%d,%d]\n";
	String s2 = "%d OK\n";
	
	Proc test = top(i:Int, l:Int, u:Int)					/* report whether l<=i<=u */
	 	?{ 	?(* i >= * l); ?(* i <= * u);
	 		printf[top](s2, * i); 
	 		return(make_top)
	 	| 	printf[top](s1, * i, * l, * u);
	 		return(make_top)
	 	};
	
	String s3 = "ERROR with %d\n";
	 
	Proc main = top()				/* check to see that case is working */
	Var i:Int = 0(Int)
		 Rep { 
	 		Labelled {
	 			Case * i (0 -> l0, 1 -> l1, 2:3 -> l2, 4:10000 -> l3)
	 			| :l0: test[top](* i, 0(Int), 0(Int))
	 			| :l1: test[top](* i, 1(Int), 1(Int))
	 			| :l2: test[top](* i, 2(Int), 3(Int))
	 			| :l3: printf[top](s3, * i)
	 		};
		 i = (* i + 1(Int));
	 	?(* i > 3(Int));
	 	return(make_top)
	 };
	 
	Keep (main, test)

4.4. Example of use of high-order TOKENs

	Tokdef IF = [ boolexp:TOKEN[LABEL]EXP, thenpt:EXP, elsept:EXP] EXP
				?{ boolexp[lab]; thenpt | :lab: elsept };
			/* IF is a TOKEN which can be used to mirror a standard if ... then ... else
				 construction; the boolexp is a formal TOKEN with a LABEL parameter
	 			 which is jumped to if the boolean is false */
		
	Iddec printf: proc;
	
	String cs = "Correct\n";
	String ws = "Wrong\n";
	
	Proc main = top()
		Var i:Int = 0(Int) 
		{
		 	IF[ Use [l:LABEL]EXP ?(* i == 0(Int) | l), printf[top](cs), printf[top](ws) ];
				/* in other words if (i==0) printf("Correct") else printf("Wrong") */
		 	IF[ Use [l:LABEL]EXP ?(* i != 0(Int) | l), printf[top](ws), printf[top](cs) ];
		 	i = IF[ Use [l:LABEL]EXP ?(* i != 0(Int) | l), 2(Int), 3(Int)];
		 	IF[ Use [l:LABEL]EXP ?(* i == 3(Int) | l), printf[top](cs), printf[top](ws) ];
	 		return(make_top)
		 };
	
	Keep (main)

4.5. A test for long jumps

	Iddec printf:proc;
	
	Proc f = bottom(env:pointer(frame_alignment), lab:pointer(code_alignment) )
	{
		long_jump(* env, * lab)
	};
	
	String s1 = "Should not reach here\n";
	String s2 = "long-jump OK\n";
	
	Proc main = top()
	Labelled{
		 	f[bottom](current_env, make_local_lv(l));
		 	printf[top](s1);			/* should never reach here */
		 	return(make_top)
		       | :l: 
		 	printf[top](s2);
		 	return(make_top)
		      };
	
	Keep (main)

Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/pl/pl6.html100644 1750 1750 5113 6466607532 16657 0ustar brooniebroonie Use of the PL_TDF compiler

PL_TDF Definition

January 1998



5 Use of the PL_TDF compiler

Conventionally, PL_TDF programs are held in normal text files with suffix .pl. The PL_TDF compiler is invoked by:

	pl [-v] [-Iinclude_path ...] [-g] [-V ] infile.pl outfile.j
This compiles infile.pl to TDF in outfile.j. This file can be linked and loaded just as any other .j file using tcc.
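
For instance, to compile and link the Sieve program of section 4.1 one might say (the file names are illustrative, and the tcc options are assumed to follow the usual cc conventions):
	pl -g sieve.pl sieve.j
	tcc -o sieve sieve.j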

The -v option will produce a cut-down pretty print of the TDF for the definitions and declarations of the tags, tokens and al_tags of the program on the standard output.

The -I options define the paths for any #include pre-processing directives in the text.

The -g option will put line number information into the TDF.

The -V option will print version information of both pl and the TDF it produces.

Compile-time error reporting is rather rudimentary and error recovery non-existent. Only the first error found will be reported on the standard error channel. This will give some indication of the type of error, together with the text line number and a print-out of the line, marking the place within the line where the error was detected.

Errors which can only be detected at translate-time are much more difficult to correct. These are usually shape or alignment errors, particularly in the construction of offsets. Try compiling and translating with the -g option. On the error, the translator will output the source filename and an approximate line-number corresponding to the position of the error in the PL_TDF.

Translating with the -g option may sometimes give warning messages from the system assembler being used; some assemblers object to being given line number information in anything else but the .text segment of the program. The main intention of the -g option is to detect and correct errors thrown up by the translators and not for run-time debugging, so do not regard a warning like this as a bug in the system.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/pl/table1.gif [binary GIF image data omitted: the table of <BinaryOp> operators referred to as table 1 in section 3.2.7]
tendra-doc-4.1.2.orig/doc/port/port1.html TDF and Portability

TDF and Portability

January 1998



1 - Introduction
2 - Portability
3 - TDF
4 - Conclusions

1. Introduction

TDF is the name of the technology developed at DRA which has been adopted by the Open Software Foundation (OSF), Unix System Laboratories (USL), the European Community's Esprit Programme and others as their Architecture Neutral Distribution Format (ANDF). To date much of the discussion surrounding it has centred on the question, "How do you distribute portable software?". This paper concentrates on the more difficult question, "How do you write portable software in the first place?" and shows how TDF can be a valuable tool to aid the writing of portable software. Most of the discussion centres on programs written in C and is Unix specific. This is because most of the experience of TDF to date has been in connection with C in a Unix environment, and not because of any inbuilt bias in TDF.

It is assumed that the reader is familiar with the ANDF concept (although not necessarily with the details of TDF), and with the problems involved in writing portable C code.

The discussion is divided into two sections. Firstly some of the problems involved in writing portable programs are considered. The intention is not only to catalogue what these problems are, but to introduce ways of looking at them which will be important in the second section. This deals with the TDF approach to portability.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/port/port3.html100644 1750 1750 151710 6466607533 17644 0ustar brooniebroonie TDF and Portability: Portability

TDF and Portability

January 1998



2.1 - Portable Programs
2.1.1 - Definitions and Preliminary Discussion
2.1.2 - Separation and Combination of Code
2.1.3 - Application Programming Interfaces
2.1.4 - Compilation Phases
2.2 - Portability Problems
2.2.1 - Programming Problems
2.2.2 - Code Transformation Problems
2.2.3 - Code Combination Problems
2.2.4 - API Problems
2.2.4.1 - API Checking
2.2.4.2 - API Implementation Errors
2.2.4.3 - System Header Problems
2.2.4.4 - System Library Problems
2.3 - APIs and Portability
2.3.1 - Target Dependent Code
2.3.2 - Making APIs Explicit
2.3.3 - Choosing an API
2.3.4 - Alternative Program Versions

2. Portability

We start by examining some of the problems involved in the writing of portable programs. Although the discussion is very general, and makes no mention of TDF, many of the ideas introduced are of importance in the second half of the paper, which deals with TDF.


2.1. Portable Programs

2.1.1. Definitions and Preliminary Discussion

Let us firstly say what we mean by a portable program. A program is portable to a number of machines if it can be compiled to give the same functionality on all those machines. Note that this does not mean that exactly the same source code is used on all the machines. One could envisage a program written in, say, 68020 assembly code for a certain machine which has been translated into 80386 assembly code for some other machine to give a program with exactly equivalent functionality. This would, under our definition, be a program which is portable to these two machines. At the other end of the scale, the C program:

	#include <stdio.h>
	
	int main ()
	{
	    fputs ( "Hello world\n", stdout ) ;
	    return ( 0 ) ;
	}
which prints the message, "Hello world", onto the standard output stream, will be portable to a vast range of machines without any need for rewriting. Most of the portable programs we shall be considering fall closer to the latter end of the spectrum - they will largely consist of target independent source with small sections of target dependent source for those constructs for which target independent expression is either impossible or of inadequate efficiency.

Note that we are defining portability in terms of a set of target machines and not as some universal property. The act of modifying an existing program to make it portable to a new target machine is called porting. Clearly in the examples above, porting the first program would be a highly complex task involving almost an entire rewrite, whereas in the second case it should be trivial.

2.1.2. Separation and Combination of Code

So why is the second example above more portable (in the sense of more easily ported to a new machine) than the first? The first, obvious, point to be made is that it is written in a high-level language, C, rather than the low-level languages, 68020 and 80386 assembly codes, used in the first example. By using a high-level language we have abstracted out the details of the processor to be used and expressed the program in an architecture neutral form. It is one of the jobs of the compiler on the target machine to transform this high-level representation into the appropriate machine dependent low-level representation.

The second point is that the second example program is not in itself complete. The objects fputs and stdout, representing the procedure to output a string and the standard output stream respectively, are left undefined. Instead the header stdio.h is included on the understanding that it contains the specification of these objects.

A version of this file is to be found on each target machine. On a particular machine it might contain something like:

	typedef struct {
	    int __cnt ;
	    unsigned char *__ptr ;
	    unsigned char *__base ;
	    short __flag ;
	    char __file ;
	} FILE ;
	
	extern FILE __iob [60] ;
	#define stdout ( &__iob [1] )
	
	extern int fputs ( const char *, FILE * ) ;
meaning that the type FILE is defined by the given structure, __iob is an external array of 60 FILE's, stdout is a pointer to the second element of this array, and that fputs is an external procedure which takes a const char * and a FILE * and returns an int. On a different machine, the details may be different (exactly what we can, or cannot, assume is the same on all target machines is discussed below).

These details are fed into the program by the pre-processing phase of the compiler. (The various compilation phases are discussed in more detail later - see Fig. 1.) This is a simple, preliminary textual substitution. It provides the definitions of the type FILE and the value stdout (in terms of __iob), but still leaves the precise definitions of __iob and fputs still unresolved (although we do know their types). The definitions of these values are not provided until the final phase of the compilation - linking - where they are linked in from the precompiled system libraries.
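
For instance, with the example version of stdio.h above, the pre-processed "Hello world" program is roughly as follows (the declarations pulled in from the header are omitted):

	int main ()
	{
	    fputs ( "Hello world\n", ( &__iob [1] ) ) ;
	    return ( 0 ) ;
	}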

Note that, even after the pre-processing phase, our portable program has been transformed into a target dependent form, because of the substitution of the target dependent values from stdio.h. If we had also included the definitions of __iob and, more particularly, fputs, things would have been even worse - the procedure for outputting a string to the screen is likely to be highly target dependent.

To conclude, we have, by including stdio.h, been able to effectively separate the target independent part of our program (the main program) from the target dependent part (the details of stdout and fputs). It is one of the jobs of the compiler to recombine these parts to produce a complete program.

2.1.3. Application Programming Interfaces

As we have seen, the separation of the target dependent sections of a program into the system headers and system libraries greatly facilitates the construction of portable programs. What has been done is to define an interface between the main program and the existing operating system on the target machine in abstract terms. The program should then be portable to any machine which implements this interface correctly.

The interface for the "Hello world" program above might be described as follows : defined in the header stdio.h are a type FILE representing a file, an object stdout of type FILE * representing the standard output file, and a procedure fputs with prototype:

	int fputs ( const char *s, FILE *f ) ;
which prints the string s to the file f. This is an example of an Application Programming Interface (API). Note that it can be split into two aspects, the syntactic (what they are) and the semantic (what they mean). On any machine which implements this API our program is both syntactically correct and does what we expect it to.

The benefit of describing the API at this fairly high level is that it leaves scope for a range of implementations (and thus more machines which implement it) while still encapsulating the main program's requirements.

In the example implementation of stdio.h above we see that this machine implements this API correctly syntactically, but not necessarily semantically. One would have to read the documentation provided on the system to be sure of the semantics.

Another way of defining an API for this program would be to note that the given API is a subset of the ANSI C standard. Thus we could take ANSI C as an "off the shelf" API. It is then clear that our program should be portable to any ANSI-compliant machine.

It is worth emphasising that all programs have an API, even if it is implicit rather than explicit. However it is probably fair to say that programs without an explicit API are only portable by accident. We shall have more to say on this subject later.

2.1.4. Compilation Phases

The general plan for how to write the extreme example of a portable program, namely one which contains no target dependent code, is now clear. It is shown in the compilation diagram in Fig. 1 which represents the traditional compilation process. This diagram is divided into four sections. The left half of the diagram represents the actual program and the right half the associated API. The top half of the diagram represents target independent material - things which only need to be done once - and the bottom half target dependent material - things which need to be done on every target machine.

FIGURE 1. Traditional Compilation Phases


So, we write our target independent program (top left), conforming to the target independent API specification (top right). All the compilation actually takes place on the target machine. This machine must have the API correctly implemented (bottom right). This implementation will in general be in two parts - the system headers, providing type definitions, macros, procedure prototypes and so on, and the system libraries, providing the actual procedure definitions. Another way of characterising this division is between syntax (the system headers) and semantics (the system libraries).

The compilation is divided into three main phases. Firstly the system headers are inserted into the program by the pre-processor. This produces, in effect, a target dependent version of the original program. This is then compiled into a binary object file. During the compilation process the compiler inserts all the information it has about the machine - including the Application Binary Interface (ABI) - the sizes of the basic C types, how they are combined into compound types, the system procedure calling conventions and so on. This ensures that in the final linking phase the binary object file and the system libraries are obeying the same ABI, thereby producing a valid executable. (On a dynamically linked system this final linking phase takes place partially at run time rather than at compile time, but this does not really affect the general scheme.)

The compilation scheme just described consists of a series of phases of two types ; code combination (the pre-processing and system linking phases) and code transformation (the actual compilation phases). The existence of the combination phases allows for the effective separation of the target independent code (in this case, the whole program) from the target dependent code (in this case, the API implementation), thereby aiding the construction of portable programs. These ideas on the separation, combination and transformation of code underlie the TDF approach to portability.


2.2. Portability Problems

We have set out a scheme whereby it should be possible to write portable programs with a minimum of difficulties. So why, in reality, does it cause so many problems? Recall that we are still primarily concerned with programs which contain no target dependent code, although most of the points raised apply by extension to all programs.

2.2.1. Programming Problems

A first, obvious class of problems concerns the program itself. It is to be assumed that as many bugs as possible have been eliminated by testing and debugging on at least one platform before a program is considered a candidate for being a portable program. But for even the most self-contained program, working on one platform is no guarantee of working on another. The program may use undefined behaviour - using uninitialised values or dereferencing null pointers, for example - or have built-in assumptions about the target machine - whether it is big-endian or little-endian, or what the sizes of the basic integer types are, for example. This latter point is going to become increasingly important over the next couple of years as 64-bit architectures begin to be introduced. How many existing programs implicitly assume a 32-bit architecture?
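
As a hypothetical illustration (the fragment is invented for the purpose), the following code quietly assumes both that a pointer fits into an int and that an int occupies exactly 32 bits:

	char buf [10] ;
	char *p = buf ;
	int i = ( int ) p ;	/* assumes a pointer fits into an int */
	unsigned int sign = ( unsigned int ) i >> 31 ;	/* assumes int has exactly 32 bits */
Neither assumption need hold on a 64-bit architecture, yet the code will compile and run as expected on most current 32-bit machines.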

Many of these built-in assumptions may arise because of the conventional porting process. A program is written on one machine, modified slightly to make it work on a second machine, and so on. This means that the program is "biased" towards the existing set of target machines, and most particularly to the original machine it was written on. This applies not only to assumptions about endianness, say, but also to the questions of API conformance which we will be discussing below.

Most compilers will pick up some of the grosser programming errors, particularly by type checking (including procedure arguments if prototypes are used). Some of the subtler errors can be detected using the -Wall option to the Free Software Foundation's GNU C Compiler (gcc) or separate program checking tools such as lint, for example, but this remains a very difficult area.

2.2.2. Code Transformation Problems

We now move on from programming problems to compilation problems. As we mentioned above, compilation may be regarded as a series of phases of two types : combination and transformation. Transformation of code - translating a program in one form into an equivalent program in another form - may lead to a variety of problems. The code may be transformed wrongly, so that the equivalence is broken (a compiler bug), or in an unexpected manner (differing compiler interpretations), or not at all, because it is not recognised as legitimate code (a compiler limitation). The latter two problems are most likely when the input is a high level language, with complex syntax and semantics.

Note that in Fig. 1 all the actual compilation takes place on the target machine. So, to port the program to n machines, we need to deal with the bugs and limitations of n, potentially different, compilers. For example, if you have written your program using prototypes, it is going to be a large and rather tedious job porting it to a compiler which does not have prototypes (this particular example can be automated; not all such jobs can). Other compiler limitations can be surprising - not understanding the L suffix for long numeric literals and not allowing members of enumeration types as array indexes are among the problems drawn from my personal experience.

The differing compiler interpretations may be more subtle. For example, there are differences between ANSI and "traditional" C which may trap the unwary. Examples are the promotion of integral types and the resolution of the linkage of static objects.
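
The integral promotion differences in particular can change the behaviour of a program silently. As a sketch (assuming a 16-bit unsigned short and a 32-bit int), consider:

	unsigned short us = 1 ;
	int ansi_rules = ( us > -1 ) ;
	/* Under ANSI "value preserving" rules us is promoted to int, so the
	   test is 1 > -1 and ansi_rules is 1. Under traditional "unsigned
	   preserving" rules us is promoted to unsigned int, -1 is converted
	   to a large unsigned value, and ansi_rules is 0. */
The same source thus behaves differently depending on which set of promotion rules the compiler applies.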

Many of these problems may be reduced by using the "same" compiler on all the target machines. For example, gcc has a single front end (C -> RTL) which may be combined with an appropriate back end (RTL -> target) to form a suitable compiler for a wide range of target machines. The existence of a single front end virtually eliminates the problems of differing interpretation of code and compiler quirks. It also reduces the exposure to bugs. Instead of being exposed to the bugs in n separate compilers, we are now only exposed to bugs in one half-compiler (the front end) plus n half-compilers (the back ends) - a total of ( n + 1 ) / 2. (This calculation is not meant totally seriously, but it is true in principle.) Front end bugs, when tracked down, also only require a single workaround.

2.2.3. Code Combination Problems

If code transformation problems may be regarded as a time consuming irritation, involving the rewriting of sections of code or using a different compiler, the second class of problems, those concerned with the combination of code, are far more serious.

The first code combination phase is the pre-processor pulling in the system headers. These can contain some nasty surprises. For example, consider a simple ANSI compliant program which contains a linked list of strings arranged in alphabetical order. This might also contain a routine:

	void index ( char * ) ;
which adds a string to this list in the appropriate position, using strcmp from string.h to find it. This works fine on most machines, but on some it gives the error:

	Only 1 argument to macro 'index'
The reason for this is that the system version of string.h contains the line:

	#define index(s, c) strchr ( s, c )
But this has nothing to do with ANSI; this macro is defined for compatibility with BSD.
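
One possible workaround, offered only as a sketch (it assumes that the clash is purely with the compatibility macro, and that the program makes no use of the BSD index function), is to remove the macro after including the header:

	#include <string.h>
	#ifdef index
	#undef index
	#endif
	
	void index ( char * ) ;
Alternatively, and more robustly, the routine can simply be renamed to something outside the polluted namespace.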

In reality the system headers on any given machine are a hodgepodge of implementations of different APIs, and it is often virtually impossible to separate them (feature test macros such as _POSIX_SOURCE are of some use, but are not always implemented and do not always produce a complete separation; they are only provided for "standard" APIs anyway). The problem above arose because there is no transitivity rule of the form : if program P conforms to API A, and API B extends A, then P conforms to B. The only reason such a rule fails to hold is precisely these namespace problems.

A second example demonstrates a slightly different point. The POSIX standard states that sys/stat.h contains the definition of the structure struct stat, which includes several members, amongst them:

	time_t st_atime ;
representing the access time for the corresponding file. So the program:

	#include <sys/types.h>
	#include <sys/stat.h>
	
	time_t st_atime ( struct stat *p )
	{
	    return ( p->st_atime ) ;
	}
should be perfectly valid - the procedure name st_atime and the field selector st_atime occupy different namespaces (see however the appendix on namespaces and APIs below). However at least one popular operating system has the implementation:

	struct stat {
	    ....
	    union {
		time_t st__sec ;
		timestruc_t st__tim ;
	    } st_atim ;
	    ....
	} ;
	#define st_atime st_atim.st__sec
This seems like a perfectly legitimate implementation. In the program above the field selector st_atime is replaced by st_atim.st__sec by the pre-processor, as intended, but unfortunately so is the procedure name st_atime, leading to a syntax error.

The problem here is not with the program or the implementation, but in the way they were combined. C provides no way to define or rename an individual field selector in isolation, so instead the indiscriminate sledgehammer of macro substitution was used, leading to the problem described.

Problems can also occur in the other combination phase of the traditional compilation scheme, the system linking. Consider the ANSI compliant routine:

	#include <stdio.h>
	
	int open ( char *nm )
	{
	    int c, n = 0 ;
	    FILE *f = fopen ( nm, "r" ) ;
	    if ( f == NULL ) return ( -1 ) ;
	    while ( c = getc ( f ), c != EOF ) n++ ;
	    ( void ) fclose ( f ) ;
	    return ( n ) ;
	}
which opens the file nm, returning its size in bytes if it exists and -1 otherwise. As a quick porting exercise, I compiled it under six different operating systems. On three it worked correctly; on one it returned -1 even when the file existed; and on two it crashed with a segmentation error.

The reason for this lies in the system linking. On those machines which failed the library routine fopen calls (either directly or indirectly) the library routine open (which is in POSIX, but not ANSI). The system linker, however, linked my routine open instead of the system version, so the call to fopen did not work correctly.

So code combination problems are primarily namespace problems. The task of combining the program with the API implementation on a given platform is complicated by the fact that, because the system headers and system libraries contain things other than the API implementation, or even because of the particular implementation chosen, the various namespaces in which the program is expected to operate become "polluted".

2.2.4. API Problems

We have said that the API defines the interface between the program and the standard library provided with the operating system on the target machine. There are three main problems concerned with APIs. The first, how to choose the API in the first place, is discussed separately. Here we deal with the compilation aspects : how to check that the program conforms to its API, and what to do about incorrect API implementations on the target machine(s).

2.2.4.1. API Checking

The problem of whether or not a program conforms to its API - not using any objects from the operating system other than those specified in the API, and not making any unwarranted assumptions about these objects - is one which does not always receive sufficient attention, mostly because the necessary checking tools do not exist (or at least are not widely available). Compiling the program on a number of API compliant machines merely checks the program against the system headers for these machines. For a genuine portability check we need to check against the abstract API description, thereby in effect checking against all possible implementations.

Recall from above that the system headers on a given machine are an amalgam of all the APIs it implements. This can cause programs which should compile not to, because of namespace clashes; but it may also cause programs to compile which should not, because they have used objects which are not in their API, but which are in the system headers. For example, the supposedly ANSI compliant program:

	#include <signal.h>
	int sig = SIGKILL ;
will compile on most systems, despite the fact that SIGKILL is not an ANSI signal, because SIGKILL is in POSIX, which is also implemented in the system signal.h. Again, feature test macros are of some use in trying to isolate the implementation of a single API from the rest of the system headers. However they are highly unlikely to detect the error in the following supposedly POSIX compliant program which prints the entries of the directory nm, together with their inode numbers:

	#include <stdio.h>
	#include <sys/types.h>
	#include <dirent.h>
	
	void listdir ( char *nm )
	{
	    struct dirent *entry ;
	    DIR *dir = opendir ( nm ) ;
	    if ( dir == NULL ) return ;
	    while ( entry = readdir ( dir ), entry != NULL ) {
		printf ( "%s : %d\n", entry->d_name, ( int ) entry->d_ino ) ;
	    }
	    ( void ) closedir ( dir ) ;
	    return ;
	}
This is not POSIX compliant because, whereas the d_name field of struct dirent is in POSIX, the d_ino field is not. It is however in XPG3, so it is likely to be in many system implementations.

The previous examples have been concerned with simply telling whether or not a particular object is in an API. A more difficult, and in a way more important, problem is that of assuming too much about the objects which are in the API. For example, in the program:

	#include <stdio.h>
	#include <stdlib.h>
	
	div_t d = { 3, 4 } ;
	
	int main ()
	{
	    printf ( "%d,%d\n", d.quot, d.rem ) ;
	    return ( 0 ) ;
	}
the ANSI standard specifies that the type div_t is a structure containing two fields, quot and rem, of type int, but it does not specify which order these fields appear in, or indeed if there are other fields. Therefore the initialisation of d is not portable. Again, the type time_t is used to represent times in seconds since a certain fixed date. On most systems this is implemented as long, so it is tempting to use ( t & 1 ) to determine for a time_t t whether this number of seconds is odd or even. But ANSI actually says that time_t is an arithmetic, not an integer, type, so it would be possible for it to be implemented as double. But in this case ( t & 1 ) is not even type correct, so it is not a portable way of finding out whether t is odd or even.
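
A more portable way of achieving the effect of this initialisation, offered only as a sketch, is to assign to the fields by name at run time, since ANSI guarantees the field names but not their order:

	#include <stdlib.h>
	
	div_t d ;
	
	void init_d ( void )
	{
	    d.quot = 3 ;
	    d.rem = 4 ;
	}
The same caution applies to the time_t example : any parity test should go through an explicit, range-checked conversion to an integer type rather than applying & to a value whose type may not be integral.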

2.2.4.2. API Implementation Errors

Undoubtedly the problem which causes the writer of portable programs the greatest headache (and heartache) is that of incorrect API implementations. However carefully you have chosen your API and checked that your program conforms to it, you are still reliant on someone (usually the system vendor) having implemented this API correctly on the target machine. Machines which do not implement the API at all do not enter the equation (they are not suitable target machines); what causes problems is incorrect implementations. As the implementation may be divided into two parts - system headers and system libraries - we shall similarly divide our discussion. Inevitably the choice of examples is personal; anyone who has ever attempted to port a program to a new machine is likely to have their own favourite examples.

2.2.4.3. System Header Problems

Some header problems are immediately apparent because they are syntactic and cause the program to fail to compile. For example, values may not be defined or be defined in the wrong place (not in the header prescribed by the API).

A common example (one which I have to include a workaround for in virtually every program I write) is that EXIT_SUCCESS and EXIT_FAILURE are not always defined (ANSI specifies that they should be in stdlib.h). It is tempting to change exit (EXIT_FAILURE) to exit (1) because "everyone knows" that EXIT_FAILURE is 1. But this is to decrease the portability of the program because it ties it to a particular class of implementations. A better workaround would be:

	#include <stdlib.h>
	#ifndef EXIT_FAILURE
	#define EXIT_FAILURE 1
	#endif
which assumes that anyone choosing a non-standard value for EXIT_FAILURE is more likely to put it in stdlib.h. Of course, if one subsequently came across a machine on which not only is EXIT_FAILURE not defined, but also the value it should have is not 1, then it would be necessary to resort to #ifdef machine_name statements. The same is true of all the API implementation problems we shall be discussing : non-conformant machines require workarounds involving conditional compilation. As more machines are considered, so these conditional compilations multiply.

As an example of things being defined in the wrong place, ANSI specifies that SEEK_SET, SEEK_CUR and SEEK_END should be defined in stdio.h, whereas POSIX specifies that they should also be defined in unistd.h. It is not uncommon to find machines on which they are defined in the latter but not in the former. A possible workaround in this case would be:

	#include <stdio.h>
	#ifndef SEEK_SET
	#include <unistd.h>
	#endif
Of course, by including "unnecessary" headers like unistd.h the risk of namespace clashes such as those discussed above is increased.

A final syntactic problem, which perhaps belongs more properly with the code combination problems discussed earlier, concerns dependencies between the headers themselves. For example, the POSIX header unistd.h declares functions involving some of the types pid_t, uid_t etc, defined in sys/types.h. Is it necessary to include sys/types.h before including unistd.h, or does unistd.h automatically include sys/types.h? The approach of playing safe and including everything will normally work, but this can lead to multiple inclusions of a header. This will normally cause no problems because the system headers are protected against multiple inclusions by means of macros, but it is not unknown for certain headers to be left unprotected. Also not all header dependencies are as clear cut as the one given, so that what headers need to be included, and in what order, is in fact target dependent.
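
The protection against multiple inclusion referred to here is normally implemented by an include guard of the following form (a sketch only; the guarding macro name varies from implementation to implementation):

	#ifndef __SYS_TYPES_H
	#define __SYS_TYPES_H
	
	/* contents of sys/types.h */
	
	#endif /* __SYS_TYPES_H */
A header which omits such a guard may produce redefinition errors if it happens to be included twice.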

There can also be semantic errors in the system headers : namely wrongly defined values. The following two examples are taken from real operating systems. Firstly the definition:

	#define DBL_MAX 1.797693134862316E+308
in float.h on an IEEE-compliant machine is subtly wrong - the given value does not fit into a double - the correct value is:

	#define DBL_MAX 1.7976931348623157E+308
Again, the type definition:

	typedef int size_t ; /* ??? */
(sic) is not compliant with ANSI, which says that size_t is an unsigned integer type. (I'm not sure if this is better or worse than another system which defines ptrdiff_t to be unsigned int when it is meant to be signed. This would mean that the difference between any two pointers is always positive.) These particular examples are irritating because it would have cost nothing to get things right, correcting the value of DBL_MAX and changing the definition of size_t to unsigned int. These corrections are so minor that the modified system headers would still be a valid interface for the existing system libraries (we shall have more to say about this later). However it is not possible to change the system headers, so it is necessary to build workarounds into the program. Whereas in the first case it is possible to devise such a workaround:

	#include <float.h>
	#ifdef machine_name
	#undef DBL_MAX
	#define DBL_MAX 1.7976931348623157E+308
	#endif
for example, in the second, because size_t is defined by a typedef it is virtually impossible to correct in a simple fashion. Thus any program which relies on the fact that size_t is unsigned will require considerable rewriting before it can be ported to this machine.

2.2.4.4. System Library Problems

The system header problems just discussed are primarily syntactic problems. By contrast, system library problems are primarily semantic - the provided library routines do not behave in the way specified by the API. This makes them harder to detect. For example, consider the routine:

	void *realloc ( void *p, size_t s ) ;
which reallocates the block of memory p to have size s bytes, returning the new block of memory. The ANSI standard says that if p is the null pointer, then the effect of realloc ( p, s ) is the same as malloc ( s ), that is, to allocate a new block of memory of size s. This behaviour is exploited in the following program, in which the routine add_char adds a character to the expanding array, buffer:

	#include <stdio.h>
	#include <stdlib.h>
	
	char *buffer = NULL ;
	int buff_sz = 0, buff_posn = 0 ;
	
	void add_char ( char c )
	{
	    if ( buff_posn >= buff_sz ) {
		buff_sz += 100 ;
		buffer = ( char * ) realloc ( ( void * ) buffer, buff_sz * sizeof ( char ) ) ;
		if ( buffer == NULL ) {
		    fprintf ( stderr, "Memory allocation error\n" ) ;
		    exit ( EXIT_FAILURE ) ;
		}
	    }
	    buffer [ buff_posn++ ] = c ;
	    return ;
	}
On the first call of add_char, buffer is set to a real block of memory (as opposed to NULL) by a call of the form realloc ( NULL, s ). This is extremely convenient and efficient - if it was not for this behaviour we would have to have an explicit initialisation of buffer, either as a special case in add_char or in a separate initialisation routine.

Of course this all depends on the behaviour of realloc ( NULL, s ) having been implemented precisely as described in the ANSI standard. The first indication that this is not so on a particular target machine might be when the program is compiled and run on that machine for the first time and does not perform as expected. To track the problem down will demand time debugging the program.

Once the problem has been identified as being with realloc a number of possible workarounds are possible. Perhaps the most interesting is to replace the inclusion of stdlib.h by the following:

	#include <stdlib.h>
	#ifdef machine_name
	#define realloc(p, s)\
	    ( ( p ) ? ( realloc ) ( p, s ) : malloc ( s ) )
	#endif
where realloc ( p, s ) is redefined as a macro which is the result of the procedure realloc if p is not null, and malloc ( s ) otherwise. (In fact this macro will not always have the desired effect, although it does in this case. Why (exercise)?)

The only alternative to this trial and error approach to finding API implementation problems is the application of personal experience, either of the particular target machine or of things that are implemented wrongly by many machines and as such should be avoided. This sort of detailed knowledge is not easily acquired. Nor can it ever be complete: new operating system releases are becoming increasingly regular and are on occasions quite as likely to introduce new implementation errors as to solve existing ones. It is in short a "black art".


2.3. APIs and Portability

We now return to our discussion of the general issues involved in portability to more closely examine the role of the API.

2.3.1. Target Dependent Code

So far we have been considering programs which contain no conditional compilation, in which the API forms the basis of the separation of the target independent code (the whole program) and the target dependent code (the API implementation). But a glance at most large C programs will reveal that they do contain conditional compilation. The code is scattered with #if's and #ifdef's which, in effect, cause the pre-processor to construct slightly different programs on different target machines. So here we do not have a clean division between the target independent and the target dependent code - there are small sections of target dependent code spread throughout the program.

Let us briefly consider some of the reasons why it is necessary to introduce this conditional compilation. Some have already been mentioned - workarounds for compiler bugs, compiler limitations, and API implementation errors; others will be considered later. However the most interesting and important cases concern things which need to be done genuinely differently on different machines. This can be because they really cannot be expressed in a target independent manner, or because the target independent way of doing them is unacceptably inefficient.

Efficiency (either in terms of time or space) is a key issue in many programs. The argument is often advanced that writing a program portably means using the, often inefficient, lowest common denominator approach. But under our definition of portability it is the functionality that matters, not the actual source code. There is nothing to stop different code being used on different machines for reasons of efficiency.

To examine the relationship between target dependent code and APIs, consider the simple program:

	#include <stdio.h>
	
	int main ()
	{
	#ifdef mips
	    fputs ( "This machine is a mips\n", stdout ) ;
	#endif
	    return ( 0 ) ;
	}
which prints a message if the target machine is a mips. What is the API of this program? Basically it is the same as in the "Hello world" example discussed in sections 2.1.1 and 2.1.2, but if we wish the API to fully describe the interface between the program and the target machine, we must also say that whether or not the macro mips is defined is part of the API. Like the rest of the API, this has a semantic aspect as well as a syntactic - in this case that mips is only defined on mips machines. Where it differs is in its implementation. Whereas the main part of the API is implemented in the system headers and the system libraries, the implementation of either defining, or not defining, mips ultimately rests with the person performing the compilation. (In this particular example, the macro mips is normally built into the compiler on mips machines, but this is only a convention.)

So the API in this case has two components : a system-defined part which is implemented in the system headers and system libraries, and a user-defined part which ultimately relies on the person performing the compilation to provide an implementation. The main point to be made in this section is that introducing target dependent code is equivalent to introducing a user-defined component to the API. The actual compilation process in the case of programs containing target dependent code is basically the same as that shown in Fig. 1. But whereas previously the vertical division of the diagram also reflects a division of responsibility - the left hand side is the responsibility of the programmer (the person writing the program), and the right hand side of the API specifier (for example, a standards defining body) and the API implementor (the system vendor) - now the right hand side is partially the responsibility of the programmer and the person performing the compilation. The programmer specifies the user-defined component of the API, and the person compiling the program either implements this API (as in the mips example above) or chooses between a number of alternative implementations provided by the programmer (as in the example below).

Let us consider a more complex example. Consider the following program which assumes, for simplicity, that an unsigned int contains 32 bits:

	#include <stdio.h>
	#include "config.h"
	
	#ifndef SLOW_SHIFT
	#define MSB(a) ( ( unsigned char ) ( ( a ) >> 24 ) )
	#else
	#ifdef BIG_ENDIAN
	#define MSB(a) *( ( unsigned char * ) &( a ) )
	#else
	#define MSB(a) *( ( unsigned char * ) &( a ) + 3 )
	#endif
	#endif
	
	unsigned int x = 100000000 ;
	
	int main ()
	{
	    printf ( "%u\n", MSB ( x ) ) ;
	    return ( 0 ) ;
	}
The intention is to print the most significant byte of x. Three alternative definitions of the macro MSB used to extract this value are provided. The first, if SLOW_SHIFT is not defined, is simply to shift the value right by 24 bits. This will work on all 32-bit machines, but may be inefficient (depending on the nature of the machine's shift instruction). So two alternatives are provided. An unsigned int is assumed to consist of four unsigned char's. On a big-endian machine, the most significant byte is the first of these unsigned char's; on a little-endian machine it is the fourth. The second definition of MSB is intended to reflect the former case, and the third the latter.

The person compiling the program has to choose between the three possible implementations of MSB provided by the programmer. This is done by either defining, or not defining, the macros SLOW_SHIFT and BIG_ENDIAN. This could be done as command line options, but we have chosen to reflect another commonly used device, the configuration file. For each target machine, the programmer provides a version of the file config.h which defines the appropriate combination of the macros SLOW_SHIFT and BIG_ENDIAN. The person performing the compilation simply chooses the appropriate config.h for the target machine.
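
For example, the version of config.h supplied for a hypothetical big-endian machine with a slow shift instruction might consist of nothing more than:

	#define SLOW_SHIFT
	#define BIG_ENDIAN
while the version for a machine with a fast shift instruction would simply be an empty file.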

There are two possible ways of looking at what the user-defined API of this program is. Possibly it is most natural to say that it is MSB, but it could also be argued that it is the macros SLOW_SHIFT and BIG_ENDIAN. The former more accurately describes the target dependent code, but is only implemented indirectly, via the latter.

2.3.2. Making APIs Explicit

As we have said, every program has an API even if it is implicit rather than explicit. Every system header included, every type or value used from it, and every library routine used, adds to the system-defined component of the API, and every conditional compilation adds to the user-defined component. What making the API explicit does is to encapsulate the set of requirements that the program has of the target machine (including requirements like "I need to know whether or not the target machine is big-endian" as well as "I need fputs to be implemented as in the ANSI standard"). By making these requirements explicit it is made absolutely clear what is needed on a target machine if a program is to be ported to it. If the requirements are not explicit this can only be found by trial and error. This is what we meant earlier by saying that a program without an explicit API is only portable by accident.

Another advantage of specifying the requirements of a program is that it may increase their chances of being implemented. We have spoken as if porting is a one-way process; program writers porting their programs to new machines. But there is also traffic the other way. Machine vendors may wish certain programs to be ported to their machines. If these programs come with a list of requirements then the vendor knows precisely what to implement in order to make such a port possible.

2.3.3. Choosing an API

So how does one go about choosing an API? In a sense the user-defined component is easier to specify than the system-defined component because it is less tied to particular implementation models. What is required is to abstract out what exactly needs to be done in a target dependent manner and to decide how best to separate it out. The most difficult problem is how to make the implementation of this API as simple as possible for the person performing the compilation, if necessary providing a number of alternative implementations to choose between and a simple method of making this choice (for example, the config.h file above). With the system-defined component the question is more likely to be, how do the various target machines I have in mind implement what I want to do? The abstraction of this is usually to choose a standard and widely implemented API, such as POSIX, which provides all the necessary functionality.

The choice of "standard" API is of course influenced by the type of target machines one has in mind. Within the Unix world, the increasing adoption of Open Standards, such as POSIX, means that choosing a standard API which is implemented on a wide variety of Unix boxes is becoming easier. Similarly, choosing an API which will work on most MSDOS machines should cause few problems. The difficulty is that these are disjoint worlds; it is very difficult to find a standard API which is implemented on both Unix and MSDOS machines. At present not much can be done about this; it reflects the disjoint nature of the computer market.

To develop a similar point : the drawback of choosing POSIX (for example) as an API is that it restricts the range of possible target machines to machines which implement POSIX. Other machines, for example, BSD compliant machines, might offer the same functionality (albeit using different methods), so they should be potential target machines, but they have been excluded by the choice of API. One approach to the problem is the "alternative API" approach. Both the POSIX and the BSD variants are built into the program, but only one is selected on any given target machine by means of conditional compilation. Under our "equivalent functionality" definition of portability, this is a program which is portable to both POSIX and BSD compliant machines. But viewed in the light of the discussion above, if we regard a program as a program-API pair, it could be regarded as two separate programs combined on a single source code tree. A more interesting approach would be to try to abstract out exactly what functionality both POSIX and BSD offer and use that as the API. Then instead of two separate APIs we would have a single API with two broad classes of implementations. The advantage of this latter approach becomes clear if we wished to port the program to a machine which implements neither POSIX nor BSD, but provides the equivalent functionality in a third way.

As a simple example, both POSIX and BSD provide very similar methods for scanning the entries of a directory. The main difference is that the POSIX version is defined in dirent.h and uses a structure called struct dirent, whereas the BSD version is defined in sys/dir.h and calls the corresponding structure struct direct. The actual routines for manipulating directories are the same in both cases. So the only abstraction required to unify these two APIs is to introduce an abstract type, dir_entry say, which can be defined by:

	typedef struct dirent dir_entry ;
on POSIX machines, and:

	typedef struct direct dir_entry ;
on BSD machines. Note how this portion of the API crosses the system-user boundary. The object dir_entry is defined in terms of the objects in the system headers, but the precise definition depends on a user-defined value (whether the target machine implements POSIX or BSD).
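
In practice the choice between these two definitions would itself be made by conditional compilation on some user-defined macro; as a sketch (the macro name HAVE_POSIX_DIRENT is invented for the purposes of illustration):

	#ifdef HAVE_POSIX_DIRENT
	#include <dirent.h>
	typedef struct dirent dir_entry ;
	#else
	#include <sys/dir.h>
	typedef struct direct dir_entry ;
	#endif
The rest of the program is then written purely in terms of dir_entry and the directory scanning routines common to both APIs.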

2.3.4. Alternative Program Versions

Another reason for introducing conditional compilation which relates to APIs is the desire to combine several programs, or versions of programs, on a single source tree. There are several cases to be distinguished between. The reuse of code between genuinely different programs does not really enter the argument : any given program will only use one route through the source tree, so there is no real conditional compilation per se in the program. What is more interesting is the use of conditional compilation to combine several versions of the same program on the same source tree to provide additional or alternative features.

It could be argued that the macros (or whatever) used to select between the various versions of the program are just part of the user-defined API as before. But consider a simple program which reads in some numerical input, say, processes it, and prints the results. This might, for example, have POSIX as its API. We may wish to optionally enhance this by displaying the results graphically rather than textually on machines which have X Windows, the compilation being conditional on some boolean value, HAVE_X_WINDOWS, say. What is the API of the resultant program? The answer from the point of view of the program is the union of POSIX, X Windows and the user-defined value HAVE_X_WINDOWS. But from the implementation point of view we can either implement POSIX and set HAVE_X_WINDOWS to false, or implement both POSIX and X Windows and set HAVE_X_WINDOWS to true. So what introducing HAVE_X_WINDOWS does is to allow flexibility in the API implementation.
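
In the source this might appear as something like the following sketch, in which the two output routines are invented for the purposes of illustration:

	#ifdef HAVE_X_WINDOWS
	    display_results_graphically ( results ) ;	/* uses X Windows */
	#else
	    print_results_textually ( results ) ;	/* uses POSIX only */
	#endif
On a machine without X Windows the graphical routine is never referenced, so X need not be implemented there.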

This is very similar to the alternative APIs discussed above. However the approach outlined will really only work for optional API extensions. To work in the alternative API case, we would need to have the union of POSIX, BSD and a boolean value, say, as the API. Although this is possible in theory, it is likely to lead to namespace clashes between POSIX and BSD.


Appendix: Namespaces and APIs

Namespace problems are amongst the most difficult faced by standard defining bodies (for example, the ANSI and POSIX committees) and they often go to great lengths to specify which names should, and should not, appear when certain headers are included. (The position is set out in D. F. Prosser, Header and name space rules for UNIX systems (private communication), USL, 1993.)

For example, the intention, certainly in ANSI, is that each header should operate as an independent sub-API. Thus va_list is prohibited from appearing in the namespace when stdio.h is included (it is defined only in stdarg.h) despite the fact that it appears in the prototype:

	int vprintf ( const char *, va_list ) ;
This seeming contradiction is worked round on most implementations by defining a type __va_list in stdio.h which has exactly the same definition as va_list, and declaring vprintf as:

	int vprintf ( const char *, __va_list ) ;
This is only legal because __va_list is deemed not to corrupt the namespace because of the convention that names beginning with __ are reserved for implementation use.

This particular namespace convention is well-known, but there are others defined in these standards which are not generally known (and since no compiler I know tests them, not widely adhered to). For example, the ANSI header errno.h reserves all names given by the regular expression:

	E[0-9A-Z][0-9a-z_A-Z]+
against macros (i.e. in all namespaces). By prohibiting the user from using names of this form, the intention is to protect against namespace clashes with extensions of the ANSI API which introduce new error numbers. It also protects against a particular implementation of these extensions - namely that new error numbers will be defined as macros.

A better example of protecting against particular implementations comes from POSIX. If sys/stat.h is included names of the form:

	st_[0-9a-z_A-Z]+
are reserved against macros (as member names). The intention here is not only to reserve field selector names for future extensions to struct stat (which would only affect API implementors, not ordinary users), but also to reserve against the possibility that these field selectors might be implemented by macros. So our st_atime example in section 2.2.3 is strictly illegal because the procedure name st_atime lies in a restricted namespace. Indeed the namespace is restricted precisely to disallow this program.

As an exercise to the reader, how many of your programs use names from the following restricted namespaces (all drawn from ANSI, all applying to all namespaces)?

	is[a-z][0-9a-z_A-Z]+				(ctype.h)
	to[a-z][0-9a-z_A-Z]+				(ctype.h)
	str[a-z][0-9a-z_A-Z]+				(stdlib.h)
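For illustration, each of the following innocent-looking declarations in fact uses a name drawn from one of these reserved namespaces:

	int isolated ;				/* is[a-z]...	(ctype.h) */
	double total ( double *, int ) ;	/* to[a-z]...	(ctype.h) */
	char *strip ( char *s ) ;		/* str[a-z]...	(stdlib.h) */
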
With the TDF approach of describing APIs in abstract terms using the #pragma token syntax most of these namespace restrictions are seen to be superfluous. When a target independent header is included precisely the objects defined in that header in that version of the API appear in the namespace. There are no worries about what else might happen to be in the header, because there is nothing else. Also implementation details are separated off to the TDF library building, so possible namespace pollution through particular implementations does not arise.

Currently TDF does not have a neat way of solving the va_list problem. The present target independent headers use a similar workaround to that described above (exploiting a reserved namespace). (See the footnote in section 3.4.1.1.)

None of this is intended as criticism of the ANSI or POSIX standards. It merely shows some of the problems that can arise from the insufficient separation of code.


Part of the TenDRA Web.
Crown Copyright © 1998.


TDF and Portability

January 1998

next section previous section current document TenDRA home page document index


3.1 - Features of TDF
3.1.1 - Capsule Structure
3.1.2 - Tokens
3.2 - TDF Compilation Phases
3.2.1 - API Description (Top Right)
3.2.2 - Production (Top Left)
3.2.3 - API Implementation (Bottom Right)
3.2.4 - Installation (Bottom Left)
3.2.5 - Illustrated Example
3.3 - Aspects of the TDF System
3.3.1 - The C to TDF Producer
3.3.2 - C to TDF Mappings
3.3.3 - TDF Linking
3.3.4 - The TDF Installers
3.4 - TDF and APIs
3.4.1 - API Description
3.4.1.1 - The Description Process
3.4.1.2 - Resolving Conflicts
3.4.1.3 - The Benefits of API Description
3.4.2 - TDF Library Building
3.4.2.1 - System Header Problems
3.4.2.2 - System Library Problems
3.4.2.3 - TDF Library Builders
3.5 - TDF and Conditional Compilation
3.5.1 - User-Defined APIs
3.5.2 - User Defined Tokens - Example
3.5.3 - Conditional Compilation within TDF
3.5.4 - Alternative Program Versions

3. TDF

Having discussed many of the problems involved with writing portable programs, we now eventually turn to TDF. Firstly a brief technical overview is given, indicating those features of TDF which facilitate the separation of program. Secondly the TDF compilation scheme is described. It is shown how the features of TDF are exploited to aid in the separation of target independent and target dependent code which we have indicated as characterising portable programs. Finally, the various constituents of this scheme are considered individually, and their particular roles are described in more detail.


3.1. Features of TDF

It is not the purpose of this paper to explain the exact specification of TDF - this is described elsewhere (see [6] and [4]) - but rather to show how its general design features make it suitable as an aid to writing portable programs.

TDF is an abstraction of high-level languages - it contains such things as exps (abstractions of expressions and statements), shapes (abstractions of types) and tags (abstractions of variable identifiers). In general form it is an abstract syntax tree which is flattened and encoded as a series of bits, called a capsule. This fairly high level of definition (for a compiler intermediate language) means that TDF is architecture neutral in the sense that it makes no assumptions about the underlying processor architecture.

The translation of a capsule to and from the corresponding syntax tree is totally unambiguous; TDF also has a "universal" semantic interpretation, as defined in the TDF specification.

3.1.1. Capsule Structure

A TDF capsule consists of a number of units of various types. These are embedded in a general linkage scheme (see Fig. 2). Each unit contains a number of variable objects of various sorts (for example, tags and tokens) which are potentially visible to other units. Within the unit body each variable object is identified by a unique number. The linking is via a set of variable objects which are global to the entire capsule. These may in turn be associated with external names. For example, in Fig. 2, the fourth variable of the first unit is identified with the first variable of the third unit, and both are associated with the fourth external name.

FIGURE 2. TDF Capsule Structure


This capsule structure means that the combination of a number of capsules to form a single capsule is a very natural operation. The actual units are copied unchanged into the resultant capsule - it is only the surrounding linking information that needs changing. Many criteria could be used to determine how this linking is to be organised, but the simplest is to link two objects if and only if they have the same external name. This is the scheme that the current TDF linker has implemented. Furthermore such operations as changing an external name or removing it altogether ("hiding") are very simple under this linking scheme.

3.1.2. Tokens

So, the combination of program at this high level is straightforward. But TDF also provides another mechanism which allows for the combination of program at the syntax tree level, namely tokens. Virtually any node of the TDF tree may be a token : a place holder which stands for a subtree. Before the TDF can be decoded fully the definition of this token must be provided. The token definition is then macro substituted for the token in the decoding process to form the complete tree (see Fig. 3).

FIGURE 3. TDF Tokens


Tokens may also take arguments (see Fig. 4). The actual argument values (from the main tree) are substituted for the formal parameters in the token definition.

FIGURE 4. TDF Tokens (with Arguments)


As mentioned above, tokens are one of the types of variable objects which are potentially visible to external units. This means that a token does not have to be defined in the same unit as it is used in. Nor do these units originally have to have come from the same capsule, provided they have been linked before they need to be fully decoded. Tokens therefore provide a mechanism for the low-level separation and combination of code.


3.2. TDF Compilation Phases

We have seen how one of the great strengths of TDF is the fact that it facilitates the separation and combination of program. We now demonstrate how this is applied in the TDF compilation strategy. This section is designed only to give an outline of this scheme. The various constituent phases are discussed in more detail later.

Again we start with the simplest case, where the program contains no target dependent code. The strategy is illustrated in Fig. 5, which should be compared with the traditional compilation strategy shown in Fig. 1. The general layout of the diagrams is the same. The left halves of the diagrams refer to the program itself, and the right halves to the corresponding API. The top halves refer to machine independent material, and the bottom halves to what happens on each target machine. Thus, as before, the portable program appears in the top left of the diagram, and the corresponding API in the top right.

The first thing to note is that, whereas previously all the compilation took place on the target machines, here the compilation has been split into a target independent (C -> TDF) part, called production, and a target dependent (TDF -> target) part, called installation. One of the synonyms for TDF is ANDF, Architecture Neutral Distribution Format, and we require that the production is precisely that - architecture neutral - so that precisely the same TDF is installed on all the target machines.

This architecture neutrality necessitates a separation of code. For example, in the "Hello world" example discussed in sections 2.1.1 and 2.1.2, the API specifies that there shall be a type FILE and an object stdout of type FILE *, but the implementations of these may be different on all the target machines. Thus we need to be able to abstract out the code for FILE and stdout from the TDF output by the producer, and provide the appropriate (target dependent) definitions for these objects in the installation phase.

FIGURE 5. TDF Compilation Phases


3.2.1. API Description (Top Right)

The method used for this separation is the token mechanism. Firstly the syntactic element of the API is described in the form of a set of target independent headers. Whereas the target dependent, system headers contain the actual implementation of the API on a particular machine, the target independent headers express to the producer what is actually in the API, and which may therefore be assumed to be common to all compliant target machines. For example, in the target independent headers for the ANSI standard, there will be a file stdio.h containing the lines:

	#pragma token TYPE FILE # ansi.stdio.FILE
	#pragma token EXP rvalue : FILE * : stdout # ansi.stdio.stdout
	#pragma token FUNC int ( const char *, FILE * ) : fputs # ansi.stdio.fputs
These #pragma token directives are extensions to the C syntax which enable the expression of abstract syntax information to the producer. The directives above tell the producer that there exists a type called FILE, an expression stdout which is an rvalue (that is, a non-assignable value) of type FILE *, and a procedure fputs with prototype:

	int fputs ( const char *, FILE * ) ;
and that it should leave their values unresolved by means of tokens (for more details on the #pragma token directive see [3]). Note how the information in the target independent header precisely reflects the syntactic information in the ANSI API.

The names ansi.stdio.FILE etc. give the external names for these tokens, those which will be visible at the outermost layer of the capsule; they are intended to be unique (this is discussed below). It is worth making the distinction between the internal names and these external token names. The former are the names used to represent the objects within C, and the latter the names used within TDF to represent the tokens corresponding to these objects.

3.2.2. Production (Top Left)

Now the producer can compile the program using these target independent headers. As will be seen from the "Hello world" example, these headers contain sufficient information to check that the program is syntactically correct. The produced, target independent, TDF will contain tokens corresponding to the various uses of stdout, fputs and so on, but these tokens will be left undefined. In fact there will be other undefined tokens in the TDF. The basic C types, int and char are used in the program, and their implementations may vary between target machines. Thus these types must also be represented by tokens. However these tokens are implicit in the producer rather than explicit in the target independent headers.

Note also that because the information in the target independent headers describes abstractly the contents of the API and not some particular implementation of it, the producer is in effect checking the program against the API itself.

3.2.3. API Implementation (Bottom Right)

Before the TDF output by the producer can be decoded fully it needs to have had the definitions of the tokens it has left undefined provided. These definitions will be potentially different on all target machines and reflect the implementation of the API on that machine.

The syntactic details of the implementation are to be found in the system headers. The process of defining the tokens describing the API (called TDF library building) consists of comparing the implementation of the API as given in the system headers with the abstract description of the tokens comprising the API given in the target independent headers. The token definitions thus produced are stored as TDF libraries, which are just archives of TDF capsules.

For example, in the example implementation of stdio.h given in section 2.1.2, the token ansi.stdio.FILE will be defined as the TDF compound shape corresponding to the structure defining the type FILE (recall the distinction between internal and external names). __iob will be an undefined tag whose shape is an array of 60 copies of the shape given by the token ansi.stdio.FILE, and the token ansi.stdio.stdout will be defined to be the TDF expression corresponding to a pointer to the second element of this array. Finally the token ansi.stdio.fputs is defined to be the effect of applying the procedure given by the undefined tag fputs. (In fact, this picture has been slightly simplified for the sake of clarity. See the section on C -> TDF mappings in section 3.3.2.)

These token definitions are created using exactly the same C -> TDF translation program as is used in the producer phase. This program knows nothing about the distinction between target independent and target dependent TDF, it merely translates the C it is given (whether from a program or a system header) into TDF. It is the compilation process itself which enables the separation of target independent and target dependent TDF.

In addition to the tokens made explicit in the API, the implicit tokens built into the producer must also have their definitions inserted into the TDF libraries. The method of definition of these tokens is slightly different. The definitions are automatically deduced by, for example, looking in the target machine's limits.h header to find the local values of CHAR_MIN and CHAR_MAX, and deducing the definition of the token corresponding to the C type char from this. It will be the variety (the TDF abstraction of integer types) consisting of all integers between these values.
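
For example, if the target machine's limits.h were to contain the (typical, but merely illustrative) values:

	#define CHAR_MIN ( -128 )
	#define CHAR_MAX 127
then the implicit token standing for the C type char would be defined as the variety consisting of all integers between -128 and 127.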

Note that what we are doing in the main library build is checking the actual implementation of the API against the abstract syntactic description. Any variations of the syntactic aspects of the implementation from the API will therefore show up. Thus library building is an effective way of checking the syntactic conformance of a system to an API. Checking the semantic conformance is far more difficult - we shall return to this issue later.

3.2.4. Installation (Bottom Left)

The installation phase is now straightforward. The target independent TDF representing the program contains various undefined tokens (corresponding to objects in the API), and the definitions for these tokens on the particular target machine (reflecting the API implementation) are to be found in the local TDF libraries. It is a natural matter to link these to form a complete, target dependent, TDF capsule. The rest of the installation consists of a straightforward translation phase (TDF -> target) to produce a binary object file, and linking with the system libraries to form a final executable. Linking with the system libraries will resolve any tags left undefined in the TDF.

3.2.5. Illustrated Example

In order to help clarify exactly what is happening where, Fig. 6 shows a simple example superimposed on the TDF compilation diagram.

FIGURE 6. Example Compilation


The program to be translated is simply:

	FILE f ;
and the API is as above, so that FILE is an abstract type. This API is described as target independent headers containing the #pragma token statements given above. The producer combines the program with the target independent headers to produce a target independent capsule which declares a tag f whose shape is given by the token representing FILE, but leaves this token undefined. In the API implementation, the local definition of the type FILE from the system headers is translated into the definition of this token by the library building process. Finally in the installation, the target independent capsule is combined with the local token definition library to form a target dependent capsule in which all the tokens used are also defined. This is then installed further as described above.


3.3. Aspects of the TDF System

Let us now consider in more detail some of the components of the TDF system and how they fit into the compilation scheme.

3.3.1. The C to TDF Producer

Above it was emphasised how the design of the compilation strategy aids the representation of program in a target independent manner, but this is not enough in itself. The C -> TDF producer must represent everything symbolically; it cannot make assumptions about the target machine. For example, the line of C containing the initialisation:

	int a = 1 + 1 ;
is translated into TDF representing precisely that, 1 + 1, not 2, because it does not know the representation of int on the target machine. The installer does know this, and so is able to replace 1 + 1 by 2 (provided this is actually true).

As another example, in the structure:

	struct tag {
	    int a ;
	    double b ;
	} ;
the producer does not know the actual value in bits of the offset of the second field from the start of the structure - it depends on the sizes of int and double and the alignment rules on the target machine. Instead it represents it symbolically (it is the size of int rounded up to a multiple of the alignment of double). This level of abstraction makes the tokenisation required by the target independent API headers very natural. If we only knew that there existed a structure struct tag with a field b of type double then it is perfectly simple to use a token to represent the (unknown) offset of this field from the start of the structure rather than using the calculated (known) value. Similarly, when it comes to defining this token in the library building phase (recall that this is done by the same C -> TDF translation program as the production) it is a simple matter to define the token to be the calculated value.
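
The concrete value which the library building phase eventually supplies for this offset can be seen with a small test program (a hedged illustration; the figures in the comment are merely typical):

	#include <stddef.h>
	#include <stdio.h>
	
	struct tag {
	    int a ;
	    double b ;
	} ;
	
	int main ()
	{
	    /* On a machine with a 4 byte int and an 8 byte aligned double
	       this prints 8 : sizeof ( int ) rounded up to a multiple of
	       the alignment of double. */
	    printf ( "%d\n", ( int ) offsetof ( struct tag, b ) ) ;
	    return ( 0 ) ;
	}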

Furthermore, because all the producer's operations are performed at this very abstract level, it is a simple matter to put in extra portability checks. For example, it would be a relatively simple task to put most of the functionality of lint (excluding intermodular checking) or gcc's -Wall option into the producer, and moreover have these checks applied to an abstract machine rather than a particular target machine. Indeed a number of these checks have already been implemented.

These extra checks are switched on and off by using #pragma statements. (For more details on the #pragma syntax and which portability checks are currently supported by the producer see [3].) For example, ANSI C states that any undeclared function is assumed to return int, whereas for strict portability checking it is more useful to have undeclared functions marked as an error (indeed for strict API checking this is essential). This is done by inserting the line:

	#pragma no implicit definitions
either at the start of each file to be checked or, more simply, in a start-up file - a file which can be #include'd at the start of each source file by means of a command line option.

Because these checks can be turned off as well as on it is possible to relax as well as strengthen portability checking. Thus if a program is only intended to work on 32-bit machines, it is possible to switch off certain portability checks. The whole ethos underlying the producer is that these portability assumptions should be made explicit, so that the appropriate level of checking can be done.

As has been previously mentioned, the use of a single front-end to any compiler not only virtually eliminates the problems of differing code interpretation and compiler quirks, but also reduces the exposure to compiler bugs. Of course, this also applies to the TDF compiler, which has a single front-end (the producer) and multiple back-ends (the installers). As regards the syntax and semantics of the C language, the producer is by default a strictly ANSI C compliant compiler. (Addition to the October 1993 revision: alas, this is no longer true; however strict ANSI can be specified by means of a simple command line option (see [1]). The decision whether to make the default strict and allow people to relax it, or to make the default lenient and allow people to strengthen it, is essentially a political one. It does not really matter in technical terms provided the user is made aware of exactly what each compilation mode means in terms of syntax, semantics and portability checking.) However it is possible to change its behaviour (again by means of #pragma statements) to implement many of the features found in "traditional" or "K&R" C. Hence it is possible to determine precisely how the producer will interpret the C code it is given by explicitly describing, in terms of these #pragma statements, the C dialect in which it is written.

3.3.2. C to TDF Mappings

The nature of the C -> TDF transformation implemented by the producer is worth considering, although not all the features described in this section are fully implemented in the current (October 1993) producer. Although it is only indirectly related to questions of portability, this mapping does illustrate some of the problems the producer has in trying to represent program in an architecture neutral manner.

Once the initial difficulty of overcoming the syntactic and semantic differences between the various C dialects is overcome, the C -> TDF mapping is quite straightforward. In a hierarchy from high level to low level languages C and TDF are not that dissimilar - both come towards the bottom of what may legitimately be regarded as high level languages. Thus the constructs in C map easily onto the constructs of TDF (there are a few exceptions, for example coercing integers to pointers, which are discussed in [3]). Eccentricities of the C language specification such as doing all integer arithmetic in the promoted integer type are translated explicitly into TDF. So to add two char's, they are promoted to int's, added together as int's, and the result is converted back to a char. These rules are not built directly into TDF because of the desire to support languages other than C (and even other C dialects).
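
For instance, in a function such as this (an illustrative sketch, not actual producer output):

	char add_chars ( char a, char b )
	{
	    return a + b ;	/* both operands promoted to int */
	}
the addition is represented in TDF as if it had been written return ( char ) ( ( int ) a + ( int ) b ) ; with the promotions and the final truncation made explicit.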

A number of issues arise when tokens are introduced. Consider for example the type size_t from the ANSI standard. This is a target dependent integer type, so bearing in mind what was said above it is natural for the producer to use a tokenised variety (the TDF representation of integer types) to stand for size_t. This is done by a #pragma token statement of the form:

	#pragma token VARIETY size_t # ansi.stddef.size_t
But if we want to do arithmetic on size_t's we need to know the integer type corresponding to the integral promotion of size_t . But this is again target dependent, so it makes sense to have another tokenised variety representing the integral promotion of size_t. Thus the simple token directive above maps to (at least) two TDF tokens, the type itself and its integral promotion.
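
A small sketch of where the promoted type is needed:

	#include <stddef.h>
	
	size_t double_it ( size_t n )
	{
	    /* Whether this addition is carried out in size_t itself or in a
	       wider promoted type is target dependent, so the producer needs
	       a token for the integral promotion of size_t as well. */
	    return n + n ;
	}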

As another example, suppose that we have a target dependent C type, type say, and we define a procedure which takes an argument of type type. In both the procedure body and at any call of the procedure the TDF we need to produce to describe how C passes this argument will depend on type. This is because C does not treat all procedure argument types uniformly. Most types are passed by value, but array types are passed by address. But whether or not type is an array type is target dependent, so we need to use tokens to abstract out the argument passing mechanism. For example, we could implement the mechanism using four tokens: one for the type type (which will be a tokenised shape), one for the type an argument of type type is passed as, arg_type say (which will be another tokenised shape), and two for converting values of type type to and from the corresponding values of type arg_type (these will be tokens which take one exp argument and give an exp). For most types, arg_type will be the same as type and the conversion tokens will be identities, but for array types, arg_type will be a pointer to type and the conversion tokens will be "address of" and "contents of".
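
The underlying C behaviour being abstracted here can be seen in the following sketch (illustrative only; type stands for the target dependent type, and the alternative array implementation is shown as a comment):

	typedef double type ;		/* on this target : passed by value */
	/* typedef double type [ 2 ] ;	   on another : passed by address */
	
	void f ( type x )
	{
	    /* body irrelevant : only the argument passing matters here */
	}
	
	void call_f ( void )
	{
	    type x = { 0 } ;
	    f ( x ) ;	/* with the array implementation the address of x
			   would be passed here rather than its value */
	}
Which of the two situations is in force is only known at install time, which is why the passing mechanism itself must be tokenised.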

So there is not the simple one to one correspondence between #pragma token directives and TDF tokens one might expect. Each such directive maps onto a family of TDF tokens, and this mapping in a sense encapsulates the C language specification. Of course in the TDF library building process the definitions of all these tokens are deduced automatically from the local values.

3.3.3. TDF Linking

We now move from considering the components of the producer to those of the installer. The first phase of the installation - linking in the TDF libraries containing the token definitions describing the local implementation of the API - is performed by a general utility program, the TDF linker (or builder). This is a very simple program which is used to combine a number of TDF capsules and libraries into a single capsule. As has been emphasised previously, the capsule structure means that this is a very natural operation, but, as will be seen from the previous discussion (particularly section 2.2.3), such combinatorial phases are very prone to namespace problems.

In TDF, tags, tokens and other externally named objects occupy separate namespaces, and there are no constructs which can cut across these namespaces in the way that C macros do. There still remains the problem that the only way to know that two tokens, say, in different capsules are actually the same is if they have the same name. This, as we have already seen in the case of system linking, can cause objects to be identified wrongly.

In the main TDF linking phase - linking in the token definitions at the start of the installation - we are primarily linking on token names, these tokens being those arising from the use of the target independent headers. Potential namespace problems are virtually eliminated by the use of unique external names for the tokens in these headers (such as ansi.stdio.FILE in the example above). This means that there is a genuine one to one correspondence between tokens and token names. Of course this relies on the external token names given in the headers being genuinely unique. In fact, as is explained below, these names are normally automatically generated, and uniqueness of names within a given API is checked. Also incorporating the API name into the token name helps to ensure uniqueness across APIs. However the token namespace does require careful management. (Note that the user does not normally have access to the token namespace; all variable and procedure names map into the tag namespace.)

We can illustrate the "clean" nature of TDF linking by considering the st_atime example given in section 2.2.3. Recall that in the traditional compilation scheme the problem arose, not because of the program or the API implementation, but because of the way they were combined by the pre-processor. In the TDF scheme the target independent version of sys/stat.h will be included. Thus the procedure name st_atime and the field selector st_atime will be seen to belong to genuinely different namespaces - there are no macros to disrupt this. The former will be translated into a TDF tag with external name st_atime, whereas the latter is translated into a token with external name posix.stat.struct_stat.st_atime , say. In the TDF library reflecting the API implementation, the token posix.stat.struct_stat.st_atime will be defined precisely as the system header intended, as the offset corresponding to the C field selector st_atim.st__sec. The fact that this token is defined using a macro rather than a conventional direct field selector is not important to the library building process. Now the combination of the program with the API implementation in this case is straightforward - not only are the procedure name and the field selector name in the TDF now different, but they also lie in distinct namespaces. This shows how the separation of the API implementation from the main program is cleaner in the TDF compilation scheme than in the traditional scheme.

TDF linking also opens up new ways of combining code which may solve some other namespace problems. For example, in the open example in section 2.2.3, the name open is meant to be internal to the program. It is the fact that it is not treated as such which leads to the problem. If the program consisted of a single source file then we could make open a static procedure, so that its name does not appear in the external namespace. But if the program consists of several source files the external name is necessary for intra-program linking. The TDF linker allows this intra-program linking to be separated from the main system linking. In the TDF compilation scheme described above each source file is translated into a separate TDF capsule, which is installed separately to a binary object file. It is only the system linking which finally combines the various components into a single program. An alternative scheme would be to use the TDF linker to combine all the TDF capsules into a single capsule in the production phase and install that. Because all the intra-program linking has already taken place, the external names required for it can be "hidden" - that is to say, removed from the tag namespace. Only tag names which are used but not defined (and so are not internal to the program) and main should not be hidden. In effect this linking phase has made all the internal names in the program (except main) static.

In fact this type of complete program linking is not always feasible. For very large programs the resulting TDF capsule can be too large for the installer to cope with (it is the system assembler which tends to cause the most problems). Instead it may be better to use a more judiciously chosen partial linking and hiding scheme.

3.3.4. The TDF Installers

The TDF installer on a given machine typically consists of four phases: TDF linking, which has already been discussed, translating TDF to assembly source code, translating assembly source code to a binary object file, and linking binary object files with the system libraries to form the final executable. The latter two phases are currently implemented by the system assembler and linker, and so are identical to the traditional compilation scheme.

It is the TDF to assembly code translator which is the main part of the installer. Although not strictly related to the question of portability, the nature of the translator is worth considering. Like the producer (and the assembler), it is a transformational, as opposed to a combinatorial, compilation phase. But whereas the transformation from C to TDF is "difficult" because of the syntax and semantics of C and the need to represent everything in an architecture neutral manner, the transformation from TDF to assembly code is much easier because of the unambiguous syntax and uniform semantics of TDF, and because now we know the details of the target machine, it is no longer necessary to work at such an abstract level.

The whole construction of the current generation of TDF translators is based on the concept of compilation as transformation. They represent the TDF they read in as a syntax tree, virtually identical to the syntax tree comprising the TDF. The translation process then consists of continually applying transformations to this tree - in effect TDF -> TDF transformations - gradually optimising it and changing it to a form where the translation into assembly source code is a simple transcription process (see [7]).

Even such operations as constant evaluation - replacing 1 + 1 by 2 in the example above - may be regarded as TDF -> TDF transformations. But so may more complex optimisations such as taking constants out of a loop, common sub-expression elimination, strength reduction and so on. Some of these transformations are universally applicable, others can only be applied on certain classes of machines. This transformational approach results in high quality code generation (see [5]) while minimising the risk of transformational errors. Moreover the sharing of so much code - up to 70% - between all the TDF translators, like the introduction of a common front-end, further reduces the exposure to compiler bugs.

Much of the machine ABI information is built into the translator in a very simple way. For example, to evaluate the offset of the field b in the structure struct tag above, the producer has already done all the hard work, providing a formula for the offset in terms of the sizes and alignments of the basic C types. The translator merely provides these values and the offset is automatically evaluated by the constant evaluation transformations. Other aspects of the ABI, for example the procedure argument and result passing conventions, require more detailed attention.

One interesting range of optimisations implemented by many of the current translators consists of the inlining of certain standard procedure calls. For example, strlen ( "hello" ) is replaced by 5. As it stands this optimisation appears to run the risk of corrupting the programmer's namespace - what if strlen was a user-defined procedure rather than the standard library routine (cf. the open example in section 2.2.3)? This risk only materialises however if we actually use the procedure name to spot this optimisation. In code compiled from the target independent headers all calls to the library routine strlen will be implemented by means of a uniquely named token, ansi.string.strlen say. It is by recognising this token name as the token is expanded that the translators are able to ensure that this is really the library routine strlen.

Another example of an inlined procedure of this type is alloca. Many other compilers inline alloca, or rather they inline __builtin_alloca and rely on the programmer to identify alloca with __builtin_alloca. This gets round the potential namespace problems by getting the programmer to confirm that alloca in the program really is the library routine alloca. By the use of tokens this information is automatically provided to the TDF translators.


3.4. TDF and APIs

What the discussion above has emphasised is that the ability to describe APIs abstractly as target independent headers underpins the entire TDF approach to portability. We now consider this in more detail.

3.4.1. API Description

The process of transforming an API specification into its description in terms of #pragma token directives is a time-consuming but often fascinating task. In this section we discuss some of the issues arising from the process of describing an API in this way.

3.4.1.1. The Description Process

As may be observed from the example given in section 3.2.1, the #pragma token syntax is not necessarily intuitively obvious. It is designed to be a low-level description of tokens which is capable of expressing many complex token specifications. Most APIs are however specified in C-like terms, so an alternative syntax, closer to C, has been developed in order to facilitate their description. This is then transformed into the corresponding #pragma token directives by a specification tool called tspec (see [2]), which also applies a number of checks to the input and generates the unique token names. For example, the description leading to the example above was:

	+TYPE FILE ;
	+EXP FILE *stdout ;
	+FUNC int fputs ( const char *, FILE * ) ;
Note how close this is to the English language specification of the API given previously. (There are a number of open issues relating to tspec and the #pragma token syntax, mainly concerned with determining the type of syntactic statements that it is desired to make about the APIs being described. The current scheme is adequate for those APIs so far considered, but it may need to be extended in future.)

tspec is not capable of expressing the full power of the #pragma token syntax. While this makes it easier to use in most cases - for describing the normal C-like objects such as types, expressions and procedures - it means that complex token descriptions cannot be expressed in tspec; instead it is necessary to express these directly in the #pragma token syntax. However this is only rarely required: the constructs offsetof, va_start and va_arg from ANSI are the only examples so far encountered during the API description programme at DRA. For example, va_arg takes an assignable expression of type va_list and a type t and returns an expression of type t. Clearly, this cannot be expressed abstractly in C-like terms, so the #pragma token description:

	#pragma token PROC ( EXP lvalue : va_list : e, TYPE t )\
	    EXP rvalue : t : va_arg # ansi.stdarg.va_arg
must be used instead.

Most of the process of describing an API consists of going through its English language specification transcribing the object specifications it gives into the tspec syntax (if the specification is given in a machine readable form this process can be partially automated). The interesting part consists of trying to interpret what is written and reading between the lines as to what is meant. It is important to try to represent exactly what is in the specification rather than being influenced by one's knowledge of a particular implementation, otherwise the API checking phase of the compilation will not be checking against what is actually in the API but against a particular way of implementing it.

There is a continuing API description programme at DRA. The current status (October 1993) is that ANSI (X3.159), POSIX (1003.1), XPG3 (X/Open Portability Guide 3) and SVID (System V Interface Definition, 3rd Edition) have been described and extensively tested. POSIX2 (1003.2), XPG4, AES (Revision A), X11 (Release 5) and Motif (Version 1.1) have been described, but not yet extensively tested.

There may be some syntactic information in the paper API specifications which tspec (and the #pragma token syntax) is not yet capable of expressing. In particular, some APIs go into very careful detail about the management of namespaces within the API, explicitly spelling out exactly what should, and should not, appear in the namespaces as each header is included (see the appendix on namespaces and APIs below). What is actually being done here is to regard each header as an independent sub-API. There is not however a sufficiently developed "API calculus" to allow such relationships to be easily expressed.

3.4.1.2. Resolving Conflicts

Another consideration during the description process is to try to integrate the various API descriptions. For example, POSIX extends ANSI, so it makes sense to have the target independent POSIX headers include the corresponding ANSI headers and just add the new objects introduced by POSIX. This does present problems with APIs which are basically compatible but have a small number of incompatibilities, whether deliberate or accidental. As an example of an "accidental" incompatibility, XPG3 is an extension of POSIX, but whereas POSIX declares malloc by means of the prototype:

	void *malloc ( size_t ) ;
XPG3 declares it by means of the traditional procedure declaration:

	void *malloc ( s )
	size_t s ;
These are surely intended to express the same thing, but in the first case the argument is passed as a size_t and in the second it is first promoted to the integral promotion of size_t. On most machines these are compatible, either because of the particular implementation of size_t, or because the procedure calling conventions make them compatible. However in general they are incompatible, so the target independent headers either have to reflect this or have to read between the lines and assume that the incompatibility was accidental and ignore it.

As an example of a deliberate incompatibility, both XPG3 and SVID3 declare a structure struct msqid_ds in sys/msg.h which has fields msg_qnum and msg_qbytes. The difference is that whereas XPG3 declares these fields to have type unsigned short, SVID3 declares them to have type unsigned long. However for most purposes the precise types of these fields are not important, so the APIs can be unified by making the types of these fields target dependent. That is to say, tokenised integer types __msg_q_t and __msg_l_t are introduced. On XPG3-compliant machines these will both be defined to be unsigned short, and on SVID3-compliant machines they will both be unsigned long. So, although strict XPG3 and strict SVID3 are incompatible, the two extension APIs created by adding these types are compatible. In the rare case when the precise type of these fields is important, the strict APIs can be recovered by defining the field types to be unsigned short or unsigned long at produce-time rather than at install-time. (XPG4 uses a similar technique to resolve this incompatibility. But whereas the XPG4 types need to be defined explicitly, the tokenised types are defined implicitly according to whatever the field types are on a particular machine.)
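
In #pragma token terms the unifying abstraction might look something like this (the external token names after the # are invented here for illustration):

	#pragma token VARIETY __msg_q_t # xpg3.msg.__msg_q_t
	#pragma token VARIETY __msg_l_t # xpg3.msg.__msg_l_t
with the msg_qnum and msg_qbytes fields of struct msqid_ds declared to have these types, and with the library building phase on each machine defining the tokens to be whatever the local field types actually are.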

This example shows how introducing extra abstractions can resolve potential conflicts between APIs. But it may also be used to resolve conflicts between the API specification and the API implementations. For example, POSIX specifies that the structure struct flock defined in fcntl.h shall have a field l_pid of type pid_t. However on at least two of the POSIX implementations examined at DRA, pid_t was implemented as an int, but the l_pid field of struct flock was implemented as a short (this showed up in the TDF library building process). The immediate reaction might be that these systems have not implemented POSIX correctly, so they should be cast into the outer darkness. However for the vast majority of applications, even those which use the l_pid field, its precise type is not important. So the decision was taken to introduce a tokenised integer type, __flock_pid_t, to stand for the type of the l_pid field. So although the implementations do not conform to strict POSIX, they do conform to this slightly more relaxed extension. Of course, one could enforce strict POSIX by defining __flock_pid_t to be pid_t at produce-time, but the given implementations would not conform to this stricter API.

Both the previous two examples are really concerned with the question of determining the correct level of abstraction in API specification. Abstraction is inclusive and allows for API evolution, whereas specialisation is exclusive and may lead to dead-end APIs. The SVID3 method of allowing for longer messages than XPG3 - changing the msg_qnum and msg_qbytes fields of struct msqid_ds from unsigned short to unsigned long - is an over-specialisation which leads to an unnecessary conflict with XPG3. The XPG4 method of achieving exactly the same end - abstracting the types of these fields - is, by contrast, a smooth evolutionary path.

3.4.1.3. The Benefits of API Description

The description process is potentially of great benefit to bodies involved in API specification. While the specification itself stays on paper the only real existence of the API is through its implementations. Giving the specification a concrete form means not only does it start to be seen as an object in its own right, rather than some fuzzy document underlying the real implementations, but also any omissions, insufficient specifications (where what is written down does not reflect what the writer actually meant) or built-in assumptions are more apparent. It may also be able to help show up the kind of over-specialisation discussed above. The concrete representation also becomes an object which both applications and implementations can be automatically checked against. As has been mentioned previously, the production phase of the compilation involves checking the program against the abstract API description, and the library building phase checks the syntactic aspect of the implementation against it.

The implementation checking aspect is considered below. Let us here consider the program checking aspect by re-examining the examples given in section 2.2.4.1. The SIGKILL example is straightforward; SIGKILL will appear in the POSIX version of signal.h but not the ANSI version, so if the program is compiled with the target independent ANSI headers it will be reported as being undefined. In a sense this is nothing to do with the #pragma token syntax, but with the organisation of the target independent headers. The other examples do however rely on the fact that the #pragma token syntax can express syntactic information in a way which is not possible directly from C. Thus the target independent headers express exactly the fact that time_t is an arithmetic type, about which nothing else is known. Thus ( t & 1 ) is not type correct for a time_t t because the binary & operator does not apply to all arithmetic types. Similarly, for the type div_t the target independent headers express the information that there exists a structure type div_t and field selectors quot and rem of div_t of type int, but nothing about the order of these fields or the existence of other fields. Thus any attempt to initialise a div_t will fail because the correspondence between the values in the initialisation and the fields of the structure is unknown. The struct dirent example is entirely analogous, except that here the declarations of the structure type struct dirent and the field selector d_name appear in both the POSIX and XPG3 versions of dirent.h, whereas the field selector d_ino appears only in the XPG3 version.
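
To make this concrete, a fragment along the following lines (an illustrative sketch) would be accepted by most system compilers but rejected by the producer when compiled against the target independent ANSI headers:

	#include <stdlib.h>
	#include <time.h>
	
	int f ( time_t t )
	{
	    div_t d = { 3, 1 } ;	/* rejected : field order of div_t is unknown */
	    return ( int ) ( t & 1 ) + d.quot ;	/* rejected : & is not valid on an
						   arbitrary arithmetic type */
	}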

3.4.2. TDF Library Building

As we have said, two of the primary problems with writing portable programs are dealing with API implementation errors on the target machines - objects not being defined, or being defined in the wrong place, or being implemented incorrectly - and namespace problems - particularly those introduced by the system headers. The most interesting contrast between the traditional compilation scheme (Fig. 1) and the TDF scheme (Fig. 5) is that in the former the program comes directly into contact with the "real world" of messy system headers and incorrectly implemented APIs, whereas in the latter there is an "ideal world" layer interposed. This consists of the target independent headers, which describe all the syntactic features of the API where they are meant to be, and with no extraneous material to clutter up the namespaces (like index and the macro st_atime in the examples given in section 2.2.3), and the TDF libraries, which can be combined "cleanly" with the program without any namespace problems. All the unpleasantness has been shifted to the interface between this "ideal world" and the "real world"; that is to say, the TDF library building.

The importance of this change may be summarised by observing that previously all the unpleasantnesses happened in the left hand side of the diagram (the program half), whereas in the TDF scheme they are in the right hand side (the API half). So API implementation problems are seen to be a genuinely separate issue from the main business of writing programs; the ball is firmly in the API implementor's court rather than the programmer's. Also the problems need to be solved once per API rather than once per program.

It might be said that this has not advanced us very far towards actually dealing with the implementation errors. The API implementation still contains errors whoever's responsibility it is. But the TDF library building process gives the API implementor a second chance. Many of the syntactic implementation problems will be shown up as the library builder compares the implementation against the abstract API description, and it may be possible to build corrections into the TDF libraries so that the libraries reflect, not the actual implementation, but some improved version of it.

To show how this might be done, we reconsider the examples of API implementation errors given in section 2.2.4.2. As before we may divide our discussion between system header problems and system library problems. Recall however the important distinction, that whereas previously the programmer was trying to deal with these problems in a way which would work on all machines (top left of the compilation diagrams), now the person building the TDF libraries is trying to deal with implementation problems for a particular API on a particular machine (bottom right).

3.4.2.1. System Header Problems

Values which are defined in the wrong place, such as SEEK_SET in the example given, present no difficulties. The library builder will look where it expects to find them and report that they are undefined. To define these values it is merely a matter of telling the library builder where they are actually defined (in unistd.h rather than stdio.h).

Similarly, values which are undefined are also reported. If these values can be deduced from other information, then it is a simple matter to tell the library builder to use these deduced values. For example, if EXIT_SUCCESS and EXIT_FAILURE are undefined, it is probably possible to deduce their values from experimentation or experience (or guesswork).

Wrongly defined values are more difficult. Firstly they are not necessarily detected by the library builder because they are semantic rather than syntactic errors. Secondly, whereas it is easy to tell the library builder to use a corrected value rather than the value given in the implementation, this mechanism needs to be used with circumspection. The system libraries are provided pre-compiled, and they have been compiled using the system headers. If we define these values differently in the TDF libraries we are effectively changing the system headers, and there is a risk of destroying the interface with the system libraries. For example, changing a structure is not a good idea, because different parts of the program - the main body and the parts linked in from the system libraries - will have different ideas of the size and layout of this structure. (See the struct flock example in section 3.4.1.2 for a potential method of resolving such implementation problems.)

In the two cases given above - DBL_MAX and size_t - the necessary changes are probably "safe". DBL_MAX is not a special value in any library routines, and changing size_t from int to unsigned int does not affect its size, alignment or procedure passing rules (at least not on the target machines we have in mind) and so should not disrupt the interface with the system library.

3.4.2.2. System Library Problems

Errors in the system libraries will not be detected by the TDF library builder because they are semantic errors, whereas the library building process is only checking syntax. The only realistic way of detecting semantic problems is by means of test suites, such as the Plum-Hall or CVSA library tests for ANSI and VSX for XPG3, or by detailed knowledge of particular API implementations born of personal experience. However it may be possible to build workarounds for problems identified in these tests into the TDF libraries.

For example, the problem with realloc discussed in section 2.2.4.4 could be worked around by defining the token representing realloc to be the equivalent of:

	#define realloc ( p, s ) ( void *q = ( p ) ? ( realloc ) ( q, s ) : malloc ( s ) )
(where the C syntax has been extended to allow variables to be introduced inside expressions) or:

	static void *__realloc ( void *p, size_t s )
	{
	    if ( p == NULL ) return ( malloc ( s ) ) ;
	    return ( ( realloc ) ( p, s ) ) ;
	}
	
	#define realloc ( p, s ) __realloc ( p, s )
Alternatively, the token definition could be encoded directly into TDF (not via C), using the TDF notation compiler (see [9]).

3.4.2.3. TDF Library Builders

The discussion above shows how the TDF libraries are an extra layer which lies on top of the existing system API implementation, and how this extra layer can be exploited to provide corrections and workarounds to various implementation problems. The expertise of particular API implementation problems on particular machines can be captured once and for all in the TDF libraries, rather than being spread piecemeal over all the programs which use that API implementation. But being able to encapsulate this expertise in this way makes it a marketable quantity. One could envisage a market in TDF libraries: ranging from libraries closely reflecting the actual API implementation to top of the range libraries with many corrections and workarounds built in.

All of this has tended to paint the system vendors as the villains of the piece for not providing correct API implementations, but this is not entirely fair. The reason why API implementation errors may persist over many operating system releases is that system vendors have as many porting problems as anyone else - preparing a new operating system release is in effect a huge porting exercise - and are understandably reluctant to change anything which basically works. The use of TDF libraries could be a low-risk strategy for system vendors to allow users the benefits of API conformance without changing the underlying operating system.

Of course, if the system vendor's porting problems could be reduced, they would have more confidence to make their underlying systems more API conformant, and thereby help reduce the normal programmer's porting problems. So whereas using the TDF libraries might be a short-term workaround for API implementation problems, the rest of the TDF porting system might help towards a long-term solution.

Another interesting possibility arises. As we said above, many APIs, for example POSIX and BSD, offer equivalent functionality by different methods. It may be possible to use the TDF library building process to express one in terms of the other. For example, in the struct dirent example in section 2.3.3, the only differences between POSIX and BSD were that the BSD version was defined in a different header and that the structure was called struct direct. But this presents no problems to the TDF library builder: it is perfectly simple to tell it to look in sys/dir.h instead of dirent.h, and to identify struct direct with struct dirent. So it may be possible to build a partial POSIX lookalike on BSD systems by using the TDF library mechanism.


3.5. TDF and Conditional Compilation

So far our discussion of the TDF approach to portability has been confined to the simplest case, where the program itself contains no target dependent code. We now turn to programs which contain conditional compilation. As we have seen, many of the reasons why it is necessary to introduce conditional compilation into the traditional compilation process either do not arise or are seen to be distinct phases in the TDF compilation process. The use of a single front-end (the producer) virtually eliminates problems of compiler limitations and differing interpretations and reduces compiler bug problems, so it is not necessary to introduce conditionally compiled workarounds for these. Also API implementation problems, another prime reason for introducing conditional compilation in the traditional scheme, are seen to be isolated in the TDF library building process, thereby allowing the programmer to work in an idealised world one step removed from the real API implementations. However the most important reason for introducing conditional compilation is where things, for reasons of efficiency or whatever, are genuinely different on different machines. It is this we now consider.

3.5.1. User-Defined APIs

The things which are done genuinely differently on different machines have previously been characterised as comprising the user-defined component of the API. So the real issue in this case is how to use the TDF API description and representation methods within one's own programs. A very simple worked example is given below (in section 3.5.2), for more detailed examples see [8].

For the MSB example given in section 2.3 we firstly have to decide what the user-defined API is. To fully reflect exactly what the target dependent code is, we could define the API, in tspec terms, to be:

	+MACRO unsigned char MSB ( unsigned int a ) ;
where the macro MSB gives the most significant byte of its argument, a. Let us say that the corresponding #pragma token statement is put into the header msb.h. Then the program can be recast into the form:

	#include <stdio.h>
	#include "msb.h"
	
	unsigned int x = 100000000 ;
	
	int main ()
	{
	    printf ( "%u\n", MSB ( x ) ) ;
	    return ( 0 ) ;
	}
The producer will compile this into a target independent TDF capsule which uses a token to represent the use of MSB, but leaves this token undefined. The only question that remains is how this token is defined on the target machine; that is, how the user-defined API is implemented. On each target machine a TDF library containing the local definition of the token representing MSB needs to be built. There are two basic possibilities. Firstly the person performing the installation could build the library directly, by compiling a program of the form:

	#pragma implement interface "msb.h"
	#include "config.h"
	
	#ifndef SLOW_SHIFT
	#define MSB ( a ) ( ( unsigned char ) ( a >> 24 ) )
	#else
	#ifdef BIG_ENDIAN
	#define MSB ( a ) *( ( unsigned char * ) &( a ) )
	#else
	#define MSB ( a ) *( ( unsigned char * ) &( a ) + 3 )
	#endif
	#endif
with the appropriate config.h to choose the correct local implementation of the interface described in msb.h. Alternatively the programmer could provide three alternative TDF libraries corresponding to the three implementations, and let the person installing the program choose between these. The two approaches are essentially equivalent; they just provide different ways of making the choice of implementation of the user-defined component of the API. An interesting alternative approach would be to provide a short program which does the selection between the provided API implementations automatically. This approach might be particularly effective in deciding which implementation offers the best performance on a particular target machine.
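
A minimal sketch of such a selection program, under the assumption that the byte order test alone is enough (choosing SLOW_SHIFT is really a performance measurement and is omitted here), might be:

	#include <stdio.h>
	
	int main ()
	{
	    unsigned int x = 0x12345678 ;
	    unsigned char first = *( ( unsigned char * ) &x ) ;
	
	    /* Emit a config.h choosing between the MSB implementations. */
	    if ( first == 0x12 ) {
		printf ( "#define BIG_ENDIAN\n" ) ;
	    }
	    return ( 0 ) ;
	}
its output being redirected to config.h before the library building step is run.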

3.5.2. User Defined Tokens - Example

As an example of how to define a simple token, consider the following. We have a simple program which prints "hello" in some language, the language being target dependent. Our first task is to choose an API. We choose ANSI C extended by a tokenised object hello of type char * which gives the message to be printed. This object will be an rvalue (i.e. it cannot be assigned to). For convenience this token is declared in a header file, tokens.h say. This particular case is simple enough to encode by hand; it takes the form:

	#pragma token EXP rvalue : char * : hello #
	#pragma interface hello
consisting of a #pragma token directive describing the object to be tokenised, and a #pragma interface directive to show that this is the only object in the API. An alternative would be to generate tokens.h from a tspec specification of the form:

	+EXP char *hello ;
The next task is to write the program conforming to this API. This may take the form of a single source file, hello.c, containing the lines:

	#include <stdio.h>
	#include "tokens.h"
	
	int main ()
	{
	    printf ( "%s\n", hello ) ;
	    return ( 0 ) ;
	}
The production process may be specified by means of a Makefile. This uses the TDF C compiler, tcc, which is an interface to the TDF system which is designed to be like cc, but with extra options to handle the extra functionality offered by the TDF system (see [1]).

	produce : hello.j
		echo "PRODUCTION COMPLETE"
	
	hello.j : hello.c tokens.h
		echo "PRODUCTION : C->TDF"
		tcc -Fj hello.c
The production is run by typing make produce. The ANSI API is the default, and so does not need to be specified to tcc. The program hello.c is compiled to a target independent capsule, hello.j. This will use a token to represent hello, but it will be left undefined.

On each target machine we need to create a token library giving the local definitions of the objects in the API. We shall assume that the library corresponding to the ANSI C API has already been constructed, so that we only need to define the token representing hello. This is done by means of a short C program, tokens.c, which implements the tokens declared in tokens.h. This might take the form:

	#pragma implement interface "tokens.h"
	#define hello "bonjour"
to define hello to be "bonjour". On a different machine, the definition of hello could be given as "hello", "guten Tag", "zdrastvetye" (excuse my transliteration) or whatever (including complex expressions as well as simple strings). Note the use of #pragma implement interface to indicate that we are now implementing the API described in tokens.h, as opposed to the use of #include earlier when we were just using the API.

The installation process may be specified by adding the following lines to the Makefile:

	install : hello
		echo "INSTALLATION COMPLETE"
	
	hello : hello.j tokens.tl
		echo "INSTALLATION : TDF->TARGET"
		tcc -o hello -J. -jtokens hello.j
	
	tokens.tl : tokens.j
		echo "LIBRARY BUILDING : LINKING LIBRARY"
		tcc -Ymakelib -o tokens.tl tokens.j
	
	tokens.j : tokens.c tokens.h
		echo "LIBRARY BUILDING : DEFINING TOKENS"
		tcc -Fj -not_ansi tokens.c
The complete installation process is run by typing make install. Firstly the file tokens.c is compiled to give the TDF capsule tokens.j containing the definition of hello. The -not_ansi flag is needed because tokens.c does not contain any real C (declarations or definitions), and a translation unit containing no declarations is not allowed in ANSI C. The next step is to turn the capsule tokens.j into a TDF library, tokens.tl, using the -Ymakelib option to tcc (with older versions of tcc it may be necessary to change this option to -Ymakelib -M -Fj). This completes the API implementation.

The final step is installation. The target independent TDF, hello.j, is linked with the TDF libraries tokens.tl and ansi.tl (which is built into tcc as default) to form a target dependent TDF capsule with all the necessary token definitions, which is then translated to a binary object file and linked with the system libraries. All of this is under the control of tcc.

Note the four stages of the compilation : API specification, production, API implementation and installation, corresponding to the four regions of the compilation diagram (Fig. 5).

3.5.3. Conditional Compilation within TDF

Although tokens are the main method used to deal with target dependencies, TDF does have built-in conditional compilation constructs. For most TDF sorts X (for example, exp, shape or variety) there is a construct X_cond which takes an exp and two X's and gives an X. The exp argument will evaluate to an integer constant at install time. If this is true (nonzero), the result of the construct is the first X argument and the second is ignored; otherwise the result is the second X argument and the first is ignored. By ignored we mean completely ignored - the argument is stepped over and not decoded. In particular any tokens in the definition of this argument are not expanded, so it does not matter if they are undefined.

These conditional compilation constructs are used by the C -> TDF producer to translate certain statements containing:

	#if condition
where condition is a target dependent value. Thus, because it is not known which branch will be taken at produce time, the decision is postponed to install time. If condition is a target independent value then the branch to be taken is known at produce time, so the producer only translates this branch. Thus, for example, code surrounded by #if 0 ... #endif will be ignored by the producer.

Not all such #if statements can be translated into TDF X_cond constructs. The two branches of the #if statement are translated into the two X arguments of the X_cond construct; that is, into sub-trees of the TDF syntax tree. This can only be done if each of the two branches is syntactically complete.

The producer interprets #ifdef (and #ifndef) constructs to mean: is this macro defined (or undefined) at produce time? Given the nature of pre-processing in C this is in fact the only sensible interpretation. But if such constructs are being used to control conditional compilation, what is actually intended is: is this macro defined at install time? This distinction is necessitated by the splitting of the TDF compilation into production and installation - it does not exist in the traditional compilation scheme. For example, in the mips example in section 2.3, whether or not mips is defined is intended to be an installer property, rather than what it is interpreted as, a producer property. The choice of the conditional compilation path may be put off to install time by, for example, changing #ifdef mips to #if is_mips where is_mips is a tokenised integer which is either 1 (on those machines on which mips would be defined) or 0 (otherwise). In fact in view of what was said above about syntactic completeness, it might be better to recast the program as:

	#include <stdio.h>
	#include "user_api.h" /* For the spec of is_mips */
	
	int main ()
	{
	    if ( is_mips ) {
		fputs ( "This machine is a mips\n", stdout ) ;
	    }
	    return ( 0 ) ;
	}
because the branches of an if statement, unlike those of an #if statement, have to be syntactically complete in any case. The installer will optimise out the unnecessary test and any unreached code, so the use of if ( condition ) is guaranteed to produce as efficient code as #if condition.

In order to help detect such "installer macro" problems the producer has a mode for detecting them. All #ifdef and #ifndef constructs in which the compilation path to be taken is potentially target dependent are reported (see [3] and [8]).

The existence of conditional compilation within TDF also gives flexibility in how to approach expressing target dependent code. Instead of a "full" abstraction of the user-defined API as target dependent types, values and functions, it can be abstracted as a set of binary tokens (like is_mips in the example above) which are used to control conditional compilation. This latter approach can be used to quickly adapt existing programs to a TDF-portable form since it is closer to the "traditional" approach of scattering the program with #ifdef's and #ifndef's to implement target dependent code. However the definition of a user-defined API gives a better separation of target independent and target dependent code, and the effort to define such an API may often be justified. When writing a new program from scratch the API rather than the conditional compilation approach is recommended.

The approach of a fully abstracted user-defined API may be more time consuming in the short run, but this may well be offset by the increased ease of porting. Also there is no reason why a user-defined API, once specified, should not serve more than one program. Similar programs are likely to require the same abstractions of target dependent constructs. Because the API is a concrete object, it can be reused in a very simple fashion. One could envisage libraries of private APIs being built up in this way.

3.5.4. Alternative Program Versions

Consider again the program described in section 2.3.4 which has optional features for displaying its output graphically depending on the boolean value HAVE_X_WINDOWS. By making HAVE_X_WINDOWS part of the user-defined API as a tokenised integer and using:

	#if HAVE_X_WINDOWS
to conditionally compile the X Windows code, the choice of whether or not to use this version of the program is postponed to install time. If both POSIX and X Windows are implemented on the target machine the installation is straightforward. HAVE_X_WINDOWS is defined to be true, and the installation proceeds as normal. The case where only POSIX is implemented appears to present problems. The TDF representing the program will contain undefined tokens representing objects from both the POSIX and X Windows APIs. Surely it is necessary to define these tokens (i.e. implement both APIs) in order to install the TDF. But because of the use of conditional compilation, all the applications of X Windows tokens will be inside X_cond constructs on the branch corresponding to HAVE_X_WINDOWS being true. If it is actually false then these branches are stepped over and completely ignored. Thus it does not matter that these tokens are undefined. Hence the conditional compilation constructs within TDF give the same flexibility in the API implementation in this case as do those in C.
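
In this scheme the declaration of HAVE_X_WINDOWS in the user-defined API might take a form like (the external token name is invented for illustration):

	#pragma token EXP rvalue : int : HAVE_X_WINDOWS # user.HAVE_X_WINDOWS
	#pragma interface HAVE_X_WINDOWS
with the local token library on each target defining it to be 1 or 0 according to whether X Windows is available there.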



TDF and Portability: Conclusions



4. Conclusions

The philosophy underlying the whole TDF approach to portability is that of separation or isolation. This separation of the various components of the compilation system means that to a large extent they can be considered independently. The separation is only possible because the definition of TDF has mechanisms which facilitate it - primarily the token mechanism, but also the capsule linkage scheme.

The most important separation is that of the abstract description of the syntactic aspects of the API, in the form of the target independent headers, from the API implementation. It is this which enables the separation of target independent from target dependent code which is necessary for any Architecture Neutral Distribution Format. It also means that programs can be checked against the abstract API description, instead of against a particular implementation, allowing for effective API conformance testing of applications. Furthermore, it isolates the actual program from the API implementation, thereby allowing the programmer to work in the idealised world envisaged by the API description, rather than the real world of API implementations and all their faults.

This isolation also means that these API implementation problems are seen to be genuinely separate from the main program development. They are isolated into a single process, TDF library building, which needs to be done only once per API implementation. Because of the separation of the API description from the implementation, this library building process also serves as a conformance check for the syntactic aspects of the API implementation. However the approach is evolutionary in that it can handle the current situation while pointing the way forward. Absolute API conformance is not necessary; the TDF libraries can be used as a medium for workarounds for minor implementation errors.

The same mechanism which is used to separate the API description and implementation can also be used within an application to separate the target dependent code from the main body of target independent code. This use of user-defined APIs also enables a separation of the portability requirements of the program from the particular ways these requirements are implemented on the various target machines. Again, the approach is evolutionary, and not prescriptive. Programs can be made more portable in incremental steps, with the degree of portability to be used being made a conscious decision.

In a sense the most important contribution TDF has to portability is in enabling the various tasks of API description, API implementation and program writing to be considered independently, while showing up the relationships between them. It is often said that well specified APIs are the solution to the world's portability and interoperability problems; but by themselves they can never be. Without methods of checking the conformance of programs which use the API and of API implementations, the APIs themselves will remain toothless. TDF, by providing syntactic API checking for both programs and implementations, is a significant first step towards solving this problem.

[1] tcc User's Guide, DRA, 1993.
[2] tspec - An API Specification Tool, DRA, 1993.
[3] The C to TDF Producer, DRA, 1993.
[4] A Guide to the TDF Specification, DRA, 1993.
[5] TDF Facts and Figures, DRA, 1993.
[6] TDF Specification, DRA, 1993.
[7] The 80386/80486 TDF Installer, DRA, 1992.
[8] A Guide to Porting using TDF, DRA, 1993.
[9] The TDF Notation Compiler, DRA, 1993.


Part of the TenDRA Web.
Crown Copyright © 1998.

[Pages 26-27: table of tcc command-line options, recovered from the rendered PostScript]

-ch               causes tcc to emulate the TDF checker tchk
-disp             causes all .j files to be pretty-printed
-disp_t           causes all .t files to be pretty-printed
-dn or -dy        passed to the system linker
-do type file     sets the default output file name for file type type to file
-dry              causes a dry run
-dump             dumps the current status to the standard output
-e file           specifies the end-up file file
-f file           specifies the start-up file file
-g                causes the production of debugging information
-h str            passed to the system linker
-i                halts compilation after creating .j files
-im               enables intermodular checks
-im0              disables intermodular checks
-info             causes API information to be printed
-j lib            specifies the TDF library lib
-k                halts compilation after creating .k files (in intermodular checking mode)
-keep_errors      stops output files being removed when an error occurs
-l lib            specifies the system library lib
-make_up_names    makes up names for intermediate files
-message string   causes tcc to print the message string
-nepc             switches off extra portability, and certain other, checks
-not_ansi         allows a raft of non-ISO/ANSI C features
-o file           specifies the output file name file
-p                causes profiling information to be produced
-prod             causes the production of a TDF archive
-q or -quiet      specifies quiet (non-verbose) mode
-query            causes a list of options to be printed
-s                passed to the system linker
-show_env         causes the environment path to be printed (a)
-show_errors      causes the compilation stage causing an error to be printed
-special string   allows various internal options
-startup string   specifies a line to be inserted in tcc's built-in start-up file
-target string    provided for cc compatibility
-temp dir         specifies the temporary directory to be dir (b)
-tidy             causes tcc to tidy up its temporary files as it goes along
-time             causes all commands to be timed
-u str            passed to the system linker
-v or -verbose    specifies verbose mode
-vb               specifies fairly verbose mode
-version          causes tcc to print its version number
-w                suppresses tcc warnings
-work dir         specifies the work directory, where preserved files are stored, to be dir
-wsl              causes string literals to be made writable in code production
-z str            passed to the system linker
-# or -##         equivalent to -v
-###              equivalent to -dry
-- str, str, ...  communicates directly with the tcc option interpreter

a. The tcc environment path, which it searches for its environments, consists of a colon-separated list of directories given by the TCCENV system variable, plus a default built-in path.
b. The temporary directory can also be set using the TMPDIR system variable on those machines which implement the XPG3 tempnam system routine.
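The options above can be combined freely on a single tcc command line. As a rough sketch only (the source file hello.c, the output name hello, and the directory names are hypothetical examples, not taken from the manual; only the flags listed in the table are used):

    tcc -g -o hello hello.c                  # compile and link with debugging information
    tcc -ch -not_ansi hello.c                # act as the TDF checker tchk, allowing non-ISO/ANSI C features
    tcc -i -v hello.c                        # verbose mode, halting once the .j files have been created
    TCCENV=/usr/local/lib/tcc/env:$HOME/tcc tcc -show_env   # print the environment path (see note a)

The last line simply illustrates note a: TCCENV holds a colon-separated list of directories which tcc searches for its environments before falling back on the built-in default path.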
tendra-doc-4.1.2.orig/doc/tcc/table13.gif100644 1750 1750 10025 6466607533 17367 0ustar brooniebroonie [binary GIF image data omitted]
tendra-doc-4.1.2.orig/doc/tcc/table14.gif100644 1750 1750 40626 6466607533 17402 0ustar brooniebroonie [binary GIF image data omitted]
tendra-doc-4.1.2.orig/doc/tcc/table6.ps100644 1750 1750 201431 6466607533 17211 0ustar brooniebroonie%!PS-Adobe-3.0 %%BoundingBox: (atend) %%Pages: (atend) %%PageOrder: (atend) %%DocumentFonts: (atend) %%Creator: Frame 4.0 %%DocumentData: Clean7Bit %%EndComments %%BeginProlog % % Frame ps_prolog 4.0, for use with Frame 4.0 products % This ps_prolog file is Copyright (c) 1986-1993 Frame Technology % Corporation. All rights reserved. This ps_prolog file may be % freely copied and distributed in conjunction with documents created % using FrameMaker, FrameBuilder and FrameViewer as long as this % copyright notice is preserved. % % Frame products normally print colors as their true color on a color printer % or as shades of gray, based on luminance, on a black-and white printer. The % following flag, if set to True, forces all non-white colors to print as pure % black. This has no effect on bitmap images. /FMPrintAllColorsAsBlack false def % % Frame products can either set their own line screens or use a printer's % default settings. Three flags below control this separately for no % separations, spot separations and process separations. If a flag % is true, then the default printer settings will not be changed. If it is % false, Frame products will use their own settings from a table based on % the printer's resolution. /FMUseDefaultNoSeparationScreen true def /FMUseDefaultSpotSeparationScreen true def /FMUseDefaultProcessSeparationScreen false def % % For any given PostScript printer resolution, Frame products have two sets of % screen angles and frequencies for printing process separations, which are % recomended by Adobe. The following variable chooses the higher frequencies % when set to true or the lower frequencies when set to false. This is only % effective if the appropriate FMUseDefault...SeparationScreen flag is false. /FMUseHighFrequencyScreens true def % % PostScript Level 2 printers contain an "Accurate Screens" feature which can % improve process separation rendering at the expense of compute time. This % flag is ignored by PostScript Level 1 printers. /FMUseAcccurateScreens true def % % The following PostScript procedure defines the spot function that Frame % products will use for process separations. You may un-comment-out one of % the alternative functions below, or use your own.
% % Dot function /FMSpotFunction {abs exch abs 2 copy add 1 gt {1 sub dup mul exch 1 sub dup mul add 1 sub } {dup mul exch dup mul add 1 exch sub }ifelse } def % % Line function % /FMSpotFunction { pop } def % % Elipse function % /FMSpotFunction { dup 5 mul 8 div mul exch dup mul exch add % sqrt 1 exch sub } def % % /FMversion (4.0) def /FMLevel1 /languagelevel where {pop languagelevel} {1} ifelse 2 lt def /FMPColor FMLevel1 { false /colorimage where {pop pop true} if } { true } ifelse def /FrameDict 400 dict def systemdict /errordict known not {/errordict 10 dict def errordict /rangecheck {stop} put} if % The readline in PS 23.0 doesn't recognize cr's as nl's on AppleTalk FrameDict /tmprangecheck errordict /rangecheck get put errordict /rangecheck {FrameDict /bug true put} put FrameDict /bug false put mark % Some PS machines read past the CR, so keep the following 3 lines together! currentfile 5 string readline 00 0000000000 cleartomark errordict /rangecheck FrameDict /tmprangecheck get put FrameDict /bug get { /readline { /gstring exch def /gfile exch def /gindex 0 def { gfile read pop dup 10 eq {exit} if dup 13 eq {exit} if gstring exch gindex exch put /gindex gindex 1 add def } loop pop gstring 0 gindex getinterval true } bind def } if /FMshowpage /showpage load def /FMquit /quit load def /FMFAILURE { dup = flush FMshowpage /Helvetica findfont 12 scalefont setfont 72 200 moveto show FMshowpage FMquit } def /FMVERSION { FMversion ne { (Frame product version does not match ps_prolog!) FMFAILURE } if } def /FMBADEPSF { (PostScript Lang. Ref. Man., 2nd Ed., H.2.4 says EPS must not call X ) dup dup (X) search pop exch pop exch pop length 4 -1 roll putinterval FMFAILURE } def /FMLOCAL { FrameDict begin 0 def end } def /concatprocs { /proc2 exch cvlit def/proc1 exch cvlit def/newproc proc1 length proc2 length add array def newproc 0 proc1 putinterval newproc proc1 length proc2 putinterval newproc cvx }def FrameDict begin /FMnone 0 def /FMcyan 1 def /FMmagenta 2 def /FMyellow 3 def /FMblack 4 def /FMcustom 5 def /FrameNegative false def /FrameSepIs FMnone def /FrameSepBlack 0 def /FrameSepYellow 0 def /FrameSepMagenta 0 def /FrameSepCyan 0 def /FrameSepRed 1 def /FrameSepGreen 1 def /FrameSepBlue 1 def /FrameCurGray 1 def /FrameCurPat null def /FrameCurColors [ 0 0 0 1 0 0 0 ] def /FrameColorEpsilon .001 def /eqepsilon { sub dup 0 lt {neg} if FrameColorEpsilon le } bind def /FrameCmpColorsCMYK { 2 copy 0 get exch 0 get eqepsilon { 2 copy 1 get exch 1 get eqepsilon { 2 copy 2 get exch 2 get eqepsilon { 3 get exch 3 get eqepsilon } {pop pop false} ifelse }{pop pop false} ifelse } {pop pop false} ifelse } bind def /FrameCmpColorsRGB { 2 copy 4 get exch 0 get eqepsilon { 2 copy 5 get exch 1 get eqepsilon { 6 get exch 2 get eqepsilon }{pop pop false} ifelse } {pop pop false} ifelse } bind def /RGBtoCMYK { 1 exch sub 3 1 roll 1 exch sub 3 1 roll 1 exch sub 3 1 roll 3 copy 2 copy le { pop } { exch pop } ifelse 2 copy le { pop } { exch pop } ifelse dup dup dup 6 1 roll 4 1 roll 7 1 roll sub 6 1 roll sub 5 1 roll sub 4 1 roll } bind def /CMYKtoRGB { dup dup 4 -1 roll add 5 1 roll 3 -1 roll add 4 1 roll add 1 exch sub dup 0 lt {pop 0} if 3 1 roll 1 exch sub dup 0 lt {pop 0} if exch 1 exch sub dup 0 lt {pop 0} if exch } bind def /FrameSepInit { 1.0 RealSetgray } bind def /FrameSetSepColor { /FrameSepBlue exch def /FrameSepGreen exch def /FrameSepRed exch def /FrameSepBlack exch def /FrameSepYellow exch def /FrameSepMagenta exch def /FrameSepCyan exch def /FrameSepIs FMcustom def setCurrentScreen } bind 
def /FrameSetCyan { /FrameSepBlue 1.0 def /FrameSepGreen 1.0 def /FrameSepRed 0.0 def /FrameSepBlack 0.0 def /FrameSepYellow 0.0 def /FrameSepMagenta 0.0 def /FrameSepCyan 1.0 def /FrameSepIs FMcyan def setCurrentScreen } bind def /FrameSetMagenta { /FrameSepBlue 1.0 def /FrameSepGreen 0.0 def /FrameSepRed 1.0 def /FrameSepBlack 0.0 def /FrameSepYellow 0.0 def /FrameSepMagenta 1.0 def /FrameSepCyan 0.0 def /FrameSepIs FMmagenta def setCurrentScreen } bind def /FrameSetYellow { /FrameSepBlue 0.0 def /FrameSepGreen 1.0 def /FrameSepRed 1.0 def /FrameSepBlack 0.0 def /FrameSepYellow 1.0 def /FrameSepMagenta 0.0 def /FrameSepCyan 0.0 def /FrameSepIs FMyellow def setCurrentScreen } bind def /FrameSetBlack { /FrameSepBlue 0.0 def /FrameSepGreen 0.0 def /FrameSepRed 0.0 def /FrameSepBlack 1.0 def /FrameSepYellow 0.0 def /FrameSepMagenta 0.0 def /FrameSepCyan 0.0 def /FrameSepIs FMblack def setCurrentScreen } bind def /FrameNoSep { /FrameSepIs FMnone def setCurrentScreen } bind def /FrameSetSepColors { FrameDict begin [ exch 1 add 1 roll ] /FrameSepColors exch def end } bind def /FrameColorInSepListCMYK { FrameSepColors { exch dup 3 -1 roll FrameCmpColorsCMYK { pop true exit } if } forall dup true ne {pop false} if } bind def /FrameColorInSepListRGB { FrameSepColors { exch dup 3 -1 roll FrameCmpColorsRGB { pop true exit } if } forall dup true ne {pop false} if } bind def /RealSetgray /setgray load def /RealSetrgbcolor /setrgbcolor load def /RealSethsbcolor /sethsbcolor load def end /setgray { FrameDict begin FrameSepIs FMnone eq { RealSetgray } { FrameSepIs FMblack eq { RealSetgray } { FrameSepIs FMcustom eq FrameSepRed 0 eq and FrameSepGreen 0 eq and FrameSepBlue 0 eq and { RealSetgray } { 1 RealSetgray pop } ifelse } ifelse } ifelse end } bind def /setrgbcolor { FrameDict begin FrameSepIs FMnone eq { RealSetrgbcolor } { 3 copy [ 4 1 roll ] FrameColorInSepListRGB { FrameSepBlue eq exch FrameSepGreen eq and exch FrameSepRed eq and { 0 } { 1 } ifelse } { FMPColor { RealSetrgbcolor currentcmykcolor } { RGBtoCMYK } ifelse FrameSepIs FMblack eq {1.0 exch sub 4 1 roll pop pop pop} { FrameSepIs FMyellow eq {pop 1.0 exch sub 3 1 roll pop pop} { FrameSepIs FMmagenta eq {pop pop 1.0 exch sub exch pop } { FrameSepIs FMcyan eq {pop pop pop 1.0 exch sub } {pop pop pop pop 1} ifelse } ifelse } ifelse } ifelse } ifelse RealSetgray } ifelse end } bind def /sethsbcolor { FrameDict begin FrameSepIs FMnone eq { RealSethsbcolor } { RealSethsbcolor currentrgbcolor setrgbcolor } ifelse end } bind def FrameDict begin /setcmykcolor where { pop /RealSetcmykcolor /setcmykcolor load def } { /RealSetcmykcolor { 4 1 roll 3 { 3 index add 0 max 1 min 1 exch sub 3 1 roll} repeat setrgbcolor pop } bind def } ifelse userdict /setcmykcolor { FrameDict begin FrameSepIs FMnone eq { RealSetcmykcolor } { 4 copy [ 5 1 roll ] FrameColorInSepListCMYK { FrameSepBlack eq exch FrameSepYellow eq and exch FrameSepMagenta eq and exch FrameSepCyan eq and { 0 } { 1 } ifelse } { FrameSepIs FMblack eq {1.0 exch sub 4 1 roll pop pop pop} { FrameSepIs FMyellow eq {pop 1.0 exch sub 3 1 roll pop pop} { FrameSepIs FMmagenta eq {pop pop 1.0 exch sub exch pop } { FrameSepIs FMcyan eq {pop pop pop 1.0 exch sub } {pop pop pop pop 1} ifelse } ifelse } ifelse } ifelse } ifelse RealSetgray } ifelse end } bind put FMLevel1 not { /patProcDict 5 dict dup begin <0f1e3c78f0e1c387> { 3 setlinewidth -1 -1 moveto 9 9 lineto stroke 4 -4 moveto 12 4 lineto stroke -4 4 moveto 4 12 lineto stroke} bind def <0f87c3e1f0783c1e> { 3 setlinewidth -1 9 moveto 9 -1 lineto stroke 
-4 4 moveto 4 -4 lineto stroke 4 12 moveto 12 4 lineto stroke} bind def <8142241818244281> { 1 setlinewidth -1 9 moveto 9 -1 lineto stroke -1 -1 moveto 9 9 lineto stroke } bind def <03060c183060c081> { 1 setlinewidth -1 -1 moveto 9 9 lineto stroke 4 -4 moveto 12 4 lineto stroke -4 4 moveto 4 12 lineto stroke} bind def <8040201008040201> { 1 setlinewidth -1 9 moveto 9 -1 lineto stroke -4 4 moveto 4 -4 lineto stroke 4 12 moveto 12 4 lineto stroke} bind def end def /patDict 15 dict dup begin /PatternType 1 def /PaintType 2 def /TilingType 3 def /BBox [ 0 0 8 8 ] def /XStep 8 def /YStep 8 def /PaintProc { begin patProcDict bstring known { patProcDict bstring get exec } { 8 8 true [1 0 0 -1 0 8] bstring imagemask } ifelse end } bind def end def } if /combineColor { FrameSepIs FMnone eq { graymode FMLevel1 or not { [/Pattern [/DeviceCMYK]] setcolorspace FrameCurColors 0 4 getinterval aload pop FrameCurPat setcolor } { FrameCurColors 3 get 1.0 ge { FrameCurGray RealSetgray } { FMPColor graymode and { 0 1 3 { FrameCurColors exch get 1 FrameCurGray sub mul } for RealSetcmykcolor } { 4 1 6 { FrameCurColors exch get graymode { 1 exch sub 1 FrameCurGray sub mul 1 exch sub } { 1.0 lt {FrameCurGray} {1} ifelse } ifelse } for RealSetrgbcolor } ifelse } ifelse } ifelse } { FrameCurColors 0 4 getinterval aload FrameColorInSepListCMYK { FrameSepBlack eq exch FrameSepYellow eq and exch FrameSepMagenta eq and exch FrameSepCyan eq and FrameSepIs FMcustom eq and { FrameCurGray } { 1 } ifelse } { FrameSepIs FMblack eq {FrameCurGray 1.0 exch sub mul 1.0 exch sub 4 1 roll pop pop pop} { FrameSepIs FMyellow eq {pop FrameCurGray 1.0 exch sub mul 1.0 exch sub 3 1 roll pop pop} { FrameSepIs FMmagenta eq {pop pop FrameCurGray 1.0 exch sub mul 1.0 exch sub exch pop } { FrameSepIs FMcyan eq {pop pop pop FrameCurGray 1.0 exch sub mul 1.0 exch sub } {pop pop pop pop 1} ifelse } ifelse } ifelse } ifelse } ifelse graymode FMLevel1 or not { [/Pattern [/DeviceGray]] setcolorspace FrameCurPat setcolor } { graymode not FMLevel1 and { dup 1 lt {pop FrameCurGray} if } if RealSetgray } ifelse } ifelse } bind def /savematrix { orgmatrix currentmatrix pop } bind def /restorematrix { orgmatrix setmatrix } bind def /dmatrix matrix def /dpi 72 0 dmatrix defaultmatrix dtransform dup mul exch dup mul add sqrt def /freq dpi dup 72 div round dup 0 eq {pop 1} if 8 mul div def /sangle 1 0 dmatrix defaultmatrix dtransform exch atan def /dpiranges [ 2540 2400 1693 1270 1200 635 600 0 ] def /CMLowFreqs [ 100.402 94.8683 89.2289 100.402 94.8683 66.9349 63.2456 47.4342 ] def /YLowFreqs [ 95.25 90.0 84.65 95.25 90.0 70.5556 66.6667 50.0 ] def /KLowFreqs [ 89.8026 84.8528 79.8088 89.8026 84.8528 74.8355 70.7107 53.033 ] def /CLowAngles [ 71.5651 71.5651 71.5651 71.5651 71.5651 71.5651 71.5651 71.5651 ] def /MLowAngles [ 18.4349 18.4349 18.4349 18.4349 18.4349 18.4349 18.4349 18.4349 ] def /YLowTDot [ true true false true true false false false ] def /CMHighFreqs [ 133.87 126.491 133.843 108.503 102.523 100.402 94.8683 63.2456 ] def /YHighFreqs [ 127.0 120.0 126.975 115.455 109.091 95.25 90.0 60.0 ] def /KHighFreqs [ 119.737 113.137 119.713 128.289 121.218 89.8026 84.8528 63.6395 ] def /CHighAngles [ 71.5651 71.5651 71.5651 70.0169 70.0169 71.5651 71.5651 71.5651 ] def /MHighAngles [ 18.4349 18.4349 18.4349 19.9831 19.9831 18.4349 18.4349 18.4349 ] def /YHighTDot [ false false true false false true true false ] def /PatFreq [ 10.5833 10.0 9.4055 10.5833 10.0 10.5833 10.0 9.375 ] def /screenIndex { 0 1 dpiranges length 1 sub { dup dpiranges exch get 1 
sub dpi le {exit} {pop} ifelse } for } bind def /getCyanScreen { FMUseHighFrequencyScreens { CHighAngles CMHighFreqs} {CLowAngles CMLowFreqs} ifelse screenIndex dup 3 1 roll get 3 1 roll get /FMSpotFunction load } bind def /getMagentaScreen { FMUseHighFrequencyScreens { MHighAngles CMHighFreqs } {MLowAngles CMLowFreqs} ifelse screenIndex dup 3 1 roll get 3 1 roll get /FMSpotFunction load } bind def /getYellowScreen { FMUseHighFrequencyScreens { YHighTDot YHighFreqs} { YLowTDot YLowFreqs } ifelse screenIndex dup 3 1 roll get 3 1 roll get { 3 div {2 { 1 add 2 div 3 mul dup floor sub 2 mul 1 sub exch} repeat FMSpotFunction } } {/FMSpotFunction load } ifelse 0.0 exch } bind def /getBlackScreen { FMUseHighFrequencyScreens { KHighFreqs } { KLowFreqs } ifelse screenIndex get 45.0 /FMSpotFunction load } bind def /getSpotScreen { getBlackScreen } bind def /getCompositeScreen { getBlackScreen } bind def /FMSetScreen FMLevel1 { /setscreen load }{ { 8 dict begin /HalftoneType 1 def /SpotFunction exch def /Angle exch def /Frequency exch def /AccurateScreens FMUseAcccurateScreens def currentdict end sethalftone } bind } ifelse def /setDefaultScreen { FMPColor { orgrxfer cvx orggxfer cvx orgbxfer cvx orgxfer cvx setcolortransfer } { orgxfer cvx settransfer } ifelse orgfreq organgle orgproc cvx setscreen } bind def /setCurrentScreen { FrameSepIs FMnone eq { FMUseDefaultNoSeparationScreen { setDefaultScreen } { getCompositeScreen FMSetScreen } ifelse } { FrameSepIs FMcustom eq { FMUseDefaultSpotSeparationScreen { setDefaultScreen } { getSpotScreen FMSetScreen } ifelse } { FMUseDefaultProcessSeparationScreen { setDefaultScreen } { FrameSepIs FMcyan eq { getCyanScreen FMSetScreen } { FrameSepIs FMmagenta eq { getMagentaScreen FMSetScreen } { FrameSepIs FMyellow eq { getYellowScreen FMSetScreen } { getBlackScreen FMSetScreen } ifelse } ifelse } ifelse } ifelse } ifelse } ifelse } bind def end /gstring FMLOCAL /gfile FMLOCAL /gindex FMLOCAL /orgrxfer FMLOCAL /orggxfer FMLOCAL /orgbxfer FMLOCAL /orgxfer FMLOCAL /orgproc FMLOCAL /orgrproc FMLOCAL /orggproc FMLOCAL /orgbproc FMLOCAL /organgle FMLOCAL /orgrangle FMLOCAL /orggangle FMLOCAL /orgbangle FMLOCAL /orgfreq FMLOCAL /orgrfreq FMLOCAL /orggfreq FMLOCAL /orgbfreq FMLOCAL /yscale FMLOCAL /xscale FMLOCAL /edown FMLOCAL /manualfeed FMLOCAL /paperheight FMLOCAL /paperwidth FMLOCAL /FMDOCUMENT { array /FMfonts exch def /#copies exch def FrameDict begin 0 ne /manualfeed exch def /paperheight exch def /paperwidth exch def 0 ne /FrameNegative exch def 0 ne /edown exch def /yscale exch def /xscale exch def FMLevel1 { manualfeed {setmanualfeed} if /FMdicttop countdictstack 1 add def /FMoptop count def setpapername manualfeed {true} {papersize} ifelse {manualpapersize} {false} ifelse {desperatepapersize} {false} ifelse { (Can't select requested paper size for Frame print job!) } if count -1 FMoptop {pop pop} for countdictstack -1 FMdicttop {pop end} for } {{1 dict dup /PageSize [paperwidth paperheight]put setpagedevice}stopped { (Can't select requested paper size for Frame print job!) 
} if {1 dict dup /ManualFeed manualfeed put setpagedevice } stopped pop } ifelse FMPColor { currentcolorscreen cvlit /orgproc exch def /organgle exch def /orgfreq exch def cvlit /orgbproc exch def /orgbangle exch def /orgbfreq exch def cvlit /orggproc exch def /orggangle exch def /orggfreq exch def cvlit /orgrproc exch def /orgrangle exch def /orgrfreq exch def currentcolortransfer FrameNegative { 1 1 4 { pop { 1 exch sub } concatprocs 4 1 roll } for 4 copy setcolortransfer } if cvlit /orgxfer exch def cvlit /orgbxfer exch def cvlit /orggxfer exch def cvlit /orgrxfer exch def } { currentscreen cvlit /orgproc exch def /organgle exch def /orgfreq exch def currenttransfer FrameNegative { { 1 exch sub } concatprocs dup settransfer } if cvlit /orgxfer exch def } ifelse end } def /pagesave FMLOCAL /orgmatrix FMLOCAL /landscape FMLOCAL /pwid FMLOCAL /FMBEGINPAGE { FrameDict begin /pagesave save def 3.86 setmiterlimit /landscape exch 0 ne def landscape { 90 rotate 0 exch dup /pwid exch def neg translate pop }{ pop /pwid exch def } ifelse edown { [-1 0 0 1 pwid 0] concat } if 0 0 moveto paperwidth 0 lineto paperwidth paperheight lineto 0 paperheight lineto 0 0 lineto 1 setgray fill xscale yscale scale /orgmatrix matrix def gsave } def /FMENDPAGE { grestore pagesave restore end showpage } def /FMFONTDEFINE { FrameDict begin findfont ReEncode 1 index exch definefont FMfonts 3 1 roll put end } def /FMFILLS { FrameDict begin dup array /fillvals exch def dict /patCache exch def end } def /FMFILL { FrameDict begin fillvals 3 1 roll put end } def /FMNORMALIZEGRAPHICS { newpath 0.0 0.0 moveto 1 setlinewidth 0 setlinecap 0 0 0 sethsbcolor 0 setgray } bind def /fx FMLOCAL /fy FMLOCAL /fh FMLOCAL /fw FMLOCAL /llx FMLOCAL /lly FMLOCAL /urx FMLOCAL /ury FMLOCAL /FMBEGINEPSF { end /FMEPSF save def /showpage {} def % See Adobe's "PostScript Language Reference Manual, 2nd Edition", page 714. 
% "...the following operators MUST NOT be used in an EPS file:" (emphasis ours) /banddevice {(banddevice) FMBADEPSF} def /clear {(clear) FMBADEPSF} def /cleardictstack {(cleardictstack) FMBADEPSF} def /copypage {(copypage) FMBADEPSF} def /erasepage {(erasepage) FMBADEPSF} def /exitserver {(exitserver) FMBADEPSF} def /framedevice {(framedevice) FMBADEPSF} def /grestoreall {(grestoreall) FMBADEPSF} def /initclip {(initclip) FMBADEPSF} def /initgraphics {(initgraphics) FMBADEPSF} def /initmatrix {(initmatrix) FMBADEPSF} def /quit {(quit) FMBADEPSF} def /renderbands {(renderbands) FMBADEPSF} def /setglobal {(setglobal) FMBADEPSF} def /setpagedevice {(setpagedevice) FMBADEPSF} def /setshared {(setshared) FMBADEPSF} def /startjob {(startjob) FMBADEPSF} def /lettertray {(lettertray) FMBADEPSF} def /letter {(letter) FMBADEPSF} def /lettersmall {(lettersmall) FMBADEPSF} def /11x17tray {(11x17tray) FMBADEPSF} def /11x17 {(11x17) FMBADEPSF} def /ledgertray {(ledgertray) FMBADEPSF} def /ledger {(ledger) FMBADEPSF} def /legaltray {(legaltray) FMBADEPSF} def /legal {(legal) FMBADEPSF} def /statementtray {(statementtray) FMBADEPSF} def /statement {(statement) FMBADEPSF} def /executivetray {(executivetray) FMBADEPSF} def /executive {(executive) FMBADEPSF} def /a3tray {(a3tray) FMBADEPSF} def /a3 {(a3) FMBADEPSF} def /a4tray {(a4tray) FMBADEPSF} def /a4 {(a4) FMBADEPSF} def /a4small {(a4small) FMBADEPSF} def /b4tray {(b4tray) FMBADEPSF} def /b4 {(b4) FMBADEPSF} def /b5tray {(b5tray) FMBADEPSF} def /b5 {(b5) FMBADEPSF} def FMNORMALIZEGRAPHICS [/fy /fx /fh /fw /ury /urx /lly /llx] {exch def} forall fx fw 2 div add fy fh 2 div add translate rotate fw 2 div neg fh 2 div neg translate fw urx llx sub div fh ury lly sub div scale llx neg lly neg translate /FMdicttop countdictstack 1 add def /FMoptop count def } bind def /FMENDEPSF { count -1 FMoptop {pop pop} for countdictstack -1 FMdicttop {pop end} for FMEPSF restore FrameDict begin } bind def FrameDict begin /setmanualfeed { %%BeginFeature *ManualFeed True statusdict /manualfeed true put %%EndFeature } bind def /max {2 copy lt {exch} if pop} bind def /min {2 copy gt {exch} if pop} bind def /inch {72 mul} def /pagedimen { paperheight sub abs 16 lt exch paperwidth sub abs 16 lt and {/papername exch def} {pop} ifelse } bind def /papersizedict FMLOCAL /setpapername { /papersizedict 14 dict def papersizedict begin /papername /unknown def /Letter 8.5 inch 11.0 inch pagedimen /LetterSmall 7.68 inch 10.16 inch pagedimen /Tabloid 11.0 inch 17.0 inch pagedimen /Ledger 17.0 inch 11.0 inch pagedimen /Legal 8.5 inch 14.0 inch pagedimen /Statement 5.5 inch 8.5 inch pagedimen /Executive 7.5 inch 10.0 inch pagedimen /A3 11.69 inch 16.5 inch pagedimen /A4 8.26 inch 11.69 inch pagedimen /A4Small 7.47 inch 10.85 inch pagedimen /B4 10.125 inch 14.33 inch pagedimen /B5 7.16 inch 10.125 inch pagedimen end } bind def /papersize { papersizedict begin /Letter {lettertray letter} def /LetterSmall {lettertray lettersmall} def /Tabloid {11x17tray 11x17} def /Ledger {ledgertray ledger} def /Legal {legaltray legal} def /Statement {statementtray statement} def /Executive {executivetray executive} def /A3 {a3tray a3} def /A4 {a4tray a4} def /A4Small {a4tray a4small} def /B4 {b4tray b4} def /B5 {b5tray b5} def /unknown {unknown} def papersizedict dup papername known {papername} {/unknown} ifelse get end statusdict begin stopped end } bind def /manualpapersize { papersizedict begin /Letter {letter} def /LetterSmall {lettersmall} def /Tabloid {11x17} def /Ledger {ledger} def /Legal {legal} def 
/Statement {statement} def /Executive {executive} def /A3 {a3} def /A4 {a4} def /A4Small {a4small} def /B4 {b4} def /B5 {b5} def /unknown {unknown} def papersizedict dup papername known {papername} {/unknown} ifelse get end stopped } bind def /desperatepapersize { statusdict /setpageparams known { paperwidth paperheight 0 1 statusdict begin {setpageparams} stopped end } {true} ifelse } bind def /DiacriticEncoding [ /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /space /exclam /quotedbl /numbersign /dollar /percent /ampersand /quotesingle /parenleft /parenright /asterisk /plus /comma /hyphen /period /slash /zero /one /two /three /four /five /six /seven /eight /nine /colon /semicolon /less /equal /greater /question /at /A /B /C /D /E /F /G /H /I /J /K /L /M /N /O /P /Q /R /S /T /U /V /W /X /Y /Z /bracketleft /backslash /bracketright /asciicircum /underscore /grave /a /b /c /d /e /f /g /h /i /j /k /l /m /n /o /p /q /r /s /t /u /v /w /x /y /z /braceleft /bar /braceright /asciitilde /.notdef /Adieresis /Aring /Ccedilla /Eacute /Ntilde /Odieresis /Udieresis /aacute /agrave /acircumflex /adieresis /atilde /aring /ccedilla /eacute /egrave /ecircumflex /edieresis /iacute /igrave /icircumflex /idieresis /ntilde /oacute /ograve /ocircumflex /odieresis /otilde /uacute /ugrave /ucircumflex /udieresis /dagger /.notdef /cent /sterling /section /bullet /paragraph /germandbls /registered /copyright /trademark /acute /dieresis /.notdef /AE /Oslash /.notdef /.notdef /.notdef /.notdef /yen /.notdef /.notdef /.notdef /.notdef /.notdef /.notdef /ordfeminine /ordmasculine /.notdef /ae /oslash /questiondown /exclamdown /logicalnot /.notdef /florin /.notdef /.notdef /guillemotleft /guillemotright /ellipsis /.notdef /Agrave /Atilde /Otilde /OE /oe /endash /emdash /quotedblleft /quotedblright /quoteleft /quoteright /.notdef /.notdef /ydieresis /Ydieresis /fraction /currency /guilsinglleft /guilsinglright /fi /fl /daggerdbl /periodcentered /quotesinglbase /quotedblbase /perthousand /Acircumflex /Ecircumflex /Aacute /Edieresis /Egrave /Iacute /Icircumflex /Idieresis /Igrave /Oacute /Ocircumflex /.notdef /Ograve /Uacute /Ucircumflex /Ugrave /dotlessi /circumflex /tilde /macron /breve /dotaccent /ring /cedilla /hungarumlaut /ogonek /caron ] def /ReEncode { dup length dict begin { 1 index /FID ne {def} {pop pop} ifelse } forall 0 eq {/Encoding DiacriticEncoding def} if currentdict end } bind def FMPColor { /BEGINBITMAPCOLOR { BITMAPCOLOR} def /BEGINBITMAPCOLORc { BITMAPCOLORc} def /BEGINBITMAPTRUECOLOR { BITMAPTRUECOLOR } def /BEGINBITMAPTRUECOLORc { BITMAPTRUECOLORc } def } { /BEGINBITMAPCOLOR { BITMAPGRAY} def /BEGINBITMAPCOLORc { BITMAPGRAYc} def /BEGINBITMAPTRUECOLOR { BITMAPTRUEGRAY } def /BEGINBITMAPTRUECOLORc { BITMAPTRUEGRAYc } def } ifelse /K { FMPrintAllColorsAsBlack { dup 1 eq 2 index 1 eq and 3 index 1 eq and not {7 {pop} repeat 0 0 0 1 0 0 0} if } if FrameCurColors astore pop combineColor } bind def /graymode true def /bwidth FMLOCAL /bpside FMLOCAL /bstring FMLOCAL /onbits FMLOCAL /offbits FMLOCAL /xindex FMLOCAL /yindex FMLOCAL /x FMLOCAL /y FMLOCAL /setPatternMode { FMLevel1 { /bwidth exch def /bpside exch def /bstring exch def /onbits 0 def /offbits 0 def freq sangle landscape {90 add} if {/y exch def /x exch def /xindex x 1 add 2 div bpside mul cvi def /yindex y 1 
add 2 div bpside mul cvi def bstring yindex bwidth mul xindex 8 idiv add get 1 7 xindex 8 mod sub bitshift and 0 ne FrameNegative {not} if {/onbits onbits 1 add def 1} {/offbits offbits 1 add def 0} ifelse } setscreen offbits offbits onbits add div FrameNegative {1.0 exch sub} if /FrameCurGray exch def } { pop pop dup patCache exch known { patCache exch get } { dup patDict /bstring 3 -1 roll put patDict 9 PatFreq screenIndex get div dup matrix scale makepattern dup patCache 4 -1 roll 3 -1 roll put } ifelse /FrameCurGray 0 def /FrameCurPat exch def } ifelse /graymode false def combineColor } bind def /setGrayScaleMode { graymode not { /graymode true def FMLevel1 { setCurrentScreen } if } if /FrameCurGray exch def combineColor } bind def /normalize { transform round exch round exch itransform } bind def /dnormalize { dtransform round exch round exch idtransform } bind def /lnormalize { 0 dtransform exch cvi 2 idiv 2 mul 1 add exch idtransform pop } bind def /H { lnormalize setlinewidth } bind def /Z { setlinecap } bind def /PFill { graymode FMLevel1 or not { gsave 1 setgray eofill grestore } if } bind def /PStroke { graymode FMLevel1 or not { gsave 1 setgray stroke grestore } if stroke } bind def /fillvals FMLOCAL /X { fillvals exch get dup type /stringtype eq {8 1 setPatternMode} {setGrayScaleMode} ifelse } bind def /V { PFill gsave eofill grestore } bind def /Vclip { clip } bind def /Vstrk { currentlinewidth exch setlinewidth PStroke setlinewidth } bind def /N { PStroke } bind def /Nclip { strokepath clip newpath } bind def /Nstrk { currentlinewidth exch setlinewidth PStroke setlinewidth } bind def /M {newpath moveto} bind def /E {lineto} bind def /D {curveto} bind def /O {closepath} bind def /n FMLOCAL /L { /n exch def newpath normalize moveto 2 1 n {pop normalize lineto} for } bind def /Y { L closepath } bind def /x1 FMLOCAL /x2 FMLOCAL /y1 FMLOCAL /y2 FMLOCAL /R { /y2 exch def /x2 exch def /y1 exch def /x1 exch def x1 y1 x2 y1 x2 y2 x1 y2 4 Y } bind def /rad FMLOCAL /rarc {rad arcto } bind def /RR { /rad exch def normalize /y2 exch def /x2 exch def normalize /y1 exch def /x1 exch def mark newpath { x1 y1 rad add moveto x1 y2 x2 y2 rarc x2 y2 x2 y1 rarc x2 y1 x1 y1 rarc x1 y1 x1 y2 rarc closepath } stopped {x1 y1 x2 y2 R} if cleartomark } bind def /RRR { /rad exch def normalize /y4 exch def /x4 exch def normalize /y3 exch def /x3 exch def normalize /y2 exch def /x2 exch def normalize /y1 exch def /x1 exch def newpath normalize moveto mark { x2 y2 x3 y3 rarc x3 y3 x4 y4 rarc x4 y4 x1 y1 rarc x1 y1 x2 y2 rarc closepath } stopped {x1 y1 x2 y2 x3 y3 x4 y4 newpath moveto lineto lineto lineto closepath} if cleartomark } bind def /C { grestore gsave R clip setCurrentScreen } bind def /CP { grestore gsave Y clip setCurrentScreen } bind def /FMpointsize FMLOCAL /F { FMfonts exch get FMpointsize scalefont setfont } bind def /Q { /FMpointsize exch def F } bind def /T { moveto show } bind def /RF { rotate 0 ne {-1 1 scale} if } bind def /TF { gsave moveto RF show grestore } bind def /P { moveto 0 32 3 2 roll widthshow } bind def /PF { gsave moveto RF 0 32 3 2 roll widthshow grestore } bind def /S { moveto 0 exch ashow } bind def /SF { gsave moveto RF 0 exch ashow grestore } bind def /B { moveto 0 32 4 2 roll 0 exch awidthshow } bind def /BF { gsave moveto RF 0 32 4 2 roll 0 exch awidthshow grestore } bind def /G { gsave newpath normalize translate 0.0 0.0 moveto dnormalize scale 0.0 0.0 1.0 5 3 roll arc closepath PFill fill grestore } bind def /Gstrk { savematrix newpath 2 index 2 div add exch 3 
index 2 div sub exch normalize 2 index 2 div sub exch 3 index 2 div add exch translate scale 0.0 0.0 1.0 5 3 roll arc restorematrix currentlinewidth exch setlinewidth PStroke setlinewidth } bind def /Gclip { newpath savematrix normalize translate 0.0 0.0 moveto dnormalize scale 0.0 0.0 1.0 5 3 roll arc closepath clip newpath restorematrix } bind def /GG { gsave newpath normalize translate 0.0 0.0 moveto rotate dnormalize scale 0.0 0.0 1.0 5 3 roll arc closepath PFill fill grestore } bind def /GGclip { savematrix newpath normalize translate 0.0 0.0 moveto rotate dnormalize scale 0.0 0.0 1.0 5 3 roll arc closepath clip newpath restorematrix } bind def /GGstrk { savematrix newpath normalize translate 0.0 0.0 moveto rotate dnormalize scale 0.0 0.0 1.0 5 3 roll arc closepath restorematrix currentlinewidth exch setlinewidth PStroke setlinewidth } bind def /A { gsave savematrix newpath 2 index 2 div add exch 3 index 2 div sub exch normalize 2 index 2 div sub exch 3 index 2 div add exch translate scale 0.0 0.0 1.0 5 3 roll arc restorematrix PStroke grestore } bind def /Aclip { newpath savematrix normalize translate 0.0 0.0 moveto dnormalize scale 0.0 0.0 1.0 5 3 roll arc closepath strokepath clip newpath restorematrix } bind def /Astrk { Gstrk } bind def /AA { gsave savematrix newpath 3 index 2 div add exch 4 index 2 div sub exch normalize 3 index 2 div sub exch 4 index 2 div add exch translate rotate scale 0.0 0.0 1.0 5 3 roll arc restorematrix PStroke grestore } bind def /AAclip { savematrix newpath normalize translate 0.0 0.0 moveto rotate dnormalize scale 0.0 0.0 1.0 5 3 roll arc closepath strokepath clip newpath restorematrix } bind def /AAstrk { GGstrk } bind def /x FMLOCAL /y FMLOCAL /w FMLOCAL /h FMLOCAL /xx FMLOCAL /yy FMLOCAL /ww FMLOCAL /hh FMLOCAL /FMsaveobject FMLOCAL /FMoptop FMLOCAL /FMdicttop FMLOCAL /BEGINPRINTCODE { /FMdicttop countdictstack 1 add def /FMoptop count 7 sub def /FMsaveobject save def userdict begin /showpage {} def FMNORMALIZEGRAPHICS 3 index neg 3 index neg translate } bind def /ENDPRINTCODE { count -1 FMoptop {pop pop} for countdictstack -1 FMdicttop {pop end} for FMsaveobject restore } bind def /gn { 0 { 46 mul cf read pop 32 sub dup 46 lt {exit} if 46 sub add } loop add } bind def /str FMLOCAL /cfs { /str sl string def 0 1 sl 1 sub {str exch val put} for str def } bind def /ic [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0223 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0223 0 {0 hx} {1 hx} {2 hx} {3 hx} {4 hx} {5 hx} {6 hx} {7 hx} {8 hx} {9 hx} {10 hx} {11 hx} {12 hx} {13 hx} {14 hx} {15 hx} {16 hx} {17 hx} {18 hx} {19 hx} {gn hx} {0} {1} {2} {3} {4} {5} {6} {7} {8} {9} {10} {11} {12} {13} {14} {15} {16} {17} {18} {19} {gn} {0 wh} {1 wh} {2 wh} {3 wh} {4 wh} {5 wh} {6 wh} {7 wh} {8 wh} {9 wh} {10 wh} {11 wh} {12 wh} {13 wh} {14 wh} {gn wh} {0 bl} {1 bl} {2 bl} {3 bl} {4 bl} {5 bl} {6 bl} {7 bl} {8 bl} {9 bl} {10 bl} {11 bl} {12 bl} {13 bl} {14 bl} {gn bl} {0 fl} {1 fl} {2 fl} {3 fl} {4 fl} {5 fl} {6 fl} {7 fl} {8 fl} {9 fl} {10 fl} {11 fl} {12 fl} {13 fl} {14 fl} {gn fl} ] def /sl FMLOCAL /val FMLOCAL /ws FMLOCAL /im FMLOCAL /bs FMLOCAL /cs FMLOCAL /len FMLOCAL /pos FMLOCAL /ms { /sl exch def /val 255 def /ws cfs /im cfs /val 0 def /bs cfs /cs cfs } bind def 400 ms /ip { is 0 cf cs readline pop { ic exch get exec add } forall pop } bind def /rip { bis ris copy pop is 0 cf cs readline pop { ic exch get exec add } forall pop pop ris gis copy pop dup is exch cf cs readline pop { ic exch get exec add } forall pop pop gis bis copy pop dup add is exch cf cs readline pop { ic exch get exec add 
} forall pop } bind def /wh { /len exch def /pos exch def ws 0 len getinterval im pos len getinterval copy pop pos len } bind def /bl { /len exch def /pos exch def bs 0 len getinterval im pos len getinterval copy pop pos len } bind def /s1 1 string def /fl { /len exch def /pos exch def /val cf s1 readhexstring pop 0 get def pos 1 pos len add 1 sub {im exch val put} for pos len } bind def /hx { 3 copy getinterval cf exch readhexstring pop pop } bind def /h FMLOCAL /w FMLOCAL /d FMLOCAL /lb FMLOCAL /bitmapsave FMLOCAL /is FMLOCAL /cf FMLOCAL /wbytes { dup dup 24 eq { pop pop 3 mul } { 8 eq {pop} {1 eq {7 add 8 idiv} {3 add 4 idiv} ifelse} ifelse } ifelse } bind def /BEGINBITMAPBWc { 1 {} COMMONBITMAPc } bind def /BEGINBITMAPGRAYc { 8 {} COMMONBITMAPc } bind def /BEGINBITMAP2BITc { 2 {} COMMONBITMAPc } bind def /COMMONBITMAPc { /r exch def /d exch def gsave 3 index 2 div add exch 4 index 2 div add exch translate rotate 1 index 2 div neg 1 index 2 div neg translate scale /h exch def /w exch def /lb w d wbytes def sl lb lt {lb ms} if /bitmapsave save def r /is im 0 lb getinterval def ws 0 lb getinterval is copy pop /cf currentfile def w h d [w 0 0 h neg 0 h] {ip} image bitmapsave restore grestore } bind def /BEGINBITMAPBW { 1 {} COMMONBITMAP } bind def /BEGINBITMAPGRAY { 8 {} COMMONBITMAP } bind def /BEGINBITMAP2BIT { 2 {} COMMONBITMAP } bind def /COMMONBITMAP { /r exch def /d exch def gsave 3 index 2 div add exch 4 index 2 div add exch translate rotate 1 index 2 div neg 1 index 2 div neg translate scale /h exch def /w exch def /bitmapsave save def r /is w d wbytes string def /cf currentfile def w h d [w 0 0 h neg 0 h] {cf is readhexstring pop} image bitmapsave restore grestore } bind def /ngrayt 256 array def /nredt 256 array def /nbluet 256 array def /ngreent 256 array def /gryt FMLOCAL /blut FMLOCAL /grnt FMLOCAL /redt FMLOCAL /indx FMLOCAL /cynu FMLOCAL /magu FMLOCAL /yelu FMLOCAL /k FMLOCAL /u FMLOCAL FMLevel1 { /colorsetup { currentcolortransfer /gryt exch def /blut exch def /grnt exch def /redt exch def 0 1 255 { /indx exch def /cynu 1 red indx get 255 div sub def /magu 1 green indx get 255 div sub def /yelu 1 blue indx get 255 div sub def /k cynu magu min yelu min def /u k currentundercolorremoval exec def % /u 0 def nredt indx 1 0 cynu u sub max sub redt exec put ngreent indx 1 0 magu u sub max sub grnt exec put nbluet indx 1 0 yelu u sub max sub blut exec put ngrayt indx 1 k currentblackgeneration exec sub gryt exec put } for {255 mul cvi nredt exch get} {255 mul cvi ngreent exch get} {255 mul cvi nbluet exch get} {255 mul cvi ngrayt exch get} setcolortransfer {pop 0} setundercolorremoval {} setblackgeneration } bind def } { /colorSetup2 { [ /Indexed /DeviceRGB 255 {dup red exch get 255 div exch dup green exch get 255 div exch blue exch get 255 div} ] setcolorspace } bind def } ifelse /tran FMLOCAL /fakecolorsetup { /tran 256 string def 0 1 255 {/indx exch def tran indx red indx get 77 mul green indx get 151 mul blue indx get 28 mul add add 256 idiv put} for currenttransfer {255 mul cvi tran exch get 255.0 div} exch concatprocs settransfer } bind def /BITMAPCOLOR { /d 8 def gsave 3 index 2 div add exch 4 index 2 div add exch translate rotate 1 index 2 div neg 1 index 2 div neg translate scale /h exch def /w exch def /bitmapsave save def FMLevel1 { colorsetup /is w d wbytes string def /cf currentfile def w h d [w 0 0 h neg 0 h] {cf is readhexstring pop} {is} {is} true 3 colorimage } { colorSetup2 /is w d wbytes string def /cf currentfile def 7 dict dup begin /ImageType 1 def /Width w 
def /Height h def /ImageMatrix [w 0 0 h neg 0 h] def /DataSource {cf is readhexstring pop} bind def /BitsPerComponent d def /Decode [0 255] def end image } ifelse bitmapsave restore grestore } bind def /BITMAPCOLORc { /d 8 def gsave 3 index 2 div add exch 4 index 2 div add exch translate rotate 1 index 2 div neg 1 index 2 div neg translate scale /h exch def /w exch def /lb w d wbytes def sl lb lt {lb ms} if /bitmapsave save def FMLevel1 { colorsetup /is im 0 lb getinterval def ws 0 lb getinterval is copy pop /cf currentfile def w h d [w 0 0 h neg 0 h] {ip} {is} {is} true 3 colorimage } { colorSetup2 /is im 0 lb getinterval def ws 0 lb getinterval is copy pop /cf currentfile def 7 dict dup begin /ImageType 1 def /Width w def /Height h def /ImageMatrix [w 0 0 h neg 0 h] def /DataSource {ip} bind def /BitsPerComponent d def /Decode [0 255] def end image } ifelse bitmapsave restore grestore } bind def /BITMAPTRUECOLORc { /d 24 def gsave 3 index 2 div add exch 4 index 2 div add exch translate rotate 1 index 2 div neg 1 index 2 div neg translate scale /h exch def /w exch def /lb w d wbytes def sl lb lt {lb ms} if /bitmapsave save def /is im 0 lb getinterval def /ris im 0 w getinterval def /gis im w w getinterval def /bis im w 2 mul w getinterval def ws 0 lb getinterval is copy pop /cf currentfile def w h 8 [w 0 0 h neg 0 h] {w rip pop ris} {gis} {bis} true 3 colorimage bitmapsave restore grestore } bind def /BITMAPTRUECOLOR { gsave 3 index 2 div add exch 4 index 2 div add exch translate rotate 1 index 2 div neg 1 index 2 div neg translate scale /h exch def /w exch def /bitmapsave save def /is w string def /gis w string def /bis w string def /cf currentfile def w h 8 [w 0 0 h neg 0 h] { cf is readhexstring pop } { cf gis readhexstring pop } { cf bis readhexstring pop } true 3 colorimage bitmapsave restore grestore } bind def /BITMAPTRUEGRAYc { /d 24 def gsave 3 index 2 div add exch 4 index 2 div add exch translate rotate 1 index 2 div neg 1 index 2 div neg translate scale /h exch def /w exch def /lb w d wbytes def sl lb lt {lb ms} if /bitmapsave save def /is im 0 lb getinterval def /ris im 0 w getinterval def /gis im w w getinterval def /bis im w 2 mul w getinterval def ws 0 lb getinterval is copy pop /cf currentfile def w h 8 [w 0 0 h neg 0 h] {w rip pop ris gis bis w gray} image bitmapsave restore grestore } bind def /ww FMLOCAL /r FMLOCAL /g FMLOCAL /b FMLOCAL /i FMLOCAL /gray { /ww exch def /b exch def /g exch def /r exch def 0 1 ww 1 sub { /i exch def r i get .299 mul g i get .587 mul b i get .114 mul add add r i 3 -1 roll floor cvi put } for r } bind def /BITMAPTRUEGRAY { gsave 3 index 2 div add exch 4 index 2 div add exch translate rotate 1 index 2 div neg 1 index 2 div neg translate scale /h exch def /w exch def /bitmapsave save def /is w string def /gis w string def /bis w string def /cf currentfile def w h 8 [w 0 0 h neg 0 h] { cf is readhexstring pop cf gis readhexstring pop cf bis readhexstring pop w gray} image bitmapsave restore grestore } bind def /BITMAPGRAY { 8 {fakecolorsetup} COMMONBITMAP } bind def /BITMAPGRAYc { 8 {fakecolorsetup} COMMONBITMAPc } bind def /ENDBITMAP { } bind def end /ALDsave FMLOCAL /ALDmatrix matrix def ALDmatrix currentmatrix pop /StartALD { /ALDsave save def savematrix ALDmatrix setmatrix } bind def /InALD { restorematrix } bind def /DoneALD { ALDsave restore } bind def /I { setdash } bind def /J { [] 0 setdash } bind def %%EndProlog %%BeginSetup (4.0) FMVERSION 1.20 1.20 0 0 10000 10000 0 1 5 FMDOCUMENT 0 0 /Times-Bold FMFONTDEFINE 1 0 /Times-Roman 
TABLE A. Command-line Options

  Option                      Description
  -A -                        unasserts all built-in predicates
  -A assertion                asserts the predicate assertion
  -B string                   passed to the system linker
  -C                          preserves comments when preprocessing (not implemented)
  -D macro                    defines the macro macro to be 1
  -D macro=defn               defines the macro macro to be defn
  -E                          preprocesses C source files to the standard output
  -E stage:file               specifies the executable, file, for stage stage
  -F type                     halts the compilation after creating files of type type
  -G                          provided for cc compatibility (used to build shared libraries)
  -H                          causes #include'd file names to be printed as they are processed
  -I dir                      specifies the include file directory dir
  -J dir                      specifies the TDF library directory dir
  -K str, str, ...            provided for cc compatibility (used to specify translation options)
  -L dir                      specifies the system library directory dir
  -M                          causes all target independent TDF capsules to be merged
  -MA                         as -M, but with all internal names hidden
  -O level                    switches optimisation level (provided for cc compatibility)
  -P                          preprocesses C source files to a .i file
  -P type ...                 causes intermediate files of the given file type(s) to be preserved
  -S                          halts compilation after creating .s files
  -S type, file, ...          specifies a number of input files, file, of file type type
  -S type:file                specifies an input file, file, of file type type
  -U macro                    undefines the macro macro
  -V                          causes all compilation tools to print their version numbers
  -W stage, opt, ...          passes a number of options to the compilation tool at stage stage
  -W stage:opt                passes an option to the compilation tool at stage stage
  -X mode                     specifies a compilation mode
  -X: opt                     specifies a compilation option
  -Y file                     reads the tcc environment file
  -Z str                      provided for cc compatibility (used to specify translation options)
  -b                          suppresses the standard system library in linking
  -c                          halts compilation after creating .o files
  -cc                         forces the system compiler to be used for code production
  -cc_only                    forces only the system compiler to be used
  -ch                         causes tcc to emulate the TDF checker tchk
  -disp                       causes all .j files to be pretty-printed
  -disp_t                     causes all .t files to be pretty-printed
  -dn or -dy                  passed to the system linker
  -do type file               sets the default output file name for file type type to file
  -dry                        causes a dry run
  -dump                       dumps the current status to the standard output
  -e file                     specifies the end-up file file
  -f file                     specifies the start-up file file
  -g                          causes the production of debugging information
  -h str                      passed to the system linker
  -i                          halts compilation after creating .j files
  -im                         enables intermodular checks
  -im0                        disables intermodular checks
  -info                       causes API information to be printed
  -j lib                      specifies the TDF library lib
  -k                          halts compilation after creating .k files (in intermodular checking mode)
  -keep_errors                stops output files being removed when an error occurs
  -l lib                      specifies the system library lib
  -make_up_names              makes up names for intermediate files
  -message string             causes tcc to print the message string
  -nepc                       switches off extra portability, and certain other, checks
  -not_ansi                   allows a raft of non-ISO/ANSI C features
  -o file                     specifies the output file name file
  -p                          causes profiling information to be produced
  -prod                       causes the production of a TDF archive
  -q or -quiet                specifies quiet (non-verbose) mode
  -query                      causes a list of options to be printed
  -s                          passed to the system linker
  -show_env                   causes the environment path to be printed (a)
  -show_errors                causes the compilation stage causing an error to be printed
  -special string             allows various internal options
  -startup string             specifies a line to be inserted in tcc's built-in start-up file
  -target string              provided for cc compatibility
  -temp dir                   specifies the temporary directory to be dir (b)
  -tidy                       causes tcc to tidy up its temporary files as it goes along
  -time                       causes all commands to be timed
  -u str                      passed to the system linker
  -v or -verbose              specifies verbose mode
  -vb                         specifies fairly verbose mode
  -version                    causes tcc to print its version number
  -w                          suppresses tcc warnings
  -work dir                   specifies the work directory, where preserved files are stored, to be dir
  -wsl                        causes string literals to be made writable in code production
  -z str                      passed to the system linker
  -# or -##                   equivalent to -v
  -###                        equivalent to -dry
  -- str, str, ...            communicates directly with the tcc option interpreter

  a. The tcc environment path, which it searches for its environments, consists of a colon-separated
     list of directories given by the TCCENV system variable, plus a default built-in path.
  b. The temporary directory can also be set using the TMPDIR system variable on those machines which
     implement the XPG3 tempnam system routine.

tendra-doc-4.1.2.orig/doc/tcc/table7.gif — binary GIF image; image data omitted.
w¹H¿j‘¯œì`+&Ãr‘°tSìëÆÇÈJv²”­¬e/KÆæÍ±~Dl›8ûX7Š´¬ä«j,Y¼Æ“òÓëû$ØV61Æ„ÌjëY¢–ªh]ï&b ¶O1*mwþ8Îââv´{I¥]k°¹Þ–à¹1”dåVé¿ê¤­eas;@¡6´œ }¦tok^¥>‘¬0œÍñÐâÌaú….Éiz{wO÷’›³¥Žy ×™cò·,¸û¨pû ^oo¼ æhUqÑû•´„ e ?s < ï·½«%0V_‹™’ŽSÄA W³Q¦š˜ÁÕ}àyuðbLf3¼ ÞèJCJ;Óo¬æç3'ÓÞï‰XÈ>)ü@죗6}¸Ëß‘!¾ƒžO)Íajébå"˜¹4UiMSUÙÚ€Fi°VG,Ä«ù¢ „gEêaè/1õüá[¸âsêÓÆD ‹™ìþÐCØ´ñ5±`ôÜLËAw–§a çKä6WeŸf bV…9”³~ø‚ér9ø9Cp©È¼´Rá;Ï%+ú»\>dün èAƒ·±Ïèm¬eÝêÀ’¶´³Þ쮪پýÚ›Á®Ø°‰Ýka[­rĬ³Ÿ íhK{Úç)öã–„[ë8ÙØÞ¶sðm\w{®ÖæÜ¸±HèÓ2ï•ñTmvÕ‚Þ ´öó#¹skå4¶`}É‹›k;&ÚùyÖ!¾]þñ—×.œ%ó2©™jú2Ó½$ïd€Cÿò±3øEõô_ZÕëJì¹c6\Äsj³í18ø˜ËMKïÃXü/&ùep~ôƒ_ÜeûW½Ü5þrûܰ^îé„ôwßþø‡þýÏýÇnEmh€ˆ€ ¸Ght˜ ˜Hø€ógnøÿ÷8€(c W^wwYd[·]Ÿ$qÍõp™·h,(;Í£].Xq3uUæ7%: $f³”‚ÓU~³ ƒj9vlEo7\&˜K?È„Jø&y3æT¶´cõ%}ÂMPsÒUcÇ^X•_ ¶Ué5`ªÑ†v A˜)u#8x{œ0|pREhšðoÜÄûóŽOè™MHqç€$LÐWP¸¸r) wÀØsþÙ_ðI”„C_Êø^é å™q•daø¤žpxBTw—‡-刨x@¢¶ˆ‹§ŸˆÐ —MΔz Ö‹ñ©ˆ º‡}7—G W˜z_ J¡×¹©T8Jp÷e Bq¹¡´gxo†¡tY…¨—DwfA‡¢Æõ‹Fg…˜:ê¡7ô{–ŒŒ&zƒæi›]ÆE¤DLŠƒ´å‘ ×atœTºœ½ Ž¿¹Ž±9œ`º†b*ŠdZ¦^JŽhŠ\ꄃȦiªš¯I§uj§wº6fú†qÚ¦zº£|Jº©l€Ú{©]i˜O‰„o ]Ôy¨nJº„;Tˆ !Iþ‰VÊŽSÉ ’šƒY púG]ž›8ž6™nÐx˜ §{1מ>Ú’SH“Ïç¢ôù‹6—Dø©  Ê‘PÚ†ö§š—bÚgVy¬Æš˜ài­(‰z¥Ê¬‡šˆv8^Óz©ºw¡{×nÊf׊xÎGg3 §H^ãP­OÆ{:Šm‰¬²—£4–®"º• ‰w©˜gä°®;ˆbÅØÜé‹_8¤¦|YÅ£>7…Ë®‘‘¯Bª妜w¢­(EÿZ»¾J‘¢ª™°u±ÔÀ±–*›„Z¨j‚(¨¹é²,‘/;H0‹Ž‚ª‘4³2‚§;˳=ë³Üá§°þгŠ*³®6´8Y´ºv´f©²¤º´ài³×æ¤ÁJo1Y ˆ¨[; ¡ñ§Q °¬‡pÈaÙe‹L›³3AJv3²Å:³`k­¼ØµŽÓE]1©Æ™ªc»¶jk´I[¤¶CDð«Q“@•Yhi,MVÈ;Ï×äZ{Žk’îé°\(†þõ_‡kVŸ¦¤©´ÊŒ*´í–dw•:g{7‚‹ =º€vˆH÷¢jh&d—@éw Èa#»žëµ;i§x¦°±¸×•|Y‰o·ºäŒHiŒ§>Ä¡¢XùvÛzb)x›¯O'w7ë·Ý5j×Hf?Š®²hyq™»6ÆŒþÄšS5õºþ´gØz |fŠF9fˆ?¹zv†cƒúµ:Ùh)9º5Øiqº¼§ 6riA|©vUúås¡†{S9½Fê£ê…‡îºŒ7g>V¶¹òĤ%{œ_ª4pÕ»n+"|W7Ù mÛ9%K*Ü´8+Ãpû´qÛ·ÿ{ÃÚ©Ão»Ã<ü#?+ÄCLÄx´ˆøÃûù½[–ÄSëÃ=ÜÄMjþÅ'øÄ !Ž3èVë±j˺¼Ê½¬'Ÿ¿7“ÀPÃy;\Úæo;Sk̵Æ;–™O·p–ÃÎiš.œ¨K—âIƒÿ¤‚¬ÓwékažzÇè–Š<€;>zL´øõZ+™…HZþ’%ÉÁèyÉ™&¶ý鮇—{¸,É>"ÇÅ—pÆ K¾a—Œßyµ,gSÑgQŸyâÉzHR‹¼M–Ë’8—Bgx e»ŠeÂ…9Õ ÌŒXÊ–@Ìlỉ9¡¬LØ*•ØQ²¬>‚\Ë’vËu—u~—eMW\z«\¿‹Š ál|B¤W–Ì­¼iíÜ]v›ž¼•üIgÄSwÑku´¸{*½d;ÌG\¡Ø8¥ãXt]ÅR¼Ä¬6Ò$ ÅŸ{ÒŒšÃW¼Ò“êÒ*ýÒ¡ª³ElÓ7Ó8Ó;ÍÓ=níÓ? ÔA-ÔCMÔEmÔGÔI­ÔKÍÔMíÔO ÕQ-ÕSMÕUmÕWÕY­Õ[ÍÕ]íÕ_ Öa-ÖcMÖemÖgÖi­ÖkÍÖmíÖo ×q-×sM×um×w×y­×{Í×}í× Ø-؃MØX;tendra-doc-4.1.2.orig/doc/tcc/table8.gif100644 1750 1750 13037 6466607533 17321 0ustar brooniebroonieGIF87aK†¡æææÿÿÿ,K†þ”©Ëí£œ´Ú‹³Þ¼û†âH–扦êʶî ÇòL×öçúÎ÷þ ‡Ä¢ñˆL*—̦ó J§ÔªõŠÍj·Ü®÷ ‹Çä²ùŒN«×ì¶û ËçôºýŽÏë÷ü¾ÿ(8HXhxˆ˜¨¸ÈØèø)9IYiy‰™©¹ÉÙéù *:JZjzŠšªºÊÚêú +;K[k{‹›«»ËÛëû ,N^n~Žž®¾ÎÞîþ/?O_?€ŸàoŸ?~ 6èa Aƒü§ÏÞ5~÷)€¸¡þ4„ÀƒGŽ1:0ù°bH‰Ó(jŒÐeÌ‹%S6ô8aåÆšOò¤¹“¥60ƒ†ÔÙ‘"ƒ¢“FhÔåK¥QN%)ÓiÒ­[Kês(Tѱ0‘‚ˆ³'J’UÍ–•‰2¦I¬Ò ˜6,¯•PgÚгïR¸^7¶Iµ«T¾Z×.äªÕêY½¿Cæš1(æ—š/+Þlø³Û~„ ÖÍzòÓÇH)ï¥ù«W„ Iï,¬5m¼|Çnžªxõm¬«•æuM¹5òåQ”3=ºôéÔ«[¿Ž=»ö3 »{ÿ>¼øñäË›?>½úõìÛ»?¾üùôáÏ ¿þýüþûûÿÿ~H €ˆ`‚ .È`ƒ>a„NHa…^ˆaƒÉ!`†vèa†Hb‰&žˆbŠ*®È"„ÆâŠ1š8b‹6ÞˆcŽ:îÈ£‹Ç¡1cŠA’XcF‰d’J.‰à‹p y"”Éd•V^‰e–:ù†”%z‰!•ZŽIf™f2É¥`†¸¦…bž gœrÎ9åÜÝØæ–yÒÉgŸ~ò™f{^8¨„oþ‰h¢Š–(Cú`>Hª ”’úŸ˜Q¸&¦‹~ ªœ®$>“>zé©•ˆ¥¬&ø¦©¦F8«Öj®º":ª¥Þúªª ¢ªŸ”¸þë­ž~¸ì®Î>«d¯iüZ«¬­Z+¬¬‘Ú:£¶§R ©Éægí¬ž{éµàbëí¦Ð¾ oÒɪ¹× ;)¾®ö÷+¿÷b«ê±ùð¿ê’K.»ÀfkpÁñ> q‹óÞÉï· ;ŒpµË­¿Û»¯~—[.¾_lîÅ7qË.‡8±¥r¼2Ç[l¾ 3Œl¿gr‡+‹rÿŒtÒ Æ\F¿ ®»×+ôÓU©i¸ «l×E¯{¯Å?+MvÙ2MF³´®xh†G› wÜ>rˆ'ÛjWz·ÜzÃöy;ø7¬ïMxánÚ)sÝ*¶mx㎟طƒ/þ89Œ?ŽyæzÒmcå=kzè‡sX_馟ŽzꪯÎz뮿¾:‡0 íÛÁ(û ¶'t»»K>{ ¿÷îhî. ¿òÄOk¼ðΧäÜ–-߃ò^XŸüó¶yfBb‘S¯}—Á³€˜E*Œ~õÍ‹ÄTCLAùƒÍvF1$ýéÛ€=ý?–“ P€@YŠ_Jó"žü$4¬ Íþhп-üO$Ô€ülÂÔ8pƒækàoA- 3‘Àï2SÁ~˜Ï0¼ùL©R‘žæ{A³0Büµï$ïÛ!HšB”¾P6r)â5ãîà†X`b ϧÄû¬o‰á‹¢þš¦EòYw\<^·H*,¦À‰ñ£øºøÅÑŒÍ!ìÞÇ8ÊqŽt¬£ï(F¬Ñ}l,µÈ>ú*jL£ ÓFÈîòÀû#ŸÀG¢%z1 üJ0½êìq !ᬘ΀°€”¬—6©T¶QwiÄ €ó‘,®P“‰ŒŸ'JÓHF*ÃYLdbc¿Û‘)ÎyKaìÇË_J&–ILN-ËIfÀ’l¢ C!’™]Ùž.ƒZ“›,ä sT …N¾ï“Óü¦8{bMn°8óä`1«¹¡µÔS—òœ:ŸpËYJs’è1«òBÑt. 
á,þI8Ô¡ï̧÷žóO'äP.¥(H.ÉGoÚe6®L =³y¿eæF…„ÊH…yÄßøó™Ò[¤-¹Ê'ŠÓÌ(NyªSVò§ˆt¤,JÔFÕ‹CMj|ʨª°©Ný‚T•pÕXµª`È*îÖ°Šu¬d-«YMGSÚ”«MLk®êU¶òÏ­¦¤ª\EH׃Ds dG#iÀ(¯6‚Ù% â Äá†',(ЗϺ–w쌬“RcNʦ2¯šEê8Û ØÐŽ3§\á?ë ÔáÍt§-ÿz½ÂÔ;³ýK_ŒÙKaÞö¥.mÍ6¸À_Êf¸Õ_9Tþ+Æîš}­Uf¨@QZT¡ñdètç"ΈN¶´Ã‘ás±Ólš¹APîcÚ© ·œi "xƒËvó³ÚuçoCI '¾›5jH=úßóV–ŸªéL Ý[šƒŠÆ8 6Ê‚·‹ÂFøÁ ® EÇA^ lT½|]î›òÜà¤T¸¦gˆC˜“¦¸€•nm‘ÉÞÞÂô¸œu퇛+à«¶ΰ2|ìÖÔ„<æ0.[›< [PÉÇ`òâ ×»ZµÆˆ²”»àd@y­W–B–s°e»v¹ _~åN=;f2»ñ¬ln³›ß ç8£Õ¿zärš5Jåì‰ùÎ9=#S·Êþg*”y”Bt ½œg ~±6o•ìcwkL$£ƒž«ž=,[ôZX­‚­¬IK8ÀF×£Ò5ˆ ò†ŒãÏNÚ‰£õ®VU]ÓQ'º°7&²¤ûÍâ@zÆ$^Œþv­[Þ@öÕÑÍ©O^³ƒÔ¬ò€Óëa’B÷½‰'u±9_ #ÐÈÛ<‹¶ïbR º¿K-¦·›j­–%ÆÞ´6¨ÛkÝÖ—Û®æ6lx™lfÓBß2@õ©Ó mnbÖ¶¢<0ƒ¯Ýà…nÛ ïâtJÄ„òû?3‘ðþÄ»9±‰U:ÜGTÅÁi· ¹lâlZÆÊ^GÅ mgG§ÐÔ/ ujsþN:' åy:qžØD›͇¨Ð'­È¢ úèÏ£Ò—NçBîùéX]³œ¯Žõ¬k}ëce:À1IuDG=é;{ÕÇö©›Ý@/{Õ¾v"´èow;I[Éß rîN‡yßm¼d#;Oï}ä{ÝsÞY“À†6|6OvöÝ:³>™<ãÃmÜëZ5ñÈ“©Ìüµ\‰OûÐsÜá¦Ï»Ù·¸%|Ä|S{ª-ecé[eËç°È™öSVŠ›ŸÓ¡µg`ûiÆÛÇõ¡®5s7®^m_÷àg O£ÝjVŸ´¤¡v”¯sŸwõ¯,‹UÂ^/6æDin_ÊkÇþŒ¾†àõŸéŽø¸>Œ~?üìx¯Œ7Hý'uxvåwög€_åuç~ Èv H~þrgu\‡¨Èé!gz¸X˜z(‚åE‚Ì…'Èõ'sn‡{,hhƒ›„w=ôjiæ‚F€‘ÇLÍ—9x^BØe=˜x0¸|¦¥FзƒHPèlIÆc»ç„A*æRÇrÀz gDR&…—VP¶ÓX;¨‚Õ}0vY@T|ÚÄBUlcˆ txzÍe†O„jÀ%}ÄÁ‡x'nòOÞ\b˜W÷•‡9Æ{øNØ…Ož‘}œw]Ü3‡ÅVvˆt>1I¨þqWX\%–½7D¤…À7b†HƒKøwªX‚2ˆ†~t«X;Ef‰®Hy‹&h‹O–‚Χ‹»f½`¿Œ„†€Jø€Å¨eØÍèŒÏq&Œ<”€ÊXIÓ˜i+hý†èFŒÛÈ©˜ŒŽf&Ž (‹jhC„‡kÃè\EáSya~ ˆTÇQˆƒ€‰ª÷h™bw¦ÄŽ‚çïÈzä(%~ÿwÀ…ùøu­˜†¾M™$Zíh‘Ô‘ÙX‘‰—w yY©2“=ºG@–‡†÷äk3a\ª¡rœç=Át@Ä…b߆û%|l˜r~Ø“_Ø“Êä…~x[,þI‘‚ ’œøh{x¹¤xêoUQâ…DÆÆRo(‰ÛçzÖG´wm­g„_I_ ”bÅ·•è;³†’'„’))úD„ädYÛ³†;)‡±U_Ü•ìvMÃÄaàEl:„Åfo2¥}íµO…”ôa.†‘€ŒXZ-´}`‡§„mÉæb…h•ÙÖà„mÚ_Ôו¾ôp™)qjš{™m3y ‘šh–‰IoiIhQðzº¡l-µpåã#làdY½q-dlŽ‘RŽ˜_¡8|›rUIR®D²™‘éhûøÉ Ø „‚bqÝyñxd°žá ‹ù7…åhŽþÇ8’/èž3ž‘Y>8Ÿ–ŸÚ©žù9žëyŸIèŸÿI*Ñh Š  Gݸ!8 ×xŽòÙŸ JÅždH¡?ˆ‹ªhωL„ù–ö A©–%:DЃrŽ)I”t†PŸÞŸø)p|¨n<$úŠ :£4QR ZµÈ„‹ð¢Iž/(ˆ5ªÞ·ö¨¢9ª[×§Ÿ €x0¤:Ú‰U¸”!*E— 瓚7L7ø¥gÙk~„òul2©™B‰q?9~Y¸œ»Ñœ+ùb&úUz‘zؤ\m|zb¦ùö™4êšîÆ}u1OÒi}£Ò¢aœ™~¦Õ—s8{'¤³fþ§·ø˜}Jl¯(˜Ð){™ÙŸª9*|vªœÙ>¥Ixà–¢@™œŠIˆû„£Hyˆð[n ™î¸˜˜e‘(ª[œß¥•0tQ©:“Šúˆf¡©ö¤¨¦Š»™pV]“褶¡0i’Yꉭ£Á_7I§Çù­p¸L¼Ù’ÿHLˆú`p™¢ŽJD>éЉše:o`ú®ØºŸ{š§"ê€w0å•QxÊŠzdLYžSª¤~fk«uZ@°™:Žš¡ü¹¡Úž›‹£ª±Ë<Zƒ‹±]E J² $ ʲ-ë²/» Ú¯(›²Ûɱ*;±5;²W$²«³;+ž=þ›³?‹ŽA»±á‰œM‰¯½$U¾ £VzŠ›¨°6q­CǢͪ[µÿеª FTé«Põi¼:³ IM©°tƒË¤H* ZÛµüzK‹ÈOoÛ¥÷åw?b›YDiá[l›Uˆa ·ßÃhs‹¸[›§`[kz«™"G¦—[!‰¦CùmkʦhQ¹K§éK”Õi”µš‡ë„FH¤ Û­d©‚ˆ˜IÀjmž9{š+»-ªºaj–ç›,e_Qi˜›¶šŒë¢œ¹Ùé¯7—Ùx¼œÛ†¥Ú+”JJÐ¥¦Ñצ!¹»¤Ô»ºÈpÙ…™Š{4…¦Nk¶OÛŠË»©þ¦—V©}ÔGÃ[•íZ¸W)¥’¨½ÿȽ—á¾ðÛ†Öj©qK¾÷“È;?;ˆ'× Ì¬Àq^ø«  ˆù[y¢‹AÇšœ!muꪘ»«—š­ +¾É\ë‘ý6°”€º=4µVæÂSû»¢åyÀèËÂ2 ’>K´E{³E*£? 
´Al³KÄ&̰%Û¡I\ÄšÃQ,ÅSLÅUlÅWŒÅY¬Å[ÌÅ]ìÅ_ Æa,ÆcLÆelÆgŒÆi¬ÆkÌÆYÌÃë±N¬Ä<{´4+Çsl´êwœ±t¬ÇüÈÇ€Ì' ´T¬RLÛt‡K-œ¤ü:)¼–»6 þÇ ÈÈ>ê·7¡i‹ù¿ìó —Ì­eÀ„À¤WÑ]Q|®Ÿ›Ã,Ê(Å£ ­™¨w¼‰•ˆõ¯œ^“L¼êøuµlª­,•­LÌ¿f‹¼Œn|)ËÖÌ’ê©ìkÌ9¿Š\‡Ì…SœÉŒ°ÄÜb”¨ÊÚÚ®‘ëÊoüËqÈ‚L’B Äé¼Ç‹ŒíìβuüÄóŒÉëlÏ>ŒÏJ Ï‹ÇýœÏMÓÆmÐÐ ­Ð ÍÐ íÐ½Ðæ\Â},Ð]Tû<´½Í@BÈü¬ÑçËÑìl£ Òƒ,ÒC|È$¬;¾œš&ŸÖ Ê|°þ̺_ëÒÈÒÖéͬ¶º4þ6ÂA:›“ÕºçŒ%*ɨ¸0Ý2ÝaÞ¼ÑÉ”ÁÇÊÉTÅmÎÆìKWíÔ`’[“X]˜ÛDÝ LÝÌ=ÝéÍÁú]ÀÕô ¾"½—«Ê°‰Õ¨ ×ríf}cá|Ö˜n($Q`šÎ{—Å,[†9ˆÆQ™™Ë„w}¢dÍ |m_ç|†4ÄˆÇ qÓ7_Ô›×Ó<͘é¸;½¨œý­ŸY¸JÍ”ÍbúÔ<Úº¾y×±l¬ ü¼ï'¹ÀÙB|Õ|Õzý¬š»l­Ò(ÜV ÙYª½ÚÄ ÃMƒÜT(ÙeÍÜ\ѬíÑMÒh­ÏÔmÑÙÝ×ÓÑØíÝi-3mÞþçÞé­ÞëÍÞííÞÖ-Þ#=ÞgÒ-ÏóMß&mßòßßeá}ßýíßþà6zwäÝÄÚ-«<´ÕšÒ= ð}Ã}[~[¼¬¦¾CXÒºÉGNíàq¬Ü&ûÓ;ú×qíÏÞSîÉö™›F}}(Ð!üc˜Z—¯-CÝl¹¥»K¤›ã±ìÁ;.º¡¦ÕŸû›^ý„mØ>N?,ÝdLwÙÆ}y¥Ýmç•R¹àT>Ûò&mM•ÚXÚ¦¸¨Ã ž¼è»á€·ÖCž¨¯Y»O¸ÓÇÜ•u©å]I¨e«Ù~ý¼¯Çç!ÞÙJ™%^ÉÑÜz¦åiI¯‚}šoˆDpþæ.ýá› “y]p-qˆ JÓ¬zq~œ=Š^­5žYNÚ: Ö¨)çr(¬ãw¬Ì¿ ü fþàHÜ‚AÆ2 辯GlÇÕãÆØ‘¢EÚÔ@ëè|Ý~ì1þë>àTàµîìÏŽW'ìξìíëÓ^Ü8ßîá.îãNîånîçŽîé®îëÎîíîîïïñ.ïóNïõnï÷Žïù®ïûŽîÓÍÄí*Äó𠎳Ê^cDUâ^è©JL^UÇþío³=ºŽ*î»ÎHf>êÝП⩖áwÅ×àδ/®g^ÄRí¥¾6ñ¨¼Êaú… ŸãWfÖ/ï¾Vþn¥TØáîÙ>ßÉbŽòw®š³:nOH‡ýòiÝÜPÉÖ±á¨}þæÂMNÊU7ß¿ ™æ.Ø›Mõ€º» …òF¿ô6ŸHÆjòCó6Îõ²=¹‰ö¤¨ö3¿m&oö#Nñ¶nðA]Îx¿÷Û¾dµëCáïñÜßÿÎ÷Ì툯Îüð~ŸøÉ~øŸÐôܰüŽù™¯ù›ÏùïùŸú¡/úoù¥oú§ú©¯ú«Ïú­ïú¯û±/û³Oûµoû·û¹¯û»Ïû½ïû¿üÁ/üÃOüÅoüÇüɯüËÏüÍïüÏýÑ/ýÓOýÕoý×ýÙ¯ýÛÏýÝïýßþá/þãOþåoþçþé¯ú;tendra-doc-4.1.2.orig/doc/tcc/table9.gif100644 1750 1750 7133 6466607533 17302 0ustar brooniebroonieGIF87aKÿ¡æææÿÿÿ,Kÿþ”©Ëí£œ´Ú‹³Þ¼û†âH–扦êʶî ÇòL×öçúÎ÷þ ‡Ä¢ñˆL*—̦ó J§ÔªõŠÍj·Ü®÷ ‹Çä²ùŒN«×ì¶û ËçôºýŽÏë÷ü¾ÿ(8HXhxˆ˜¨¸ÈØèø)9IYiy‰™©¹ÉÙéù *:JZjzŠšªºÊÚêú +;K[k{‹›«»ËÛëû ,N^n~Žž®¾ÎÞîþ/?O_?€ŸàoŸ?~ 6è@…%4hÏ¿! hð_¿þ‡ ,œèñ"Ã8LHÁ£F"#c)Ð"F€*9šÔH²åM +kêü8'Í‹]*`dR˜Kw†\úthÓ€P¡>dÙ³bÅ–Y­b½*s¢NŒd³‚Eh5­ÑNH•¢mº)ͨc¿Â¥:³gQ€\¯Žê7nà¼úÌö½ ’âZ¶ZGÎü94ä[ € gÔû¸íVÀu‰fö98q@œ ‹<¹˜“ÐÀ;#î÷Sóa©xíÂØYï]η¥æ¾Œ3LÒ½S_* ötpäTië>½Z7ÂéÉÏíJyLÜÁySç]U¬ññä’?Å<úõìÛ»?¾üùôëÛOá0¿þýüþûûÿ`€H`ˆ`‚ .È`ƒ>!ƒs@…^ˆa†nÈa‡~bˆ"zHáˆ&žˆbŠ*®Èb‹.¾cŒ2¾˜”%ΈcŽ:ZxãŽ>þdBI¤5ÆÑc‘J™ä’N> e”RN¹á‘p4Ie–+b©e—^~ f˜Zù—bžy¡™h®Éf›nžH¦j¾éåœtÞ‰gž`ÆÙ†zFé矂J¨|²h¡D&ªh£Ž> â¡k0 鎔VŠi¦…Jª–Ux©¦zê‹~†*jªªvÈi\âj°Âºê–Ί*f–Hk­¾þ*b«h¼Êkš¹úJþ,¨ÇvØ+°Î>Ë¡°g$Ë+…´–:²×æ§í²uÞZm·²æÓ-¶Õ’‹n¸ç6 m»Jk±ÛŽ«ì¸¸ŠKï½ùz«e²ôî;ë¿Öê{­Àêü¯» 7 oÔÖ»m¶½’»¯ÁêoÄÙ&,«®õve£íšÖâ"xÄß!KcsÈYÃ88³ÁüëƒÑ LoÕ~nÖô8OmÛd–µ‹«íµŸì=ÞIÖ;ÂLue& ÄX¿Vª,å®íÆÐâö–Éí(cûHZOz×Ô=ðt½hgMÈšÑcÌ.gsª<ï’¼Äæõ‡ß][i3›“3Ë®Ï[”þ"[‘°w Û¬WÛyÅß2]âvTíWÇ´õÕ¬]txmÛ0Xâx•0]_9 ‚›@ã:`r§çÃq„Ü™K93L®Ô$¹(‡ªÊW~Ý2ë¡åñ¡¹ïÄ sþ’瘢9'ó@µ ô  }èD/zƒdž›¿Gé¹{yÏoÀt=ãüéÞD:¢Î¬\Çš¦Ë›ñ¹[@ùˆ-¬¤ZöyãVçÞµøë•¢4´‘;ö¢½Óa5ÚÌ 'ÓµŽpžÚv‡{H×Ù솤¼Î<Ñ;è~ÚÒ¿s…k ®ÓúmQ7æ—¥®ü_K|ì +Žý©ø…â¾uƒqãYXÂÔ¶ÃÇdóÎc²i9‰U¹¯·‡OÛkÓæúyLñ½pé+Û}C·Û’÷f‰÷Ç_º^½¿?ör÷ x³g›ÖÙG<áÛj<¢úÃ÷Uåð=Yüü‚û¹=$4òëmÕþò[ôµw¬mªÆ½P8®˜îþá±hn¹5,ƒ´nw^î|ð€ÆG|òµ_5ÔYŒä|ÐÅE‘ñ€üE]BÐ÷_ö¶~ŸóZ©TEtölù¶yÇC­§D£§@¦fq°vmÀDzHcr–YÅÁ·E$æY ˆa´>È‚¶7x\WJþg,Ö}JÈ>NÇGNxJ€µT·çQTçcmgR¸\xv<‡…P§…uà….Q†Ýcti¨†kȆmÈPcHghrXxD†bh…#u‡M–‡@‡ôð‡Ž†¶¦Rζu&r–VNk×f:ø„~ÈUdx¾CM¢nxgˆ“XD_ˆþN¸³IŒ`jç}0‰§‰eŠ;AwNuŠˆxˆ3¦h'€ó£ŠÚ§‡yH‚vXÁlÄôËá?`…Šnjô{sÅ|sUµ$„¤§;×x¥K”¡¯W›(p\Õ‡®‚wÛgˆñ&b¸}”fxw׈ÖYÕ§ç7D,‚íVg%ÈluU¹eYþU(8J øgº×‡Eåxæ¶Š|âR,6H"×=uqH=HGç\kµÑÕjçfhð8i;X?§—lT[ËGcå÷q'óhèVàv)o§X ÙgåÕì˜RôåaêÅoà‘Kp—|¦õªHÀtÆF~"þ‰GX’´!qÒˆƒ¦÷‚¢j÷ämôÍȔ΅ŒD¸j¸7vP©‹o´Ž]GˆA¸iæ—L[Ô‚ðÕ8uN#9k“ˆTÙ–YW, Œz©—Öe‚uS—‹‡dô´‡×‡R˜Uw—f–˜a¶˜3ט5&cnH™•i™—‰™r˜›™R÷˜I×™<Å™Hšs9˜HXš™–”ƒˆ{µ<ïssi)Šõô—öÇ1ù“›ÃèˆuX´éŠúÄM¹)›‚ø¢ØŠ¿Éxê œ0%`œæuª×‹Â±—l%šó¨@XÐW„æKnÁX9Ž×Y Ï™v„–’ضŠôh[þéœÿV–ÈF“Â'hüèž9yo‚èž— è¹›OõxuÕZaäFiPâx|á÷“ ËX“úw9ÚTŒÄ\d¤žƒÁ~.ù’jv¡ôƒä˜GºU_øgO´“·Õ’Ü`¡ÚVž%%XÖè‚ô7•yV•fùŒ¶Ôß9m¥LãÈ)Ò¹”.:š‰f„ ñcϹ/;ÆH‰r9wdc·™NÊš©In¤y…Zªšo—¥^šŠ\z‹b*Ÿe’™iª¦kʦCg¦o §q*§sJ§uj§wЧyª§{ʧ}ê§ ¨*¨ƒJ¨…j¨‡Š¨‰ª¨‹Ê¨ê¨ ©‘*©“J©•j©—ŠB©™ª©›Ê©ê©Ÿ ª¡*ª£Jª¥jª§Šª©ªª«Êª­êª¯ «±*«³J«µj«·Š«¹ª«»Ê«½ê«¿ ¬Áz ;tendra-doc-4.1.2.orig/doc/tcc/tcc1.html100644 1750 1750 5251 6466607533 17152 
0ustar brooniebroonie tcc User's Guide

tcc User's Guide

January 1998



1 - The Function of tcc
2 - The TDF Compilation Strategy
3 - The Overall Design of tcc
4 - tcc Environments
5 - The Components of the TDF System
6 - Miscellaneous Topics
7 - tcc Reference Guide

1. The Function of tcc

Like most compilation systems, the TDF C system consists of a number of components which transform or combine various types of input files into output files. tcc is designed to be a compilation manager, coordinating these various components into a coherent compilation scheme. It is also the normal user's interface to the TDF system on Unix machines: direct use of the various components of the system is not recommended. Therefore it is worth familiarising oneself with tcc before attempting to use the TDF system. To aid this familiarisation tcc has been designed to have the same look and feel as the system C compiler cc, but with added functionality to deal with the additional features of the TDF system. This does not mean that tcc can necessarily be regarded as a direct replacement for cc; the extra portability checks performed by the TDF system require the precise compilation environment to be specified to tcc in a way that it cannot be to cc.

There are two basic components to this paper. The first describes the TDF compilation strategy and how it is implemented by tcc. The second is a Quick Reference section at the end of the paper, which is intended to be a tcc user's manual. For even quicker reference, tcc will print a list of all its command-line options (with a brief description) if invoked with the -query option.




2. The TDF Compilation Strategy

FIGURE 1. Traditional Compilation Path

Before discussing tcc itself in detail, it is necessary to explain the compilation strategy that it is designed to implement. This was discussed at length in [1] and is summarised in Fig. 2, which is taken from that paper. Readers are urged to consult [1].

FIGURE 2. TDF Compilation Path




3.1 - Specifying the API
3.2 - The Main Compilation Path
3.3 - Input File Types
3.4 - Intermediate and Output Files
3.5 - Other Compilation Paths
3.5.1 - Preprocessing
3.5.2 - TDF Archives
3.5.3 - TDF Notation
3.5.4 - Merging TDF Capsules
3.6 - Finding out what tcc is doing

3. The Overall Design of tcc

Having discussed the compilation strategy tcc is designed to implement, let us move on to describe the details of this implementation. The basic compilation path is shown in Fig. 3, which corresponds to Fig. 2.

FIGURE 3. Basic tcc Compilation Path


3.1. Specifying the API

As we have seen, the API plays a far more concrete role in the TDF compilation strategy than in the traditional scheme. Therefore the API needs to be explicitly specified to tcc before any compilation takes place. As can be seen from Fig. 3, the API has three components. Firstly, in the target independent (or production) half of the compilation, there are the target independent headers which describe the API. Secondly in the target dependent (or installation) half, there is the API implementation for the particular target machine. This is divided between the TDF libraries, derived from the system headers, and the system libraries. Specifying the API to tcc essentially consists of telling it what target independent headers, TDF libraries and system libraries to use. The precise way in which this is done is discussed below (in section 4.3).


3.2. The Main Compilation Path

Once the API has been specified, the actual compilation can begin. The default action of tcc is to perform production and installation consecutively on the same machine; any other action needs to be explicitly specified. So let us describe the entire compilation path from C source to executable shown in Fig. 3.

  1. The first stage is production. The C --> TDF producer transforms each input C source file into a target independent TDF capsule, using the target independent headers to describe the API in abstract terms. These target independent capsules will contain tokens to represent the uses of objects from the API, but these tokens will be left undefined.

  2. The second stage, which is also the first stage of the installation, is TDF linking. Each target independent capsule is combined with the TDF library describing the API implementation to form a target dependent TDF capsule. Recall that the TDF libraries contain the local definitions of the tokens left undefined by the producer, so the resultant target dependent capsule will contain both the uses of these tokens and the corresponding token definitions.

  3. The third stage of the compilation is for the TDF translator to transform each target dependent TDF capsule into an assembly source file for the appropriate target machine. Some TDF translators output not an assembly source file, but a binary object file. In this case the following assembler stage is redundant and the compilation skips to the system linking.

  4. The next stage of the compilation is for each assembly source file to be translated into a binary object file by the system assembler.

  5. The final compilation phase is for the system linker to combine all the binary object files with the system libraries to form a single, final executable. Recall that the system libraries are the final constituent of the API implementation, so this stage completes the combination of the program with the API implementation started in stage 2).

Let us, for convenience, tabulate these stages, giving the name of each compilation tool (plus the corresponding executable name), a code letter which tcc uses to refer to this stage, and the input and output file types for the stage (also see 7.2).

	   TOOL					INPUT		OUTPUT
	1. C producer (tdfc)	 	c	C source	target ind. TDF
	2. TDF linker (tld)	 	L	target ind. TDF	target dep. TDF
	3. TDF translator (trans) 	t	target dep. TDF	assembly source
	4. assembler (as) 		a	assembly source	binary object
	5. system linker (ld)	 	l	binary object	executable
The executable name of the TDF translator varies, depending on the target machine. It will normally start, or end, however, in trans. These stages are documented in more detail in sections 5.1 to 5.5.

The code letters for the various compilation stages can be used in the -Wtool, opt, ... command-line option to tcc. This passes the option(s) opt directly to the executable in the compilation stage identified by the letter tool. For example, -Wl, -x will cause the system linker to be invoked with the -x option. Similarly, the -Etool: file option allows the executable invoked at the compilation stage tool to be specified as file. This gives the tcc user very direct access to the compilation tools.


3.3. Input File Types

This compilation path may be joined at any point, and terminated at any point. The latter possibility is discussed below. For the former, tcc determines, for each input file it is given, to which of the file types it knows (C source, target independent TDF, etc.) this file belongs. This determines where in the compilation path described this file will start. The method used to determine the type of a file is the normal filename suffix convention:

  • files ending in .c are understood to be C source files,

  • files ending in .j are understood to be target independent TDF capsules,

  • files ending in .t are understood to be target dependent TDF capsules,

  • files ending in .s are understood to be assembly source files,

  • files ending in .o are understood to be binary object files,

  • files whose type cannot otherwise be determined are assumed to be binary object files,

(for a complete list see 7.1). Thus, for example, we speak of ".j files" as a shorthand for "target independent TDF capsules". Each file type recognised by tcc is assigned an identifying letter. For convenience, this corresponds to the suffix identifying the file type (c for C source files, j for target independent TDF capsules etc.).

There is an alternative method of specifying input files, by means of the -Stype, file, ... command-line option. This specifies that the file file should be treated as an input file of the type corresponding to the letter type, regardless of its actual suffix. Thus, for example, -Sc, file specifies that file should be regarded as a C source (or .c) file.


3.4. Intermediate and Output Files

During the compilation, tcc makes up names for the output files of each of the compilation phases. These names are based on the input file name, but with the input file suffix replaced by the output file suffix (unless the -make_up_names command-line option is given, in which case the intermediate files are given names of the form _tccnnnn.x, where nnnn is a number which is incremented for each intermediate file produced, and x is the suffix corresponding to the output file type). Thus if the input file file.c is given, this will be transformed into file.j by the producer, which in turn will be transformed into file.t by the TDF linker, and so on. The system linker output file name cannot be deduced in the same way since it is the result of linking a number of .o files. By default, as with cc, this file is called a.out.

For most purposes these intermediate files are not required to be preserved; if we are compiling a single C source file to an executable, then the only output file we are interested in is the executable, not the intermediate files created during the compilation process. For this reason tcc creates a temporary directory in which to put these intermediate files, and removes this directory when the compilation is complete. All intermediate files are put into this temporary directory except:

  • those which are an end product of the compilation (such as the executable),

  • those which are explicitly ordered to be preserved by means of command-line options,

  • binary object files, when more than one such file is produced (this is for compatibility with cc).

tcc can be made to preserve intermediate files of various types by means of the -Ptype... command-line option, which specifies a list of letters corresponding to the file types to be preserved. Thus for example -Pjt specifies that all TDF capsules produced, whether target independent or target dependent, (i.e. all .j and .t files) should be preserved. The special form -Pa specifies that all intermediate files should be preserved. It is also possible to specify that a certain file type should not be preserved by preceding the corresponding letter by - in the -P option. The only really useful application of this is to use -P-o to cancel the cc convention on preserving binary object files mentioned above.
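
For example (using a hypothetical source file prog.c), the following command should leave the intermediate TDF capsules alongside the final executable:

	> tcc -Pjt prog.c

Here prog.j and prog.t would be kept in the current working directory (or in the directory given by -work), while the assembly source file is still removed with the temporary directory.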

By default, all preserved files are stored in the current working directory. However the -work dir command-line option specifies that they should be stored in the directory dir.

The compilation can also be halted at any stage. The -Ftype option to tcc tells it to stop the compilation after creating the files of the type corresponding to the letter type. Because any files of this type which are produced will be an end product of the compilation, they will automatically be preserved. For example, -Fo halts the compilation after the creation of the binary object, or .o, files (i.e. just before the system linking), and preserves all such files produced. A number of other tcc options are equivalent to options of the form -Ftype:

  • -i is equivalent to -Fj (i.e. just apply the producer),

  • -S is equivalent to -Fs (cc compatibility),

  • -c is equivalent to -Fo (cc compatibility).

If more than one -F option (including the equivalent options just listed) is given, then tcc issues a warning. The stage coming first in the compilation path takes priority.

If the compilation has precisely one end product output file, then the name of this file can be specified to be file by means of the -o file command-line option. If a -o file option is given when there is more than one end product, then the first such file produced will be called file, and all such files produced subsequently will cause tcc to issue a warning.
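
As an illustration (file names hypothetical), the compilation can be halted after the producer and the resulting capsule named explicitly:

	> tcc -Fj -o prog.j prog.c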

FIGURE 4. Full tcc Compilation Path


3.5. Other Compilation Paths

So far we have been discussing the main tcc compilation path from C source to executable. This is however only part of the picture. The full complexity (almost) of all the possible compilation paths supported by tcc is shown in Fig. 4. This differs from Fig. 3 in that it only shows the left hand, or program, half of the main compilation diagram. The solid arrows show the default compilation paths; the shaded arrows are only followed if tcc is so instructed by means of command-line options. Let us consider those paths in this diagram which have not so far been mentioned.

3.5.1. Preprocessing

The first paths to be considered involve preprocessed C source files. These form a distinct file type which tcc recognises by means of the .i file suffix. Input .i files are treated in exactly the same way as .c files; that is, they are fed into the producer.

tcc can be made to preprocess the C source files it is given by means of the -P and -E options. If the -P option is given then each .c file is transformed into a corresponding .i file by the TDF C preprocessor, tdfcpp . If the -E option is given then the output of tdfcpp is sent instead to the standard output. In both cases the compilation halts after the preprocessor has been applied. Preprocessing is discussed further in section 5.6.

3.5.2. TDF Archives

The second new file type introduced in Fig. 4 is the TDF archive. This is recognised by tcc by means of the .ta file suffix. Basically a TDF archive is a set of target independent TDF capsules (this is slightly simplified, see section 5.2.3 for more details). Any input TDF archives are automatically split into their constituent target independent capsules. These then join the main compilation path in the normal way.

In order to create a TDF archive, tcc must be given the -prod command-line option. It will combine all the target independent TDF capsules it has into an archive, and the compilation will then halt. By default this archive is called a.ta, but another name may be specified using the -o option.
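
For example (file names hypothetical), an archive might be built from two C sources and then used as an input file in a later compilation:

	> tcc -prod -o lib.ta one.c two.c
	> tcc main.c lib.ta

In the second command the archive would be split into its constituent capsules, which then follow the normal compilation path.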

The routines for splitting and building TDF archives are built into tcc, and are not implemented by a separate compilation tool (in particular, TDF archives are not ar archives). Really TDF archives are a tcc-specific construction; they are not part of TDF proper.

3.5.3. TDF Notation

TDF has the form of an abstract syntax tree which is encoded as a series of bits. In order to examine the contents of a TDF capsule it is necessary to translate it into an equivalent human readable form. Two tools are provided which do this. The TDF pretty printer, disp, translates TDF into text, whereas the TDF notation compiler, tnc, both translates TDF to text and text to TDF. The two textual forms of TDF are incompatible - disp output cannot be used as tnc input. disp is in many ways the more sophisticated decoder - it understands the TDF extensions used to handle diagnostics, for example - but it does not handle the text to TDF translation which tnc does. By default tnc is a text to TDF translator, it needs to be passed the -p flag in order to translate TDF into text. We refer to the textual form of TDF supported by tnc as TDF notation.

By default, tcc uses disp. If it is given the -disp command-line option then all target independent TDF capsules (.j files) are transformed into text using disp . The -disp_t option causes all target dependent TDF capsules (.t files) to be transformed into text. In both cases the output files have a .p suffix, and the compilation halts after they are produced.

In order for tnc to be used, the -Ytnc flag should be passed to tcc. In this case the -disp and the -disp_t option cause, not disp, but tnc -p, to be invoked. But this flag also causes tcc to recognise files with a .p suffix as TDF notation source files. These are translated by tnc into target independent TDF capsules, which join the main compilation path in the normal way.
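
For example (capsule name hypothetical), a capsule could be converted to TDF notation and then recompiled from that notation:

	> tcc -Ytnc -disp prog.j
	> tcc -Ytnc -Fj -o copy.j prog.p

The first command should produce prog.p; the second translates the notation back into a target independent capsule using tnc.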

Similarly if the -Ypl_tdf flag is passed to tcc then it recognises files with a .pl suffix as PL_TDF source files. These are translated by the PL_TDF compiler, pl, into target independent TDF capsules.

disp and tnc are further discussed in section 5.7.

3.5.4. Merging TDF Capsules

The final unexplored path in Fig. 4 is the ability to combine all the target independent TDF capsules into a single capsule. This is specified by means of the -M command-line option to tcc. The combination of these capsules is performed by the TDF linker, tld. Whereas in the main compilation path tld is used to combine a single target independent TDF capsule with the TDF libraries to form a target dependent TDF capsule, in this case it is used to combine several target independent capsules into a single target independent capsule. By default the combined capsule is called a.j. The compilation will continue after the combination phase, with the resultant capsule rejoining the main compilation path. This merging operation is further discussed in section 5.2.2.
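
For example (file names hypothetical), several previously produced capsules might be merged and the compilation continued to an executable:

	> tcc -M one.j two.j three.j

The merged capsule, a.j by default, then rejoins the main compilation path in the usual way.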

The only unresolved issue in this case is, if the -M option is given, to what .j files do the -Fj and the -Pj options refer? In fact, tcc takes them to refer to the merged TDF capsule rather than the capsules which are merged to form it. The -Pa option, however, will cause both sets of capsules to be preserved.

To summarise, tcc has an extra three file types, and an extra three compilation tools (not including the TDF archive creating and splitting routines which are built into tcc). These are:

  • files ending in .i are understood to be preprocessed C source files,

  • files ending in .ta are understood to be TDF archives,

  • files ending in .p are understood to be TDF notation source files,

and:

	   TOOL					INPUT		OUTPUT
	6. C preprocessor (tdfcpp) 	c	C source	preproc. C source
	7a. pretty printer (disp) 	d	TDF capsule	TDF notation
	7b. reverse notation (tnc -p)	d	TDF capsule	TDF notation
	8. notation compiler (tnc) 	d	TDF notation	TDF capsule
(see 7.1 and 7.2 for complete lists).


3.6. Finding out what tcc is doing

With so many different file types and alternative compilation paths, it is often useful to be able to keep track of what tcc is doing. There are several command-line options which do this. The simplest is -v which specifies that tcc should print each command in the compilation process on the standard output before it is executed. The -vb option is similar, but only causes the name of each input file to be printed as it is processed. Finally the -dry option specifies that the commands should be printed (as with -v) but not actually executed. This can be used to experiment with tcc to find out what it would do in various circumstances.
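
For instance (source file name hypothetical), the commands tcc would issue for a given compilation can be inspected without running them:

	> tcc -dry -Yansi prog.c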

Occasionally an unclear error message may be printed by one of the compilation tools. In this case the -show_errors option to tcc might be useful. It causes tcc to print the command it was executing when the error occurred. By default, if an error occurs during the construction of an output file, the file is removed by tcc. It can however be preserved for examination using the -keep_errors option. This applies not only to normal errors, but also to exceptional errors such as the user interrupting tcc by pressing ^C, or one of the compilation tools crashing. In the latter case, tcc will also remove any core file produced, unless the -keep_errors option is specified.

For purposes of configuration control, the -version flag will cause tcc to print its version number. This will typically be of the form:

	tcc: Version: 4.0, Revision: 1.5, Machine: hp
giving the version and revision number, plus the target machine identifier. The -V flag will also cause each compilation tool to print its version number (if appropriate) as it is invoked.




4.1 - The Environment Search Path
4.2 - The Default Environment: Configuring tcc
4.3 - Using Environments to Specify APIs
4.4 - Using Environments to Implement tcc Options
4.5 - User-Defined Environments

4. tcc Environments

In addition to command-line options, there is a second method of specifying tcc's behaviour, namely tcc environments. An environment is just a file consisting of lines of the form:

	*IDENTIFIER "text"
where * stands for one of the environment prefixes, +, < and > (in fact ? is also a valid environment prefix. It is used to query the values represented by environmental identifiers. If tcc is invoked with the -Ystatus command-line option it will print the values of all the environmental identifiers it recognises). Any line in the environment not beginning with one of these characters is ignored. IDENTIFIER will be one of the environmental identifiers recognised by tcc, the environment prefix will tell tcc how to modify the value given by this identifier, and text what to modify it by.

The simplest environmental identifiers are those which are used to pass flags to tcc and the various components of the compilation system. The line:

	+FLAG "text"
causes text to be interpreted by tcc as if it was a command-line option. Similarly:

	+FLAG_TDFC "text"
causes text to be passed as an option to tdfc. There are similar environmental identifiers for each of the components of the compilation system (see 7.6 for a complete list).

The second class of environmental identifiers are those corresponding to simple string variables. Only the form:

	+IDENTIFIER "text"
is allowed. This will set the corresponding variable to text. The permitted environmental identifiers and the corresponding variables are:

	ENVDIR		the default environments directory (see section 4.1),
	MACHINE		the target machine type (see section 4.2),
	PORTABILITY	the producer portability table (see section 5.1.3),
	TEMP		the default temporary directory (see section 6.4),
	VERSION		the target machine version (Mips only, see section 5.3.4).
The final class of environmental identifiers are those corresponding to lists of strings. Firstly text is transformed into a list of strings, b say, by splitting at any spaces, then the list corresponding to the identifier, a say, is modified by this value. How this modification is done depends on the environment prefix:

  • if the prefix is + then a = b,

  • if the prefix is > then a = a + b,

  • if the prefix is < then a = b + a,

where + denotes concatenation of lists. The lists represented in this way include those giving the pathnames of the executables of the various compilation components (plus default flags). These are given by the identifiers TDFC, TLD, etc. (see 7.6 for a complete list). The other lists can be divided between those affecting the producer, the TDF linker, and the system linker respectively (see sections 5.1, 5.2 and 5.5 for more details):

	INCL		list of default producer include file directories (as -I options),
	STARTUP		list of default producer start-up files (as -f options),
	STARTUP_DIR	list of default producer start-up directories (as -I options),
	
	LIB		list of default TDF libraries (as -l options),
	LINK		list of default TDF library directories (as -L options),
	
	CRT0		list of default initial .o files,
	CRT1		second list of default initial .o files,
	CRTN		list of default final .o files,
	SYS_LIB		list of default system libraries (as -l options),
	SYS_LIBC	list of default standard system libraries (as -l options),
	SYS_LINK	list of default system library directories (as -L options).
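
As an illustration, a hypothetical environment extending the default include, TDF library and system library settings (all directory and library names are invented for the example) might contain lines such as:

	>INCL "-I/usr/local/myapi/include"
	>LINK "-L/usr/local/myapi/tdflib"
	>LIB "-lmyapi"
	>SYS_LINK "-L/usr/local/myapi/lib"
	>SYS_LIB "-lmyapi"

Each line appends to the corresponding list, so the standard settings remain in force as well.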

4.1. The Environment Search Path

The command-line option -Yenv tells tcc to read the environment env. If env is not a full pathname then it is searched for along the environment search path. This consists of a colon-separated list of directories, the initial segment of which is given by the system variable TCCENV (we use the term "system variable" to describe TCCENV rather than the more normal "environmental variable" to avoid confusion with tcc environments) if this is defined, and the final segment of which consists of the default environments directory, which is built into tcc at compile-time, and the current working directory. The option -show_env causes tcc to print this environment search path. If the environment cannot be found, then a warning is issued.


4.2. The Default Environment: Configuring tcc

The most important environment is the default environment, which is built into tcc at compile-time. This does not mean that the default environment is read every time that tcc is invoked, but rather that it is read once (at compile-time) to determine the default configuration of tcc.

The information included in the default environment includes: the pathnames and default flags of the various components of the compilation system; the target machine type; the default temporary directory; the specification of the target independent headers, TDF libraries and system libraries comprising the default API (which is always ANSI); the variables specifying the default compilation mode; the default environments directory (mentioned above).

The target machine type, defined by the MACHINE environmental identifier, actually plays a very minor role in dealing with the very real target dependency problems in tcc. These problems are caused by the fact that tcc is designed to work on many different target machines. All the information on where the executables, include files, TDF libraries etc. are located on a particular machine is stored in the standard environments, and in particular, the default environment. The interaction with the system assembler and, more importantly, the system linker is also expressed using environments. The only target dependencies for which the machine type needs to be known are genuine aberrations. For example, the TDF to Mips translator and the Mips assembler are completely different from most other translator-assembler pairs in that they pass two files, a .G and a .T file, between them, rather than the more normal single .s file. Thus it is important for tcc to know that the machine type is Mips in this case (see section 5.3.4 for more details).


4.3. Using Environments to Specify APIs

Another important use of environments concerns their use in specifying APIs. As was mentioned above, an API may be considered to have three components: the target independent headers, giving an abstract description of the API to the producer, and the TDF libraries and system libraries, giving the details of the API implementation to the installer. Environments are an ideal medium for expressing this information. The INCL environmental identifier can be used to specify the location of the target independent headers, LIB and LINK the location of the TDF libraries, and SYS_LIB and SYS_LINK the location of the system libraries. Moreover, all this information can be canned into a single command-line option.

A number of standard APIs have been described as target independent headers and are provided with the TDF system. A tcc environment is provided for each of these APIs (for example, ansi, posix, xpg3 - see 7.5 for a complete list, also see section 6.3). There is an important distinction to be made between base APIs (for example, POSIX) and extension APIs (for example, X11 Release 5). The command-line option -Yposix sets the API to be precisely POSIX, whereas the option -Yx5_lib sets it to the existing API plus the X11 Release 5 basic X library. This is done by using +INCL etc. in the posix environment to set the various variables corresponding to these environmental identifiers to precisely the values for POSIX, but <INCL etc. in the x5_lib environment to extend these variables by the values for X11 Release 5. Thus, to specify the API POSIX plus X11 Release 5, the command-line options -Yposix -Yx5_lib are required (in that order).

All the standard API environments provided also contain lines which set, or modify, the INFO environmental identifier. This contains textual information on the API, including API names and version numbers. This information can be printed by invoking tcc with the -info command-line option. For example, the command-line options:

	> tcc -info -Yposix -Yx5_lib
cause the message:

	tcc: API is X11 Release 5 Xlib plus POSIX (1003.1).
to be printed.

As was mentioned above, the default API is ANSI. Thus invoking tcc without specifying an API environment is equivalent to giving the -Yansi command-line option. On the basis that, when it comes to portability, explicit decisions are better than implicit ones, the use of -Yansi is recommended.


4.4. Using Environments to Implement tcc Options

Another use to which environments are put is to implement certain tcc command-line options. In particular, some options require different actions depending on the target machine. It is far easier to implement these by means of an environment, which can be defined differently on each target machine, rather than by trying to build all the alternative definitions into tcc.

An important example is the -g flag, which causes the generation of information for symbolic debugging. Depending on the target machine, different flags may need to be passed to the assembler and system linker when -g is specified, or the default .o files and libraries used by the linker may need to be changed. For this reason tcc uses a standard environment, tcc_diag, to implement the -g option.

For a complete list of those options which are implemented by means of environments, see 7.7. If the given option is not supported on a particular target machine, then the corresponding environment will not exist, and tcc will issue a warning to that effect.


4.5. User-Defined Environments

The tcc user can also set up and use environments. It is anticipated that this facility will be used mainly to group a number of tcc command-line options into an environment using the FLAG environmental identifier and to set up environments corresponding to user-defined APIs.




5.1 - The C to TDF Producer
5.1.1 - Include File Directories
5.1.2 - Start-up Files and End-up Files
5.1.3 - Compilation Modes and Portability Tables
5.1.4 - Description of Compilation Modes
5.2 - The TDF Linker
5.2.1 - The Linker and TDF Libraries
5.2.2 - Combining TDF Capsules
5.2.3 - Constructing TDF Libraries
5.2.4 - Useful tld Options
5.3 - The TDF to Target Translator
5.3.1 - tcc Options Affecting the Translator
5.3.2 - Useful trans Options
5.3.3 - Optimisation in TDF Translators
5.3.4 - The Mips Translator and Assembler
5.4 - The System Assembler
5.5 - The System Linker
5.5.1 - The System Linker and tcc Environments
5.5.2 - The Effect of Command-Line Options on the System Linker
5.6 - The C Preprocessor
5.7 - The TDF Pretty Printer
5.8 - The TDF Archiver

5. The Components of the TDF System


5.1. The C to TDF Producer

We now turn to the individual components of the TDF system. Most of the command-line options to tcc so far discussed have been concerned with controlling the behaviour of tcc itself. Another, even more important, class of options concern the ways in which the behaviour of the components can be specified. The -Wtool, opt, ... command-line option for communicating directly with the components has already been mentioned. This however is not recommended for normal purposes; the other tcc command-line options give a more controlled access to the components.

The first component to be considered is the C --> TDF producer, tdfc. This translates an input C source file (a .c file or a .i file) into a output target independent TDF capsule (a .j file).

5.1.1. Include File Directories

The most important producer options are those which tell it where to search for files included using a #include preprocessing directive. As with cc, the user can specify a directory, dir, to search for these files using the -Idir command-line option. However, unlike cc, the producer does not search /usr/include as default. Instead, the default search directories are those containing the target independent headers for the API selected, as given by the INCL identifier in the environment describing the API. In addition, the directories to search for the default start-up files (see below), as given by the STARTUP_DIR environmental identifier, are also passed to the producer.

If the -H option is passed to tcc then it will cause the producer to print the name of each file it opens. This is often helpful if a multiplicity of -I options leads to confusion.
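
For example (directory and file names hypothetical), an extra include directory can be added to the API's default directories, with -H used to confirm which headers are actually being opened:

	> tcc -Yposix -I/usr/local/myproject/include -H prog.c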

5.1.2. Start-up Files and End-up Files

The producer has a useful feature of start-up and end-up files. The tcc command-line option -ffile is equivalent to inserting the line:

	#include "file"
at the start of each input C source file. Similarly -efile is equivalent to inserting this line at the end of each such file. These included files are searched for along the directories specified by the -I options in the normal manner.

tcc generates a producer start-up file, called tcc_startup.h , in order to implement certain command-line options. The cc-compatible options:

	-Dname
	-Dname=value
	-Uname
	-Astr
are translated into the lines:

	#define name 1
	#define name value
	#undef name
	#assert str
respectively. tcc does not check that these lines are valid C preprocessing directives since this will be done by the producer. So any producer error message referring to tcc_startup.h is likely actually to refer to the -D, -U and -A command-line options. In case of difficulties, tcc_startup.h can be preserved for closer examination using the -Ph option to tcc.

There may be default start-up options specified by the STARTUP environmental identifier. The purpose of these is discussed below. The order in which the start-up options are passed to the producer is: firstly, the default start-up options; secondly, the start-up option for the tcc built-in start-up file, tcc_startup.h; thirdly, any command-line start-up options. (For technical reasons, a -no_startup_options command-line option is provided which causes no start-up or end-up options to be passed to tdfc. This is not likely to prove useful in normal use.)

5.1.3. Compilation Modes and Portability Tables

We have already described how one aspect of the compilation environment, the API, is specified to the producer by means of the default -I options. But another aspect, the control of the syntax and portability checks applied by the producer, can also be specified in a fairly precise manner.

The producer accepts a number of #pragma statements which tell it which portability checks to apply and which syntactic extensions to ISO/ANSI C to allow (see [3] and [2]). These can be inserted into the main C source, but the ideal place for them is in a start-up file. This is the purpose of the STARTUP environmental identifier, to give a list of default start-up files containing #pragma statements which specify the default behaviour of the producer.

In fact not all the information the producer requires is obtained through start-up files. The basic information on the minimum sizes which can be assumed for the basic integer types is passed to the producer by means of another type of file, the portability table. This is specified by means of the PORTABILITY environmental identifier. There are in fact only two portability tables provided, Ansi_Max.pf, which specifies the minimum sizes permitted by the ISO/ANSI standard, and Common.pf, which specifies the minimum sizes found on most 32-bit machines. The main difference between the two is that in ISO/ANSI it is stated that int is only guaranteed to have 16 bits, whereas on 32-bit machines it has at least 32 bits.

A number of tcc command-line options are concerned with specifying the compilation environment to the producer. The main option for setting the compilation mode is -Xmode. A number of different modes are available:

  • -Xs specifies strict ISO/ANSI C with extra portability checks,

  • -Xp specifies strict ISO/ANSI C with minimal portability checks,

  • -Xc specifies strict ISO/ANSI C with no extra portability checks,

  • -Xa specifies ISO/ANSI C with various syntactic extensions,

  • -Xt specifies "traditional" C.

The default is -Xc. For a precise description of each of these modes, see [3] (tchk is just tcc in disguise). In addition the command-line options -not_ansi and -nepc can be used to modify the basic compilation modes. -not_ansi specifies that certain non-ANSI syntactic constructions should be allowed. -nepc switches off the producer's extra portability checks (it also suppresses certain constant overflow checks in the TDF translators). All these options are implemented by start-up files.

The portability table to be used is specified separately by means of an environment. The default is the ISO/ANSI portability table, but -Y32bit or -Ycommon can be used to specify 32-bit checking. -Y16bit will restore the portability table to the default. Note that all checks involving the portability table are switched off by the -nepc command-line option, so in this case no portability table is specified to the producer.
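
For example (source file name hypothetical), strict checking against the common 32-bit assumptions might be requested with:

	> tcc -Xs -Ycommon prog.c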

5.1.4. Description of Compilation Modes

Let us briefly describe the compilation modes introduced in the previous section. The following tables describe some of the main features of each mode. The list of pre-defined macros is complete (other than the built-in macros, __FILE__, __LINE__, __DATE__ and __TIME__). Because the producer is designed to be target independent it does not define any of the machine name macros which are built into cc. The cc-compatible option, -A-, which is meant to cause all pre-defined macros (other than those beginning with __) to be undefined, and all pre-assertions to be unasserted, is ignored by tcc. In the standard compilation modes there are no such macros and no such assertions. The integer promotion rules are either the arithmetic rules specified by ISO/ANSI or the "traditional" signed promotion rules. The precise set of syntactic relaxations to the ISO/ANSI standard allowed by each mode varies. For a complete list see [3]. The -not_ansi command-line option can be used to allow further relaxations. The extra prototype checks cause the producer to construct a prototype for procedures which are actually traditionally defined. This is very useful for getting prototype-like checking without having to use prototypes in function definitions. This, and other portability checks, are switched off by the -nepc option. Finally, the additional checks are lint-like checks which are useful in detecting possible portability problems.

	compilation mode: -Xs
	description: strict ISO/ANSI with additional checks
	pre-defined macros: __STDC__ = 1, __ANDF__ = 1, __TenDRA__ = 1
	integer promotions: ISO/ANSI
	syntactic relaxations: no
	extra prototype checks: yes
	additional checks: yes
	
	compilation mode: -Xp
	description: strict ISO/ANSI with minimal extra checks
	pre-defined macros: __STDC__ = 1, __ANDF__ = 1, __TenDRA__ = 1
	integer promotions: ISO/ANSI
	syntactic relaxations: no
	extra prototype checks: yes
	additional checks: some
	
	compilation mode: -Xc
	description: strict ISO/ANSI with no extra checks
	pre-defined macros: __STDC__ = 1, __ANDF__ = 1, __TenDRA__ = 1
	integer promotions: ISO/ANSI
	syntactic relaxations: no
	extra prototype checks: no
	additional checks: no
	
	compilation mode: -Xa (default)
	description: lenient ISO/ANSI with no extra checks
	pre-defined macros: __STDC__ = 1, __ANDF__ = 1, __TenDRA__ = 1
	integer promotions: ISO/ANSI
	syntactic relaxations: yes
	extra prototype checks: no
	additional checks: no
	
	compilation mode: -Xt
	description: traditional C
	pre-defined macros: __STDC__ = 0, __ANDF__ = 1, __TenDRA__ = 1
	integer promotions: signed
	syntactic relaxations: yes
	extra prototype checks: no
	additional checks: no
The choice of compilation mode very much depends on the level of checking required. -Xa is suitable for general compilation, and -Xc, -Xp and -Xs for serious program checking (although some may find the latter Xs-ive). -Xt is provided for cc compatibility only; its use is discouraged.

The recommended method of proceeding is to define your own compilation mode. In this way any choices about syntax and portability checking are made into conscious decisions. One still needs to select a basic mode to form the basis for this user-defined mode. -Xc is probably best; it is a well-defined mode (the definition being the ISO/ANSI standard) and so forms a suitable baseline. Suppose that, on examining the program to be compiled, we decide that we need to do the following:

  • allow the #ident directive,

  • allow through unknown escape sequences with a warning,

  • warn of uses of undeclared procedures,

  • warn of incorrect uses of simple return statements.

The first two of these are syntactic in nature. The third is more interesting. ISO/ANSI says that any undeclared procedures are assumed to return int. However for strict API checking we really need to know about these undeclared procedures, because they may be library routines which are not part of the declared API. The fourth condition is a simple lint-like check that no procedure which is declared to return a value contains a simple return statement (without a return value).

To tell the producer about these options, it is necessary to have them included in every source file. The easiest way of doing this is by using a start-up file, check.h say, containing the lines:

	#pragma TenDRA begin
	#pragma TenDRA directive ident allow
	#pragma TenDRA unknown escape warning
	#pragma TenDRA implicit function declaration warning
	#pragma TenDRA incompatible void return warning
The second, third, fourth and fifth lines correspond to the statements above (see [3]). The first line indicates that this file is defining a new checking scope.

Once the compilation mode has been described in this way, it needs to be specified to tcc in the form of the command-line options -Xc -fcheck.h.


5.2. The TDF Linker

The next component of the system to be considered is the TDF linker, tld. This is used to combine several TDF capsules or TDF libraries into a single TDF capsule. It is put to two distinct purposes in the tcc compilation scheme. Firstly, in the main compilation path, it is used in the installer half to combine a target independent TDF capsule (a .j file) with the TDF libraries representing the API implementation on the target machine, to form a target dependent TDF capsule (a .t file). Secondly, if the -M option is given to tcc, it is used in the producer half to combine all the target independent TDF capsules (.j files) into a single target independent capsule. Let us consider these two cases separately.

5.2.1. The Linker and TDF Libraries

In the main TDF linking phase, combining target independent capsules with TDF libraries to form target dependent capsules, two pieces of information need to be specified to tld. Firstly, the TDF libraries to be linked with, and, secondly, the directories to search for these libraries. For standard APIs, the location of the TDF libraries describing the API implementation is given in the environment corresponding to the API. The LIB identifier gives the names of the TDF libraries, and the LINK identifier the directories to be searched for these libraries. The user can also specify libraries and library directories by means of command-line options to tcc. The option -jstr indicates that the TDF library str.tl should be used for linking (.tl is the standard suffix for TDF libraries). The option -Jdir indicates that the directory dir should be added to the TDF library search path. Libraries and directories specified by command-line options are searched before those given in the API environment.

There is a potential source of confusion in that the tld options specifying the TDF library str.tl and the library directory dir are respectively -lstr and -Ldir. tcc automatically translates command-line -j options into tld -l options, and command-line -J options into tld -L options. However the LIB and LINK identifiers are actually lists of tld options, so they should use the -l and -L forms.
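
For example, a command line of the form (the library and directory names here are purely illustrative):

	> tcc -Yposix -Jtdflibs -jextra a.j
causes tcc to pass the options -Ltdflibs and -lextra to tld, ahead of the directories and libraries given in the posix API environment.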

5.2.2. Combining TDF Capsules

The second use of tld is to combine all the .j files in the producer half of the compilation into a single capsule. This is specified by means of the -M ("merge") command-line option to tcc described in section 3.5.4. By default, the resultant capsule is called a.j. If the -M option is used to merge all the .j files from a very large program, the resultant TDF capsule can in turn be very large. It may in fact become too large for the installer to handle. Interestingly, it is often the system assembler rather than the TDF translator which has problems.

The -MA ("merge all") option is similar to -M, but will in addition "hide" all the external tag and token names in the resultant capsule, except for the token names required for linking with the TDF libraries and the tag names required for linking with the system libraries (plus main). In effect, all the names which are internal to the program are removed. This means that the -MA option should only be used to merge complete programs. For details on how to use tld for more selective name hiding, see below.

5.2.3. Constructing TDF Libraries

There is a final use of the TDF linker supported by tcc which has not so far been mentioned, namely the construction of TDF libraries. As has been mentioned, TDF libraries are an indexed set of TDF capsules. tld, in addition to its linking mode, also has routines for constructing and manipulating TDF libraries. The library construction mode is supported by tcc by means of the makelib environment. This tells tcc to merge all the .j files and then to stop. But it also passes an option to tld which causes the merged file to be, not a TDF capsule, but a TDF library. Thus the command-line options:

	> tcc -Ymakelib -o a.tl a.j b.j c.j
cause the TDF capsules a.j, b.j and c.j to be combined into a TDF library, a.tl.

5.2.4. Useful tld Options

tld has a number of options which may be useful to the general user. The -w option, which causes warnings to be printed about undefined tags and tokens, can often provide interesting information; other options are concerned with the hiding of tag and token names. These options can be passed directly to tld by means of the -WL, opt, ... command-line option to tcc. The tld options are fully documented on the appropriate manual page.
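
For example, to see warnings about any undefined tags and tokens while installing a capsule, a command line of the form:

	> tcc -WL,-w a.j
might be used; the -w option is passed straight through to tld.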


5.3. The TDF to Target Translator

The next compilation tool to be considered is the TDF translator. This translates an input target dependent TDF capsule (.t file) into an assembly source (.s) file for the appropriate target machine. This is the main code generation phase of the compilation process; most of the optimisation of code which occurs happens in the translator (some machines also have optimising assemblers).

Although referred to by the generic name of trans, the TDF translators for different target machines in fact have different names. The main division between translators is in the supported processor. However, operating system dependent features such as the precise form of the assembler input, and the symbolic debugger to be supported, may also cause different versions of the basic translator to be required for different machines of the same processor group. The current generation of translators includes the following:

  1. The TDF --> i386/i486 translator is called trans386. This exists in two versions, one running on SVR4.2 and one on SCO. The two versions differ primarily in the symbolic debugger they support. trans386 has also been ported to several other i386-based systems, including MS-DOS.

  2. The TDF --> Sparc (Version 7) translator is called sparctrans. This again exists in two versions, one running on SVR4.2 and one on SunOS and Solaris 1. These versions again differ primarily in the symbolic debugger supported.

  3. The TDF --> Mips (R2000/R3000, little-endian) translator is called mipstrans. This differs from the other translators in that instead of outputting a single .s file, it outputs two files, a binasm file (with a .G suffix) and a symbol table file (with a .T suffix). This is discussed in more detail below. mipstrans runs on Ultrix, but again has two versions. One runs on Ultrix 4.1 and earlier, the other on 4.2 and later. This is necessary because of a change in the format of the binasm file between these two releases.

  4. The TDF --> 68030/68040 translator also exists in two versions. One runs on HP-UX and is called hptrans; the other runs on NeXTStep and is called nexttrans (however the NeXT is not a supported platform because of its lack of standard API coverage). These differ, not only in the symbolic debugger supported, but also in the format of the assembly source output.

This list is not intended to be definitive. Development work is proceeding on new translators all the time. Existing translators are also updated to support new operating systems releases when this is necessary.

5.3.1. tcc Options Affecting the Translator

A number of tcc command-line options are aimed at controlling the behaviour of the TDF translator. The cc-compatible option -Kitem, ... specifies the behaviour indicated by the argument item. Possible values for item, together with the behaviour they specify, include:

	PIC		causes position independent code to be produced,
	ieee		causes strict conformance to the IEEE floating point standard,
	noieee		allows non-strict conformance to the IEEE standard,
	frame		specifies that a frame pointer should always be used,
	no_frame		specifies that frame pointers need not always be used,
	i386		causes code generation to be tuned for the i386 processor,
	i486		causes code generation to be tuned for the i486 processor,
	P5		causes code generation to be tuned for the P5 processor.
Obviously not all of these options are appropriate for all versions of trans. Therefore all -K options are implemented by means of environments which translate item into the appropriate trans options. If a certain item is not applicable on a particular target machine then the corresponding environment will not exist, and tcc will print a warning to this effect.

The cc-compatible -Zstr option is similarly implemented by means of environments. On those machines which support this option it can be used to specify the packing of structures. If str is p1 then they are tightly packed, with no padding. Values of p2 and p4 specify padding to 2 or 4 byte boundaries when appropriate.

Finally, the tcc command-line option -wsl causes the translator to make all string literals writable. Again, this is implemented by an environment. For many machines this behaviour is the default; for others it requires an option to be passed to the translator.
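
As an illustration, on a suitable i486-based machine a command line of the form:

	> tcc -Ki486,PIC -Zp4 -wsl -c a.c
might be used to tune code generation for the i486, produce position independent code, pad structures to 4 byte boundaries and make string literals writable; for any item which is not applicable on the target, tcc prints a warning as described above.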

5.3.2. Useful trans Options

For further specifying the behaviour of trans it may be necessary to pass options to it directly. The command-line options implemented by trans vary from machine to machine. The following options are however common to all translators and may prove useful:

	-E		switches off certain constant overflow checks,
	-X		switches off most optimisations,
	-Z		prints the version number(s) of the input capsule.
These options may be passed directly to trans by means of the -Wt, opt, ... command-line option to tcc. The -E option is also automatically invoked when the -nepc command-line option to tcc is used. The manual page for the appropriate version of trans should be consulted for more details on these and other, machine dependent, options.
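
For example, to print the version numbers of the capsules being installed, a command line of the form:

	> tcc -Wt,-Z -c a.c
might be used; the -Z option is passed straight through to trans.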

5.3.3. Optimisation in TDF Translators

As has been mentioned, the TDF translator is the main optimising phase of the TDF compilation scheme. All optimisations are believed to be correct and are switched on by default. Thus the standard cc -O option, which is intended to switch optimisations on, has no effect in tcc except to cancel any previous -g option. If, due to a translator bug, a certain piece of code is being optimised incorrectly, then the optimisations can be switched off by means of the -Wt, -X option mentioned above. However this should not normally be necessary.

5.3.4. The Mips Translator and Assembler

As has been mentioned, the TDF --> Mips translator, mipstrans, is genuinely exceptional in that it outputs a pair of files for each input TDF capsule, rather than a single assembly source file. The general scheme is shown in Fig. 5.

FIGURE 5. Mips Compilation Path

mipstrans translates each input target dependent TDF capsule, a.t, into a binasm source file, a.G, and an assembler symbol table file, a.T. It may optionally output an assembly source file, a.s, which combines all the information from a.G with part of the information from a.T (it is the remainder of the information in a.T which is the reason why this scheme has to be adopted). The .s file is only produced if tcc is explicitly told to preserve .s files by means of one of the command-line options, -Ps, -Pa, -Fs or -S. The two main mipstrans output files, a.G and a.T, are then transformed by the auxiliary Mips assembler, as1, into a binary object file, a.o.

Although they can be preserved for examination, the .G and .T files output by mipstrans cannot subsequently be processed using tcc. If a break in compilation is required at this stage, a .s file should be produced, and then entered as a tcc input file in the normal way. The information lost from the symbol table in this process is only important when symbolic debugging information is required. Input .s files are translated into binary object files by the main Mips assembler, as, in the normal way.

So, in addition to the main assembler, which is given by the AS environmental identifier, the location of the auxiliary assembler also needs to be specified to tcc. This is done using the AS1 environmental identifier, which is normally defined in the default environment. There is a further piece of information required for the compilation scheme to operate correctly: mipstrans needs to know the as1 version number. This number can be specified by means of the VERSION environmental identifier.


5.4. The System Assembler

The system assembler is the stage in the tcc compilation path which is likely to be of least interest to normal users. The assembler translates an assembly source (or .s) file into a binary object (or .o) file. (The exception to this is the Mips auxiliary assembler discussed above.) Most assemblers are straight translation phases, although some also offer peephole optimisation and scheduling facilities. No tcc command-line options are directly concerned with the assembler, however options can be passed to it directly by means of the -Wa, opt, ... command-line option.


5.5. The System Linker

The final stage in the main tcc compilation path is the system linking. The system linker, ld, combines all the binary object files with the system libraries to form a final executable image. By default this executable is called a.out, although this can be changed using the -o command-line option to tcc. In terms of the differences between target machines, the system linker is the most complex of the tools which are controlled by tcc. Our discussion can be divided between those aspects of the linker's behaviour which are controlled by tcc environments, and those which are controlled by command-line options.

5.5.1. The System Linker and tcc Environments

The general form of tcc's calls to ld is as follows:

	ld (linker options) -o (output file)
		(initial .o files) (binary object files)
		(final .o files) (default system library directories)
		(default system libraries) (default standard libraries)
The linker may require certain default binary object files to be linked into every executable created. These are divided between the initial .o files, which come before the main list of binary object files, and the final .o files, which come after. For technical reasons, the list of initial .o files is split into two; the first list is given by the CRT0 environmental identifier, and the second by CRT1. The list of final .o files is given by the CRTN environmental identifier.

The information on the default system libraries the linker requires is given by three environmental identifiers. SYS_LINK gives a list of directories to be searched for system libraries. This will exclude /lib and /usr/lib which are usually built into ld. These directories will be given as a list of options of the form -Ldir. The default system libraries are divided into two lists. The environmental identifier SYS_LIBC gives the "standard" library options (usually just -lc), and SYS_LIB gives any other default library options. Both of these are given by lists of options of the form -lstr. This option specifies that the linker should search for the library libstr.a if linking statically, or libstr.so if linking dynamically.

So the main target dependencies affecting the system linker are described in these six environmental identifiers: CRT0, CRT1, CRTN, SYS_LINK, SYS_LIB and SYS_LIBC. For a given machine these will be given once and for all in the default environment. Standard API environments may modify SYS_LINK and SYS_LIB to specify the location of the system libraries containing the API implementation, although at present this has not been done.

5.5.2. The Effect of Command-Line Options on the System Linker

The most important tcc command-line options affecting the system linker are those which specify the use of certain system libraries. The option -lstr indicates that the system libraries libstr.a (or libstr.so) should be searched for. The option -Ldir indicates that the directory dir should be added to the list of directories searched for system libraries. Both these options are position dependent. They are passed to the system linker in exactly the same position relative to the input files as they were given on the command-line. Thus normally -l (and to a lesser extent -L) options should be the final command-line options given.
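
For example, in a command line of the form (the library name is illustrative):

	> tcc -o prog a.o b.o -Lmylibs -lfoo
the -Lmylibs and -lfoo options are passed to ld after the binary object files a.o and b.o, in the same relative position as they were given to tcc.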

The following tcc command-line options are passed directly to ld. A brief description is given of the purpose of each option, however whether or not ld supports this option depends on the target machine. The local ld manual page should be consulted for details.

	-Bstr		sets library type: str can be dynamic or static,
	-G		causes a shared object rather than an executable to be produced,
	-dn		causes dynamic linking to be switched off,
	-dy		causes dynamic linking to be switched on,
	-hstr		causes str to be marked as dynamic in a shared object,
	-s		causes the resultant executable to be stripped,
	-ustr		causes str to be marked as undefined,
	-zstr		specifies error behaviour, depending on str.
The position of any -Bstr options on the command-line is significant. These positions are therefore preserved. The position of the other options is not significant. In addition to these options, the -b command-line option causes the default standard system libraries (i.e. those given by the SYS_LIBC environmental identifier) not to be passed to ld.

Other command-line options may affect the system linker indirectly. For example, the -g option may require different default .o files and system libraries, the precise details of which are target dependent. Such options will be implemented by means of environments which will change the values of the environmental identifiers controlling the linker.


5.6. The C Preprocessor

The TDF C preprocessor, tdfcpp, is invoked only when tcc is passed the -E or -P command-line option, as described in section 3.5.1. These both cause all input .c files to be preprocessed, but in the former case the output is sent to the standard output, whereas in the latter it is sent to the corresponding .i files.
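
For example, a command line of the form:

	> tcc -P a.c b.c
leaves the preprocessed output in the files a.i and b.i, whereas using -E instead would send it to the standard output.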

The TDF system differs from most C compilation systems in that preprocessing is an integral part of the producer, tdfc, rather than a preliminary textual substitution phase. This is because of difficulties involved with trying to perform the preprocessing in a target independent manner. Therefore tdfcpp is merely a modified version of tdfc which halts after the preprocessing phase and prints what it has read. This means that the tdfcpp output, while being equivalent to its input, will not correspond at the textual level to the degree which is normal in C preprocessors.


5.7. The TDF Pretty Printer

The TDF pretty printer, disp, and the TDF notation compiler, tnc, have already been discussed in some detail in section 3.5.3. The TDF decoding command-line options, -disp and -disp_t, cause respectively all .j files and all .t files to be decoded into .p files. This decoding is done using disp by default, and with tnc -p if the -Ytnc command-line option is specified. The -Ytnc option also causes any input .p files to be encoded into .j files by tnc.

The pretty printer, disp, can be used as a useful check that a given .j or .t file is a legal TDF capsule. The TDF decoding routines in the TDF linker and the TDF translator assume that their input is a legal capsule. The pretty printer performs more checks and has better diagnostics for illegal capsules. By default disp only decodes capsule units which belong to "core" TDF. Options to decode other unit types can be passed directly to disp by means of the -Wd, opt, ... command-line option to tcc. The potentially useful disp options include:

	-A		causes all known unit types to be decoded,
	-g		causes diagnostic information units to be decoded,
	-D		causes a binary dump of the capsule to be printed,
	-U		causes link information units to be decoded,
	-V		causes the input not to be rationalised,
	-W		causes a warning to be printed if a token is used before it is declared.
The manual page for disp should be consulted for more details.
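
For example, to decode a capsule together with its diagnostic and linkage information, a command line of the form:

	> tcc -disp -Wd,-g,-U a.j
might be used; the -g and -U options are passed straight through to disp.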

The TDF notation compiler, tnc, is fully documented in [4].


5.8. The TDF Archiver

A TDF archive is a tcc-specific form intended for software distribution. It consists of a set of target independent TDF capsules (.j files) and a set of tcc command-line options. It is intended that a TDF archive can be produced on one machine, and distributed to, and installed on, a number of target machines.

If a TDF archive is given as an input file to tcc (it will be recognised by its .ta suffix), then it is split into its constituent capsules and options. The options are interpreted as if they had been given on the command-line (unless the -WJ, -no_options flag is specified), and the capsules are treated as input files in the normal way. The archive splitting and archive building routines are both built into tcc; there is no separate TDF archiver tool. Options passed to the archiver using -WJ, opt are interpreted by tcc.

In order to specify that a TDF archive should be created, the -prod flag should be used. This specifies that all target independent capsules (.j files) and all options opt given by a tcc option of the form -WI, opt, ... should be combined into a TDF archive. The compilation process halts after producing this archive. By default the TDF archive created is called a.ta, but this can be changed using the -o option. Normally the names of the capsules comprising the archive are inserted into the archive, but this may be suppressed by the use of the -WJ, -no_names option.
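
For example, a command line of the form (the file names are illustrative):

	> tcc -prod -o prog.ta x.j y.j
would combine the target independent capsules x.j and y.j into a TDF archive, prog.ta.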

As an example of the kind of option that might be included in an archive, suppose that the production has been done using the POSIX API. Then the installation should also be done using this same API. Alternatively expressed, if a TDF archive has been constructed using the posix environment, then the -Yposix flag should be included in the archive to ensure that the installation also takes place in this same environment. In fact the environments describing the standard APIs have been set up so that this happens automatically. For example, the posix environment contains the line:

	+FLAG "-WI,-Yposix"
Another kind of option that it might be useful to include in an archive is a -lstr option. In this way all the information on the install-time options can be specified at produce-time.

A final example of an option which might be included in an archive is the -message option. The command-line option -message str causes tcc to print the message str with any @ characters in str replaced by spaces (there are problems with escaping spaces). So, by using the command-line option:

	-WI,-message"Installing@TDF@archive@..."
one can produce an archive which prints a message as it is installed. This option is also useful in environments. By inserting the line:

	+FLAG "-message Reading@tcc@environment@..."
one can produce an environment which prints a message whenever it is read.


Part of the TenDRA Web.
Crown Copyright © 1998.

tcc User's Guide: Miscellaneous Topics

tcc User's Guide

January 1998



6.1 - Intermodular Checks
6.2 - Debugging and Profiling
6.3 - The System Environment
6.4 - The Temporary Directory
6.5 - The tcc Option Interpreter

6. Miscellaneous Topics

In this section we draw together a number of miscellaneous topics not so far covered.


6.1. Intermodular Checks

All of the extra compiler checks described in section 5.1.3 refer to a single C source file, however tcc also has support for a number of intermodular checks. These checks are enabled by means of the -im command-line option. This causes the producer to create for each C source file, in addition to its TDF capsule output file, a second output file, called a C spec file, containing a description of the C objects declared in that file. This C spec file is kept associated with the target independent TDF as it is transformed to a target dependent capsule, an assembly source file, and a binary object file. When these binary object files are linked then the associated C spec files are also linked using the C spec linker, spec_linker, into a single C spec file. This file is named a.k by default. It is this linking process which constitutes the intermodular checking (in fact spec_linker may also be invoked at the TDF merging level when the -M option is used).

When intermodular checks are specified, tcc will also recognise input files with a .k suffix as C spec files and pass them to the C spec linker.

The nature of the association between a C spec file and its binary object file needs discussion. While these files are internal to a single call of tcc it can keep track of the association, however if the compilation is halted before linking it needs to preserve this association. For example in:

	> tcc -im -c a.c
the binary object file a.o and the C spec file a.k need to be kept together. This is done by forming them into a single archive file named a.o. When a.o is subsequently linked, tcc recognises that it is an archive and splits it into its two components, one of which it passes to the system linker, and one to the C spec linker.

Intermodular checking is described in more detail in [3]. In tchk intermodular checking is on by default, but may be switched off using -im0.


6.2. Debugging and Profiling

tcc supports options for both symbolic debugging using the target machine's default debugger, and profiling using prof on those machines which have it.

The -g command-line option causes the producer to output extra debugging information in its output TDF capsule, and the TDF translator to translate this information into the appropriate assembler directives for its supported debugger (for details of which debuggers are supported by which translators, consult the appropriate manual pages). For the translator to have all the diagnostic information it requires, not only the TDF capsules output by the producer, but also those linked in by the TDF linker from the TDF libraries, need to contain this debugging information. This is ensured for the standard TDF libraries by having two versions of each library, one containing diagnostics and one not. By default the environmental identifier LINK, which gives the directories which the TDF linker should search, is set so that the non-diagnostic versions are found. However the -g option modifies LINK so that the diagnostic versions are found first.

Depending on the target machine, the -g option may also need to modify the behaviour of the system assembler and the system linker. Like all potentially target dependent options, -g is implemented by means of a standard environment, in this case tcc_diag.

The -p option is likewise implemented by means of a standard environment, tcc_prof. It causes the producer to output extra information on the names of statically declared objects, and the TDF translator to output assembler directives which enable prof to profile the number of calls to each procedure (including static procedures). The behaviour of the system assembler and system linker may also be modified by -p, depending on the target machine.
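
For example, a command line of the form:

	> tcc -g -o prog a.c b.c
produces an executable containing symbolic debugging information for the target's default debugger; replacing -g by -p would instead produce an executable suitable for profiling with prof.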


6.3. The System Environment

In section 4.3 we discussed how tcc environments can be used to specify APIs. There is one API environment however which is so exceptional that it needs to be treated separately. This is the system environment, which specifies that tcc should emulate cc on the machine on which it is being run. The system environment specifies that tcc should use the system headers directory, /usr/include, as its default include file directory, and should define all the machine dependent macros which are built into cc. It will also specify the 32-bit portability table on 32-bit machines.

Despite the differences from the normal API environments, the system environment is indeed specifying an API, namely the system headers and libraries on the host machine. This means that the .j files produced when using this environment are only "target independent" in the sense that they can be ported successfully to machines which have exactly the same system headers and predefined macros.

Using the system headers is fraught with difficulties. In particular, they tend to be very cc-specific. It is often necessary to use the -not_ansi and -nepc options together with -Ysystem merely to negotiate the system headers. Even then, tcc may still reject some constructs. Of course, the number of problems encountered will vary considerably between different machines.
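
For example, an experimental compilation against the system headers might be attempted with a command line of the form:

	> tcc -Ysystem -not_ansi -nepc a.c
even so, some constructs may still be rejected.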

To conclude, the system environment is at best only suitable for experimental compilation. There are also potential problems involved with its use. It should therefore be used with care.


6.4. The Temporary Directory

As we have said, tcc creates a temporary directory in which to put all the intermediate files which are created, but are not to be preserved. By default, these intermediate files are left in the temporary directory until the end of the compilation, when the temporary directory is removed. However, if disk space is short, or a particularly large compilation is taking place, the -tidy command-line option may be appropriate. This causes tcc to remove each unwanted intermediate file immediately when it is no longer required.

The name of the temporary directory created by tcc to store the intermediate files is generated by the system library routine tempnam. It takes the form TEMP/tcc????, where TEMP is the main tcc temporary directory, and ???? is an automatically generated unique suffix. There are three methods of specifying TEMP, listed in order of increasing precedence:

  • by the TEMP environmental identifier (usually in the default environment),

  • by the -temp dir command-line option,

  • by the TMPDIR system variable.

Normally TEMP will be a normal temporary directory, /tmp or /usr/tmp for example, but any directory to which the user has write permission may be used. In general, the more spare disk space which is available in TEMP, the better.
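
For example, if disk space in the default temporary directory is short, a command line of the form (the directory name is illustrative):

	> tcc -tidy -temp /usr/tmp -c a.c
both redirects the intermediate files to /usr/tmp and removes each one as soon as it is no longer required.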


6.5. The tcc Option Interpreter

All tcc command-line options and environmental directives are actually processed by the same method, namely the tcc option interpreter. A simple pattern matching algorithm is applied to the input and, if a match is found, the corresponding instructions are sent to the low-level option interpreter. The command-line option --str, ... causes str to be passed directly to the option interpreter. This is intended primarily to help in debugging tcc and not for use by the general user. However, if you are interested, --1DB is a good place to start.

	[1]	TDF and Portability, DRA, 1994.
	[2]	The C to TDF Producer, DRA, 1993.
	[3]	The TenDRA Static Checker, DRA, 1994.
	[4]	The TDF Notation Compiler, DRA, 1994.
	

Part of the TenDRA Web.
Crown Copyright © 1998.

tcc User's Guide: tcc Reference Guide

tcc User's Guide

January 1998



7.1 - Input and Output Files
7.2 - Compilation Phases
7.3 - Command-line Options
7.4 - Compilation Modes
7.5 - Supported APIs
7.6 - Environment Identifiers
7.7 - Standard Environments

7. tcc Reference Guide


7.1. Input and Output Files

tcc identifies the file type of the input files it is passed by means of their file suffix. The recognised file suffixes are as follows:

Each file type is assigned an identifying letter, usually corresponding to its file suffix, which may be used in various command-line options. For example, -Fs instructs tcc to halt the compilation after creating the assembly source files, and is therefore equivalent to -S. Similarly -Po instructs it to preserve any binary object files it creates. There are a couple of special file type codes which may be used with the -P option. The option -Pa causes all intermediate files to be created, whereas -Ph causes the start-up file, tcc_startup.h, used by tcc to be preserved. The -P option can also be used to specify that intermediate files of various forms should not be preserved. For example, the option -P-o indicates that binary object files should not be preserved.
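
For example, a command line of the form:

	> tcc -Ps -o prog a.c
compiles and links a.c into prog while also preserving the intermediate assembly source file, a.s.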

Most output file names are derived from the input file names with a simple substitution of file suffix, however certain output files (and other files) have default names. These are as follows:

If there is a single output file, its name may be specified using the -o option. The default output filenames can also be overridden. For example, -doj b.j sets the default merged TDF capsule name to b.j.


7.2. Compilation Phases

The various compilation phases under the control of tcc may be summarised as follows:

Each compilation phase is assigned a code letter which is used to identify that phase in various command-line options. For example, in order to pass the -x option to tdfc the -Wc, -x command-line option may be used. Similarly, to set the tld executable an option of the form -EL: /usr/local/bin/tld may be used.


7.3. Command-line Options

The following options are accepted by tcc. They can be given on the command line or via the TCCOPTS system variable. The spaces in the option descriptions are optional; they show where two-part or multi-part options can be split over more than one command-line argument.



The following two tables list the producer and archiver options which can be passed using the
-Wc, opt and -WJ, opt options, respectively.




7.4. Compilation Modes

The built-in compilation modes are as follows:

  • Xs ("strict checks") denotes strict ISO/ANSI C with most extra checks enabled as warnings.

  • Xp ("partial checks") denotes strict ISO/ANSI C with some extra checks enabled.

  • Xc ("conformance") denotes strict ISO/ANSI C with no extra checks enabled (this is default).

  • Xa ("ANSI-ish") denotes ISO/ANSI C with syntactic relaxations and no extra checks.

  • Xt ("traditional") denotes traditional C with no extra checks.

The mode Xs is specified by passing the -Xs command-line option to tcc, and so on.


7.5. Supported APIs

The following standard APIs are supported in the form of TenDRA headers:

Each API is specified to tcc by means of an environment with the same name as the API. Thus, for example, -Yposix specifies POSIX 1003.1. APIs are divided into two types, base APIs, such as POSIX 1003.1, and extension APIs, such as the X11 (Release 5) Toolkit. A program API consists of a base API plus a number of extension APIs, for example, POSIX plus the X11 Toolkit. This example would be specified by means of the options -Yposix -Yx5_t, in that order (base APIs override the previous API, extension APIs add to it).

Information on the current API may be printed by passing the -info option to tcc.


7.6. Environment Identifiers

The following tcc environment identifiers are recognised:

* stands for any of the allowed environment modifiers +, < or >.


7.7. Standard Environments

In addition to the environments implementing the supported APIs (see section 7.5), the following environments are standard:


Part of the TenDRA Web.
Crown Copyright © 1998.

C++ Producer Guide: Type system

C++ Producer Guide

March 1998



3.2.1 - Primitive types
3.2.2 - CV_SPEC
3.2.3 - BUILTIN_TYPE
3.2.4 - BASE_TYPE
3.2.5 - INT_TYPE
3.2.6 - FLOAT_TYPE
3.2.7 - CLASS_INFO
3.2.8 - CLASS_USAGE
3.2.9 - CLASS_TYPE
3.2.10 - GRAPH
3.2.11 - VIRTUAL
3.2.12 - ENUM_TYPE
3.2.13 - TYPE
3.2.14 - DECL_SPEC
3.2.15 - HASHID
3.2.16 - QUALIFIER
3.2.17 - IDENTIFIER
3.2.18 - MEMBER
3.2.19 - NAMESPACE
3.2.20 - NAT
3.2.21 - FLOAT
3.2.22 - STRING
3.2.23 - NTEST
3.2.24 - RMODE
3.2.25 - EXP
3.2.26 - OFFSET
3.2.27 - TOKEN
3.2.28 - INSTANCE
3.2.29 - ERROR
3.2.30 - VARIABLE
3.2.31 - LOCATION
3.2.32 - POSITION
3.2.33 - BITSTREAM
3.2.34 - BUFFER
3.2.35 - OPTIONS
3.2.36 - PPTOKEN

3.2. Type system

This section describes the type system used in the C++ producer. Unless otherwise stated the types are declared using the calculus tool as part of the algebra, c_class.alg. The design of this type algebra was largely based on the concepts underlying the C++ language; however TDF provided an important influence, not merely as the intended target language, but also because of its clear presentation of essential language features.


3.2.1. Primitive types

The primitive types used within the algebra c_class are defined as follows:

	int = "int" ;
	unsigned = "unsigned" ;
	string = "character *" ;
	ulong_type (ulong) = "unsigned long" ;
	BITSTREAM_P (bits) = "BITSTREAM *" ;
	PPTOKEN_P (pptok) = "PPTOKEN *" ;
The integral types are self-explanatory. All string literals used in the C++ producer are based on the character type:
	typedef unsigned char character ;
hence the definition of string. The remaining primitives give links to those portions of the type system which are defined outside of the algebra. The types BITSTREAM and PPTOKEN are described below.


3.2.2. CV_SPEC

The enumeration type CV_SPEC (short name cv) is used to represent a C++ type qualifier. It takes the form of a bitfield, the elements of which can be or-ed together to represent combinations of type qualifiers. The cv-qualifiers are represented by cv_const and cv_volatile in the obvious manner. The value cv_lvalue is used as a qualifier to indicate whether a type is an lvalue or an rvalue. Other values are used in function types to represent the function language linkage.


3.2.3. BUILTIN_TYPE

The enumeration type BUILTIN_TYPE (ntype) is used to represent the built-in C++ types (char, float, void etc.). It is used chiefly as an index into tables of type information.


3.2.4. BASE_TYPE

The enumeration type BASE_TYPE (btype) is used to represent a C++ simple type specifier such as signed, short or int. It takes the form of a bitfield, the elements of which can be or-ed together to represent combinations of type specifiers. Its chief use is when reading a type from the input file; the various simple type specifiers are combined to give a value of this type, which is then mapped to an actual C++ type.


3.2.5. INT_TYPE

The union type INT_TYPE (itype) is used to represent an integral or bitfield C++ type. The basic integral types are given by the basic field. Bitfield types are represented by the bitfield field. There are also fields representing target dependent integral promotion, arithmetic and integer literal types, plus VARIETY tokens. Only one INT_TYPE object is created for each integral type.


3.2.6. FLOAT_TYPE

The union type FLOAT_TYPE (ftype) is used to represent a floating point C++ type. The basic floating point types are given by the basic field. There are also fields representing target dependent argument promotion and arithmetic types, plus FLOAT tokens. Only one FLOAT_TYPE object is created for each floating point type.


3.2.7. CLASS_INFO

The enumeration type CLASS_INFO (cinfo) is used to represent information relating to a class or enumeration definition. It takes the form of a bitfield, the elements of which can be or-ed together to represent various combinations of properties.


3.2.8. CLASS_USAGE

The enumeration type CLASS_USAGE (cusage) is used to represent information relating to the way a class is used. It takes the form of a bitfield, the elements of which can be or-ed together to represent various combinations of properties.


3.2.9. CLASS_TYPE

The union type CLASS_TYPE (ctype) is used to represent a C++ class or union. The main components are an identifier giving the class name, class information and class usage fields, a namespace giving the class members, a graph representing the base class structure, and a virtual function table. Only one CLASS_TYPE object is created for each class or union.

Each class maintains a list, pals, of class and function identifiers which are declared as friends of that class. It also maintains a list, chums, of those class types which declare it to be a friend (this is what is actually used in the access checks). Similarly each function identifier maintains a list, chums, of those class types which declare it to be a friend.

Each class maintains a list of its constructors, destructors and conversion functions (including inherited conversion functions). It also maintains a list of its virtual base classes. This information can be obtained by other means but it is more convenient to record it within the class type itself.


3.2.10. GRAPH

The union type GRAPH (graph) is used to represent a directed acyclic graph arising from the base classes of a class. Each node of the graph has a head which is a class type, and several tails which give the base class graphs for that class. Each node has pointers, top, to the top of the graph (i.e. the most derived class), and up, to the node of which the current node is a direct base. Each node also has an access field which gives information on the base access, whether it is virtual or not, and so on, in the form of a DECL_SPEC. Virtual bases are handled by the equal field which defines an equivalence relation on the graph which identifies equivalent virtual bases.


3.2.11. VIRTUAL

The union type VIRTUAL (virt) is used to represent the virtual functions declared in a class. The table field is used to represent a virtual function table, and consists primarily of a list of VIRTUAL objects giving the virtual functions for the associated class. These virtual functions are of four kinds, each represented by a union field. A virtual function first declared in a class is represented by the simple field; a virtual function in a class which overrides an inherited virtual function is represented by the override field; an inherited, non-overridden virtual function which is not overridden in a base class is represented by the inherit field; an inherited, non-overridden virtual function which is overridden in some base class is represented by the complex field.


3.2.12. ENUM_TYPE

The union type ENUM_TYPE (etype) is used to represent a C++ enumeration type. This consists primarily of an identifier giving the enumeration name, a class information field, a type giving the underlying representation of the enumeration type, and a list of identifiers giving the enumerators comprising the enumeration.


3.2.13. TYPE

The union type TYPE (type) is used to represent a C++ type. Every type has an associated type qualifier, qual, which determines whether the type is const, volatile or an lvalue. A type may also have an associated identifier, name, giving the corresponding type name (the null identifier being used for unnamed types). The other type components are determined by the union tag. Each of the type constructs above has a corresponding field in the TYPE union: integer for integral types, floating for floating point types, bitfield for bitfield types, compound for class or union types, and enumerate for enumeration types. There are also fields top and bottom corresponding to void and bottom (the type used to represent values which never return).

Other fields of the TYPE union represent composite types; for example, the array field, representing array types, comprises a base type, sub, and an integer constant giving the array bound, size. These are generally simple, apart from func, representing a function type. This has the obvious components: a return type, ret, a list of parameter types, ptypes, and a flag indicating ellipsis functions, ellipsis. It also has an associated namespace, pars, in which the function parameters are declared. The parameter identifiers are extracted from this as a list, pids. Member function qualifiers and language linkage information are represented by a CV_QUAL, mqual. The implicit extra parameter for member functions is recorded in the list mtypes, which adds this extra type to the start of ptypes. Finally except gives any exception specifiers; the case where the exception specifier is absent being represented by the special value, univ_type_set.


3.2.14. DECL_SPEC

The enumeration type DECL_SPEC (dspec) is used to represent information on the declaration and usage of an identifier. It takes the form of a bitfield, the elements of which can be or-ed together to represent various combinations of properties. The 32 bits in this bitfield (the maximum which can be represented portably) are a significant restriction. This means that the same member of DECL_SPEC is often used to mean different things in different contexts. This can prove confusing on occasions.


3.2.15. HASHID

The union type HASHID (hashid) is used to represent a C++ identifier name. The simplest form of identifier name, name, consists of just a string of characters, such as foo. Extended identifier names, ename, are similar, but may contain Unicode characters. There are however other forms of identifier name in C++: conversion function names (conv) such as operator int, overloaded operator names (op) such as operator+, constructor names (constr), and destructor names (destr). There are also names which are used for anonymous identifiers (anon).

Note the distinction between an identifier name and an actual identifier. The latter is a meaning associated with a name in a particular context. Every identifier name has an associated underlying meaning, id. This is used to handle keywords and macros, but for most identifier names this will be a dummy identifier. Nested underlying meanings (such as a macro hiding a keyword) are handled by linking the alias fields of the corresponding identifiers. Every identifier name also has a cache field which is used to record the look-up of this name as an unqualified identifier. This may be set to the null identifier to indicate that the look-up needs to be re-evaluated.

Identifier names are stored in one of a small number of hash tables, linked using their next field. Each name has only one entry in these tables, allowing equality of names to be implemented as EQ_hashid.


3.2.16. QUALIFIER

The enumeration type QUALIFIER (qual) is used to represent the various ways in which an identifier name can be qualified. For example, ::A::a is represented by qual_full. The value qual_mark is used in the representation of function identifier expressions to indicate that overload resolution has been performed.


3.2.17. IDENTIFIER

The union type IDENTIFIER (id) is used to represent the various kinds of C++ identifiers. Every identifier has an associated identifier name, a parent namespace, a declaration information field, and a location for its declaration or definition. Each identifier also has an alias field which is normally used to represent the aliasing which can occur in inheritance or using declarations.

The various fields of the IDENTIFIER union correspond to the various kinds of identifier which can arise in C++ - class names, functions, variables, class members, macros, keywords etc. Each field has appropriate components giving its type, its definition or whatever other information is required. For example, the variable field has a type and two expressions, giving the constructor and destructor values for the object.

Most of these identifier components are self-explanatory, however the treatment of overloaded functions bears discussion. The various fields representing functions have an over component which is used to link overloaded functions together. A set of overloaded functions is treated as if it were a single IDENTIFIER - the first in the list - for the purposes of storing in a namespace member; the other overloaded meanings are accessed by chasing down the over components. In other situations, whether a function identifier represents a single function or a set of overloaded functions can be worked out from the context. For example, in identifier expressions the identifier qualifier is used to mark whether overload resolution has taken place.


3.2.18. MEMBER

The union type MEMBER (member) is used to represent a member of a namespace. Each member contains two identifiers, id and alt. The id field gives the meaning associated with a particular name in this namespace; the alt field is used to represent a type name which may be hidden by a non-type name.

There are two kinds of member, small and large, corresponding to whether the namespace holds its members in a simple linked list or in a hash table.


3.2.19. NAMESPACE

The union type NAMESPACE (nspace) is used to represent the set of identifiers declared in a particular scope. For example, the members declared in a C++ class or namespace, the parameters declared in a function declarator and the local variables declared in a block all form scopes. The various kinds of scope are distinguished as different fields of the union, but there are basically two categories. The first, such as function blocks, which have relatively small numbers of elements, store their members as simple linked lists. The second, such as classes, which have larger numbers of elements, store their members in hash tables. In both cases the elements are stored using the MEMBER type.

The key operation on a namespace is to look up a particular identifier name in its linked list or hash table of members to find the meaning, if any, associated with that name in the namespace. This can be a complex operation because of the need to take base classes and using directives (as stored in the use component) into account.


3.2.20. NAT

The union type NAT (nat) is used to represent an integer constant expression. Values are represented as lists of 16 bit 'digits'. Values which fit into a single digit are represented by the small field; larger values by the large field. Negated values can be represented by the neg field. Folding of integer constant expressions is performed in the producer, however the result can only be represented as described above if its value is target independent. Target dependent values are represented by the calc field which contains an expression describing how to calculate the value. The token field is used to represent NAT tokens.

Objects representing small integer constants are created at the start of the program and stored in a table for ease of access. Larger constants are created as and when they are required.


3.2.21. FLOAT

The union type FLOAT (flt) is used to represent a floating point constant expression. There is only one field, simple, which corresponds to a floating point literal. No folding of floating point constant expressions is attempted in the producer (it is virtually impossible to do so in a target independent manner).

Objects representing useful floating point constants (0.0, 1.0 etc.) are created for each floating point type and stored as part of the corresponding FLOAT_TYPE. Other values are created as and when they are required.


3.2.22. STRING

The union type STRING (str) is used to represent a string constant expression. There is only one field, simple, which corresponds to a character string literal, however the kind field can be used to modify the interpretation put on the characters appearing in the text field. By default, each character in text corresponds to a single character in the literal; however an alternative representation, in which text consists of a sequence of multibyte characters - one control character plus four value characters - is used in more complex cases.

All strings are stored in a hash table intended to ensure that the same STRING object is used for equal string literals. This not only saves space during the processing of the input file, but also facilitates the output of shared string literals in the TDF capsule.

Note that the terminal zero character does not form part of the STRING object. Instead information on this is stored as part of the type of a string literal expression. The text of the string literal is either truncated or padded with zeros until its length matches the size of the array bound in the type of the corresponding literal expression.


3.2.23. NTEST

The enumeration type NTEST (ntest) is used to represent the various C++ relational operators (==, !=, > etc.). The values correspond to the encoding of the TDF NTEST sort, which facilitates code generation. The values also have the property that the values for complementary operators (such as < and >=) always add up to the same value, ntest_negate, allowing operators to be complemented in a straightforward manner.


3.2.24. RMODE

The enumeration type RMODE (rmode) is used to represent the various C++ rounding modes (towards zero, towards smaller etc.). The values correspond to the encoding of the TDF RMODE sort, which facilitates code generation.


3.2.25. EXP

The union type EXP (exp) is used to represent a C++ expression or statement. Each expression has an associated type, type, but most of the information about an expression is stored in one of the large number of fields of the EXP union. Most of these fields are fairly simple. For example, there are fields corresponding to integer literals, floating point literals, string literals and identifiers. Composite expressions are formed in the normal way; for example, there are various binary operators comprising two argument expressions. The EXP fields corresponding to statements are slightly more complex. They each have a parent field which points to the enclosing statement. A couple of cases bear additional discussion.

The sequence field represents a compound statement or block. This contains a namespace, in which any local variables are declared, and a list of expressions, giving the statements comprising the block. The null namespace is used if the block does not constitute a scope. The first statement in the list is always a dummy to enable first and last pointers to be maintained to the start and end of the list without having to worry about null lists.

The solve_stmt field corresponds to the TDF labelled construct (in early versions of TDF this construct was called solve, hence the terminology). The problem is that C and C++ labels and gotos are totally unstructured, whereas the TDF label constructs are structured. Any statement which contains unstructured labels is enclosed in a solve_stmt construct, enclosing both the labelled statement and all jumps to it (in general this cannot be done until the end of the function). Any labels or variables which are bypassed by such unstructured jumps also need to be pulled out to the solve_stmt construct. It is not just explicit labels which can cause such problems; complex switch statements have the same effect.


3.2.26. OFFSET

The union type OFFSET (off) is used to represent an offset expression. This is used as an adjunct to the normal expression representation. The OFFSET union has fields corresponding to a type offset (used in pointer arithmetic), the offset of a member of a class and the offset of a base class. There are also simple operations on offsets, such as multiplication by an expression.


3.2.27. TOKEN

The union type TOKEN (tok) is used to represent one of a number of different categories within the C++ language. It corresponds to the sort of a token declared using the #pragma token syntax. Thus there are fields corresponding to expression, statement, integer constant, type, function, member and procedure tokens. The similarities between PROC tokens and templates have been remarked above; for example, the parameters of the template:

	template < class T, int n > class A {
	    T a [n] ;
	    // ....
	} ;
are essentially equivalent to those in the procedure token:
	PROC ( TYPE T, EXP const : int : n ) ....
(recall that non-type template arguments are always constant expressions). Thus a field, templ, of the TOKEN union is used to represent lists of template parameters. Note that a further field, class, is also required to represent template template parameters. A template type is represented by a field, templ, of the union TYPE, which comprises a template sort and a sub-type expressed in terms of the template parameters.

In addition to representing token and template sorts in this way, the TOKEN union is used to represent token and template arguments. Each of the parameter sorts listed above has an appropriate value component which can store a value of that sort. Many of the union types in the algebra, including types and expressions, have a field of the form:

	token -> {
	    IDENTIFIER tok ;
	    LIST TOKEN args ;
	}
representing the given token identifier applied to the given list of arguments.

Template instances are represented slightly differently from token applications. Each instance of a template class or a template function gives rise to a new class or function identifier. This identifier has an underlying form giving the template identifier and the template arguments. This is expressed as a token member of the TYPE union (although it is not technically a type, this happens to be the most convenient representation). Each such form has an associated INSTANCE component which gives further information about the template instance. The form for a template function instance is stored in the form component of the corresponding identifier. The form for a template class instance is stored in the form component of the corresponding class type.

Members of instances of template classes also have a form type, but in this case the form is an instance type. This gives a link back to the corresponding member of the template class.


3.2.28. INSTANCE

The union type INSTANCE (inst) is used to represent a particular instance of a template or token. Each template sort has an associated list of all the instances of that template, which is used to ensure that the same template applied with the same arguments always has the same value. Information on partial or explicit specialisations and usage information are stored as part of the corresponding INSTANCE. Each template instance identifier has a link back to its corresponding INSTANCE via its form component.


3.2.29. ERROR

The union type ERROR (err) is used to represent an error arising during the compilation of a C++ program. Errors are first class objects within the producer and can be passed to and from procedures. Each error has an associated severity (serious, warning, none etc.). Simple errors are represented by the simple field, which consists of an index, number, into the error catalogue, plus a variable length list of error arguments. Errors can be combined into composite errors using the compound field, which represents the join of two errors - head followed by tail.

The chief operation on an error after it has been built up is to report it. Each error report consists of an error object and a file location indicating where the error occurred.


3.2.30. VARIABLE

The structure type VARIABLE (var) is used to represent the state of a variable during the variable analysis checks.


3.2.31. LOCATION

The structure type LOCATION (loc) is used to represent a location in an input file. It comprises a pointer to an input file position, posn, together with a line number, line, which takes #line directives into account. Note that character positions within the line are not currently recorded.


3.2.32. POSITION

The structure type POSITION (posn) is used to represent a position in an input file. It consists of two file names, file taking #line directives into account, and input giving the actual file name, plus a line number offset, offset, which gives the difference between the line number taking #line directives into account and the actual line number. Other information stored includes the datestamp on the input file, datestamp, and a pointer to a file location which, for files included using #include, gives the location the file was included from.


3.2.33. BITSTREAM

The structure BITSTREAM is not part of the calculus type system. It is used to represent a sequence of bits such as is used, for example, in the encoding of TDF.


3.2.34. BUFFER

The structure BUFFER is not part of the calculus type system. It is used to represent a sequence of characters.


3.2.35. OPTIONS

The structure OPTIONS is not part of the calculus type system. It is used to represent the state of the compiler options at a particular point in the input file.


3.2.36. PPTOKEN

The structure PPTOKEN is not part of the calculus type system. It is used to represent a linked list of preprocessing tokens. Each token has an associated sid lexical token number, tok, plus additional data dependent on the token type. Each token also records a pointer to the current OPTIONS value.


C++ Producer Guide: Symbol table dump

C++ Producer Guide

March 1998



2.4.1 - Lexical elements
2.4.2 - Overall syntax
2.4.3 - File locations
2.4.4 - Identifiers
2.4.5 - Types
2.4.6 - Sorts
2.4.7 - Token applications
2.4.8 - Errors
2.4.9 - File inclusions
2.4.10 - String literals

2.4. Symbol table dump

The symbol table dump provides a method whereby third party tools can interface with the C and C++ producers. The producer outputs information on the identifiers declared within a source file, their uses etc. into a file which can then be post-processed by a separate tool. Any error messages and warnings can also be included in this file, allowing more sophisticated error presentation tools to be written.

The file to be used as the symbol table output file, plus details of what information is to be included in the dump file can be specified using the -d command-line option. The format of the dump file is described below; a summary of the syntax is given as an annex.


2.4.1. Lexical elements

A symbol table dump file consists of a sequence of characters giving information on identifiers, errors etc. arising from a translation unit. The fundamental lexical tokens are a number, consisting of a sequence of decimal digits, and a string, consisting of a sequence of characters enclosed in angle braces. A string can have one of two forms:

	string :
		<characters>
		&number<characters>
In the first form, the characters are terminated by the first > character encountered. In the second form, the number of characters is given by the preceding number. No white space is allowed either before or after the number. To aid parsers, the C++ producer always uses the second form for strings containing more than 100 characters. There are no escape characters in strings; a string can contain any characters, including newlines and #, except that the first form cannot contain a > character.

Space, tab and newline characters are white space. Comments begin with # and run to the end of the line. Comments are treated as white space. All other characters are treated as distinct lexical tokens.
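
For illustration (these are constructed examples rather than actual producer output), the first two strings below both denote the five-character string hello; the third shows how the counted form can represent text containing a > character:

	<hello>		# simple form, terminated by the first >
	&5<hello>	# counted form: exactly 5 characters follow the <
	&5<a>b<c>	# counted form representing the string a>b<c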


2.4.2. Overall syntax

A symbol table dump file takes the form of a list of commands of various kinds conveying information on the analysed file. This can be represented as follows:

	dump-file :
		command-listopt

	command-list :
		command command-listopt

	command :
		version-command
		identifier-command
		scope-command
		override-command
		base-command
		api-command
		template-command
		promotion-command
		error-command
		path-command
		file-command
		include-command
		string-command
The various kinds of command are discussed below. The first command in the dump file should be of the form:
	version-command :
		V number number string
where the two numbers give the version of the dump file format (the version described here is 1.1 so both numbers should be 1) and the string gives the language being represented, for example, <C++>.
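
Thus the first command in a dump file produced from a C++ translation unit in this version of the format would typically be (illustrative):

	V 1 1 <C++>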


2.4.3. File locations

A location within a source file can be specified using three numbers and two strings. These give respectively, the column number, the line number taking #line directives into account, the line number not taking #line directives into account, the file name taking #line directives into account, and the file name not taking #line directives into account. Any or all of the trailing elements can be replaced by * to indicate that they have not changed relative to the last location given. Note that for the two line numbers, unchanged means that the difference of the line numbers, taking #line directives into account or not, is unchanged. Thus:

	location :
		number number number string string
		number number number string *
		number number number *
		number number *
		number *
		*
Note that there is a concept of the current file location, relative to which other locations are given. The initial value of the current file location is undefined. Unless otherwise stated, all location elements update the current file location.
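
The following constructed sequence illustrates these forms (the file name and line numbers are invented). The first location is given in full, the second changes only the column and line number, and the third is identical to the second:

	3 10 10 <t.C> <t.C>
	1 14 *
	*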


2.4.4. Identifiers

Each identifier is represented in the symbol table dump by a unique number. The same number always represents the same identifier.

Identifier names

The number representing an identifier is introduced in the first declaration or use of that identifier and thereafter the number alone is used to denote the identifier:

	identifier :
		number = identifier-name accessopt scope-identifier
		number

The identifier name is given by:

	identifier-name :
		string
		C type
		D type
		O string
		T type
denoting respectively, a simple identifier name, a constructor for a type, a destructor for a type, an overloaded operator function name, and a conversion function name. The empty string is used for anonymous identifiers.

The optional identifier access is given by:

	access :
		N
		B
		P
denoting public, protected and private respectively. An absent access is equivalent to public. Note that all identifiers, not just class members, can have access specifiers; however the access of a non-member is always public.

The scope (i.e. class, namespace, block etc.) in which an identifier is declared is given by:

	scope-identifier :
		identifier
		*
denoting either a named or an unnamed scope.
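
For example (a constructed fragment), a class C and a protected member of it called count might be introduced as follows, with later references using the numbers 1 and 2 alone:

	1 = <C> *		# identifier 1, named C, in an unnamed scope
	2 = <count> B 1		# identifier 2, a protected member of identifier 1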

Identifier uses

Each declaration or use of an identifier is represented by a command of the form:

	identifier-command :
		D identifier-info type-info
		M identifier-info type-info
		T identifier-info type-info
		Q identifier-info
		U identifier-info
		L identifier-info
		C identifier-info
		W identifier-info type-info
where:
	identifier-info :
		identifier-key location identifier
gives the kind of identifier being declared or used, the location of the declaration or use, and the number associated with the identifier. Each declaration may, depending on the identifier-key, associate various type-info with the identifier, giving its type etc.

The various kinds of identifier-command are described below. Any can be preceded by I to indicate an implicit declaration or use. D denotes a definition. M (make) denotes a declaration. T denotes a tentative definition (C only). Q denotes the end of a definition, for those identifiers such as classes and functions whose definitions may be spread over several lines. U denotes an undefine operation (such as #undef for macro identifiers). C denotes a call to a function identifier; L (load) denotes other identifier uses. Finally W denotes implicit type information such as the C producer gleans from its weak prototype analysis.

The various identifier-keys and their associated type-info fields are given by the following table:

Key Type information Description
K * keyword
MO sort object macro
MF sort function macro
MB sort built-in macro
TC type class tag
TS type structure tag
TU type union tag
TE type enumeration tag
TA type typedef name
NN * namespace name
NA scope-identifier namespace alias
VA type automatic variable
VP type function parameter
VE type extern variable
VS type static variable
FE type identifieropt extern function
FS type identifieropt static function
FB type identifieropt built-in operator function
CF type identifieropt member function
CS type identifieropt static member function
CV type identifieropt virtual member function
CM type data member
CD type static data member
E type enumerator
L * label
XO sort object token
XF sort procedure token
XP sort token parameter
XT sort template parameter

The function identifier keys can optionally be followed by C indicating that the function has C linkage, and I indicating that the function is inline. By default, functions declared in a C++ dump file have C++ linkage and functions declared in a C dump file have C linkage. The optional identifier which forms part of the type-info of these functions is used to form linked lists of overloaded functions.
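
As a constructed example (the exact keys and defaults the producer emits may differ in detail), the declaration of a file-scope extern variable and a later use of it might appear as:

	M VE 1 3 3 <t.C> <t.C> 5 = <count> * Ul	# declare count, of type unsigned long
	L VE 12 9 * 5					# a later use of identifier 5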

Identifier scopes

Each identifier belongs to a scope, called its parent scope, in which it is declared. For example, the parent of a member of a class is the class itself. This information is expressed in an identifier declaration using a scope-identifier. In addition to the obvious scopes such as classes and namespaces, there are other scopes such as blocks in function definitions. It is possible to introduce dummy identifiers to name such scopes. The parent of such a dummy identifier will be the enclosing scope identifier, so these dummy identifiers naturally represent the block structure. The parent of the top-level block in a function definition can be considered to be the function itself.

Information on the start and end of such scopes is given by:

	scope-command :
		SS scope-key location identifier
		SE scope-key location identifier
where:
	scope-key :
		N
		S
		B
		D
		H
		CT
		CF
		CC
gives the kind of scope involved: a namespace, a class, a block, some other declarative scope, a declaration block (see below), a true conditional scope, a false conditional scope or a target dependent conditional scope.
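
For example (a constructed fragment), the block forming the body of a function, identifier 4, could be reported by introducing an anonymous dummy identifier for the block and bracketing its contents with SS and SE commands:

	SS B 1 10 * 7 = <> 4	# start of block scope 7 within function 4
	....
	SE B 1 20 * 7		# end of block scope 7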

A declaration block is a sequence of declarations enclosed in directives of the form:

	#pragma TenDRA declaration block identifier begin
	....
	#pragma TenDRA declaration block end
This allows the sequence of declarations to be associated with the given identifier in the symbol dump file. This technique is used in the API description files to aid analysis tools in determining which declarations are part of the API.

Other identifier information

Other information associated with an identifier may be expressed using other dump commands. For example:

	override-command :
		O identifier identifier
is used to express the fact that the two identifiers are virtual member functions, the first of which overrides the second.

The command:

	base-command :
		B identifier-key identifier base-graph

	base-graph :
		base-class
		base-class ( base-list )

	base-class :
		number = Vopt accessopt type-name
		number :

	base-list :
		base-graph base-listopt

associates a base class graph with a class identifier. Any class which does not have an associated base-command can be assumed to have no base classes. Each node in the graph is a type-name with an associated list of base classes. A V is used to indicate a virtual base class. Each node is numbered; duplicate numbers are used to indicate bases identified via the virtual base class structure. Any base class can then be referred to as:
	base-number :
		number : type-name
indicating the base class with the given number in the given class.

The command:

	api-command :
		X identifier-key identifier string
associates the external token name given by the string with the given tokenised identifier.

The command:

	template-command :
		Z identifier-key identifier token-application specialise-info
is used to introduce an identifier corresponding to an instance of a template, token-application. This instance may correspond to a specialisation of the primary template; this information is represented by:
	specialise-info :
		identifier
		token-application
		*
where * indicates a non-specialised instance.


2.4.5. Types

The built-in types are represented in the symbol table dump as follows:

Type Encoding
char c
signed char Sc
unsigned char Uc
signed short s
unsigned short Us
signed int i
unsigned int Ui
signed long l
unsigned long Ul
signed long long x
unsigned long long Ux
float f
double d
long double r
void v
(bottom) u
bool b
ptrdiff_t y
size_t z
wchar_t w

Named types (classes, enumeration types etc.) can be represented by the corresponding identifier or token application:

	type-name :
		identifier
		token-application
Composite and qualified types are represented in terms of their subtypes as follows:

Type Encoding
const type C type
volatile type V type
pointer type P type
reference type R type
pointer to member type M type-name : type
function type F type parameter-types
array type A natopt : type
bitfield type B nat : type
template type t parameter-listopt : type
promotion type p type
arithmetic type a type : type
integer literal type n lit-baseopt lit-suffixopt
weak function prototype (C only) W type parameter-types
weak parameter type (C only) q type

Other types can be represented by their textual representation using the form Q string, or by *, indicating an unknown type.

The parameter types for a function type are represented as follows:

	parameter-types :
		: exception-specopt func-qualifieropt :
		. exception-specopt func-qualifieropt :
		. exception-specopt func-qualifieropt .
		, type parameter-types
where the :: form indicates that there are no further parameters, the .: form indicates that the parameters are terminated by an ellipsis, and the .. form indicates that no information is available on the further parameters (this can only happen with non-prototyped functions in C). The function qualifiers are given by:
	func-qualifier :
		C func-qualifieropt
		V func-qualifieropt
representing const and volatile member functions. The function exception specifier is given by:
	exception-spec :
		( exception-listopt )

	exception-list :
		type
		type , exception-list
with an absent exception specifier, as in C++, indicating that any exception may be thrown.

Array and bitfield sizes are represented as follows:

	nat :
		+ number
		- number
		identifier
		token-application
		string
where a string is used to hold a textual representation of complex values.
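
Putting these encodings together, some representative examples are (constructed for illustration; in real dump output class and typedef names appear as identifier numbers rather than source names):

	const char *				PCc
	int &					Ri
	int [10]				A+10:i
	int ( char, long )			Fi,c,l::
	void ( ) const (member function)	Fv:C: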

Template types are represented by a list of template parameters, which will have previously been declared using the XT identifier key, followed by the underlying type expressed in terms of these parameters. The parameters are represented as follows:

	parameter-list :
		identifier
		identifier , parameter-list

Integer literal types are represented by the value of the literal followed by a representation of the literal base and suffix. These are given by:

	lit-base :
		O
		X
representing octal and hexadecimal literals respectively (decimal is the default), and:
	lit-suffix :
		U
		l
		Ul
		x
		Ux
representing the U, L, UL, LL and ULL suffixes respectively.

Target dependent integral promotion types are represented using p, so for example the promotion of unsigned short is represented as pUs. Information on the other cases, where the promotion type is known, can be given in a command of the form:

	promotion-command :
		P type : type
Thus the fact that the promotion of short is int would be expressed by the command Ps:i.


2.4.6. Sorts

A sort in the symbol table dump corresponds to the sort of a token declared in the #pragma token syntax. Expression tokens are represented as follows:

	expression-sort :
		ZEL type
		ZER type
		ZEC type
		ZN
corresponding to lvalue, rvalue and const EXP tokens of the given type, and NAT or INTEGER tokens, respectively. Statement tokens are represented by:
	statement-sort :
		ZS

Type tokens are represented as follows:

	type-sort :
		ZTO
		ZTI
		ZTF
		ZTA
		ZTP
		ZTS
		ZTU
corresponding to TYPE, VARIETY, FLOAT, ARITHMETIC, SCALAR, STRUCT or CLASS, and UNION tokens respectively. There are corresponding TAG forms:
	tag-type-sort :
		ZTTS
		ZTTU

Member tokens are represented using:

	member-sort :
		ZM type : type-name
where the first type gives the member type and the second gives the parent structure or union type.

Procedure tokens can be represented using:

	proc-sort :
		ZPG parameter-listopt ; parameter-listopt : sort
		ZPS parameter-listopt : sort
The first form corresponds to the more general form of PROC token, that expressed using { .... | .... }, which has separate lists of bound and program parameters. These token parameters will have previously been declared using the XP identifier key. The second form corresponds to the case where the bound and program parameter lists are equal, that expressed as a PROC token using ( .... ). A more specialised version of this second form is a FUNC token, which is represented as:
	func-sort :
		ZF type

As noted above, template parameters are represented by a sort. Template type parameters are represented by ZTO, while template expression parameters are represented by ZEC (recall that such parameters are always constant expressions). The remaining case, template template parameters, can be represented as:

	template-sort :
		ZTt parameter-listopt :

Finally, the number of parameters in a macro definition is represented by a sort of the form:

	macro-sort :
		ZUO
		ZUF number
corresponding to an object-like macro and a function-like macro with the given number of parameters, respectively.
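
As an illustration of the correspondence with the #pragma token syntax (assuming token declarations of roughly the following form):

	#pragma token EXP rvalue : int : x #
	#pragma token TYPE t #
	#pragma token NAT n #
the sorts of x, t and n would be dumped as ZER i, ZTO and ZN respectively.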


2.4.7. Token applications

Given an identifier representing a PROC token or a template, an application of that token or an instance of that template can be represented using:

	token-application :
		T identifier , token-argument-list :
where the token or template arguments are given by:
	token-argument-list :
		token-argument
		token-argument , token-argument-list
Note that the case where there are no arguments is generally just represented by identifier; this case is specified separately in the rest of the grammar.

A token-argument can represent a value of any of the sorts listed above: expressions, integer constants, statements, types, members, functions and templates. These are given respectively by:

	token-argument :
		E expression
		N nat
		S statement
		T type
		M member
		F identifier
		C identifier
where:
	expression :
		nat

	statement :
		expression

	member :
		identifier
		string
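
For example (a constructed fragment), if identifier 12 denotes the template class A discussed earlier, with parameters class T and int n, then the instance A < int, 10 > could be written:

	T 12 , T i , N + 10 :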


2.4.8. Errors

Each error in the C++ error catalogue is represented by a number. These numbers happen to correspond to the position of the error within the catalogue, but in general this need not be the case. The first use of each error introduces the error number by associating it with a string giving the error name. This has the form cpp.error where error gives an error name from the C++ (cpp) error catalogue. Thus:

	error-name :
		number = string
		number

Each error message written to the symbol table dump has the form:

	error-command :
		ES location error-info
		EW location error-info
		EI location error-info
		EF location error-info
		EC error-info
		EA error-argument
denoting constraint errors, warnings, internal errors, fatal errors, continuation errors and error arguments respectively. Note that an error message may consist of several components: the initial error plus a number of continuation errors. Each error message may also have a number of error arguments associated with it. This error information is given by:
	error-info :
		error-name number number
where the first number gives the number of error arguments which should be read, and the second is nonzero to indicate that a continuation error should be read.

Each error argument has one of the forms:

	error-argument :
		B base-number
		C scope-identifier
		E expression
		H identifier-name
		I identifier
		L location
		N nat
		S string
		T type
		V number
		V - number
corresponding to the various syntactic categories described above. Note that a location error argument, while expressed relative to the current file location, does not change this location.
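
A constructed example (the identifier numbers are invented): a constraint error with a single type argument and no continuation might be dumped as:

	ES 1 42 * 3 = <cpp.class_union_deriv> 1 0	# serious error at column 1, line 42
	EA T 17						# the offending class, identifier 17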


2.4.9. File inclusions

It is possible to include information on header files within the symbol table dump. Firstly a number is associated with each directory on the #include search path:

	path-command :
		FD number = string stringopt
The first string gives the directory pathname; the second, if present, gives the associated directory name as specified in the -N command-line option.

Now the start and end of each file are marked using:

	file-command :
		FS location directory
		FE location
where directory gives the number of the directory in the search path where the file was found, or * if the file was found by other means. It is worth noting that if, for example, a function definition is the last item in a file, the FE command will appear in the symbol table dump before the QFE command for the end of the function definition. This is because lexical analysis, where the end of file is detected, takes place before parsing, where the end of function is detected.

A #include directive, whether explicit or implicit, can be represented using:

	include-command :
		FIA location string
		FIQ location string
		FIN location string
		FIS location string
		FIE location string
		FIR location
the first three corresponding to header names of the forms <....>, "...." and [....] respectively, the next two corresponding to start-up and end-up files, and the final form being used to resume the original file after the #include directive has been processed.
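
The following constructed fragment (with invented file and line numbers) shows the typical shape of these commands for a single explicit #include directive:

	FD 1 = </usr/include> <usr>		# directory 1 on the search path
	FIA 1 3 3 <t.C> <t.C> <stdio.h>		# #include <stdio.h> at line 3 of t.C
	FS 1 1 1 <stdio.h> <stdio.h> 1		# start of stdio.h, found in directory 1
	....
	FE 1 250 *				# end of stdio.h
	FIR 1 4 4 <t.C> <t.C>			# resume t.C after the directive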


2.4.10. String literals

It is possible to dump information on string literals to the symbol table dump file using the commands:

	string-command :
		A location string
		AC location string
		AL location string
		ACL location string
representing string literals, character literals, wide string literals and wide character literals respectively. The string gives the text of the literal.


C++ Producer Guide: Symbol table dump syntax

C++ Producer Guide

March 1998



Annex B. Symbol table dump syntax

The following gives a summary of the syntax for the symbol table dump file (version 1.1):


dump-file : command-listopt
command-list : command command-listopt
command : version-command identifier-command scope-command override-command base-command api-command template-command promotion-command error-command path-command file-command include-command string-command
version-command : V number number string

location : number number number string string number number number string * number number number * number number * number * *

identifier : number = identifier-name accessopt scope-identifier number
identifier-name : string C type D type O string T type
access : N B P
scope-identifier : identifier *
identifier-command : D identifier-info type-info M identifier-info type-info T identifier-info type-info Q identifier-info U identifier-info L identifier-info C identifier-info W identifier-info type-info I identifier-command
identifier-info : identifier-key location identifier
identifier-key : K MO MF MB TC TS TU TE TA NN NA VA VP VE VS FE function-keyopt FS function-keyopt FB function-keyopt CF function-keyopt CS function-keyopt CV function-keyopt CM CD E L XO XF XP XT
function-key : C function-keyopt I function-keyopt
type-info : type identifieropt sort scope-identifier *

scope-command : SS scope-key location identifier SE scope-key location identifier
scope-key : N S B D H CT CF CC

override-command : O identifier identifier

base-command : B identifier-key identifier base-graph
base-graph : base-class base-class ( base-list )
base-class : number = Vopt accessopt type-name number :
base-list : base-graph base-listopt
base-number : number : type-name

api-command : X identifier-key identifier string

template-command : Z identifier-key identifier token-application specialise-info
specialise-info : identifier token-application *

type : type-name c s i l x b w y z f d r v u Sc Uc Us Ui Ul Ux C type V type P type R type M type-name : type F type parameter-types A natopt : type B nat : type t parameter-listopt : type p type a type : type n lit-baseopt lit-suffixopt W type parameter-types q type Q string *
type-name : identifier token-application
parameter-types : : exception-specopt func-qualifieropt : . exception-specopt func-qualifieropt : . exception-specopt func-qualifieropt . , type parameter-types
func-qualifier : C func-qualifieropt V func-qualifieropt
exception-spec : ( exception-listopt )
exception-list : type type , exception-list
nat : + number - number identifier token-application string
parameter-list : identifier identifier , parameter-list
lit-base : O X
lit-suffix : U l Ul x Ux

promotion-command : P type : type

sort : expression-sort statement-sort type-sort tag-type-sort member-sort proc-sort func-sort template-sort macro-sort
expression-sort : ZEL type ZER type ZEC type ZN
statement-sort : ZS
type-sort : ZTO ZTI ZTF ZTA ZTP ZTS ZTU
tag-type-sort : ZTTS ZTTU
member-sort : ZM type : type-name
proc-sort : ZPG parameter-listopt ; parameter-listopt : sort ZPS parameter-listopt : sort
func-sort : ZF type
template-sort : ZTt parameter-listopt :
macro-sort : ZUO ZUF number
token-application : T identifier , token-argument-list :
token-argument-list : token-argument token-argument , token-argument-list
token-argument : E expression N nat S statement T type M member F identifier C identifier
expression : nat
statement : expression
member : identifier string

error-name : number = string number
error-command : ES location error-info EW location error-info EI location error-info EF location error-info EC error-info EA error-argument
error-info : error-name number number
error-argument : B base-number C scope-identifier E expression H identifier-name I identifier L location N nat S string T type V number V - number

path-command : FD number = string stringopt
directory : number *
file-command : FS location directory FE location
include-command : FIA location string FIQ location string FIN location string FIS location string FIE location string FIR location

string-command : A location string AC location string AL location string ACL location string


C++ Producer Guide: Error catalogue

C++ Producer Guide

March 1998



3.3. Error catalogue

This section describes the error catalogue which lies at the heart of the C++ producer's error reporting routines. The full error catalogue syntax is given as an annex. A typical entry in the catalogue is as follows:

	class_union_deriv ( CLASS_TYPE: ct )
	{
	    USAGE:              serious
	    PROPERTIES:         ansi
	    KEY (ISO)           "9.5"
	    KEY (STANDARD)      "The union '"ct"' can't have base classes"
	}
This defines an error, class_union_deriv, which takes a single parameter ct of type CLASS_TYPE. The severity of this error is serious; that is to say, a constraint error. The error property ansi indicates that the error arises from the ISO C++ standard, the associated ISO key indicating section 9.5. Finally the text to be printed for this error, including a reference to ct, is given. Looking up section 9.5 in the ISO C++ standard reveals the corresponding constraint in paragraph 1:
A union shall not have base classes.
Each constraint within the ISO C++ standard has a corresponding error in this way. The errors are named in a systematic fashion using the section names used in the draft standard. For example, section 9.5 is called class.union, so all the constraint errors arising from this section have names of the form class_union_*. These error names can be used in the low level directives such as:
	#pragma TenDRA++ error "class_union_deriv" allow
to modify the error severity. The effect of reducing the severity of a constraint error in this way is undefined.

In addition to the obvious error severity levels, serious, warning and none, the error catalogue specifies a list of optional severity levels along with their default values. For example, the entry:

	link_incompat = serious
sets up an option named link_incompat which is a constraint error by default. Errors with this severity, such as:
	dcl_stc_external ( LONG_ID: id, PTR_LOC: loc )
	{
	    USAGE:              link_incompat
	    PROPERTIES:         ansi
	    KEY (ISO)           "7.1.1"
	    KEY (STANDARD)      "'"id"' previously declared with external
				 linkage (at "loc")"
	}
are therefore constraint errors. The severity associated with link_incompat can be modified either directly, using the directive:
	#pragma TenDRA++ option "link_incompat" allow
or indirectly using the directive:
	#pragma TenDRA incompatible linkage allow
the effect being to modify the severity of the associated error messages.

The error catalogue is processed by a simple tool, make_err, which generates C code which is compiled into the C++ producer. Each error in the catalogue is assigned a number (there are currently 873 errors in the catalogue) which gives an index into an automatically generated table of error information. It is this error number, together with a list of error arguments, which forms the associated ERROR object. make_err generates a macro for each error in the catalogue which takes arguments of the appropriate types (which may be statically checked) and creates an ERROR object. For example, for the entry above this macro takes the form:

	ERROR ERR_class_union_deriv ( CLASS_TYPE ) ;
These macros hide the error catalogue numbers from the rest of the C++ producer.

It is also possible to join a number of simple ERROR objects to form a single composite ERROR. The severity of the composite error is the maximum of the severities of the component errors. For this purpose a dummy error severity level, whatever, is introduced which is less severe than any other level. This is intended for use with error messages which are only ever used to add information to existing errors, and which inherit their severity level from the main error.

The text of a simple error message can be found in the table of error information. The text contains certain escape sequences indicating where the error arguments are to be printed. For example, %1 indicates the second argument. The error argument sorts - what is referred to as the error signature - are also stored in the table of error information as an array of characters, each corresponding to an ERR_KEY_type macro. The producer defines printing routines for each of the types given by these values, and calls the appropriate routine to print the argument.

There are several command-line options which can be used to modify the form in which the error message is printed. The default format is as follows:

	"file.C", line 42: Error:
	    [ISO 9.5]: The union 'U' can't have base classes.
The ISO section number can be suppressed using -m-s. The -mc option causes the source code line giving rise to the error to be printed as part of the message, with !!!! marking the position of the error within the line. The -me option causes the error name, class_union_deriv, to be printed as part of the message. The -ml option causes the full file location, including the list of #include directives used in reaching the file, to be printed. The -mt option causes typedef names to be used when printing types, rather than expanding to the type definition.


C++ Producer Guide: Error catalogue syntax

C++ Producer Guide

March 1998



Annex C. Error catalogue syntax

The following gives a summary of the syntax for the error catalogue accepted by the make_err tool. Identifiers are normal C-style identifiers, strings consist of any sequence of characters enclosed inside "....". The escape sequences \" and \\ are allowed in strings; other characters (including newline characters) map to themselves. C-style comments are allowed.


error-database : header typesopt propertiesopt keysopt usagesopt entriesopt
header : database-nameopt rig-nameopt prefixesopt

database-name : DATABASE_NAME : identifier
rig-name : RIG : identifier

prefixes : PREFIX : output-prefixopt compiler-prefixopt error-prefixopt
output-prefix : compiler_output -> identifier
compiler-prefix : from_compiler -> identifier
error-prefix : from_database -> identifier

types : TYPES : name-listopt
properties : PROPERTIES : name-listopt
keys : KEYS : name-listopt
usages : USAGE : name-listopt
name : identifier identifier = identifier identifier = identifier | identifier
name-list : name name , name-list

type-name : identifier
property-name : identifier
key-name : identifier
usage-name : identifier

entries : ENTRIES : entries-listopt
entry-list : entry entry-listopt
entry : identifier ( param-listopt ) { entry-body }
entry-body : alt-nameopt entry-usageopt entry-propertiesopt map-listopt

parameter : type-name : identifier
param-list : parameter parameter , param-list
param-name : identifier

alt-name : ALT_NAME : identifier
entry-usage : USAGE : usage-name USAGE : usage-name | usage-name
entry-properties : PROPERTIES : property-listopt
property-list : property-name property-name , property-list

map : KEY ( key-name ) message-listopt KEY ( key-name ) message-listopt | message-listopt
map-list : map map-listopt
message-list : string message-listopt param-name message-listopt



C++ Producer Guide

March 1998



1 - Introduction
1.1 - Updated introduction
2 - Interface descriptions
2.1 - Invocation
2.2 - Compiler configuration
2.3 - Token syntax
2.4 - Symbol table dump
2.5 - Intermodule analysis
2.6 - Implementation details
2.7 - Standard library
3 - Program overview
3.1 - Source code organisation
3.2 - Type system
3.3 - Error database
3.4 - Parsing C++
3.5 - TDF generation
References
Annexes
A - #pragma directive syntax
B - Symbol table dump syntax
C - Error catalogue syntax

1. Introduction

This document is designed as a technical overview of the TenDRA C++ to TDF/ANDF producer. It is divided into two broad areas: descriptions of the public interfaces of the producer, and an overview of the producer source code.

Whereas the interface description contains most of the information which would be required in a users' guide, it is not necessarily in a readily digestible form. The C++ producer is designed to complement the existing TenDRA C to TDF producer; although they are completely distinct programs, the same design philosophy underlies both and they share a number of common interfaces. There are no radical differences between the two producers, besides the fact that the C++ producer covers a vastly larger and more complex language. This means that much of the existing documentation on the C producer can be taken as also applying to the C++ producer. This document tries to make clear where the C++ producer extends the C producer's interfaces, and those portions of these interfaces which are not directly applicable to C++.

A familiarity with both C++ and TDF is assumed. The version of C++ implemented is that given by the draft ISO C++ standard. All references to "ISO C++" within the document should strictly be qualified using the word "draft", but for convenience this has been left implicit. The C++ producer has a number of switches which allow it to be configured for older dialects of C++. In particular, the version of C++ described in the ARM (Annotated Reference Manual) is fully supported.

The TDF specification (version 4.0) may be consulted for a description of the compiler intermediate language used. The paper TDF and Portability provides a useful (if slightly old) introduction to some of the ideas relating to static program analysis and interface checking which underlie the whole TenDRA compilation system.

A warning sign is used within the document to indicate areas where the implementation is currently incomplete or incorrect.

1.1. Updated introduction

Since this document was originally written, the old C producer, tdfc, has been replaced by a new C producer, tdfc2, which is just a modified version of the C++ producer, tcpplus. All C producer documentation continues to apply to the new C producer, but the new C producer also has many of the features described in this document as only applying to the C++ producer.


2. Interface descriptions

The most important public interfaces of the C++ producer are the ISO C++ standard and the TDF 4.0 specification; however there are other interfaces, mostly common to both the C and C++ producers, which are described in this section.

An important design criterion of the C++ producer was that it should be strictly ISO conformant by default, but have a method whereby dialect features and extra static program analysis can be enabled. This compiler configuration is controlled by the #pragma TenDRA directives described in the first section.

The requirement that the C and C++ producers should be able to translate portable C or C++ programs into target independent TDF requires a mechanism whereby the target dependent implementations of APIs can be represented. This mechanism, the #pragma token syntax, is described in the following section. Note that at present this mechanism only contains support for C APIs; it is considered that the C++ language itself contains sufficient interface mechanisms for C++ APIs to be described.

The C and C++ producers provide two mechanisms whereby type and declaration information derived from a translation unit can be stored to a file for post-processing by other tools. The first is the symbol table dump, which is a public interface designed for use by third party tools. The second is the C/C++ spec file, which is designed for ease of reading and writing by the producers themselves, and is used for intermodule analysis.

The mapping from C++ to TDF implemented by the C++ producer is largely straightforward. There are however target dependencies arising within the language itself which require special handling. These are represented by certain standard tokens which the producer requires to be defined on the target machine. These tokens are also used to describe the interface between the producer and the run-time system. Note that the C++ producer is primarily concerned with the C++ language, not with the standard C++ library. An example implementation of those library components which are required as an integral part of the language (memory allocation, exception handling, run-time type information etc.) is provided. Otherwise, libraries should be obtained from third parties. A number of hints on integrating such libraries with the C++ producer are given.


3. Program overview

The C++ producer is a large program (over 200000 lines, including automatically generated code) written in C. A description of the coding conventions used, the API observed and the basic organisation of the source code are described in the first section.

One of the design methods used in the C++ producer is the extensive use of automatic code generation tools. The type system is based around the calculus tool, which allows complex type systems to be described in a simple format. The interface generated by calculus allows for rigorous static type checking, generic type constructors for lists, stacks etc., encapsulation of the operations on the types within the system, and optional run-time checking for null pointers and discriminated union tags. An overview is given of the type system used as the basis of the C++ producer design. Also see the calculus users' guide.

The other general purpose code generation tool used in the C++ producer is the parser generator, sid. A brief description of the problems in writing a C++ parser is given. Also see the sid users' guide.

The other code generation tools used were written specifically for the C++ producer. The error reporting routines within the producer are based on an error catalogue, from which code for constructing and printing errors is generated. The TDF output routines are based on primitives automatically generated from a standard database describing the TDF specification.

The program itself is well commented, so no lower level program documentation has been provided. When performing development work the producer should be compiled with the DEBUG macro defined. This enables the calculus run-time checks, along with other assertions, and makes available the debugging routines, DEBUG_type, which can be used to print an object from the internal type system.


References

  1. Working paper for Draft Proposed International Standard for Information Systems - Programming Language C++, X3J16/96-0225, December 1996: http://www.cygnus.com/misc/wp/dec96pub/ or http://www.maths.warwick.ac.uk/c++/pub/wp/html/cd2/.

  2. The Annotated C++ Reference Manual, Margaret Ellis and Bjarne Stroustrup, ISBN 0-201-51459-1, Addison-Wesley, 1990: http://heg-school.aw.com/cseng/authors/ellis/annocpp/annocpp.html

  3. TDF Specification, Issue 4.0: attached.

  4. C Checker Reference Manual: attached.

  5. TDF and Portability: attached.

  6. C Coding Standards, DRA/CIS(SE2)/WI/94/57/2.0 (OSSG internal document).


C++ Producer Guide: Implementation

C++ Producer Guide

March 1998



2.6.1 - Arithmetic types
2.6.2 - Integer literal types
2.6.3 - Bitfield types
2.6.4 - Generic pointers
2.6.5 - Undefined conversions
2.6.6 - Integer division
2.6.7 - Calling conventions
2.6.8 - Pointers to data members
2.6.9 - Pointers to function members
2.6.10 - Class layout
2.6.11 - Derived class layout
2.6.12 - Constructors and destructors
2.6.13 - Virtual function tables
2.6.14 - Run-time type information
2.6.15 - Dynamic initialisation
2.6.16 - Exception handling
2.6.17 - Mangled identifier names

2.6. Implementation details

This section describes various of the implementation details of the C++ producer TDF output. In particular it describes the standard TDF tokens used to represent the target dependent aspects of the language and to provide links into the run-time system. Many of these tokens are common to the C and C++ producers. Those which are unique to the C++ producer have names of the form ~cpp.*. Note that the description is in terms of TDF tokens, not the internal tokens introduced by the #pragma token syntax.

There are two levels of implementation in the run-time system. The actual interface between the producer and the run-time system is given by the standard tokens. The provided implementation defines these tokens in a way appropriate to itself. An alternative implementation would have to define the tokens differently. It is intended that the standard tokens are sufficiently generic to allow a variety of implementations to hook into the producer output in the manner they require.


2.6.1. Arithmetic types

The representations of the basic arithmetic types are target dependent, so, for example, an int may contain 16, 32, 64 or some other number of bits. Thus it is necessary to introduce a token to stand for each of the built-in arithmetic types (including the long long types). Each integral type is represented by a VARIETY token as follows:

Type Token Encoding
char ~char 0
signed char ~signed_char 0 | 4 = 4
unsigned char ~unsigned_char 0 | 8 = 8
signed short ~signed_short 1 | 4 = 5
unsigned short ~unsigned_short 1 | 8 = 9
signed int ~signed_int 2 | 4 = 6
unsigned int ~unsigned_int 2 | 8 = 10
signed long ~signed_long 3 | 4 = 7
unsigned long ~unsigned_long 3 | 8 = 11
signed long long ~signed_longlong 3 | 4 | 16 = 23
unsigned long long ~unsigned_longlong 3 | 8 | 16 = 27

Similarly each floating point type is represented by a FLOATING_VARIETY token:

Type Token
float ~float
double ~double
long double ~long_double

Each integral type also has an encoding as a SIGNED_NAT as shown above. This number is a bit pattern built up from the following values:

Type Encoding
char 0
short 1
int 2
long 3
signed 4
unsigned 8
long long 16

Any target dependent integral type can be represented by a SIGNED_NAT token using this encoding. This representation, rather than one based on VARIETYs, is used for ease of manipulation. The token:

	~convert : ( SIGNED_NAT ) -> VARIETY
gives the mapping from the integral encoding to the representing variety. For example, it will map 6 to ~signed_int.

The token:

	~promote : ( SIGNED_NAT ) -> SIGNED_NAT
describes how to form the promotion of an integral type according to the ISO C/C++ value preserving rules, and is used by the producer to represent target dependent promotion types. For example, the promotion of unsigned short may be int or unsigned int depending on the representation of these types; that is to say, ~promote ( 9 ) will be 6 on some machines and 10 on others. Although ~promote is used by default, a program may specify another token with the same sort signature to be used in its place by means of the directive:
	#pragma TenDRA compute promote identifier
For example, a standard token ~sign_promote is defined which gives the older C sign preserving promotion rules. In addition, the promotion of an individual type can be specified using:
	#pragma TenDRA promoted type-id : promoted-type-id

The token:

	~arith_type : ( SIGNED_NAT, SIGNED_NAT ) -> SIGNED_NAT
similarly describes how to form the usual arithmetic result type from two promoted integral operand types. For example, the arithmetic type of long and unsigned int may be long or unsigned long depending on the representation of these types; that is to say, ~arith_type ( 7, 10 ) will be 7 on some machines and 11 on others.

Any tokenised type declared using:

	#pragma token VARIETY v # tv
will be represented by a SIGNED_NAT token with external name tv corresponding to the encoding of v. Special cases of this are the implementation dependent integral types which arise naturally within the language. The external token names for these types are given below:

Type Token
bool ~cpp.bool
ptrdiff_t ptrdiff_t
size_t size_t
wchar_t wchar_t

So, for example, a sizeof expression has shape ~convert ( size_t ). The token ~cpp.bool is defined in the default implementation, but the other tokens are defined according to their definitions on the target machine in the normal API library building mechanism.
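
For example, on a hypothetical target on which size_t is implemented as unsigned long, the token size_t would be defined as the SIGNED_NAT 11 (that is, 3 | 8), so that a sizeof expression has shape:

	~convert ( 11 )
that is, the shape corresponding to ~unsigned_long.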


2.6.2. Integer literal types

The type of an integer literal is defined in terms of the first in a list of possible integral types. The first type in which the literal value can be represented gives the type of the literal. For small literals it is possible to work out the type exactly, however for larger literals the result is target dependent. For example, the literal 50000 will have type int on machines in which 50000 fits into an int, and long otherwise. This target dependent mapping is given by a series of tokens of the form:

	~lit_* : ( SIGNED_NAT ) -> SIGNED_NAT
which map a literal value to the representation of an integral type. The token used depends on the list of possible types, which in turn depends on the base used to represent the literal and the integer suffix used, as given in the following table:

Base Suffix Token Types
decimal none ~lit_int int, long, unsigned long
octal none ~lit_hex int, unsigned int, long, unsigned long
hexadecimal none ~lit_hex int, unsigned int, long, unsigned long
any U ~lit_unsigned unsigned int, unsigned long
any L ~lit_long long, unsigned long
any UL ~lit_ulong unsigned long
any LL ~lit_longlong long long, unsigned long long
any ULL ~lit_ulonglong unsigned long long

Thus, for example, the shape of the integer literal 50000 is:

	~convert ( ~lit_int ( 50000 ) )


2.6.3. Bitfield types

The sign of a plain bitfield type, declared without using signed or unsigned, is left unspecified in C and C++. The token:

	~cpp.bitf_sign : ( SIGNED_NAT ) -> BOOL
is used to give a mapping from integral types to the sign of a plain bitfield of that type, in a form suitable for use in the TDF bfvar_bits construct. (Note that ~cpp.bitf_sign should have been a standard C token but was omitted.)


2.6.4. Generic pointers

TDF has no concept of a generic pointer type, so tokens are used to defer the representation of void * and the basic operations on it to the target machine. The fundamental token is:

	~ptr_void : () -> SHAPE
which gives the representation of void *. This shape will be denoted by pv in the description of the following tokens. It is not guaranteed that pv is a TDF pointer shape, although normally it will be implemented as a pointer to a suitable alignment.

The token:

	~null_pv : () -> EXP pv
gives the value of a null pointer of type void *. Generic pointers can also be converted to and from other pointers. These conversions are represented by the tokens:
	~to_ptr_void : ( ALIGNMENT a, EXP POINTER a ) -> EXP pv
	~from_ptr_void : ( ALIGNMENT a, EXP pv ) -> EXP POINTER a
where the given alignment describes the destination or source pointer type. Finally a generic pointer may be tested against the null pointer or two generic pointers may be compared. These operations are represented by the tokens:
	~pv_test : ( EXP pv, LABEL, NTEST ) -> EXP TOP
	~cpp.pv_compare : ( EXP pv, EXP pv, LABEL, NTEST ) -> EXP TOP
where the given NTEST gives the comparison to be applied and the given label gives the destination to jump to if the test fails. (Note that ~cpp.pv_compare should have been a standard C token but was omitted.)


2.6.5. Undefined conversions

Several conversions in C and C++ can only be represented by undefined TDF. For example, converting a pointer to an integer can only be represented in TDF by forming a union of the pointer and integer shapes, putting the pointer into the union and pulling the integer out. Such conversions are tokenised. Undefined conversions not mentioned below may be performed by combining those given with the standard, well-defined, conversions.

The token:

	~ptr_to_ptr : ( ALIGNMENT a, ALIGNMENT b, EXP POINTER a ) -> EXP POINTER b
is used to convert between two incompatible pointer types. The first alignment describes the source pointer shape while the second describes the destination pointer shape. Note that if the destination alignment is greater than the source alignment then the source pointer can be used in most TDF constructs in place of the destination pointer, so the use of ~ptr_to_ptr can be omitted (the exception is pointer_test which requires equal alignments). Base class pointer conversions are examples of these well-behaved, alignment preserving conversions.

The tokens:

	~f_to_pv : ( EXP PROC ) -> EXP pv
	~pv_to_f : ( EXP pv ) -> EXP PROC
are used to convert pointers to functions to and from void * (these conversions are not allowed in ISO C/C++ but are in older dialects).

The tokens:

	~i_to_p : ( VARIETY v, ALIGNMENT a, EXP INTEGER v ) -> EXP POINTER a
	~p_to_i : ( ALIGNMENT a, VARIETY v, EXP POINTER a ) -> EXP INTEGER v
	~i_to_pv : ( VARIETY v, EXP INTEGER v ) -> EXP pv
	~pv_to_i : ( VARIETY v, EXP pv ) -> EXP INTEGER v
are used to convert integers to and from void * and other pointers.
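
As a rough illustration (the mapping shown is inferred from the descriptions above, and assumes a target on which long is large enough to hold a pointer), the conversions in the following C++ fragment would be represented using these tokens:

	void f ( int *pi, void *pv, long n )
	{
	    n = ( long ) pi ;			// pointer to integer: ~p_to_i
	    pv = ( void * ) n ;			// integer to generic pointer: ~i_to_pv
	    n = ( long ) pv ;			// generic pointer to integer: ~pv_to_i
	    pi = ( int * ) ( char * ) pi ;	// incompatible pointer cast: ~ptr_to_ptr
	}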


2.6.6. Integer division

The precise form of the integer division and remainder operations in C and C++ is left unspecified with respect to the sign of the result if either operand is negative. The tokens:

	~div : ( EXP INTEGER v, EXP INTEGER v ) -> EXP INTEGER v
	~rem : ( EXP INTEGER v, EXP INTEGER v ) -> EXP INTEGER v
are used to represent integer division and remainder. They will map onto one of the pairs of TDF constructs, div0 and rem0, div1 and rem1 or div2 and rem2.
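
For example (a language-level observation rather than anything specific to the producer):

	/* -7 / 2 may evaluate to -3 (rounding towards zero) or to -4 (rounding
	   towards minus infinity); the corresponding -7 % 2 is then -1 or 1
	   respectively, so that ( a / b ) * b + a % b == a holds in either
	   case.  The ~div and ~rem tokens allow the target to choose which
	   pair of TDF division constructs implements this behaviour. */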


2.6.7. Calling conventions

The function calling conventions used by the C++ producer are essentially the same as those used by the C producer, with one exception. As in C, all types except arrays are passed by value (note that individual installers may modify these conventions to conform to their own ABIs).

The exception concerns classes with a non-trivial constructor, destructor or assignment operator. These classes are passed as function arguments by taking a reference to a copy of the object (although it is often possible to eliminate the copy and pass a reference to the object directly). They are passed as function return values by adding an extra parameter to the start of the function parameters giving a reference to a location into which the return value should be copied.

Member functions

Non-static member functions are implemented in the obvious fashion, by passing a pointer to the object the method is being applied to as the first argument (or the second argument if the method has an extra argument for its return value).
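
A rough sketch of these conventions (illustrative only; the transformed declarations and names shown are invented, and installers may vary the details):

	class T { public : T ( const T & ) ; /* .... */ } ;

	T f ( int ) ;			// C++ declaration
	int T::g ( long ) ;		// non-static member function

	// are implemented approximately as if declared:

	void f_ ( T *ret, int ) ;	// return value constructed into *ret
	int g_ ( T *self, long ) ;	// object address passed as first argument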

Ellipsis functions

Calls to functions declared with ellipses are via the apply_proc TDF construct, with all the arguments being treated as non-variable. However the definition of such a function uses the make_proc construct with a variable parameter. This parameter can be referred to within the program using the ... expression. The type of this expression is given by the built-in token:

	~__va_t : () -> SHAPE
The va_start macro declared in the <stdarg.h> header then describes how the variable parameter (expressed as ...) can be converted to an expression of type va_list suitable for use in the va_arg macro.
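As a minimal illustration (standard C/C++, nothing producer-specific), such a function is written in the usual way:

	#include <stdarg.h>

	// the ... parameter below is represented by the variable make_proc
	// parameter described above; va_start locates the first optional argument
	int sum ( int n, ... )
	{
	    va_list args ;
	    va_start ( args, n ) ;
	    int total = 0 ;
	    for ( int i = 0 ; i < n ; i++ ) {
		total += va_arg ( args, int ) ;
	    }
	    va_end ( args ) ;
	    return total ;
	}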

Note that the variable parameter is in effect only being used to determine where the first optional parameter is located. The assumption is that all such parameters are located contiguously on the stack; however, because calls to such functions do not use the variable parameter mechanism, this is not automatically the case. Strictly speaking this means that the implementation of ellipsis functions relies on undefined behaviour in TDF, but given the non-type-safe function calling rules in C this is unavoidable, and installers need to make provision for such calls (by dumping any parameters from registers to the stack if necessary). Given the theoretically type-safe nature of C++ it would be possible to avoid such undefined behaviour, but the need for C-compatible calling conventions prevents this.


2.6.8. Pointers to data members

The representation of, and operations on, pointers to data members are represented by tokens to allow for a variety of implementations. It is assumed that all pointers to data members (as opposed to pointers to function members) are represented by the same shape:

	~cpp.pm.type : () -> SHAPE
This shape will be denoted by pm in the description of the following tokens.

There are two basic methods of constructing a pointer to a data member. The first is to take the address of a data member of a class. A data member is represented in TDF by an expression which gives the offset of the member from the start of its enclosing compound shape (note that it is not possible to take the address of a member of a virtual base). The mapping from this offset to a pointer to a data member is given by:

	~cpp.pm.make : ( EXP OFFSET ) -> EXP pm
The second way of constructing a pointer to a data member is to use a null pointer to member:
	~cpp.pm.null : () -> EXP pm
The other fundamental operation on a pointer to data member is to turn it back into an offset expression which can be added to a pointer to a class to access a member of that class in a .* or ->* operation. This is done by the token:
	~cpp.pm.offset : ( EXP pm, ALIGNMENT a ) -> EXP OFFSET ( a, a )
Note that it is necessary to specify an alignment in order to describe the shape of the result. The value of this token is undefined if the given expression is a null pointer to data member.

A pointer to a data member of a non-virtual base class can be converted to a pointer to a data member of a derived class. The reverse conversion is also possible using static_cast. If the base is a primary base class then these conversions are trivial and have no effect. Otherwise null pointers to data members are converted to null pointers to data members, and the non-null cases are handled by the tokens:

	~cpp.pm.cast : ( EXP pm, EXP OFFSET ) -> EXP pm
	~cpp.pm.uncast : ( EXP pm, EXP OFFSET ) -> EXP pm
where the given offset is the offset of the base class within the derived class. It is also possible to convert between any two pointers to data members using reinterpret_cast. This conversion is implied by the equality of representation between any two pointers to data members and has no effect.

The only remaining operations on pointers to data members are to test one against the null pointer to data member and to compare two pointers to data members. These are represented by the tokens:

	~cpp.pm.test : ( EXP pm, LABEL, NTEST ) -> EXP TOP
	~cpp.pm.compare : ( EXP pm, EXP pm, LABEL, NTEST ) -> EXP TOP
where the given NTEST gives the comparison to be applied and the given label gives the destination to jump to if the test fails.

In the default implementation, pointers to data members are implemented as int. The null pointer to member is represented by 0 and the address of a class member is represented by 1 plus the offset of the member (in bytes). Casting to and from a derived class then corresponds to adding or subtracting the base class offset (in bytes), and pointer to member comparisons correspond to integer comparisons.
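The following is a minimal sketch of that default representation written out in ordinary C++; the helper names are hypothetical, only the encoding (0 for null, 1 plus the byte offset otherwise) is taken from the description above:

	typedef int pm ;				// ~cpp.pm.type

	const pm pm_null = 0 ;				// ~cpp.pm.null
	inline pm pm_make ( int byte_offset )		// ~cpp.pm.make
	    { return byte_offset + 1 ; }
	inline int pm_offset ( pm p )			// ~cpp.pm.offset (p non-null)
	    { return p - 1 ; }
	inline pm pm_cast ( pm p, int base_offset )	// ~cpp.pm.cast (p non-null)
	    { return p + base_offset ; }
	inline pm pm_uncast ( pm p, int base_offset )	// ~cpp.pm.uncast (p non-null)
	    { return p - base_offset ; }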


2.6.9. Pointers to function members

As with pointers to data members, pointers to function members and the operations on them are represented by tokens to allow for a range of implementations. All pointers to function members are represented by the same shape:

	~cpp.pmf.type : () -> SHAPE
This shape will be denoted by pmf in the description of the following tokens. Many of the tokens take an expression which has a shape which is a pointer to the alignment of pmf. This will be denoted by ppmf.

There are two basic methods for constructing a pointer to a function member. The first is to take the address of a non-static member function of a class. There are two cases, depending on whether or not the member function is virtual. The non-virtual case is given by the token:

	~cpp.pmf.make : ( EXP PROC, EXP OFFSET, EXP OFFSET ) -> EXP pmf
where the first argument is the address of the corresponding function, the second argument gives any base class offset which is to be added when calling this function (to deal with inherited member functions), and the third argument is a zero offset.

For virtual functions, a pointer to function member of the form above is entered in the virtual function table for the corresponding class. The actual pointer to the virtual function member then gives a reference into the virtual function table as follows:

	~cpp.pmf.vmake : ( SIGNED_NAT, EXP OFFSET, EXP, EXP ) -> EXP pmf
where the first argument gives the index of the function within the virtual function table, the second argument gives the offset of the vptr field within the class, and the third and fourth arguments are zero offsets.

The second way of constructing a pointer to a function member is to use a null pointer to function member:

	~cpp.pmf.null : () -> EXP pmf
	~cpp.pmf.null2 : () -> EXP pmf
For technical reasons there are two versions of this token, although they have the same value. The first token is used in static initialisers; the second token is used in other expressions.

The cast operations on pointers to function members are more complex than those on pointers to data members. The value to be cast is copied into a temporary and one of the tokens:

	~cpp.pmf.cast : ( EXP ppmf, EXP OFFSET, EXP, EXP OFFSET ) -> EXP TOP
	~cpp.pmf.uncast : ( EXP ppmf, EXP OFFSET, EXP, EXP OFFSET ) -> EXP TOP
is applied to modify the value of the temporary according to the given cast. The first argument gives the address of the temporary, the second gives the base class offset to be added or subtracted, the third gives the number to be added or subtracted to convert virtual function indexes for the base class into virtual function indexes for the derived class, and the fourth gives the offset of the vptr field within the class. Again, the ability to use reinterpret_cast to convert between any two pointer to function member types arises because of the uniform representation of these types.

As with pointers to data members, there are tokens implementing comparisons on pointers to function members:

	~cpp.pmf.test : ( EXP ppmf, LABEL, NTEST ) -> EXP TOP
	~cpp.pmf.compare : ( EXP ppmf, EXP ppmf, LABEL, NTEST ) -> EXP TOP
Note however that the arguments are passed by reference.

The most important, and most complex, operation is calling a function through a pointer to function member. The first step is to copy the pointer to function member into a temporary. The token:

	~cpp.pmf.virt : ( EXP ppmf, EXP, ALIGNMENT ) -> EXP TOP
is then applied to the temporary to convert a pointer to a virtual function member to a normal pointer to function member by looking it up in the corresponding virtual function table. The first argument gives the address of the temporary, the second gives the object to which the function is to be applied, and the third gives the alignment of the corresponding class. Now the base class conversion to be applied to the object can be determined by applying the token:
	~cpp.pmf.delta : ( ALIGNMENT a, EXP ppmf ) -> EXP OFFSET ( a, a )
to the temporary to find the offset to be added. Finally the function to be called can be extracted from the temporary using the token:
	~cpp.pmf.func : ( EXP ppmf ) -> EXP PROC
The function call then proceeds as normal.

The default implementation is that described in the ARM, where each pointer to function member is represented in the form:

	struct PTR_MEM_FUNC {
	    short delta ;
	    short index ;
	    union {
		void ( *func ) () ;
		short off ;
	    } u ;
	} ;
The delta field gives the base class offset (in bytes) to be added before applying the function. The index field is 0 for null pointers, -1 for non-virtual function pointers and the index into the virtual function table for virtual function pointers (as described below these indexes start from 1). For non-virtual function pointers the function itself is given by the u.func field. For virtual function pointers the offset of the vptr field within the class is given by the u.off field.


2.6.10. Class layout

Consider a class with no base classes:

	class A {
	    // A's members
	} ;
Each object of class A needs its own copy of the non-static data members of A and, for polymorphic types, a means of referencing the virtual function table and run-time type information for A. This is accomplished using a layout of the form:
[diagram: layout of class A, showing the A component and the vptr A field]
where the A component consists of the non-static data members and vptr A is a pointer to the virtual function table for A. For non-polymorphic classes the vptr A field is omitted; otherwise space for vptr A needs to be allocated within the class and the pointer needs to be initialised in each constructor for A. The precise layout of the virtual function table and the run-time type information is given below.

Two alternative ways of laying out the non-static data members within the class are implemented. The first, which is the default, lays them out in the order in which they are declared in the class definition. The second lays out the public, the protected, and the private members in three distinct sections, the members within each section being given in the order in which they are declared. The latter can be enabled using the -jo command-line option.

The offset of each member within the class (including vptr A) can be calculated in terms of the offset of the previous member. The first member has offset zero. The offset of any other member is given by the offset of the previous member plus the size of the previous member, rounded up to the alignment of the current member. The overall size of the class is given by the offset of the last member plus the size of the last member, rounded up using the token:

	~comp_off : ( EXP OFFSET ) -> EXP OFFSET
which allows for any target dependent padding at the end of the class. The shape of the class is then a compound shape with this offset.
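As a worked example of this rule, assuming 1 byte characters and 4 byte, 4 byte aligned integers (both target dependent assumptions):

	class T {
	    char a ;	// offset 0
	    int b ;	// offset 0 + 1, rounded up to the alignment of int, i.e. 4
	    char c ;	// offset 4 + 4 = 8
	} ;
	// size of T: offset of c plus size of char, i.e. 9, rounded up by ~comp_off
	// (typically to a multiple of the int alignment, giving 12 here)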

Classes with no members need to be treated slightly differently. The shape of such a class is given by the token:

	~cpp.empty.shape : () -> SHAPE
(recall that an empty class still has a nonzero size). The token:
	~cpp.empty.offset : () -> EXP OFFSET
is used to represent the offset required for an empty class when it is used as a base class. This may be a zero offset.

Bitfield members provide a slight complication to the picture above. The offset of a bitfield is additionally padded using the token:

	~pad : ( EXP OFFSET, SHAPE, SHAPE ) -> EXP OFFSET
where the two shapes give the type underlying the bitfield and the bitfield itself.

The layout of unions is similar to that of classes except that all members have zero offset, and the size of the union is the maximum of the sizes of its members, suitably padded. Of course unions cannot be polymorphic and cannot have base classes.

Pointers to incomplete classes are represented by means of the alignment:

	~cpp.empty.align : () -> ALIGNMENT
This token is also used for the alignment of a complete class if that class is never used in the generated TDF in a manner which requires it to be complete. This can reduce the size of the generated code by avoiding the need to define all the member offset tokens merely in order to find the shape of the class.


2.6.11. Derived class layout

The description of the implementation of derived classes will be given in terms of the example class hierarchy given by:

	class A {
	    // A's members
	} ;

	class B : public A {
	    // B's members
	} ;

	class C : public A {
	    // C's members
	} ;

	class D : public B, public C {
	    // D's members
	} ;
or, as a directed acyclic graph:
[diagram: inheritance graph for classes A, B, C and D]

Single inheritance

The layout of class A is given by:

[diagram: layout of class A]
as above. Class B inherits all the members of class A plus those members explicitly declared within class B. In addition, class B inherits all the virtual member functions of A, some of which may be overridden in B, extended by any additional virtual functions declared in B. This may be represented as follows:
[diagram: layout of class B with its A sub-object]
where A denotes those members inherited from the base class and B denotes those members added in the derived class. Note that an object of class B contains a sub-object of class A. The fact that this sub-object is located at the start of B means that the base class conversion from B to A is trivial. Any base class with this property is called a primary base class.

Note that in theory two virtual function tables are required: the normal virtual function table for B, denoted by vtbl B, and a modified virtual function table for A, denoted by vtbl B::A, which takes into account any virtual functions overridden within B and points to B's run-time type information. The latter means that the dynamic type information for the A sub-object relates to B rather than A. However these two tables can usually be combined: if the virtual functions added in B are listed in the virtual function table after those inherited from A, and the form of the overriding is suitably well behaved (in the sense defined below), then vtbl B::A is an initial segment of vtbl B. In this case it is also possible to remove the vptr B field and use vptr B::A in its place (it has to be this way round to preserve the A sub-object). Thus the items shaded in the diagram can be removed.

The class C is similarly given by:

[diagram: layout of class C with its A sub-object]

Multiple inheritance

Class D is more complex because of the presence of multiple inheritance. D inherits all the members of B, including those which B inherits from A, plus all the members of C, including those which C inherits from A. It also inherits all of the virtual member functions from B and C, some of which may be overridden in D, extended by any additional virtual functions declared in D. This may be represented as follows:

[diagram: layout of class D with its B and C sub-objects]
Note that there are two copies of A in D because virtual inheritance has not been used.

The B base class of D is essentially similar to the single inheritance case already discussed; the C base class is different however. Note firstly that the C sub-object of D is located at a non-zero offset, delta D::C, from the start of the object. This means that the base class conversion from D to C consists of adding this offset (for pointer conversions things are further complicated by the need to allow for null pointers). Also vtbl D::C is not an initial segment of vtbl D because this contains the virtual functions inherited from B first, followed by those inherited from C, followed by those first declared in D (there are other reasons as well). Thus vtbl D::C cannot be eliminated.

Virtual inheritance

Virtual inheritance introduces a further complication. Now consider the class hierarchy given by:

	class A {
	    // A's members
	} ;

	class B : virtual public A {
	    // B's members
	} ;

	class C : virtual public A {
	    // C's members
	} ;

	class D : public B, public C {
	    // D's members
	} ;
or, as a directed acyclic graph:
[diagram: inheritance graph for classes A, B, C and D with A as a virtual base]
As before A is given by:
[diagram: layout of class A]
but now B is given by:
[diagram: layout of class B with a ptr A field referring to the A sub-object]
Rather than having the sub-object of class A directly as part of B, the class now contains a pointer, ptr A, to this sub-object. The virtual sub-objects are always located at the end of a class layout; their offset may therefore vary for different objects, however the offset for ptr A is always fixed. The ptr A field is initialised in each constructor for B. In order to perform the base class conversion from B to A, the contents of ptr A are taken (again provision needs to be made for null pointers in pointer conversions). In cases when the dynamic type of the B object can be determined statically it is possible to access the A sub-object directly by adding a suitable offset. Because this conversion is non-trivial (see below) the virtual function table vtbl B::A is not an initial segment of vtbl B and cannot be eliminated.

The class C is similarly given by:

[diagram: layout of class C with a ptr A field referring to the A sub-object]
Now the class D is given by:
[diagram: layout of class D with a single shared A sub-object]
Note that there is a single A sub-object of D referenced by the ptr A fields in both the B and C sub-objects. The elimination of vtbl D::B is as above.


2.6.12. Constructors and destructors

The implementation of constructors and destructors, whether explicitly or implicitly defined, is slightly more complex than that of other member functions. For example, the constructors need to set up the internal vptr and ptr fields mentioned above.

The order of initialisation in a constructor is as follows:

  1. The internal ptr fields giving the locations of the virtual base classes are initialised.
  2. The constructors for the virtual base classes are called.
  3. The constructors for the non-virtual direct base classes are called.
  4. The internal vptr fields giving the locations of the virtual function tables are initialised.
  5. The constructors for the members of the class are called.
  6. The main constructor body is executed.
To ensure that each virtual base is only initialised once, if a class has a virtual base class then all its constructors have an implicit extra parameter of type int. The first two steps above are then only applied if this flag is nonzero. In normal applications of the constructor this argument will be 1; however in base class initialisations, such as those in the second and third steps above, it will be 0.

Note that similar steps to protect virtual base classes are not taken in an implicitly declared operator= function. The order of assignment in this case is as follows:

  1. The assignment operators for the direct base classes (both virtual and non-virtual) are called.
  2. The assignment operators for the members of the class are called.
  3. A reference to the object assigned to (i.e. *this) is returned.

The order of destruction in a destructor is essentially the reverse of the order of construction:

  1. The main destructor body is executed.
  2. The destructors for the members of the class are called.
  3. The internal vptr fields giving the locations of the virtual function tables are re-initialised.
  4. The destructors for the non-virtual direct base classes are called.
  5. The destructors for the virtual base classes are called.
  6. If necessary the space occupied by the object is deallocated.
All destructors have an extra parameter of type int. The virtual base classes are only destroyed if this flag is nonzero when and-ed with 2. The space occupied by the object is only deallocated if this flag is nonzero when and-ed with 1. This deallocation is equivalent to inserting:
	delete this ;
in the destructor. The operator delete function is called via the destructor in this way in order to implement the pseudo-virtual nature of these deallocation functions. Thus for normal destructor calls the extra argument is 2, for base class destructor calls it is 0, and for calls arising from a delete expression it is 3.
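The effect of this flag can be sketched in ordinary C++ as follows (the helper name destroy_A is hypothetical; the real code is generated directly as TDF):

	class A { /* ... */ } ;

	void destroy_A ( A *obj, int flags )
	{
	    // ... main destructor body, member and non-virtual base destructors ...
	    if ( flags & 2 ) {
		// destroy the virtual base classes
	    }
	    if ( flags & 1 ) {
		::operator delete ( obj ) ;	// the effect of 'delete this ;'
	    }
	}

	// destroy_A ( p, 2 ) for a normal destructor call
	// destroy_A ( p, 0 ) for a base class destructor call
	// destroy_A ( p, 3 ) for a call arising from a delete expression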

The virtual function tables are initialised at this point in the constructor, and re-initialised in the destructor, in order to ensure that virtual functions called from base class initialisers are handled correctly (see ISO C++ 12.7).

A further complication arises from the need to destroy partially constructed objects if an exception is thrown in a constructor. A count is maintained of the number of base classes and members constructed within a constructor. If an exception is thrown then it is caught in the constructor, the constructed base classes and members are destroyed, and the exception is re-thrown. The count variable is used to determine which bases and members need to be destroyed.

Warning: These partial destructors currently do not interact correctly with any exception specification on the constructor. Exceptions thrown within destructors are not correctly handled either.


2.6.13. Virtual function tables

The virtual functions in a polymorphic class are given in its virtual function table in the following order: firstly those virtual functions inherited from its direct base classes (which may be overridden in the derived class) followed by those first declared in the derived class in the order in which they are declared. Note that this can result in virtual functions inherited from virtual base classes appearing more than once. The virtual functions are numbered from 1 (this is slightly more convenient than numbering from 0 in the default implementation).

The virtual function table for this class has shape:

	~cpp.vtab.type : ( NAT ) -> SHAPE
the argument being n + 1 where n is the number of virtual functions in the class (there is also a token:
	~cpp.vtab.diag : () -> SHAPE
which is used in the diagnostic output for a generic virtual function table). The table is created using the token:
	~cpp.vtab.make : ( EXP pti, EXP OFFSET, NAT, EXP NOF ) -> EXP vt
where the first expression gives the address of the run-time type information structure for the class, the second expression gives the offset of the vptr field within the class (i.e. voff), the integer constant is n + 1, and the final expression is a make_nof construct giving information on each of the n virtual functions.

The information given on each virtual function in this table has the form of a pointer to function member formed using the token:

	~cpp.pmf.make : ( EXP PROC, EXP OFFSET, EXP OFFSET ) -> EXP pmf
as above, except that the third argument gives the offset of the base class in virtual function tables such as vtbl B::A. For pure virtual functions the function pointer in this token is given by:
	~cpp.vtab.pure : () -> EXP PROC
In the default implementation this gives a function __TCPPLUS_pure which just calls abort.

To avoid duplicate copies of virtual function tables and run-time type information structures being created, the ARM algorithm is used. The virtual function table and run-time type information structure for a class are defined in the module containing the definition of the first non-inline, non-pure virtual function declared in that class. If such a function does not exist then duplicate copies are created in every module which requires them. In the former case the virtual function table will have an external tag name; in the latter case it will be an internal tag. This scheme can be overridden using the -jv command-line option, which causes local virtual function tables to be output for all classes.

Note that the discussion above applies to both simple virtual function tables, such as vtbl B above, and to those arising from base classes, such as vtbl B::A. We are now in a position to precisely determine when vtbl B::A is an initial segment of vtbl B and hence can be eliminated. Firstly, A must be the first direct base class of B and cannot be virtual. This is to ensure both that there are no virtual functions in vtbl B before those inherited from A, and that the corresponding base class conversion is trivial so that the pointers to function members of B comprising the virtual function table can be equally regarded as pointers to function members of A. The second requirement is that if a virtual function for A, f, is overridden in B then the return type for B::f cannot differ from the return type for A::f by a non-trivial conversion (recall that ISO C++ allows the return types to differ by a base class conversion). In the non-trivial conversion case the function entered in vtbl B::A needs to be, not B::f as in vtbl B, but a stub function which calls B::f and converts its return value to the return type of A::f.

Calling virtual functions

The virtual function call mechanism is implemented using the token:

	~cpp.vtab.func : ( EXP ppvt, SIGNED_NAT ) -> EXP ppmf
which has as its arguments a reference to the vptr field of the object the function is to be called for, and the number of the virtual function to be called. It returns a reference to the corresponding pointer to function member within the object's virtual function table. The function is then called by extracting the base class offset to be added, and the function to be called, from this reference using the tokens:
	~cpp.pmf.delta : ( ALIGNMENT a, EXP ppmf ) -> EXP OFFSET ( a, a )
	~cpp.pmf.func : ( EXP ppmf ) -> EXP PROC
described as part of the pointer to function member call mechanism above.
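Putting these pieces together, a virtual call in the default implementation can be sketched as follows (all helper names here are hypothetical, and the vptr field is assumed to lie at offset zero for simplicity):

	struct PTR_MEM_FUNC {
	    short delta ;
	    short index ;
	    union { void ( *func ) () ; short off ; } u ;
	} ;

	// call virtual function number n (numbered from 1) on the object at obj;
	// entry 0 of the table is assumed to hold the class information
	void call_virtual ( void *obj, int n )
	{
	    PTR_MEM_FUNC *vtbl = *( PTR_MEM_FUNC ** ) obj ;	// follow the vptr
	    PTR_MEM_FUNC *pmf = &vtbl [ n ] ;			// ~cpp.vtab.func
	    void *adjusted = ( char * ) obj + pmf->delta ;	// ~cpp.pmf.delta
	    void ( *fn ) ( void * ) =
		( void ( * ) ( void * ) ) pmf->u.func ;		// ~cpp.pmf.func
	    fn ( adjusted ) ;					// apply to the adjusted object
	}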


2.6.14. Run-time type information

Each C++ type can be associated with a run-time type information structure giving information about that type. These type information structures have shape given by the token:

	~cpp.typeid.type : () -> SHAPE
which corresponds to the representation for the standard type std::type_info declared in the header <typeinfo>. Each type information structure consists of a tag number, giving information on the kind of type represented, a string literal, giving the name of the type, and a pointer to a list of base type information structures. These are combined to give a type information structure using the token:
	~cpp.typeid.make : ( SIGNED_NAT, EXP, EXP ) -> EXP ti
Each base type information structure has shape given by the token:
	~cpp.baseid.type : () -> SHAPE
It consists of a pointer to a type information structure, an expression used to describe the offset of a base class, a pointer to the next base type information structure in the list, and two integers giving information on type qualifiers etc. These are combined to give a base type information structure using the token:
	~cpp.baseid.make : ( EXP, EXP, EXP, SIGNED_NAT, SIGNED_NAT ) -> EXP bi

The following table gives the various tag numbers used in type information structures plus a list of the base type information structures associated with each type. Macros giving these tag numbers are provided in the default implementation in a header, interface.h, which is shared by the C++ producer.

Type                Form                    Tag  Base information
integer             -                       0    -
floating point      -                       1    -
void                -                       2    -
class or struct     class T                 3    [base,access,virtual], ....
union               union T                 4    -
enumeration         enum T                  5    -
pointer             cv T *                  6    [T,cv,0]
reference           cv T &                  7    [T,cv,0]
pointer to member   cv T S::*               8    [S,0,0], [T,cv,0]
array               cv T [n]                9    [T,cv,n]
bitfield            cv T : n                10   [T,cv,n]
C++ function        cv T ( S1, ...., Sn )   11   [T,cv,0], [S1,0,0], ...., [Sn,0,0]
C function          cv T ( S1, ...., Sn )   12   [T,cv,0], [S1,0,0], ...., [Sn,0,0]

In the form column cv T is used to denote not only the normal cv-qualifiers but, when T is a function type, the member function cv-qualifiers. Arrays with an unspecified bound are treated as if their bound was zero. Functions with ellipsis are treated as if they had an extra parameter of a dummy type named ... (see below). Note the distinction between C++ and C function types.

Each base type information structure is described as a triple consisting of a type and two integers. One of these integers may be used to encode a type qualifier, cv, as follows:

Qualifier        Encoding
none             0
const            1
volatile         2
const volatile   3

The base type information for a class consists of information on each of its direct base classes. This includes the offset of this base within the class (for a virtual base class this is the offset of the corresponding ptr field), whether the base is virtual (1) or not (0), and the base class access, encoded as follows:

Access Encoding
public 0
protected 1
private 2

For example, the run-time type information structures for the classes declared in the diamond lattice above can be represented as follows:

[diagram: run-time type information structures for the classes A, B, C and D of the diamond lattice]

Defining run-time type information structures

For built-in types, the run-time type information structure may be referenced by the token:

	~cpp.typeid.basic : ( SIGNED_NAT ) -> EXP pti
where the argument gives the encoding of the type as given in the following table:

Type             Encoding      Type                 Encoding
char             0             unsigned long        11
(error)          1             float                12
void             2             double               13
(bottom)         3             long double          14
signed char      4             wchar_t              16
signed short     5             bool                 17
signed int       6             (ptrdiff_t)          18
signed long      7             (size_t)             19
unsigned char    8             (...)                20
unsigned short   9             signed long long     23
unsigned int     10            unsigned long long   27

Note that the encoding for the basic integral types is the same as that given above. The other types are assigned to unused values. Note that the encodings for ptrdiff_t and size_t are not used; instead, the encoding of their implementation type is used (via the standard tokens ptrdiff_t and size_t). The encodings for bool and wchar_t are used because they are conceptually distinct types even though they are implemented as one of the basic integral types. The type labelled ... is the dummy used in the representation of ellipsis functions. The default implementation uses an array of type information structures, __TCPPLUS_typeid, to implement ~cpp.typeid.basic.

The run-time type information structures for classes are defined in the same place as their virtual function tables. Other run-time type information structures are defined in whatever modules require them. In the former case the type information structure will have an external tag name; in the latter case it will be an internal tag.

Accessing run-time type information

The primary means of accessing the run-time type information for an object is using the typeid construct. In cases where the operand type can be determined statically, the address of the corresponding type information structure is returned. In other cases the token:

	~cpp.typeid.ref : ( EXP ppvt ) -> EXP pti
is used, where the argument gives a reference to the vptr field of the object being checked. From this information it is trivial to trace the corresponding type information.

Another means of querying the run-time type information for an object is using the dynamic_cast construct. When the result cannot be determined statically, this is implemented using the token:

	~cpp.dynam.cast : ( EXP ppvt, EXP pti ) -> EXP pv
where the first expression gives a reference to the vptr field of the object being cast and the second gives the run-time type information for the type being cast to. In the default implementation this token is implemented by the procedure __TCPPLUS_dynamic_cast. The key point to note is that the virtual function table contains the offset, voff, of the vptr field from the start of the most complete object. Thus it is possible to find the address of the most complete object. The run-time type information contains enough information to determine whether this object has a sub-object of the type being cast to, and if so, how to find the address of this sub-object. The result is returned as a void *, with the null pointer indicating that the conversion is not possible.
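As a source-level illustration of when this token is required (a standard C++ fragment, not producer-specific):

	struct B { virtual ~B () { } } ;
	struct D : B { } ;

	D *as_D ( B *pb )
	{
	    // the result cannot be determined statically, so this cast is
	    // implemented via ~cpp.dynam.cast; the result is null if *pb
	    // has no D sub-object
	    return dynamic_cast< D * > ( pb ) ;
	}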


2.6.15. Dynamic initialisation

The dynamic initialisation of variables with static storage duration in C++ is implemented by means of the TDF initial_value construct. However in order for the producer to maintain control over the order of initialisation, rather than each variable being initialised separately using initial_value, a single expression is created which initialises all the variables in a module, and this initialiser expression is used to initialise a single dummy variable using initial_value. Note that, while this enables the variables within a single module to be initialised in the order in which they are defined, the order of initialisation between different modules is unspecified.

The implementation needs to keep a list of those variables with static storage duration which have been initialised so that it can call the destructors for these objects at the end of the program. This is done by declaring a variable of shape:

	~cpp.destr.type : () -> SHAPE
for each such object with a non-trivial destructor. Each element of an array is considered a distinct object. Immediately after the variable has been initialised the token:
	~cpp.destr.global : ( EXP pd, EXP POINTER c, EXP PROC ) -> EXP TOP
is called to add the variable to the list of objects to be destroyed. The first argument is the address of the dummy variable just declared, the second is the address of the object to be destroyed, and the third is the destructor to be used. In this way a list giving the objects to be destroyed, and the order in which to destroy them, is built up. Note that partially constructed objects are destroyed within their constructors (see above) so that only completely constructed objects need to be considered.

The implementation also needs to ensure that it calls the destructors in this list at the end of the program, including calls of exit. This is done by calling the token:

	~cpp.destr.init : () -> EXP TOP
at the start of each initial_value construct. In the default implementation this uses atexit to register a function, __TCPPLUS_term, which calls the destructors. To aid alternative implementations the token:
	~cpp.start : () -> EXP TOP
is called at the start of the main function, however this has no effect in the default implementation.


2.6.16. Exception handling

Conceptually, exception handling can be described in terms of the following diagram:

[diagram: the stack of currently active try blocks and local variables]
At any point in the execution of the program there is a stack of currently active try blocks and currently active local variables. A try block is pushed onto the stack as it is entered and popped from the stack when it is left (whether directly or via a jump). A local variable with a non-trivial destructor is pushed onto the stack just after its constructor has been called at the start of its scope, and popped from the stack just before its destructor is called at the end of its scope (including before jumps out of its scope). Each element of an array is considered a separate object. Each try block has an associated list of handlers. Each local variable has an associated destructor.

Provided no exception is thrown this stack grows and shrinks in a well-behaved manner as execution proceeds. When an exception is thrown an exception manager is invoked to find a matching exception handler. The exception manager proceeds to execute a loop to unwind the stack as follows. If the stack is empty then the exception cannot be caught and std::terminate is called. Otherwise the top element is popped from the stack. If this is a local variable then the associated destructor is called for the variable. If the top element is a try block then the current exception is compared in turn to each of the associated handlers. If a match is found then execution jumps to the handler body, otherwise the exception manager continues to the next element of the stack.

Note that this description is purely conceptual. There is no need for exception handling to be implemented by a stack in this way (although the default implementation uses a similar technique). It does however serve to illustrate the various stages which must exist in any implementation.

Try blocks

At the start of a try block a variable of shape:

	~cpp.try.type : () -> SHAPE
is declared corresponding to the stack element for this block. This is then initialised using the token:
	~cpp.try.begin : ( EXP ptb, EXP POINTER fa, EXP POINTER ca ) -> EXP TOP

where the first argument is a pointer to this variable, the second argument is the TDF current_env construct, and the third argument is the result of the TDF make_local_lv construct on the label which is used to mark the first handler associated with the block. Note that the last two arguments enable a TDF long_jump construct to be applied to transfer control to the first handler.

When control exits from a try block, whether by reaching the end of the block or jumping out of it, the block is removed from the stack using the token:

	~cpp.try.end : ( EXP ptb ) -> EXP TOP
where the argument is a pointer to the try block variable.

Local variables

The technique used to add a local variable with a non-trivial destructor to the stack is similar to that used in the dynamic initialisation of global variables. A local variable of shape ~cpp.destr.type is declared at the start of the variable scope. This is initialised just after the constructor for the variable is called using the token:

	~cpp.destr.local : ( EXP pd, EXP POINTER c, EXP PROC ) -> EXP TOP
where the first argument is a pointer to the variable being initialised, the second is a pointer to the local variable to be destroyed, and the third is the destructor to be called. At the end of the variable scope, just before its destructor is called, the token:
	~cpp.destr.end : ( EXP pd ) -> EXP TOP
where the argument is a pointer to the destructor variable, is called to remove the local variable's destructor from the stack. Note that partially constructed objects are destroyed within their constructors (see above) so that only completely constructed objects need to be considered.

In cases where the local variable may be conditionally initialised (for example a temporary variable in the second operand of a || operation) the local variable of shape ~cpp.destr.type is initialised to the value given by the token:

	~cpp.destr.null : () -> EXP d
(normally it is left uninitialised). Before the destructor for this variable is called the value of the token:
	~cpp.destr.ptr : ( EXP pd ) -> EXP POINTER c
is tested. If ~cpp.destr.local has been called for this variable then this token returns a pointer to the variable, otherwise it returns a null pointer. The token ~cpp.destr.end and the destructor are only called if this token indicates that the variable has been initialised.
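A typical source fragment requiring this treatment is the following (a sketch, assuming A has a non-trivial destructor):

	class A { public : A () ; ~A () ; } ;
	bool f () ;
	bool g ( const A & ) ;

	bool h ()
	{
	    // the temporary A () is only constructed if f () returns false,
	    // so its destructor variable is initialised with ~cpp.destr.null
	    // and tested via ~cpp.destr.ptr before the destructor is called
	    return f () || g ( A () ) ;
	}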

Throwing an exception

When a throw expression with an argument is encountered a number of steps are performed. Firstly, space is allocated to hold the exception value using the token:

	~cpp.except.alloc : ( EXP VARIETY size_t ) -> EXP pv
the argument of which gives the size of the value. The space allocated is returned as an expression of type void *. Secondly, the exception value is copied into the space allocated, using a copy constructor if appropriate. Finally the exception is raised using the token:
	~cpp.except.throw : ( EXP pv, EXP pti, EXP PROC ) -> EXP BOTTOM
The first argument gives the pointer to the exception value, returned by ~cpp.except.alloc, the second argument gives a pointer to the run-time type information for the exception type, and the third argument gives the destructor to be called to destroy the exception value (if any). This token sets the current exception to the given values and invokes the exception manager as above.

A throw expression without an argument results in a call to the token:

	~cpp.except.rethrow : () -> EXP BOTTOM
which re-invokes the exception manager with the current exception. If there is no current exception then the implementation should call std::terminate.

Handling an exception

The exception manager proceeds to find an exception in the manner described above, unwinding the stack and calling destructors for local variables. When a try block is popped from the stack a TDF long_jump is applied to transfer control to its list of handlers. For each handler in turn it is checked whether the handler can catch the current exception. For ... handlers this is always true; for other handlers it is checked using the token:

	~cpp.except.catch : ( EXP pti ) -> EXP VARIETY int
where the argument is a pointer to the run-time type information for the handler type. This token gives 1 if the exception is caught by this handler, and 0 otherwise. If the exception is not caught by the handler then the next handler is checked, until there are no more handlers associated with the try block. In this case control is passed back to the exception manager by re-throwing the current exception using ~cpp.except.rethrow.

If an exception is caught by a handler then a number of steps are performed. Firstly, if appropriate, the handler variable is initialised by copying the current exception value. A pointer to the current exception value can be obtained using the token:

	~cpp.except.value : () -> EXP pv
Once this initialisation is complete the token:
	~cpp.except.caught : () -> EXP TOP
is called to indicate that the exception has been caught. The handler body is then executed. When control exits from the handler, whether by reaching the end of the handler or by jumping out of it, the token:
	~cpp.except.end : () -> EXP TOP
is called to indicate that the exception has been completed. Note that the implementation should call the destructor for the current exception and free the space allocated by ~cpp.except.alloc at this point. Execution then continues with the statement following the handler.

To conclude, the TDF generated for a try block and its associated list of handlers has the form:

	variable (
	    long_jump_access,
	    stack_tag,
	    make_value ( ~cpp.try.type ),
	    conditional (
		handler_label,
		sequence (
		    ~cpp.try.begin (
			obtain_tag ( stack_tag ),
			current_env,
			make_local_lv ( handler_label ) ),
			try-block-body,
			~cpp.try.end ),
		    conditional (
			catch_label_1,
			sequence (
			    integer_test (
				not_equal,
				catch_label_1,
				~cpp.except.catch (
				    handler-1-typeid ) )
			    variable (
				handler_tag_1,
				handler-1-init (
				    ~cpp.except.value ),
				sequence (
				    ~cpp.except.caught,
				    handler-1-body ) )
			    ~cpp.except.end )
			conditional (
			    catch_label_2,
			    further-handlers,
			    ~cpp.except.rethrow ) ) ) )

Note that for a local variable to maintain its previous value when an exception is caught in this way it is necessary to declare it using the TDF long_jump_access construct. Any local variable which contains a try block in its scope is declared in this way.

To aid implementations in the writing of exception managers the following standard tokens are provided:

	~cpp.ptr.code : () -> SHAPE POINTER ca
	~cpp.ptr.frame : () -> SHAPE POINTER fa
	~cpp.except.jump : ( EXP POINTER fa, EXP POINTER ca ) -> EXP BOTTOM
These give the shape of the TDF make_local_lv construct, the shape of the TDF current_env construct, and direct access to the TDF long_jump construct. The exception manager in the default implementation is a function called __TCPPLUS_throw.

Exception specifications

If a function is declared with an exception specification then extra code needs to be generated in the function definition to catch any unexpected exceptions thrown by the function and to call std::unexpected. Since this is potentially a high overhead for small functions, this extra code is not generated if it can be proved that such unexpected exceptions can never be thrown (the analysis is essentially the same as that in the exception analysis check).

The implementation of exception specifications is to enclose the entire function definition in a try block. The handler for this block uses ~cpp.except.catch to check whether the current exception can be caught by any of the types listed in the exception specification. If so the current exception is re-thrown. If none of these types catch the current exception then the token:

	~cpp.except.bad : ( SIGNED_NAT ) -> EXP TOP
is called. The argument is 1 if the exception specification includes the special type std::bad_exception, and 0 otherwise. The implementation should call std::unexpected, but how any exceptions thrown during this call are to be handled depends on the value of the argument.
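Conceptually the transformation can be sketched at the source level as follows (the handlers shown stand in for the generated checks; the real code uses ~cpp.except.catch and ~cpp.except.bad rather than source-level catch clauses):

	void handle_unexpected () ;		// stands in for ~cpp.except.bad ( 0 )

	void f () throw ( int )
	{
	    try {
		// original body of f
	    } catch ( int ) {
		throw ;				// listed type: re-throw to the caller
	    } catch ( ... ) {
		handle_unexpected () ;		// unexpected exception
	    }
	}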


2.6.17. Mangled identifier names

In a similar fashion to other C++ compilers, the C++ producer needs a method of mapping C++ identifiers to a form suitable for further processing, namely TDF tag names. This mangled name contains an encoding of the identifier name, its parent namespace or class and its type. Identifiers with C linkage are not mangled. The producer contains a built-in name unmangler which performs the reverse operation of transforming the mangled form of an identifier name back to the underlying identifier. This can be useful when analysing system linker errors.

Note that the type of an identifier forms part of its mangled name not only for functions, but also for variables. Many other compilers do not mangle variable names, however the ISO C++ rules on namespaces and variables with C linkage make it necessary (this can be suppressed using the -j-n command-line option). Declaring the language linkage of a variable inconsistently can therefore lead to linking errors with the C++ producer which are not detected by other compilers. A common example is:

	extern int errno ;
which, leaving aside whether errno is actually an external variable, should be:
	extern "C" int errno ;

As described above, the mangled form of an identifier has three components: the identifier name, the identifier namespace and the identifier type. Two underscores (__) are used to separate the name component from the namespace and type components. The mangling scheme used is based on that described in the ARM. The description below is not complete; the mangling and unmangling routines themselves should be consulted for a complete description.

Mangling identifier names

Simple identifier names are mapped to themselves. Unicode characters of the forms \uxxxx and \Uxxxxxxxx are mapped to __kxxxx and __Kxxxxxxxx respectively, where the hex digits are output in their canonical lower-case form. Constructors are mapped to __ct and destructors to __dt. Conversion functions are mapped to __optype where type is the mangled form of the conversion type. Overloaded operator functions, operator@, are mapped as follows:

Operator    Mapping    Operator    Mapping    Operator    Mapping
&           __ad       &=          __aad      []          __vc
->          __rf       ->*         __rm       =           __as
,           __cm       ~           __co       /           __dv
/=          __adv      ==          __eq       ()          __cl
>           __gt       >=          __ge       <           __lt
<=          __le       &&          __aa       ||          __oo
<<          __ls       <<=         __als      -           __mi
-=          __ami      --          __mm       !           __nt
!=          __ne       |           __or       |=          __aor
+           __pl       +=          __apl      ++          __pp
%           __md       %=          __amd      >>          __rs
>>=         __ars      *           __ml       *=          __aml
^           __er       ^=          __aer      delete      __dl
delete []   __vd       new         __nw       new []      __vn
?:          __cn       :           __cs       ::          __cc
.           __df       .*          __dm       abs         __ab
max         __mx       min         __mn       sizeof      __sz
typeid      __td       vtable      __tb       -           -

Note that this table contains a number of operators which are not part of C++ or cannot be overloaded in C++. These are used in the representation of target dependent integer constants.

Mangling namespace names

The global namespace is mapped to an empty string. Simple namespace and class names are mapped as above, but are preceded by a series of decimal digits giving the length of the mangled name. Nested namespaces and classes are represented by a sequence of such namespace names, preceded by the number of elements in the sequence. This takes the form Qdigit if there are fewer than 10 elements, or Q_digits_ otherwise. Note that members of anonymous classes or namespaces are local to their translation unit, and so do not have external tag names.

Mangling types

The mangling of types is essentially similar to that used in the symbol table dump format. The type used in the mangled name for an identifier ignores the return type for a function and ignores the most significant bound for an array.

The built-in types are mapped in precisely the same way as in the symbol table dump. Class and enumeration types are mapped to their type names mangled in the same way as the namespace names above. The exception to this is that in a class member, the parent class is mapped to X.

The composite types are again mapped in a similar fashion to that in the dump file. For example, PCc represents const char *. The only difficult case concerns function parameter types where the ARM T and N encodings are used for duplicate parameter types. The function return type is included in the mangled form except for function identifier types. In the cases where the identifier is known always to represent a function (constructors, destructors etc.) the initial F indicating a function type is also omitted.

The types of template functions and classes are represented by the underlying template and the template arguments giving rise to the instance. Template classes are preceded by t; template functions are preceded by G rather than F. Type arguments are represented by Z followed by the type value; non-type arguments are represented by the argument type followed by the argument value. In the underlying type the template parameters are represented by m0, m1 etc. An alternative scheme, in which the mangled form of a template function includes the type of that instance, rather than the underlying template, can be enabled using the -j-f command-line option.

Other mangled names

The virtual function table for a class, when this is a variable with external linkage, is named __vt__type, where type is the mangled form of the class name. The virtual function table for a base class is named __vt__base, where base is a sequence of mangled class names specifying the base class. The run-time type information structure for a type, when this is a variable with external linkage, is named __ti__type, where type is the mangled form of the type name.

Mangled name examples

The following gives some examples of the name mangling scheme:

	class A {
	    static int a ;			// a__1Ai
	public :
	    A () ;				// __ct__1A
	    A ( int ) ;				// __ct__1Ai
	    A ( const A & ) ;			// __ct__1ARCX
	    virtual ~A () ;			// __dt__1A
	    operator bool () ;			// __opb__1A
	    bool operator! () ;			// __nt__1A
	} ;

	// virtual function table	__vt__1A
	// run-time type information	__ti__1A

	int f ( A *, int, A * ) ;		// f__FP1AiT1
	int b = 2 ;				// b__i
	int c [3] ;				// c__A_i

	namespace N {
	    int *p = 0 ;			// p__1NPi
	}


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tcpplus/link.html (C++ Producer Guide: Intermodule analysis)

C++ Producer Guide

March 1998



2.5. Intermodule analysis

Warning: The C++ spec linking routines have not yet been completely implemented, and so are disabled in the current version of the C++ producer.

A C++ spec file is a dump of the C++ producer's internal representation of a translation unit. Such files can be written to, and read from, disk to perform such operations as intermodule analysis.

Note that the format of a C++ spec file is specific to the C++ producer and may change between releases to reflect modifications in the internal type system. The C producer has a similar dump format, called a C spec file, however the two are incompatible. If intermodule analysis between C and C++ source files is required then the symbol table dump format should be used.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tcpplus/man.html (C++ Producer Guide: Invocation)

C++ Producer Guide

March 1998



2.1.1 - Compilation scheme
2.1.2 - Producer options

2.1. Invocation

This section describes how the C++ to TDF producer, tcpplus, fits into the overall compilation scheme controlled by the TenDRA compiler front-end, tcc, or the TenDRA checker front-end, tchk. While it is possible to use tcpplus as a stand-alone program, it is recommended that it be invoked via tcc or tchk. The tcc users' guide should be consulted for more details.

tcc and tchk require the -Yc++ command-line option in order to enable their C++ capabilities. Files with a .C suffix are recognised as C++ source files and passed to tcpplus for processing (see below). It is possible to change the suffix used for C++ source files; for example -sC:cc causes .cc files to be recognised as C++ source files. An interesting variation is -sC:c which causes C source files to be processed by the C++ producer. Similarly .I files are recognised as preprocessed C++ source files and .K files are recognised as C++ spec files.

Most of the command-line option handling for tcpplus is done by tcc and tchk; however, an option opt can be passed directly to tcpplus using the option -Wx,opt to tcc or tchk. Similarly -Wg,opt and -WS,opt can be used to pass options to the C++ preprocessor and the C++ spec linker (both of which are actually tcpplus invoked with different options) respectively.
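For example (illustrative command lines; the exact effect depends on the local tcc configuration):

	tcc -Yc++ -Wx,-V prog.C		# pass -V directly to tcpplus
	tcc -Yc++ -Wg,-H prog.C		# pass -H to the C++ preprocessor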


2.1.1. Compilation scheme

The overall compilation scheme controlled by tcc, as it relates to the C++ producer, can be represented as follows:

[diagram: C++ compilation scheme]
Each C++ source file, a.C say, is processed using tcpplus to give an output TDF capsule, a.j, which is passed to the installer phase of tcc. The capsule is linked with any target dependent token definition libraries, translated to assembler and assembled to give a binary object file, a.o. The various object files comprising the program are then linked with the system libraries to give a final executable, a.out.

In addition to this main compilation scheme, tcpplus can be made to output a C++ spec file for each C++ source file, a.K say. These C++ spec files can be linked, using tcpplus in its spec linker mode, to give an additional TDF capsule, x.j say, and a combined C++ spec file, x.K. The main purpose of this C++ spec linking is to perform intermodule checks on the program; however, in the course of this checking, exported templates which are defined in one module and used in another are instantiated. This extra code is output to x.j, which is then installed and linked in the normal way.

Note that intermodule checks, and hence intermodule template instantiations, are only performed if the -im option is passed to tcc.

The TenDRA checker, tchk, is similar to tcc except that it disables TDF output and has intermodule analysis enabled by default.


2.1.2. Producer options

The general form for the invocation of tcpplus is as follows:

	tcpplus [ options ] [ input-file ] .... [ output-file ]
The output file can alternatively be specified using the -o option. If no output file is given, or the output file is -, the standard output is used. In general there can be any number of input files. If no input file is given, or the input file is -, the standard input is used.

tcpplus has three modes which determine the form of its input and output files. The default mode is compilation, in which a single input C++ source file is translated into an output TDF capsule. In preprocessing mode, specified using the -E option, a single input C++ source file is preprocessed into an output C++ source file. Note that the preprocessor is built into tcpplus, rather than, as with most other compilers, being a separate program. The final mode is C++ spec linking, specified using the -S option. Any number of C++ spec input files are linked and any code generated as a result (for example, template instantiations) is written to the output TDF capsule.
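By way of illustration (the file names are arbitrary):

	tcpplus a.C a.j			# compilation mode
	tcpplus -E a.C a.i		# preprocessing mode
	tcpplus -S a.K b.K -o x.j	# C++ spec linker mode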

In either compilation or spec linking mode, a C++ spec output file can be generated, in addition to the TDF capsule, using the -s option. In any mode a symbol table dump output file can be generated using the -d option.

Command-line options can appear in any order and can be interspersed with the input and output files, except following a -- option. All the multi-part options can be given either as one or two command-line arguments, so that -Idirectory and -I directory are equivalent. The recognised options are as follows:

-Apredicate(tokens)
Asserts that the given predicate is true, that is to say:
	#assert predicate ( tokens )
The special case -A- undefines all the built-in predicates (of which there are none). Use of this option automatically enables support for the #assert and #unassert directives.

-Dmacro
-Dmacro=tokens
Defines the given macro to be 1 in the first case, or the given sequence of preprocessing tokens in the second case, that is to say:
	#define macro 1
	#define macro tokens
respectively. In fact -D and -U options to tcc are not passed as -D and -U options to tcpplus. Instead a start-up file containing the equivalent #define and #undef directives is used.

-E
Enables preprocessing mode in which the input C++ source file is preprocessed into the output file.

-Ffile
Causes a list of command-line options to be read from file. Other than empty lines and lines beginning with #, each line in the file is treated as if it had been specified as a separate command-line option.

-H
Enables verbose inclusion mode in which warnings are printed at the start and end of each included source file.

-Idirectory
Adds the given directory to the list searched for included source files. No such directories are built into the producer by default.

-Nname:directory
This is identical to -Idirectory except that it also associates the given identifier with the directory. The directory name can be used to specify a compilation profile to be used on files included from this directory.

-S
Enables C++ spec linker mode, in which any number of C++ spec input files are linked together.

-Umacro
Undefines the given macro, that is to say:
	#undef macro
The special case -U- undefines all the built-in macros. These may be described as follows:
	#define __FILE__		(current file)
	#define __LINE__		(current line)
	#define __TIME__		(current time)
	#define __DATE__		(current date)
	#define __STDC__		1
	#define __STDC_VERSION__	199409L
	#define __cplusplus		199711L
The value of __cplusplus gives the date of the draft ISO C++ standard on which the current version of the producer is based; the value shown above is the expected date of the final C++ standard.

-V
Causes the name of each function to be printed to the standard output as it is compiled.

-Woption
Sets the given compiler option to give a warning, that is to say:
	#pragma TenDRA option "option" warning
The special case -Wall enables a wide range of warnings.

-X
Disables exception handling. The current implementation can impose a large run-time overhead when exceptions are not actually required. The effect of linking any module compiled with this option with a module which throws an exception is undefined. This is equivalent to -j-e.

-a
Causes complete program analysis to be applied. That is to say it is assumed that no other translation units need to be linked in order for the program to execute.

-c
Disables TDF output. The output file will still be a valid TDF capsule, but it will contain no information. This is equivalent to -j-c.

-dopt=dump-file
Specifies the given file as a symbol table dump output file. opt is a series of characters describing the information to be dumped, as follows:

Key Description
a equivalent to ehlmu
c dump string literals
e dump error messages
h dump header information
k dump keyword identifiers
l dump local variables
m dump macro identifiers
s dump scope information
u dump identifier usage information

Note that these correspond to the tcc -sym options.

-efile
Specifies the given file as an end-up file. This is equivalent to adding:
	#include "file"
at the end of the input source file. More than one end-up file may be given; they are processed in the order given.

-ffile
Specifies the given file as a start-up file. This is equivalent to adding:
	#include "file"
at the start of the input source file. More than one start-up file may be given; they are processed in the order given.

-g
Specifies that the output TDF capsule should also contain information to allow for the generation of run-time debugging directives. This is equivalent to -jg.

-h
Causes a full list of command-line options to be printed. This includes a number not documented here which are unlikely to prove useful to the normal user.

-jopt
Sets the TDF output options given by opt. This consists of a sequence of characters describing the options to be enabled or disabled. By default, or following a +, the options are enabled; following a - they are disabled. The available options are as follows:

Key Default Description
a off output external names for local objects
b off work round old installer bugs
c on output TDF capsule
d off output termination function
e on output exceptions
f on mangle template function signatures
g off output debugging information
i off output dynamic initialisers as a function
n on mangle object names
o off order class data members by access
p on output partial destructors
r on output run-time type information
s on output shared string literals
t off output token declarations
u on output unused static variables
v off output local virtual function tables

-mopt
Sets the error formatting options given by opt. This consists of a sequence of characters describing the options to be enabled or disabled. By default, or following a +, the options are enabled; following a - they are disabled. The available options are as follows:

Key Default Description
c off show source code with error
e off show error name
f on reliable fseek function
g off record statement locations
i on reliable stat function
k off enable C++ spec output
l off output full error location
s on output ISO section number
t off use typedef names in errors
w off disable warnings
z off continue after error

-nport-table
Specifies that the given portability table should be used to set the basic configuration parameters.

-ooutput-file
Gives an alternative method of specifying the output file.

-q
Causes the program to quit immediately without processing its input files. This is useful primarily in version and command-line option queries.

-sspec-file
Specifies the given file as a C++ spec output file.

-t
Specifies that token declarations should be included in the output TDF capsule. While these are strictly unnecessary, they help when pretty-printing the output. This is equivalent to -jt.

-u
The form:
	tcpplus -u name .... name
can be used to print the unmangled forms of a list of mangled identifier names to the standard output.

-v
Causes the C++ producer version number, plus information on the versions of C++ and TDF supported, to be printed to the standard error.

-w
Disables all warning messages. This is equivalent to -mw.

-z
Forces an output file to be created even if compilation errors occur. The effect of installing a TDF capsule produced using this option is undefined. This is equivalent to -mz.

--
Marks the last option. Any subsequent arguments are interpreted as input and output files even if they resemble command-line options.



C++ Producer Guide

March 1998



3.4. Parsing C++

The parser used in the C++ producer is generated using the sid tool. Because of the large size of the generated code (1.3MB), the sid output is run through a simple program, sidsplit, which splits the output into a number of more manageable modules. It also transforms the code to use the PROTO macros used in the rest of the program.

sid is designed as a parser for grammars which can be transformed into LL(1) grammars. The distinguishing feature of these grammars is that the parser can always decide what to do next based on the current terminal. This is not the case in C++; in some circumstances a potentially unlimited look-ahead is required to distinguish, for example, declaration statements from expression statements. In the technical phrase, C++ is an LL(k) grammar. Fortunately there are relatively few such situations, and sid provides a mechanism, predicates, for bypassing the normal parsing mechanism in these cases. Thus it is possible, although difficult, to express C++ as a sid grammar.

The sid grammar file, syntax.sid, is closely based on the ISO C++ grammar. In particular, the same production names have been used. The grammar has been extended slightly to allow common syntactic errors to be detected elegantly. Other parsing errors are handled by sid's exception mechanism. At present there is only limited recovery after such errors.

The lexical analysis routines in the C++ producer are hand-crafted, based on an initial version generated by the simple lexical analyser generator, lexi. lexi has been used more directly to generate the lexical analysers for certain of the other automatic code generating tools, including calculus, used in the producer.

The sid grammar contains a number of entry points. The most important is parse_file, which is used to parse a complete C++ translation unit. The syntax for the #pragma TenDRA directives is included within the same grammar with two entry points, parse_tendra in normal use, and parse_preproc for use in preprocessing mode. There are also entry points in the grammar for each of the kinds of token argument. The parsing routines for token and template arguments are largely hand-crafted, based on these primitives.

Certain parsing operations are performed before control passes to the sid grammar. As mentioned above, these include the processing of token and template applications. The other important case concerns nested name specifiers. For example, in:

	class A {
	    class B {
		static int c ;
	    } ;
	} ;

	int A::B::c = 0 ;
the qualified identifier A::B::c is split into two terminals, a nested name specifier, A::B::, and an identifier, c, which is looked up in the corresponding namespace. Note that it is at this stage that name look-up occurs. An identifier can be mapped to one of a number of terminals, including keywords, type names, namespace names and other identifiers, according to the result of this look-up. If the look-up gives a macro then this is expanded at this stage.



C++ Producer Guide

March 1998



2.2.1 - Portability tables
2.2.2 - Low level configuration
2.2.3 - Checking scopes
2.2.4 - Implementation limits
2.2.5 - Lexical analysis
2.2.6 - Keywords
2.2.7 - Comments
2.2.8 - Identifier names
2.2.9 - Integer literals
2.2.10 - Character literals and built-in types
2.2.11 - String literals
2.2.12 - Escape sequences
2.2.13 - Preprocessing directives
2.2.14 - Target dependent conditional inclusion
2.2.15 - File inclusion directives
2.2.16 - Macro definitions
2.2.17 - Empty source files
2.2.18 - The std namespace
2.2.19 - Object linkage
2.2.20 - Static identifiers
2.2.21 - Empty declarations
2.2.22 - Implicit int
2.2.23 - Extended integral types
2.2.24 - Bitfield types
2.2.25 - Elaborated type specifiers
2.2.26 - Implicit function declarations
2.2.27 - Weak function prototypes
2.2.28 - printf and scanf argument checking
2.2.29 - Type declarations
2.2.30 - Type compatibility
2.2.31 - Incomplete types
2.2.32 - Type conversions
2.2.33 - Cast expressions
2.2.34 - Ellipsis functions
2.2.35 - Overloaded functions
2.2.36 - Expressions
2.2.37 - Initialiser expressions
2.2.38 - Lvalue expressions
2.2.39 - Discarded expressions
2.2.40 - Conditional and iteration statements
2.2.41 - Switch statements
2.2.42 - For statements
2.2.43 - Return statements
2.2.44 - Unreached code analysis
2.2.45 - Variable flow analysis
2.2.46 - Variable hiding
2.2.47 - Exception analysis
2.2.48 - Template compilation
2.2.49 - Other checks

2.2. Compiler configuration

This section describes how the C++ producer can be configured to apply extra static checks or to support various dialects of C++. In all cases the default behaviour is precisely that specified in the ISO C++ standard with no extra checks.

Certain very basic configuration information is specified using a portability table; however, the primary method of configuration is by means of #pragma directives. These directives may be placed within the program itself, but it is generally more convenient to group them into a start-up file in order to create a user-defined compilation profile. The #pragma directives recognised by the C++ producer have one of the equivalent forms:

	#pragma TenDRA ....
	#pragma TenDRA++ ....
Some of these are common to the C and C++ producers (although often with differing default behaviour). The C producer will ignore any TenDRA++ directives, so these may be used in compilation profiles which are to be used by both producers. In the descriptions below, the presence of a ++ is used to indicate a directive which is C++ specific; the other directives are common to both producers.

Within the description of the #pragma syntax, on stands for on, off or warning, allow stands for allow, disallow or warning, string-literal is any string literal, integer-literal is any integer literal, identifier is any simple, unqualified identifier name, and type-id is any type identifier. Other syntactic items are described in the text. A complete grammar for the #pragma directives accepted by the C++ producer is given as an annex.


2.2.1. Portability tables

Certain very basic configuration information is read from a file called a portability table, which may be specified to the producer using a -n option. This information includes the minimum sizes of the basic integral types, the sign of plain char, and whether signed types can be assumed to be symmetric (for example, [-127,127]) or maximum (for example, [-128,127]).

The default portability table values, which are built into the producer, can be expressed in the form:

	char_bits			8
	short_bits			16
	int_bits			16
	long_bits			32
	signed_range			symmetric
	char_type			either
	ptr_int				none
	ptr_fn				no
	non_prototype_checks		yes
	multibyte			1
This illustrates the syntax for the portability table; note that all ten entries are required, even though the last four are ignored.


2.2.2. Low level configuration

The simplest level of configuration is to reset the severity level of a particular error message using:

	#pragma TenDRA++ error string-literal on
	#pragma TenDRA++ error string-literal allow
The given string-literal should name an error from the error catalogue. A severity of on or disallow indicates that the associated diagnostic message should be an error, which causes the compilation to fail. A severity of warning indicates that the associated diagnostic message should be a warning, which is printed but allows the compilation to continue. A severity of off or allow indicates that the associated error should be ignored. Reducing the severity of any error from its default value, other than via one of the dialect directives described in this section, results in undefined behaviour.

The next level of configuration is to reset the severity level of a particular compiler option using:

	#pragma TenDRA++ option string-literal on
	#pragma TenDRA++ option string-literal allow
The given string-literal should name an option from the option catalogue. The simplest form of compiler option just sets the severity level of one or more error messages. Some of these options may require additional processing to be applied.

It is possible to link a particular error message to a particular compiler option using:

	#pragma TenDRA++ error string-literal as option string-literal

Note that the directive:

	#pragma TenDRA++ use error string-literal 
can be used to raise a given error at any point in a translation unit in a similar fashion to the #error directive. The values of any parameters for this error are unspecified.

The directives just described give the primitive operations on error messages and compiler options. Many of the remaining directives in this section are merely higher level ways of expressing these primitives.


2.2.3. Checking scopes

Most compiler options are scoped. A checking scope may be defined by enclosing a list of declarations within:

	#pragma TenDRA begin
	....
	#pragma TenDRA end
If the final end directive is omitted then the scope ends at the end of the translation unit. Checking scopes may be nested in the obvious way. A checking scope inherits its initial set of checks from its enclosing scope (this includes the implicit main checking scope consisting of the entire input file). Any checks switched on or off within a scope apply only to the remainder of that scope and any scope it contains. A particular check can only be set once in a given scope. The set of applied checks reverts to its previous state at the end of the scope.

A checking scope can be named using the directives:

	#pragma TenDRA begin name environment identifier
	....
	#pragma TenDRA end
Checking scope names occupy a namespace distinct from any other namespace within the translation unit. A named scope defines a set of modifications to the current checking scope. These modifications may be reapplied within a different scope using:
	#pragma TenDRA use environment identifier
The default behaviour is not to allow checks set in the named checking scope to be reset in the current scope. This can however be modified using:
	#pragma TenDRA use environment identifier reset allow
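
As a sketch of how these directives fit together (the environment name api_checks and the particular checks chosen are purely illustrative):

	#pragma TenDRA begin name environment api_checks
	#pragma TenDRA conversion analysis on
	#pragma TenDRA integer operator analysis on
	#pragma TenDRA end

	#pragma TenDRA begin
	#pragma TenDRA use environment api_checks
	....
	#pragma TenDRA end
The declarations denoted by .... are checked with the modifications recorded in api_checks applied, and the previous set of checks is restored at the final end directive.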

Another use of a named checking scope is to associate a checking scope with a named include file directory. This is done using:

	#pragma TenDRA directory identifier use environment identifier
where the directory name is one introduced via a -N command-line option. The effect of this directive, if a #include directive is found to resolve to a file from the given directory, is as if the file was enclosed in directives of the form:
	#pragma TenDRA begin
	#pragma TenDRA use environment identifier reset allow
	....
	#pragma TenDRA end

The checks applied to the expansion of a macro definition are those from the scope in which the macro was defined, not that in which it was expanded. The macro arguments are checked in the scope in which they are specified, that is to say, the scope in which the macro is expanded. This enables macro definitions to remain localised with respect to checking scopes.


2.2.4. Implementation limits

This table gives the default implementation limits imposed by the C++ producer for the various implementation quantities listed in Annex B of the ISO C++ standard, together with the minimum limits allowed in ISO C and C++. A default limit of none means that the quantity is limited only by the resources of the host machine (either ULONG_MAX or available memory). A limit of target means that while no limit is imposed by the C++ front-end, particular target machines may impose such limits.

Quantity identifier Min C limit Min C++ limit Default limit
statement_depth 15 256 none
hash_if_depth 8 256 none
declarator_max 12 256 none
paren_depth 32 256 none
name_limit 31 1024 none
extern_name_limit 6 1024 target
external_ids 511 65536 target
block_ids 127 1024 none
macro_ids 1024 65536 none
func_pars 31 256 none
func_args 31 256 none
macro_pars 31 256 none
macro_args 31 256 none
line_length 509 65536 none
string_length 509 65536 none
sizeof_object 32767 262144 target
include_depth 8 256 256
switch_cases 257 16384 none
data_members 127 16384 none
enum_consts 127 4096 none
nested_class 15 256 none
atexit_funcs 32 32 target
base_classes N/A 16384 none
direct_bases N/A 1024 none
class_members N/A 4096 none
virtual_funcs N/A 16384 none
virtual_bases N/A 1024 none
static_members N/A 1024 none
friends N/A 4096 none
access_declarations N/A 4096 none
ctor_initializers N/A 6144 none
scope_qualifiers N/A 256 none
external_specs N/A 1024 none
template_pars N/A 1024 none
instance_depth N/A 17 17
exception_handlers N/A 256 none
exception_specs N/A 256 none

It is possible to impose lower limits on most of the quantities listed above by means of the directive:

	#pragma TenDRA++ option value string-literal integer-literal
where string-literal gives one of the quantity identifiers listed above and integer-literal gives the limit to be imposed. An error is reported if the quantity exceeds this limit (note however that checks have not yet been implemented for all of the quantities listed). Note that the name_limit and include_depth implementation limits can be set using dedicated directives.

The maximum number of errors allowed before the producer bails out can be set using the directive:

	#pragma TenDRA++ set error limit integer-literal
The default value is 32.


2.2.5. Lexical analysis

During lexical analysis, a source file which is not empty should end in a newline character. It is possible to relax this constraint using the directive:

	#pragma TenDRA no nline after file end allow


2.2.6. Keywords

In several places in this section it is described how to introduce keywords for TenDRA language extensions. By default, no such extra keywords are defined. There are also low-level directives for defining and undefining keywords. The directive:

	#pragma TenDRA++ keyword identifier for keyword identifier 
can be used to introduce a keyword (the first identifier) standing for the standard C++ keyword given by the second identifier. The directive:
	#pragma TenDRA++ keyword identifier for operator operator 
can similarly be used to introduce a keyword giving an alternative representation for the given operator or punctuator, as, for example, in:
	#pragma TenDRA++ keyword and for operator &&
Finally the directive:
	#pragma TenDRA++ undef keyword identifier 
can be used to undefine a keyword.


2.2.7. Comments

C-style comments do not nest. The directive:

	#pragma TenDRA nested comment analysis on
enables a check for the characters /* within C-style comments.


2.2.8. Identifier names

During lexical analysis, each character in the source file has an associated look-up value which is used to determine whether the character can be used in an identifier name, is a white space character etc. These values are stored in a simple look-up table. It is possible to set the look-up value using:

	#pragma TenDRA++ character character-literal as character-literal allow 
which sets the look-up for the first character to be the default look-up for the second character. The form:
	#pragma TenDRA++ character character-literal disallow 
sets the look-up of the character to be that of an invalid character. The forms:
	#pragma TenDRA++ character string-literal as character-literal allow 
	#pragma TenDRA++ character string-literal disallow 
can be used to modify the look-up values for the set of characters given by the string literal. For example:
	#pragma TenDRA character '$' as 'a' allow
	#pragma TenDRA character '\r' as ' ' allow
allows $ to be used in identifier names (like a) and carriage return to be a white space character. The former is a common dialect feature and can also be controlled by the directive:
	#pragma TenDRA dollar as ident allow

The maximum number of characters allowed in an identifier name can be set using the directives:

	#pragma TenDRA set name limit integer-literal
	#pragma TenDRA++ set name limit integer-literal warning 
This length is given by the name_limit implementation quantity mentioned above. Identifiers which exceed this length raise an error or a warning, but are not truncated.


2.2.9. Integer literals

The rules for finding the type of an integer literal can be described using directives of the form:

	#pragma TenDRA integer literal literal-spec
where:
	literal-spec :
		literal-base literal-suffixopt literal-type-list

	literal-base :
		octal
		decimal
		hexadecimal

	literal-suffix :
		unsigned
		long
		unsigned long
		long long
		unsigned long long

	literal-type-list :
		* literal-type-spec
		integer-literal literal-type-spec | literal-type-list
		? literal-type-spec | literal-type-list

	literal-type-spec :
		: type-id
		* allowopt : identifier
		* * allowopt :
Each directive gives a literal base and suffix, describing the form of an integer literal, and a list of possible types for literals of this form. This list gives a mapping from the value of the literal to the type to be used to represent the literal. There are three cases for the literal type; it may be a given integral type, it may be calculated using a given literal type token, or it may cause an error to be raised. There are also three cases for describing a literal range; it may be given by values less than or equal to a given integer literal, it may be given by values which are guaranteed to fit into a given integral type, or it may match any value. For example:
	#pragma token PROC ( VARIETY c ) VARIETY l_i # ~lit_int
	#pragma TenDRA integer literal decimal 32767 : int | ** : l_i
describes how to find the type of a decimal literal with no suffix. Values less than or equal to 32767 have type int; larger values have a target dependent type calculated using the token ~lit_int. Introducing a warning into the directive will cause a warning to be printed if the token is used to calculate the value.

Note that this scheme extends that implemented by the C producer, because of the need for more accurate information in the C++ producer. For example, the specification above does not fully express the ISO rule that the type of a decimal integer is the first of the types int, long and unsigned long which it fits into (it only expresses the first step). However with the C++ extensions it is possible to write:

	#pragma token PROC ( VARIETY c ) VARIETY l_i # ~lit_int
	#pragma TenDRA integer literal decimal ? : int | ? : long |\
	    ? : unsigned long | ** : l_i


2.2.10. Character literals and built-in types

By default, a simple character literal has type int in C and type char in C++. The type of such literals can be controlled using the directive:

	#pragma TenDRA++ set character literal : type-id 
The type of a wide character literal is given by the implementation defined type wchar_t. By default, the definition of this type is taken from the target machine's <stddef.h> C header (note that in ISO C++, wchar_t is actually a keyword, but its underlying representation must be the same as in C). This definition can be overridden in the producer by means of the directive:
	#pragma TenDRA set wchar_t : type-id
for an integral type type-id. Similarly, the definitions of the other implementation dependent integral types which arise naturally within the language - the type of the difference of two pointers, ptrdiff_t, and the type of the sizeof operator, size_t - given in the <stddef.h> header can be overridden using the directives:
	#pragma TenDRA set ptrdiff_t : type-id
	#pragma TenDRA set size_t : type-id
These directives are useful when targeting a specific machine on which the definitions of these types are known; while they may not affect the code generated they can cut down on spurious conversion warnings. Note that although these types are built into the producer they are not visible to the user unless an appropriate header is included (with the exception of the keyword wchar_t in ISO C++); however, directives of the form:
	#pragma TenDRA++ type identifier for type-name 
can be used to make these types visible. They are equivalent to a typedef declaration of identifier as the given built-in type, ptrdiff_t, size_t or wchar_t.
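
For example, on a target where these definitions are known, a compilation profile might contain (the identifier __size_t is just an illustrative name):

	#pragma TenDRA set size_t : unsigned long
	#pragma TenDRA set ptrdiff_t : long
	#pragma TenDRA++ type __size_t for size_t
The last directive makes the built-in size_t type visible under the name __size_t, as if by a typedef.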

Whether plain char is signed or unsigned is implementation dependent. By default the implementation is determined by the definition of the ~char token, however this can be overridden in the producer either by means of the portability table or by the directive:

	#pragma TenDRA character character-sign
where character-sign can be signed, unsigned or either (the default). Again this directive is useful primarily when targeting a specific machine on which the signedness of char is known.


2.2.11. String literals

By default, character string literals have type char [n] in C and older dialects of C++, but type const char [n] in ISO C++. Similarly wide string literals have type wchar_t [n] or const wchar_t [n]. Whether string literals are const or not can be controlled using the two directives:

	#pragma TenDRA++ set string literal : const 
	#pragma TenDRA++ set string literal : no const 
In the case where literals are const, the array-to-pointer conversion is allowed to cast away the const to allow for a degree of backwards compatibility. The status of this deprecated conversion can be controlled using the directive:
	#pragma TenDRA writeable string literal allow
(yes, I know that that should be writable). Note that this directive has a slightly different meaning in the C producer.

Adjacent string literal tokens of similar types (either both character string literals or both wide string literals) are concatenated at an early stage in the parser; however, it is unspecified what happens if a character string literal token is adjacent to a wide string literal token. By default this gives an error, but the directive:

	#pragma TenDRA unify incompatible string literal allow
can be used to enable the strings to be concatenated to give a wide string literal.

If a ' or " character does not have a matching closing quote on the same line then it is undefined whether an implementation should report an unterminated string or treat the quote as a single unknown character. By default, the C++ producer treats this as an unterminated string, but this behaviour can be controlled using the directive:

	#pragma TenDRA unmatched quote allow


2.2.12. Escape sequences

By default, if the character following the \ in an escape sequence is not one of those listed in the ISO C or C++ standards then an error is given. This behaviour, which is left unspecified by the standards, can be controlled by the directive:

	#pragma TenDRA unknown escape allow
The result is that the \ in unknown escape sequences is ignored, so that \z is interpreted as z, for example. Individual escape sequences can be enabled or disabled using the directives:
	#pragma TenDRA++ escape character-literal as character-literal allow 
	#pragma TenDRA++ escape character-literal disallow 
so that, for example:
	#pragma TenDRA++ escape 'e' as '\033' allow 
	#pragma TenDRA++ escape 'a' disallow 
sets \e to be the ASCII escape character and disables the alert character \a.

By default, if the value of a character, given for example by a \x escape sequence, does not fit into its type then an error is given. This implementation dependent behaviour can however be controlled by the directive:

	#pragma TenDRA character escape overflow allow
the value being converted to its type in the normal way.


2.2.13. Preprocessing directives

Non-standard preprocessing directives can be controlled using the directives:

	#pragma TenDRA directive ppdir allow
	#pragma TenDRA directive ppdir (ignore) allow
where ppdir can be assert, file, ident, import (C++ only), include_next (C++ only), unassert, warning (C++ only) or weak. The second form causes the directive to be processed but ignored (note that there is no (ignore) disallow form). The treatment of other unknown preprocessing directives can be controlled using:
	#pragma TenDRA unknown directive allow
Cases where the token following the # in a preprocessing directive is not an identifier can be controlled using:
	#pragma TenDRA no directive/nline after ident allow
When permitted, unknown preprocessing directives are ignored.

By default, unknown #pragma directives are ignored without comment, however this behaviour can be modified using the directive:

	#pragma TenDRA unknown pragma allow
Note that any unknown #pragma TenDRA directives always give an error.

Older preprocessors allowed text after #else and #endif directives. The following directive can be used to enable such behaviour:

	#pragma TenDRA text after directive allow
Such text after a directive is ignored.

Some older preprocessors have problems with white space in preprocessing directives - whether at the start of the line, before the initial #, or between the # and the directive identifier. Such white space can be detected using the directives:

	#pragma TenDRA indented # directive allow
	#pragma TenDRA indented directive after # allow
respectively.


2.2.14. Target dependent conditional inclusion

One of the effects of trying to compile code in a target independent manner is that it is not always possible to completely evaluate the condition in a #if directive. Thus the conditional inclusion needs to be preserved until the installer phase. This can only be done if the target dependent #if is more structured than is normally required for preprocessing directives. There are two cases; in the first, where the #if appears in a statement, it is treated as if it were an if statement with braces enclosing its branches; that is:

	#if cond
	    true_statements
	#else
	    false_statements
	#endif
maps to:
	if ( cond ) {
	    true_statements
	} else {
	    false_statements
	}
The second case, where the #if appears in a list of declarations, normally gives an error. This can however be overridden by the directive:
	#pragma TenDRA++ conditional declaration allow
which causes both branches of the #if to be analysed.
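
As a sketch, assuming COND is some target dependent condition which the producer cannot evaluate:

	#pragma TenDRA++ conditional declaration allow

	#if COND
	typedef long fast_int ;
	#else
	typedef int fast_int ;
	#endif
Both branches are analysed and the choice between them is deferred to the installer phase.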


2.2.15. File inclusion directives

There is a maximum depth of nested #include directives allowed by the C++ producer. This depth is given by the include_depth implementation quantity mentioned above. Its value is fairly small in order to detect recursive inclusions. The maximum depth can be set using:

	#pragma TenDRA includes depth integer-literal

A further check, for full pathnames in #include directives (which may not be portable), can be enabled using the directive:

	#pragma TenDRA++ complete file includes allow 


2.2.16. Macro definitions

By default, multiple consistent definitions of a macro are allowed. This behaviour can be controlled using the directive:

	#pragma TenDRA extra macro definition allow
The ISO C/C++ rules for determining whether two macro definitions are consistent are fairly restrictive. A more relaxed rule allowing for consistent renaming of macro parameters can be enabled using:
	#pragma TenDRA weak macro equality allow
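
For example, under the relaxed rule the following pair of definitions should be treated as consistent, since they differ only in the names of the macro parameters:

	#pragma TenDRA weak macro equality allow

	#define SQUARE( x )	( ( x ) * ( x ) )
	#define SQUARE( y )	( ( y ) * ( y ) )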

In the definition of macros with parameters, a # in the replacement list must be followed by a parameter name, indicating the stringising operation. This behaviour can be controlled by the directive:

	#pragma TenDRA no ident after # allow
which allows a # which is not followed by a parameter name to be treated as a normal preprocessing token.

In a list of macro arguments, the effect of a sequence of preprocessing tokens which otherwise resembles a preprocessing directive is undefined. The C++ producer treats such directives as normal sequences of preprocessing tokens, but can be made to report such behaviour using:

	#pragma TenDRA directive as macro argument allow


2.2.17. Empty source files

ISO C requires that a translation unit should contain at least one declaration. C++ and older dialects of C allow translation units which contain no declarations. This behaviour can be controlled using the directive:

	#pragma TenDRA no external declaration allow


2.2.18. The std namespace

Several classes declared in the std namespace arise naturally as part of the C++ language specification. These are as follows:

	std::type_info		// type of typeid construct
	std::bad_cast		// thrown by dynamic_cast construct
	std::bad_typeid		// thrown by typeid construct
	std::bad_alloc		// thrown by new construct
	std::bad_exception	// used in exception specifications
The definitions of these classes are found, when needed, by looking up the appropriate class name in the std namespace. Depending on the context, an error may be reported if the class is not found. It is possible to modify the namespace which is searched for these classes using the directive:
	#pragma TenDRA++ set std namespace : scope-name
where scope-name can be an identifier giving a namespace name or ::, indicating the global namespace.


2.2.19. Object linkage

If an object is declared with both external and internal linkage in the same translation unit then, by default, an error is given. This behaviour can be changed using the directive:

	#pragma TenDRA incompatible linkage allow
When incompatible linkages are allowed, whether the resultant identifier has external or internal linkage can be set using one of the directives:
	#pragma TenDRA linkage resolution : off
	#pragma TenDRA linkage resolution : (external) on
	#pragma TenDRA linkage resolution : (internal) on

It is possible to declare objects with external linkage in a block. C leaves it undefined whether declarations of the same object in different blocks, such as:

	void f ()
	{
	    extern int a ;
	    ....
	}

	void g ()
	{
	    extern double a ;
	    ....
	}
are checked for compatibility. However in C++ the one definition rule implies that such declarations are indeed checked for compatibility. The status of this check can be set using the directive:
	#pragma TenDRA unify external linkage on
Note that it is not possible in ISO C or C++ to declare objects or functions with internal linkage in a block. While static object definitions in a block have a specific meaning, there is no real reason why static functions should not be declared in a block. This behaviour can be enabled using the directive:
	#pragma TenDRA block function static allow
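
A minimal sketch of what this directive permits:

	#pragma TenDRA block function static allow

	void f ()
	{
	    static int helper () ;	// function with internal linkage declared in a block
	}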

Inline functions have external linkage by default in ISO C++, but internal linkage in older dialects. The default linkage can be set using the directive:

	#pragma TenDRA++ inline linkage linkage-spec 
where linkage-spec can be external or internal. Similarly const objects have internal linkage by default in C++, but external linkage in C. The default linkage can be set using the directive:
	#pragma TenDRA++ const linkage linkage-spec 

Older dialects of C treated all identifiers with external linkage as if they had been declared volatile (i.e. by being conservative in optimising such values). This behaviour can be enabled using the directive:

	#pragma TenDRA external volatile_t

It is possible to set the default language linkage using the directive:

	#pragma TenDRA++ external linkage string-literal 
This is equivalent to enclosing the rest of the current checking scope in:
	extern string-literal {
	    ....
	}
It is unspecified what happens if such a directive is used within an explicit linkage specification and does not nest correctly. This directive is particularly useful when used in a named environment associated with an include directory. For example, it can be used to express the fact that all the objects declared in headers included from that directory have C linkage.

A change in ISO C++ relative to older dialects is that the language linkage of a function now forms part of the function type. For example:

	extern "C" int f ( int ) ;
	int ( *pf ) ( int ) = f ;		// error
The directive:
	#pragma TenDRA++ external function linkage on 
can be used to control whether function types with differing language linkages, but which are otherwise compatible, are considered compatible or not.


2.2.20. Static identifiers

By default, objects and functions with internal linkage are mapped to tags without external names in the output TDF capsule. Thus such names are not available to the installer and it needs to make up internal names to represent such objects in its output. This is not desirable in such operations as profiling, where a meaningful internal name is needed to make sense of the output. The directive:

	#pragma TenDRA preserve identifier-list
can be used to preserve the names of the given list of identifiers with internal linkage. This is done using the static_name_def TDF construct. The form:
	#pragma TenDRA preserve *
will preserve the names of all identifiers with internal linkage in this way.
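
A small illustration (the identifier name is arbitrary, and it is assumed here that the directive follows the definition it refers to):

	static int call_count = 0 ;	// internal linkage

	#pragma TenDRA preserve call_count
The name call_count is then retained in the output capsule using the static_name_def construct.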


2.2.21. Empty declarations

ISO C++ requires every declaration or member declaration to introduce one or more names into the program. The directive:

	#pragma TenDRA unknown struct/union allow
can be used to relax one particular instance of this rule, by allowing anonymous class definitions (recall that anonymous unions are objects, not types, in C++ and so are not covered by this rule). The C++ grammar also allows a solitary semicolon as a declaration or member declaration; however such a declaration does not introduce a name and so contravenes the rule above. The rule can be relaxed in this case using the directive:
	#pragma TenDRA extra ; allow
Note that the C++ grammar explicitly allows for an extra semicolon following an inline member function definition, but that semicolons following other function definitions are actually empty declarations of the form above. A solitary semicolon in a statement is interpreted as an empty expression statement rather than an empty declaration statement.
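
For example, with the directive enabled the stray semicolons below are accepted:

	#pragma TenDRA extra ; allow

	struct A {
	    int f () { return 0 ; } ;	// allowed by the C++ grammar anyway
	} ;
	int g () { return 1 ; } ;	// empty declaration: needs the directive
	;				// so does a solitary ';' at namespace scope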


2.2.22. Implicit int

The C "implicit int" rule, whereby a type of int is inferred in a list of type or declaration specifiers which does not contain a type name, has been removed in ISO C++, although it was supported in older dialects of C++. This check is controlled by the directive:

	#pragma TenDRA++ implicit int type allow 
Partial relaxations of this rule are allowed. The directive:
	#pragma TenDRA++ implicit int type for const/volatile allow 
will allow for implicit int when the list of type specifiers contains a cv-qualifier. Similarly the directive:
	#pragma TenDRA implicit int type for function return allow
will allow for implicit int in the return type of a function definition (this excludes constructors, destructors and conversion functions, where special rules apply). A function definition is the only kind of declaration in ISO C where a declaration specifier is not required. Older dialects of C allowed declaration specifiers to be omitted in other cases. Support for this behaviour can be enabled using:
	#pragma TenDRA implicit int type for external declaration allow
The four cases can be demonstrated in the following example:
	extern a ;		// implicit int
	const b = 1 ;		// implicit const int

	f ()			// implicit function return
	{
	    return 2 ;
	}

	c = 3 ;			// error: not allowed in C++


2.2.23. Extended integral types

The long long integral types are not part of ISO C or C++ by default, however support for them can be enabled using the directive:

	#pragma TenDRA longlong type allow
This support includes allowing long long in type specifiers and allowing LL and ll as integer literal suffixes.
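
A minimal example of what the extension permits:

	#pragma TenDRA longlong type allow

	long long big = 1LL << 40 ;
	unsigned long long mask = ~0LL ;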

There is a further directive given by the two cases:

	#pragma TenDRA set longlong type : long long
	#pragma TenDRA set longlong type : long
which can be used to control the implementation of the long long types. Either they can be mapped to the default representation, which is guaranteed to contain at least 64 bits, or they can be mapped to the corresponding long types.

Because these long long types are not an intrinsic part of C++ the implementation does not integrate them into the language as fully as is possible. This is to prevent the presence or otherwise of long long types affecting the semantics of code which does not use them. For example, it would be possible to extend the rules for the types of integer literals, integer promotion types and arithmetic types to say that if the given value does not fit into the standard integral types then the extended types are tried. This has not been done, although these rules could be implemented by changing the definitions of the standard tokens used to determine these types. By default, only the rules for arithmetic types involving a long long operand and for LL integer literals mention long long types.


2.2.24. Bitfield types

The C++ rules on bitfield types differ slightly from the C rules. Firstly any integral or enumeration type is allowed in a bitfield, and secondly the bitfield width may exceed the underlying type size (the extra bits being treated as padding). These properties can be controlled using the directives:

	#pragma TenDRA extra bitfield int type allow
	#pragma TenDRA bitfield overflow allow
respectively.
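
The two properties can be illustrated as follows (the field widths are only examples):

	enum colour { red, green, blue } ;

	struct pixel {
	    colour c : 2 ;		// enumeration type used in a bitfield
	    unsigned int pad : 40 ;	// width may exceed the underlying type size
	} ;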


2.2.25. Elaborated type specifiers

In elaborated type specifiers, the class key (class, struct, union or enum) should agree with any previous declaration of the type (except that class and struct are interchangeable). This requirement can be relaxed using the directive:

	#pragma TenDRA ignore struct/union/enum tag on

In ISO C and C++ it is not possible to give a forward declaration of an enumeration type. This constraint can be relaxed using the directive:

	#pragma TenDRA forward enum declaration allow
Until the end of its definition, an enumeration type is treated as an incomplete type (as with class types). In enumeration definitions, and a couple of other contexts where comma-separated lists are required, the directive:
	#pragma TenDRA extra , allow
can be used to allow a trailing comma at the end of the list.
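
For example, with both relaxations enabled:

	#pragma TenDRA forward enum declaration allow
	#pragma TenDRA extra , allow

	enum colour ;				// forward declaration: incomplete type
	enum colour { red, green, blue, } ;	// trailing comma in the enumerator list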

The directive:

	#pragma TenDRA complete struct/union analysis on
can be used to enable a check that every class or union has been completed within each translation unit in which it is declared.


2.2.26. Implicit function declarations

C, but not C++, allows calls to undeclared functions, the function being declared implicitly. It is possible to enable support for implicit function declarations using the directive:

	#pragma TenDRA implicit function declaration on
Such implicitly declared functions have C linkage and type int ( ... ).


2.2.27. Weak function prototypes

The C producer supports a concept, weak prototypes, whereby type checking can be applied to the arguments of a non-prototype function. This checking can be enabled using the directive:

	#pragma TenDRA weak prototype analysis on
The concept of weak prototypes is not applicable to C++, where all functions are prototyped. The C++ producer does allow the syntax for explicit weak prototype declarations, but treats them as if they were normal prototypes. These declarations are denoted by means of a keyword, WEAK say, introduced by the directive:
	#pragma TenDRA keyword identifier for weak
preceding the ( of the function declarator. The directives:
	#pragma TenDRA prototype allow
	#pragma TenDRA prototype (weak) allow
which can be used in the C producer to warn of prototype or weak prototype declarations, are similarly ignored by the C++ producer.

The C producer also allows the directives:

	#pragma TenDRA argument type-id as type-id
	#pragma TenDRA argument type-id as ...
	#pragma TenDRA extra ... allow
	#pragma TenDRA incompatible promoted function argument allow
which control the compatibility of function types. These directives are ignored by the C++ producer (some of them would make sense in the context of C++ but would over-complicate function overloading).


2.2.28. printf and scanf argument checking

The C producer includes a number of checks that the arguments in a call to a function in the printf or scanf families match the given format string. The check is implemented by using the directives:

	#pragma TenDRA type identifier for ... printf
	#pragma TenDRA type identifier for ... scanf
to introduce a type representing a printf or scanf format string. For most purposes this type is treated as const char *, but when it appears in a function declaration it alerts the producer that any extra arguments passed to that function should match the format string passed as the corresponding argument. The TenDRA API headers conditionally declare printf, scanf and similar functions in something like the form:
	#ifdef __NO_PRINTF_CHECKS
	typedef const char *__printf_string ;
	#else
	#pragma TenDRA type __printf_string for ... printf
	#endif

	int printf ( __printf_string, ... ) ;
	int fprintf ( FILE *, __printf_string, ... ) ;
	int sprintf ( char *, __printf_string, ... ) ;
These declarations can be skipped, effectively disabling this check, by defining the __NO_PRINTF_CHECKS macro.

Warning: these printf and scanf format string checks have not yet been implemented in the C++ producer due to the presence of an alternative, type checked, I/O package, namely <iostream>. The format string types are simply treated as const char *.


2.2.29. Type declarations

C does not allow multiple definitions of a typedef name, whereas C++ allows multiple consistent definitions. This behaviour can be controlled using the directive:

	#pragma TenDRA extra type definition allow


2.2.30. Type compatibility

The directive:

	#pragma TenDRA incompatible type qualifier allow
allows objects to be redeclared with different cv-qualifiers (normally such redeclarations would be incompatible). The composite type is qualified using the join of the cv-qualifiers in the various redeclarations.
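
A small sketch of the effect:

	#pragma TenDRA incompatible type qualifier allow

	extern int limit ;
	extern const int limit ;	// composite type is const int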

The directive:

	#pragma TenDRA compatible type : type-id == type-id : allow

asserts that the given two types are compatible. Currently the only implemented version is char * == void * which enables char * to be used as a generic pointer as it was in older dialects of C.


2.2.31. Incomplete types

Some dialects of C allow incomplete arrays as member types. These are generally used as a place-holder at the end of a structure to allow for the allocation of an arbitrarily sized array. Support for this feature can be enabled using the directive:

	#pragma TenDRA incomplete type as object type allow
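
A typical use, sketched below, is an arbitrarily sized buffer at the end of a structure:

	#pragma TenDRA incomplete type as object type allow

	struct message {
	    int length ;
	    char body [] ;		// incomplete array as a place-holder member
	} ;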


2.2.32. Type conversions

There are a number of directives which allow various classes of type conversion to be checked. The directives:

	#pragma TenDRA conversion analysis (int-int explicit) on
	#pragma TenDRA conversion analysis (int-int implicit) on
will check for unsafe explicit or implicit conversions between arithmetic types. Similarly conversions between pointers and arithmetic types can be checked using:
	#pragma TenDRA conversion analysis (int-pointer explicit) on
	#pragma TenDRA conversion analysis (int-pointer implicit) on
or equivalently:
	#pragma TenDRA conversion analysis (pointer-int explicit) on
	#pragma TenDRA conversion analysis (pointer-int implicit) on
Conversions between pointer types can be checked using:
	#pragma TenDRA conversion analysis (pointer-pointer explicit) on
	#pragma TenDRA conversion analysis (pointer-pointer implicit) on

There are some further variants which can be used to enable useful sets of conversion checks. For example:

	#pragma TenDRA conversion analysis (int-int) on
enables both implicit and explicit arithmetic conversion checks. The directives:
	#pragma TenDRA conversion analysis (int-pointer) on
	#pragma TenDRA conversion analysis (pointer-int) on
	#pragma TenDRA conversion analysis (pointer-pointer) on
are equivalent to their corresponding explicit forms (because the implicit forms are illegal by default). The directive:
	#pragma TenDRA conversion analysis on
is equivalent to the four directives just given. It enables checks on implicit and explicit arithmetic conversions, explicit arithmetic to pointer conversions and explicit pointer conversions.
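
For example, with the implicit arithmetic conversion check enabled:

	#pragma TenDRA conversion analysis (int-int implicit) on

	void f ( long l )
	{
	    int i = l ;			// unsafe implicit conversion: reported
	    int j = ( int ) l ;		// explicit conversion: not covered by this check
	}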

The default settings for these checks are determined by the implicit and explicit conversions allowed in C++. Note that there are differences between the conversions allowed in C and C++. For example, an arithmetic type can be converted implicitly to an enumeration type in C, but not in C++. The directive:

	#pragma TenDRA conversion analysis (int-enum implicit) on 
can be used to control the status of this conversion. The level of severity for an error message arising from such a conversion is the maximum of the severity set by this directive and that set by the int-int implicit directive above.

The implicit pointer conversions described above do not include conversions to and from the generic pointer void *, which have their own controlling directives. A pointer of type void * can be converted implicitly to another pointer type in C but not in C++; this is controlled by the directive:

	#pragma TenDRA++ conversion analysis (void*-pointer implicit) on 
The reverse conversion, from a pointer type to void * is allowed in both C and C++, and has a controlling directive:
	#pragma TenDRA++ conversion analysis (pointer-void* implicit) on 

In ISO C and C++, a function pointer can only be cast to other function pointers, not to object pointers or void *. Many dialects however allow function pointers to be cast to and from other pointers. This behaviour can be controlled using the directive:

	#pragma TenDRA function pointer as pointer allow
which causes function pointers to be treated in the same way as all other pointers.

The integer conversion checks described above only apply to unsafe conversions. A simple-minded check for shortening conversions is not adequate, as is shown by the following example:

	char a = 1, b = 2 ;
	char c = a + b ;
the sum a + b is evaluated as an int which is then shortened to a char. Any check which does not distinguish this sort of "safe" shortening conversion from unsafe shortening conversions such as:
	int a = 1, b = 2 ;
	char c = a + b ;
is not likely to be very useful. The producer therefore associates two types with each integral expression; the first is the normal, representation type and the second is the underlying, semantic type. Thus in the first example, the representation type of a + b is int, but semantically it is still a char. The conversion analysis is based on the semantic types.

Warning: the C producer supports a directive:

	#pragma TenDRA keyword identifier for type representation
whereby a keyword can be introduced which can be used to explicitly declare a type with given representation and semantic components. Unfortunately this makes the C++ grammar ambiguous, so it has not yet been implemented in the C++ producer.

It is possible to allow individual conversions by means of conversion tokens. A procedure token which takes one rvalue expression program parameter and returns an rvalue expression, such as:

	#pragma token PROC ( EXP : t : ) EXP : s : conv #
can be regarded as mapping expressions of type t to expressions of type s. The directive:
	#pragma TenDRA conversion identifier-list allow
can be used to nominate such a token as a conversion token. That is to say, if the conversion, whether explicit or implicit, from t to s cannot be done by other means, it is done by applying the token conv, so:
	t a ;
	s b = a ;		// maps to conv ( a )
Note that, unlike conversion functions, conversion tokens can be applied to any types.


2.2.33. Cast expressions

ISO C++ introduces the constructs static_cast, const_cast and reinterpret_cast, which can be used in various contexts where an old style explicit cast would previously have been used. By default, an explicit cast can perform any combination of the conversions performed by these three constructs. To aid migration to the new style casts the directives:

	#pragma TenDRA++ explicit cast as cast-state allow 
	#pragma TenDRA++ explicit cast allow 
where cast-state is defined as follows:
	cast-state :
		static_cast
		const_cast
		reinterpret_cast
		static_cast | cast-state
		const_cast | cast-state
		reinterpret_cast | cast-state
can be used to restrict the conversions which can be performed using explicit casts. The first form sets the interpretation of explicit cast to be combinations of the given constructs; the second resets the interpretation to the default. For example:
	#pragma TenDRA++ explicit cast as static_cast | const_cast allow
means that conversions requiring reinterpret_cast (the most unportable conversions) will not be allowed to be performed using explicit casts, but will have to be given as a reinterpret_cast construct. Changing allow to warning will also cause a warning to be issued for every explicit cast expression.


2.2.34. Ellipsis functions

The directive:

	#pragma TenDRA ident ... allow
may be used to enable or disable the use of ... as a primary expression in a function defined with ellipsis. The type of such an expression is implementation defined. This expression is used in the definition of the va_start macro in the <stdarg.h> header. This header automatically enables this switch.


2.2.35. Overloaded functions

Older dialects of C++ did not report ambiguous overloaded function resolutions, but instead resolved the call to the first of the most viable candidates to be declared. This behaviour can be controlled using the directive:

	#pragma TenDRA++ ambiguous overload resolution allow 
There are occasions when the resolution of an overloaded function call is not clear. The directive:
	#pragma TenDRA++ overload resolution allow 
can be used to report the resolution of any such call (whether explicit or implicit) where there is more than one viable candidate.

An interesting consequence of compiling C++ in a target independent manner is that certain overload resolutions can only be determined at install-time. For example, in:

	int f ( int ) ;
	int f ( unsigned int ) ;
	int f ( long ) ;
	int f ( unsigned long ) ;

	int a = f ( sizeof ( int ) ) ;	// which f?
the type of the sizeof operator, size_t, is target dependent, but its promotion must be one of the types int, unsigned int, long or unsigned long. Thus the call to f always has a unique resolution, but what it is is target dependent. The equivalent directives:
	#pragma TenDRA++ conditional overload resolution allow 
	#pragma TenDRA++ conditional overload resolution (complete) allow 
can be used to warn about such target dependent overload resolutions. By default, such resolutions are only allowed if there is a unique resolution for each possible implementation of the argument types (note that, for simplicity, the possibility of long long implementation types is ignored). The directive:
	#pragma TenDRA++ conditional overload resolution (incomplete) allow 
can be used to allow target dependent overload resolutions which only have resolutions for some of the possible implementation types (if one of the f declarations above was removed, for example). If the implementation does not match one of these types then an install-time error is given.

There are restrictions on the set of candidate functions involved in a target dependent overload resolution. Most importantly, it should be possible to bring their return types to a common type, as if by a series of ?: operations. This common type is the type of the target dependent call. By this means, target dependent types are prevented from propagating further out into the program. Note that since sets of overloaded functions usually have the same semantics, this does not usually present a problem.


2.2.36. Expressions

The directive:

	#pragma TenDRA operator precedence analysis on 
can be used to enable a check for expressions where the operator precedence is not necessarily what might be expected. The intended precedence can be clarified by means of explicit parentheses. The precedence levels checked are as follows:
  1. && versus ||.
  2. << and >> versus binary + and -.
  3. Binary & versus binary +, -, ==, !=, >, >=, < and <=.
  4. ^ versus binary &, +, -, ==, !=, >, >=, < and <=.
  5. | versus binary ^, &, +, -, ==, !=, >, >=, < and <= .
Also checked are expressions such as a < b < c which do not have their normal mathematical meaning. For example, in:
	d = a << b + c ;	// precedence is a << ( b + c )
the precedence is counter-intuitive, although strangely enough, it isn't in:
	cout << b + c ;		// precedence is cout << ( b + c )

Other dubious arithmetic operations can be checked for using the directive:

	#pragma TenDRA integer operator analysis on
This includes checks for operations, such as division by a negative value, which are implementation dependent, and those such as testing whether an unsigned value is less than zero, which serve no purpose. Similarly the directive:
	#pragma TenDRA++ pointer operator analysis on 
checks for dubious pointer operations. This includes very simple bounds checking for arrays and checking that only the simple literal 0 is used in null pointer constants:
	char *p = 1 - 1 ;	// valid, but weird

The directive:

	#pragma TenDRA integer overflow analysis on
is used to control the treatment of overflows in the evaluation of integer constant expressions. This includes the detection of division by zero.
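
For example (a minimal sketch), both of the following constant initialisers would be reported when the analysis is enabled:
	#include <limits.h>

	int a = INT_MAX + 1 ;	// overflow in a constant expression
	int b = 1 / 0 ;		// division by zero in a constant expression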


2.2.37. Initialiser expressions

C, but not C++, only allows constant expressions in static initialisers. The directive:

	#pragma TenDRA variable initialization allow
can be used to enable support for C++-style dynamic initialisers in C. Conversely, in C++ it can be used to detect such dynamic initialisers.

In older dialects of C it was not possible to initialise an automatic variable of structure or union type. This can be checked for using the directive:

	#pragma TenDRA initialization of struct/union (auto) allow

The directive:

	#pragma TenDRA++ complete initialization analysis on 
can be used to check aggregate initialisers. The initialiser should be fully bracketed (i.e. with no elision of braces), and should have an entry for each member of the structure or array.
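
For example (a sketch with illustrative types):
	struct point { int x, y ; } ;
	struct line { struct point from, to ; } ;

	struct line l1 = { { 0, 0 }, { 1, 1 } } ;	// fully bracketed: accepted
	struct line l2 = { 0, 0, 1, 1 } ;		// brace elision: reported
	struct point p = { 1 } ;			// no entry for member y: reported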


2.2.38. Lvalue expressions

C++ defines the results of several operations to be lvalues, whereas they are rvalues in C. The directive:

	#pragma TenDRA conditional lvalue allow
is used to apply the C++ rules for lvalues in conditional (?:) expressions.
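
For example (a sketch):
	void f ( int c )
	{
	    int x = 0, y = 0 ;
	    ( c ? x : y ) = 1 ;		// an lvalue in C++; an rvalue (and so an error) in C
	}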

Older dialects of C++ allowed this to be treated as an lvalue. It is possible to enable support for this dialect feature using the directive:

	#pragma TenDRA++ this lvalue allow 
however it is recommended that programs using this feature should be modified.


2.2.39. Discarded expressions

The directive:

	#pragma TenDRA discard analysis on
can be used to enable a check for values which are calculated but not used. There are three checks controlled by this directive, each of which can be controlled independently. The directive:
	#pragma TenDRA discard analysis (function return) on
checks for functions which return a value which is not used. The check needs to be enabled for both the declaration and the call of the function in order for a discarded function return to be reported. Discarded returns for overloaded operator functions are never reported. The directive:
	#pragma TenDRA discard analysis (value) on
checks for other expressions which are not used. Finally, the directive:
	#pragma TenDRA discard analysis (static) on
checks for variables with internal linkage which are defined but not used.

An unused function return or other expression can be asserted to be deliberately discarded by explicitly casting it to void or, equivalently, preceding it by a keyword introduced using the directive:

	#pragma TenDRA keyword identifier for discard value
A static variable can be asserted to be deliberately unused by including it in a list of identifiers in a directive of the form:
	#pragma TenDRA suspend static identifier-list
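
For example, using IGNORE as the introduced keyword (an illustrative name):
	#pragma TenDRA keyword IGNORE for discard value

	extern int g ( void ) ;
	static int count ;
	#pragma TenDRA suspend static count

	void f ( void )
	{
	    ( void ) g () ;		// return value deliberately discarded
	    IGNORE g () ;		// equivalent, using the introduced keyword
	}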


2.2.40. Conditional and iteration statements

The directive:

	#pragma TenDRA const conditional allow 
can be used to enable a check for constant expressions used in conditional contexts. A literal constant is allowed in the condition of a while , for or do statement to allow for such common constructs as:
	while ( true ) {
	    // while statement body
	}
and target dependent constant expressions are allowed in the condition of an if statement, but otherwise constant conditions are reported according to the status of this check.

The common error of writing = rather than == in conditions can be detected using the directive:

	#pragma TenDRA assignment as bool allow
which can be used to disallow such assignment expressions in contexts where a boolean is expected. The error message can be suppressed by enclosing the assignment within parentheses.
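
For example (a sketch):
	void f ( int x, int y )
	{
	    if ( x = y ) ;		// reported: '=' used where '==' may have been intended
	    if ( ( x = y ) ) ;		// accepted: the extra parentheses show the assignment is intentional
	}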

Another common error associated with iteration statements, particularly with certain heretical brace styles, is the accidental insertion of an extra semicolon as in:

	for ( init ; cond ; step ) ;
	{
	    // for statement body
	}
The directive:
	#pragma TenDRA extra ; after conditional allow
can be used to enable a check for such suspicious empty iteration statement bodies (it actually checks for ;{).


2.2.41. Switch statements

A switch statement is said to be exhaustive if its control statement is guaranteed to take one of the values of its case labels, or if it has a default label. The TenDRA C and C++ producers allow a switch statement to be asserted to be exhaustive using the syntax:

	switch ( cond ) EXHAUSTIVE {
	    // switch statement body
	}
where EXHAUSTIVE is either the directive:
	#pragma TenDRA exhaustive
or a keyword introduced using:
	#pragma TenDRA keyword identifier for exhaustive
Knowing whether a switch statement is exhaustive or not means that checks relying on flow analysis (including variable usage checks) can be applied more precisely.

In certain circumstances it is possible to deduce whether a switch statement is exhaustive or not. For example, the directive:

	#pragma TenDRA enum switch analysis on 
enables a check on switch statements on values of enumeration type. Such statements should be exhaustive, either explicitly by using the EXHAUSTIVE keyword or declaring a default label, or implicitly by having a case label for each enumerator. Conversely, the value of each case label should equal the value of an enumerator. For the purposes of this check, boolean values are treated as if they were declared using an enumeration type of the form:
	enum bool { false = 0, true = 1 } ;

A common source of errors in switch statements is the fall-through from one case or default statement to the next. A check for this can be enabled using:

	#pragma TenDRA fall into case allow
case or default labels where fall-through from the previous statement is intentional can be marked by preceding them by a keyword, FALL_THRU say, introduced using the directive:
	#pragma TenDRA keyword identifier for fall into case
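
For example, using FALL_THRU as the introduced keyword (the name suggested above):
	#pragma TenDRA keyword FALL_THRU for fall into case

	void f ( int n )
	{
	    switch ( n ) {
		case 0 :
		    n++ ;
		FALL_THRU case 1 :	// intentional fall-through: not reported
		    n-- ;
		    break ;
		case 2 :
		    n++ ;
		case 3 :		// fall-through from case 2: reported
		    break ;
	    }
	}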


2.2.42. For statements

In ISO C++ the scope of a variable declared in a for-init-statement is the body of the for statement; in older dialects it extended to the end of the enclosing block. So:

	for ( int i = 0 ; i < 10 ; i++ ) {
	    // for statement body
	}
	return i ;	// OK in older dialects, error in ISO C++
This behaviour is controlled by the directive:
	#pragma TenDRA++ for initialization block on 
a state of on corresponding to the ISO rules and off to the older rules. Perhaps most useful is the warning state which implements the old rules but gives a warning if a variable declared in a for-init-statement is used outside the corresponding for statement body. A program which does not give such warnings should compile correctly under either set of rules.


2.2.43. Return statements

In C, but not in C++, it is possible to have a return statement without an expression in a function which does not return void. It is possible to enable this behaviour using the directive:

	#pragma TenDRA incompatible void return allow
Note that this check includes the implicit return caused by falling off the end of a function. The effect of such a return statement is undefined. The C++ rule that falling off the end of main is equivalent to returning a value of 0 overrides this check.
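
For example (a sketch), with the directive enabled the following compiles, although the value of f ( 0 ) is undefined:
	#pragma TenDRA incompatible void return allow

	int f ( int n )
	{
	    if ( n > 0 ) return n ;
	    return ;			// no value: accepted under the directive, though its effect is undefined
	}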


2.2.44. Unreached code analysis

The directive:

	#pragma TenDRA unreachable code allow
enables a flow analysis check to detect unreachable code. It is possible to assert that a statement is reached or not reached by preceding it by a keyword introduced by one of the directives:
	#pragma TenDRA keyword identifier for set reachable
	#pragma TenDRA keyword identifier for set unreachable

The fact that certain functions, such as exit, do not return a value can be exploited in the flow analysis routines. The equivalent directives:

	#pragma TenDRA bottom identifier
	#pragma TenDRA++ type identifier for bottom
can be used to introduce a typedef declaration for the type, bottom, returned by such functions. The TenDRA API headers declare exit and similar functions in this way, for example:
	#pragma TenDRA bottom __bottom
	__bottom exit ( int ) ;
	__bottom abort ( void ) ;
The bottom type is compatible with void in function declarations to allow such functions to be redeclared in their conventional form.


2.2.45. Variable flow analysis

The directive:

	#pragma TenDRA variable analysis on
enables checks on the uses of automatic variables and function parameters. These checks detect:
  1. If a variable is not used in its scope.
  2. If the value of a variable is used before it has been assigned to.
  3. If a variable is assigned to twice without an intervening use.
  4. If a variable is assigned to twice without an intervening sequence point.
as illustrated by the variables a, b, c and d respectively in:
	void f ()
	{
	    int a ;			// a never used
	    int b ;
	    int c = b ;			// b not initialised
	    c = 0 ;			// c assigned to twice
	    int d = 0 ;
	    d = ++d ;			// d assigned to twice
	}
The second, and more particularly the third, of these checks requires some fairly sophisticated flow analysis, so any hints which can be picked up from exhaustive switch statements etc. are likely to increase the accuracy of the errors detected.

In a non-static member function the various non-static data members are analysed as if they were automatic variables. It is checked that each member is initialised in a constructor. A common source of initialisation problems in a constructor is that the base classes and members are initialised in the canonical order of virtual bases, non-virtual direct bases and members in the order of their declaration, rather than in the order in which their initialisers appear in the constructor definition. Therefore a check that the initialisers appear in the canonical order is also applied.

It is possible to change the state of a variable during the variable analysis using the directives:

	#pragma TenDRA set expression
	#pragma TenDRA discard expression
The first asserts that the variable given by the expression has been assigned to; the second asserts that the variable is not used. An alternative way of expressing this is by means of keywords:
	SET ( expression )
	DISCARD ( expression )
introduced using the directives:
	#pragma TenDRA keyword identifier for set
	#pragma TenDRA keyword identifier for discard variable
respectively. These expressions can appear in expression statements and as the first argument of a comma expression.
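
For example, using SET and UNUSED as the introduced keywords (illustrative names):
	#pragma TenDRA keyword SET for set
	#pragma TenDRA keyword UNUSED for discard variable

	extern void init ( int * ) ;

	void f ( void )
	{
	    int a ;
	    init ( &a ) ;		// a is assigned to through the pointer
	    SET ( a ) ;			// assert that a has now been assigned to
	    int b = a ;
	    UNUSED ( b ) ;		// assert that b is deliberately unused
	}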

Warning: The variable flow analysis checks have not yet been completely implemented. They may not detect errors in certain circumstances, and for extremely convoluted code may occasionally give incorrect errors.


2.2.46. Variable hiding

The directive:

	#pragma TenDRA variable hiding analysis on
can be used to enable a check for hiding of other variables and, in member functions, data members, by local variable declarations.
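
For example (a sketch):
	void f ( int n )
	{
	    int i = 0 ;
	    {
		int i = n ;		// hides the outer i: reported
		int n = i ;		// hides the parameter n: reported
	    }
	}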


2.2.47. Exception analysis

The ISO C++ rules do not require exception specifications to be checked statically. This is to facilitate the integration of large systems where a single change in an exception specification could have ramifications throughout the system. However it is often useful to apply such checks, which can be enabled using the directive:

	#pragma TenDRA++ throw analysis on
This detects any potentially uncaught exceptions and other exception problems. In the error messages arising from this check, an uncaught exception of type ... means that an uncaught exception of an unknown type (arising, for example, from a function without an exception specification) may be thrown. For example:
	void f ( int ) throw ( int ) ;
	void g ( int ) throw ( long ) ;
	void h ( int ) ;

	void e () throw ( int )
	{
	    f ( 1 ) ;			// OK
	    g ( 2 ) ;			// uncaught 'long' exception
	    h ( 3 ) ;			// uncaught '...' exception
	}


2.2.48. Template compilation

The C++ producer makes the distinction between exported templates, which may be used in one module and defined in another, and non-exported templates, which must be defined in every module in which they are used. As in the ISO C++ standard, the export keyword is used to distinguish between the two cases. In the past, different compilers have had different template compilation models; either all templates were exported or no templates were exported. The latter is easily emulated - if the export keyword is not used then no templates will be exported. To emulate the former behaviour the directive:

	#pragma TenDRA++ implicit export template on
can be used to treat all templates as if they had been declared using the export keyword.
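
For example, under this directive a template declared in one module and defined in another behaves as if the export keyword had been written (a sketch; the function name is illustrative):
	#pragma TenDRA++ implicit export template on

	// module A: uses the template
	template < class T > T min2 ( T, T ) ;

	// module B: provides the definition, as if 'export' had been given
	template < class T > T min2 ( T a, T b )
	{
	    return ( a < b ) ? a : b ;
	}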

Warning: The automatic instantiation of exported templates has not yet been implemented correctly. It is intended that such instantiations will be generated during intermodule analysis (where they conceptually belong). At present it is necessary to work round this using explicit instantiations.


2.2.49. Other checks

Several checks of varying utility have been implemented in the C++ producer but do not as yet have individual directives controlling their use. These can be enabled en masse using the directive:

	#pragma TenDRA++ catch all allow 
It is intended that this directive will be phased out as these checks are assigned controlling directives. It is possible to achieve finer control over these checks by enabling their individual error messages as described above.



C++ Producer Guide: #pragma directive syntax

C++ Producer Guide

March 1998



Annex A. #pragma directive syntax

The following gives a summary of the syntax for the #pragma directives used for compiler configuration and token specification:


pragma-directive :
	# pragma TenDRA ++opt tendra-directive
	# pragma token-directive

tendra-directive :
	scope-directive
	low-level-directive
	analysis-directive on
	check-directive allow
	keyword-directive
	type-directive
	linkage-directive
	misc-directive
	tendra-token-directive

on :
	on
	warning
	off

allow :
	allow
	warning
	disallow
scope-directive : begin begin name environment identifier end directory identifier use environment identifier use environment identifier use environment identifier reset allow
low-level-directive : error string-literal allow error string-literal on error string-literal as option string-literal option string-literal allow option string-literal on option value string-literal integer-literal use error string-literal
analysis-directive : complete initialization analysis complete struct / union analysis conversion analysis conversion-specopt discard analysis discard-specopt enum switch analysis external function linkage for initialization block ignore struct / union / enum tag implicit export template implicit function declaration integer operator analysis integer overflow analysis nested comment analysis operator precedence analysis pointer operator analysis throw analysis unify external linkage variable analysis variable hiding analysis weak prototype analysis conversion-spec : ( int - int implicit-specopt ) ( int - pointer implicit-specopt ) ( pointer - int implicit-specopt ) ( pointer - pointer implicit-specopt ) ( int - enum implicit ) ( pointer - void * implicit ) ( void * - pointer implicit ) implicit-spec : implicit explicit discard-spec : ( function return ) ( static ) ( value )
check-directive : ambiguous overload resolution assignment as bool bitfield overflow block function static catch all character escape overflow compatible token complete file includes conditional declaration conditional lvalue conditional overload resolution overload-specopt const conditional directive as macro argument dollar as ident extra , extra ; extra ; after conditional extra ... extra bitfield int type extra macro definition extra type definition fall into case forward enum declaration function pointer as pointer ident ... implicit int type inttype-specopt implicit token definition incompatible interface declaration incompatible member declaration incompatible linkage incompatible promoted function argument incompatible type qualifier incompatible void return incomplete type as object type indented # directive indented directive after # initialization of struct / union ( auto ) longlong type no directive / nline after ident no external declaration no ident after # no nline after file end no token definition overload resolution prototype prototype ( weak ) rvalue token as const text after directive this lvalue unify incompatible string literal unknown directive unknown escape unknown pragma unknown struct / union unmatched quote unreachable code variable initialization weak macro equality writeable string literal inttype-spec : for const / volatile for external declaration for function return overload-spec : ( complete ) ( incomplete )
keyword-directive : keyword identifier for keyword-spec undef keyword identifier keyword-spec : discard value discard variable exhaustive fall into case keyword identifier operator operator set set reachable set unreachable type representation weak
type-directive : bottom identifier character character-sign character character-literal character-mapping character string-literal character-mapping compute promote identifier escape character-literal character-mapping integer literal literal-spec promoted type-id : type-id set character literal : type-id set longlong type : longlong-spec set ptrdiff_t : type-id set size_t : type-id set wchar_t : type-id set string literal : string-const set std namespace : scope-name type identifier for type-spec character-sign : signed unsigned either character-mapping : as character-literal allow disallow literal-spec : literal-base literal-suffixopt literal-type-list literal-base : decimal octal hexadecimal literal-suffix : unsigned long unsigned long long long unsigned long long literal-type-list : * literal-type-spec integer-literal literal-type-spec | literal-type-list ? literal-type-spec | literal-type-list literal-type-spec : : type-id * allowopt : identifier * * allowopt : longlong-spec : long long long string-const : const no const scope-name : identifier :: type-spec : bottom ptrdiff_t size_t wchar_t ... printf ... scanf
linkage-directive : const linkage linkage external linkage string-literal external volatile_t inline linkage linkage linkage resolution : linkage-spec linkage : external internal linkage-spec : ( linkage ) on ( linkage ) warning off
misc-directive : argument type-id as ... argument type-id as type-id compatible type : type-id == type-id : allow conversion identifier-list allow declaration block identifier begin declaration block end directive directive-spec directive-state discard expression exhaustive explicit cast cast-specopt allow includes depth integer-literal preserve preserve-list set expression set error limit integer-literal set name limit integer-literal warningopt suspend static identifier-list directive-spec : assert file ident import include_next unassert warning weak directive-state : allow warning disallow ( ignore ) allow ( ignore ) warning cast-operator : static_cast const_cast reinterpret_cast cast-spec : as cast-operator cast-spec | cast-operator preserve-list : identifier-list * identifier-list : identifier identifier-listopt
token-directive : token token-spec no_def token-list define token-list ignore token-list interface token-list undef token token-list extend interface header-name implement interface header-name tendra-token-directive : token token-spec no_def token-list define token-list reject token-list interface token-list undef token token-list extend header-name implement header-name member definition type-id : identifier member-offset member-offset : ::opt id-expression member-offset . ::opt id-expression member-offset [ constant-expression ] token-list : token-id token-listopt # preproc-token-list token-id : token-namespaceopt identifier type-id . identifier
token-spec : token-introduction token-identification token-introduction : exp-token statement-token type-token member-token procedure-token token-identification : token-namespaceopt identifier # external-identifieropt token-namespace : TAG external-identifier : - preproc-token-list exp-token : EXP exp-storageopt : type-id : NAT INTEGER exp-storage : lvalue rvalue const statement-token : STATEMENT type-token : TYPE VARIETY VARIETY signed VARIETY unsigned FLOAT ARITHMETIC SCALAR CLASS STRUCT UNION member-token : MEMBER access-specifieropt member-type-id : type-id : member-type-id : type-id type-id % constant-expression access-specifier : public protected private procedure-token : general-procedure simple-procedure function-procedure general-procedure : PROC { bound-toksopt | prog-parsopt } token-introduction bound-toks : bound-token bound-token , bound-toks bound-token : token-introduction token-namespaceopt identifier prog-pars : program-parameter program-parameter , prog-pars program-parameter : EXP identifier STATEMENT identifier TYPE type-id MEMBER type-id : identifier PROC identifier simple-procedure : PROC ( simple-toksopt ) token-introduction simple-toks : simple-token simple-token , simple-toks simple-token : token-introduction token-namespaceopt identifieropt function-procedure : FUNC type-id :



C++ Producer Guide: Standard library

C++ Producer Guide

March 1998



2.7.1 - Common porting problems
2.7.2 - Porting libio

2.7. Standard library

At present the default implementation contains only a very small fraction of the ISO C++ library, namely those headers - <exception>, <new> and <typeinfo> - which are an integral part of the language specification. These headers are also those which require the most cooperation between the producer and the library implementation, as described in the previous section.

It is suggested that if further library components are required then they be acquired from third parties. It should be noted however that such libraries may require some effort to be ported to an ISO compliant compiler; for example, some information on porting the libio component of libg++, which contains some very compiler-dependent code, is given below. Libraries compiled with other C++ compilers may not link correctly with modules compiled using tcc.


2.7.1. Common porting problems

Experience in porting pre-ISO C++ programs has shown that the following new ISO C++ features tend to cause the most problems:

  1. Implicit int has been banned.
  2. String literals are now const , although in simple assignments the const is implicitly removed.
  3. The scope of a variable declared in a for-init-statement is the for statement itself.
  4. Variables have linkage and so should be declared extern "C" if appropriate.
  5. The standard C library is now declared in the std namespace.
  6. The template compilation model has been clarified. The notation for explicit instantiation and specialisation has changed.
  7. Templates are analysed at their point of definition as well as their point of instantiation.
  8. New keywords have been introduced.
Note that many of these features have controlling #pragma directives, so that it is possible to switch to using the pre-ISO features.


2.7.2. Porting libio

Perhaps the library component which is most likely to be required is <iostream>. A readily available freeware implementation of a pre-ISO (i.e. non-template) <iostream> package is given by the libio component of libg++. This section describes some of the problems encountered in porting this package (version 2.7.1).

The tcc compiler flags used in porting libio were:

	tcc -Yposix -Yc++ -sC:cc
indicating that the POSIX API is to be used and that the .cc suffix is used to identify C++ source files.

In iostream.h, cin, cout, cerr and clog should be declared with C linkage, otherwise the C++ producer includes the type in the mangled name and the fake iostream hacks in stdstream.cc don't work. The definition of EOF in this header can cause problems if both iostream.h and stdio.h are included. In this case stdio.h should be included first.

In stdstream.cc, the correct definitions for the fake iostream structures are as follows:

	struct _fake_istream::myfields {
	    _ios_fields *vb ;		// pointer to virtual base class ios
	    _IO_ssize_t _gcount ;	// istream fields
	    void *vptr ;		// pointer to virtual function table
	} ;

	struct _fake_ostream::myfields {
	    _ios_fields *vb ;		// pointer to virtual base class ios
	    void *vptr ;		// pointer to virtual function table
	} ;
The fake definition macros are then defined as follows:
	#define OSTREAM_DEF( NAME, SBUF, TIE, EXTRA_FLAGS )\
	    extern "C" _fake_ostream NAME = { { &NAME.base, 0 }, .... } ;

	#define ISTREAM_DEF( NAME, SBUF, TIE, EXTRA_FLAGS )\
	    extern "C" _fake_istream NAME = { { &NAME.base, 0, 0 }, .... } ;
Note that these are declared with C linkage as above.

In stdstrbufs.cc, the correct definitions for the virtual function table names are as follows:

	#define filebuf_vtable		__vt__7filebuf
	#define stdiobuf_vtable		__vt__8stdiobuf
Note that the _G_VTABLE_LABEL_PREFIX macro is incorrectly defined by the configuration process (it should be __vt__), but the ## directives in which it is used don't work on an ISO compliant preprocessor anyway (token concatenation takes place after replacement of macro parameters, but before further macro expansion). The dummy virtual function tables should also be declared with C linkage to suppress name mangling.

In addition, the initialisation of the standard streams relies on the file pointers stdout etc. being constant expressions, which in general they are not. The directive:
	#pragma TenDRA++ rvalue token as const allow
will cause the C++ producer to assume that all tokenised rvalue expressions are constant.

In streambuf.cc, if errno is to be explicitly declared it should have C linkage or be declared in the std namespace.

In iomanip.cc, the explicit template instantiations should be prefixed by template. The corresponding template declarations in iomanip.h should be declared using export (note that the __GNUG__ version uses extern, which may yet win out over export).



C++ Producer Guide: Style guide

C++ Producer Guide

March 1998



3.1.1 - C coding standard
3.1.2 - API usage and target dependencies
3.1.3 - Source code modules

3.1. Source code organisation

This section describes the basic organisation of the source code for the C++ producer. This includes the coding conventions applied, the application programming interface (API) observed and the division of the code into separate modules.


3.1.1. C coding standard

The C++ producer is written in a subset of C which is compatible with C++ (it compiles with most C compilers, but also bootstraps itself). It has been written to conform to the local (OSSG) C coding standard, with most of the conformance checking automated by use of a user-defined compilation profile, ossg_std.h. The standard macros described in the coding standard are defined in the standard header ossg.h. This is included from the header config.h, which is included by all source files. The default definitions for these macros, set according to the value of __STDC__ and other compiler-defined macros, should be correct, but they can be overridden by defining the FS_* macros, described in the header, as command-line options.

The most important of these macros are those used to handle function prototypes, enabling both ISO and pre-ISO C compilers to be accommodated. Simple function definitions take the form:

	ret function
	    PROTO_N ( ( p1, p2, ...., pn ) )
	    PROTO_T ( par1 p1 X par2 p2 X .... X parn pn )
	{
	    ....
	}
with the PROTO_N macro being used to list the parameter names (note the double bracket) and the PROTO_T macro being used to list the parameter types using X (cartesian product) as a separator. The corresponding function declaration will have the form:
	ret function PROTO_S ( ( par1, par2, ...., parn ) ) ;
The case where there are no parameter types is defined using:
	ret function
	    PROTO_Z ()
	{
	    ....
	}
and declared as:
	ret function PROTO_S ( ( void ) ) ;
Functions with ellipses are defined using:
	#if FS_STDARG
	#include <stdarg.h>
	#else
	#include <varargs.h>
	#endif

	ret function
	    PROTO_V ( ( par1 p1, par2 p2, ...., parn pn, ... ) )
	{
	    va_list args ;
	    ....
	#if FS_STDARG
	    va_start ( args, pn ) ;
	#else
	    par1 p1 ;
	    par2 p2 ;
	    ....
	    parn pn ;
	    va_start ( args ) ;
	    p1 = va_arg ( args, par1 ) ;
	    p2 = va_arg ( args, par2 ) ;
	    ....
	    pn = va_arg ( args, parn ) ;
	#endif
	    ....
	    va_end ( args ) ;
	    ....
	}
and declared as:
	ret function PROTO_W ( ( par1, par2, ...., parn, ... ) ) ;
Note that <varargs.h> does not allow for parameters preceding the va_alist, so the fixed parameters need to be explicitly assigned from args.

The following TenDRA keywords are defined (with suitable default values for non-TenDRA compilers):

	#pragma TenDRA keyword SET for set
	#pragma TenDRA keyword UNUSED for discard variable
	#pragma TenDRA keyword IGNORE for discard value
	#pragma TenDRA keyword EXHAUSTIVE for exhaustive
	#pragma TenDRA keyword REACHED for set reachable
	#pragma TenDRA keyword UNREACHED for set unreachable
	#pragma TenDRA keyword FALL_THROUGH for fall into case

Various flags giving properties of the compiler being used are defined in ossg.h. Among the most useful are FS_STDARG, which is true if the compiler supports ellipsis functions (see above), and FS_STDC_HASH, which is true if the preprocessor supports the ISO stringising and concatenation operators. The macros CONST and VOLATILE, to be used in place of const and volatile, are also defined.

A policy of rigorous static program checking is enforced. The TenDRA C producer is applied with the user-defined compilation mode ossg_std.h and intermodule checks enabled. Checking is applied to both the C and #pragma token output files generated by calculus. The C++ producer itself is applied with the same checks. gcc -Wall and various versions of lint are also periodically applied.


3.1.2. API usage and target dependencies

Most of the API features used in the C++ producer are to be found in the ISO C API, with just a couple of extensions from POSIX required. These POSIX features can be disabled with minimal loss of functionality by defining the macro FS_POSIX to be false.

The following features are used from the ISO <stdio.h> header:

	BUFSIZ		EOF		FILE		SEEK_SET
	fclose		fflush		fgetc		fgets
	fopen		fprintf		fputc		fputs
	fread		fseek		fwrite		rewind
	sprintf		stderr		stdin		stdout
	vfprintf
from the ISO <stdlib.h> header:
	EXIT_SUCCESS	EXIT_FAILURE	NULL		abort
	exit		free		malloc		realloc
	size_t
and from the ISO <string.h> header:
	memcmp		memcpy		strchr		strcmp
	strcpy		strlen		strncmp		strrchr
The three headers just mentioned are included in all source files via the ossg_api.h header file (included by config.h). The remaining headers are only included as and when they are needed. The following features are used from the ISO <ctype.h> header:
	isalpha		isprint
from the ISO <limits.h> header:
	UCHAR_MAX	UINT_MAX	ULONG_MAX
from the ISO <stdarg.h> header:
	va_arg		va_end		va_list		va_start
(note that if FS_STDARG is false the XPG3 <varargs.h> header is used instead); and from the ISO <time.h> header:
	localtime	time		time_t		struct tm
	tm::tm_hour	tm::tm_mday	tm::tm_min	tm::tm_mon
	tm::tm_sec	tm::tm_year
The following features are used from the POSIX <sys/stat.h> header:
	stat		struct stat	stat::st_dev	stat::st_ino
	stat::st_mtime
The <sys/types.h> header is also included to provide the necessary types for <sys/stat.h>.

There are a couple of target dependencies in the producer which can be overridden using command-line options:

  1. It assumes that if a count of the number of characters read from an input file is maintained, then that count value can be used as an argument to fseek. This may not be true on machines where the end of line marker consists of both a newline and a carriage return. In this case the -m-f command-line option can be used to switch to a slower, but more portable, algorithm for setting file positions.

  2. It assumes that a file is uniquely determined by the st_dev and st_ino fields of its corresponding stat value. This is used when processing #include directives to prevent a file being read more than once if this is not necessary. This assumption may not be true on machines with a small ino_t type which have file systems mounted from machines with a larger ino_t type. In this case the -m-i command-line option can be used to disable this check.


3.1.3. Source code modules

For convenience, the source code is divided between a number of directories:

  1. The base directory contains only the module containing the main function, the basic type descriptions and the Makefile.
  2. The directories obj_c and obj_tok contain respectively the C and #pragma token headers generated from the type algebra by calculus . The directory obj_templ contains certain calculus template files.
  3. The directory utility contains routines for such utility operations as memory allocation and error reporting, including the error catalogue.
  4. The directory parse contains routines concerned with parsing and preprocessing the input, including the sid grammar.
  5. The directory construct contains routines for building up and analysing the internal representation of the parsed code.
  6. The directory output contains routines for outputting the internal representation in various formats including as a TDF capsule, a C++ spec file, or a symbol table dump file.

Each module consists of a C source file, file.c say, containing function definitions, and a corresponding header file file.h containing the declarations of these functions. The header is included within its corresponding source file to check these declarations; it is protected against multiple inclusions by a macro of the form FILE_INCLUDED. The header contains a brief comment describing the purpose of the module; each function in the source file contains a comment describing its purpose, its inputs and its output.

The following table lists all the source modules in the C++ producer with a brief description of the purpose of each:

Module Directory Purpose
access construct member access control
allocate construct new and delete expressions
assign construct assignment expressions
basetype construct basic type operations
buffer utility buffer reading and writing routines
c_class obj_c calculus support routines
capsule output top-level TDF encoding routines
cast construct cast expressions
catalog utility error catalogue definition
char parse character sets
check construct expression checking
chktype construct type checking
class construct class and enumeration definitions
compile output TDF tag definition encoding routines
constant parse integer constant evaluation
construct construct constructors and destructors
convert construct standard type conversions
copy construct expression copying
debug utility development aids
declare construct variable and function declarations
decode output bitstream reading routines
derive construct base class graphs; inherited members
destroy construct garbage collection routines
diag output TDF diagnostic output routines
dump output symbol table dump routines
encode output bitstream writing routines
error utility error output routines
exception construct exception handling
exp output TDF expression encoding routines
expression construct expression processing
file parse low-level I/O routines
function construct function definitions and calls
hash parse hash table and identifier name routines
identifier construct identifier expressions
init output TDF initialiser expression encoding routines
initialise construct variable initialisers
instance construct template instances and specialisations
inttype construct integer and floating point type routines
label construct labels and jumps
lex parse lexical analysis
literal parse integer and string literals
load output C++ spec reading routines
macro parse macro expansion
main - main routine; command-line arguments
mangle output identifier name mangling
member construct member selector expressions
merge construct intermodule merge routines
namespace construct namespaces; name look-up
operator construct overloaded operators
option utility compiler options
overload construct overload resolution
parse parse low-level parser routines
pragma parse #pragma directives
predict parse parser look-ahead routines
preproc parse preprocessing directives
print utility error argument printing routines
quality construct extra expression checks
redeclare construct variable and function redeclarations
rewrite construct inline member function definitions
save output C++ spec writing routines
shape output TDF shape encoding routines
statement construct statement processing
stmt output TDF statement encoding routines
struct output TDF structure encoding routines
syntax[0-9]* parse sid parser output
system utility system dependent routines
table parse portability table reading
template construct template declarations and checks
throw output TDF exception handling encoding routines
tok output TDF standard tokens encoding
tokdef construct token definitions
token construct token declarations and expansion
typeid construct run-time type information
unmangle output identifier name unmangling
variable construct variable analysis
virtual construct virtual functions
xalloc utility memory allocation routines



C++ Producer Guide: TDF generation

C++ Producer Guide

March 1998



3.5. TDF generation

The TDF encoding as a bitstream is expressed as a series of macros generated by the make_tdf tool from the TDF specification database. Note that the version of the TDF database used contains a couple of corrections from the standard version:

  1. A construct make_token_def has been added to represent a token definition.
  2. The sort diag_tag has been added to the edge constructors.
The macros generated only handle the encoding of the construct - the construct parameters need to be encoded by hand (the C producer does something similar, but including the construct parameters). For example, make_tdf generates a macro:
	void ENC_plus ( BITSTREAM * ) ;
which encodes the plus construct (91 as 7 bits in extended format). A typical use of this macro, for adding the expressions a and b would be:
	ENC_plus ( bs ) ;
	ENC_impossible ( bs ) ;
	bs = enc_exp ( bs, a ) ;
	bs = enc_exp ( bs, b ) ;

Each function or variable is compiled to TDF as its definition is encountered. For some definitions, such as inline functions, the compilation may be deferred until it is clear whether or not the identifier has been used. There is a final pass over all identifiers during the variable analysis routines which incorporates this check. Because of the organisation of a TDF capsule it is necessary to store all of the compiled TDF in memory until the end of the program, when the complete capsule, including external tag and token names and linkage information, is written to the output file.



C++ Producer Guide: Token syntax

C++ Producer Guide

March 1998



2.3.1 - Token specifications
2.3.2 - Token arguments
2.3.3 - Defining tokens

2.3. Token syntax

The C and C++ producers allow place-holders for various categories of syntactic classes to be expressed using directives of the form:

	#pragma TenDRA token token-spec
or simply:
	#pragma token token-spec
These place-holders are represented as TDF tokens and hence are called tokens. These tokens stand for a certain type, expression or whatever which is to be represented by a certain named TDF token in the producer output. This mechanism is used, for example, to allow C API specifications to be represented target independently. The types, functions and expressions comprising the API can be described using #pragma token directives and the target dependent definitions of these tokens, representing the implementation of the API on a particular machine, can be linked in later. This mechanism is described in detail elsewhere.

A summary of the grammar for the #pragma token directives accepted by the C++ producer is given as an annex.


2.3.1. Token specifications

A token specification is divided into two components, a token-introduction giving the token sort, and a token-identification giving the internal and external token names:

	token-spec :
		token-introduction token-identification

	token-introduction :
		exp-token
		statement-token
		type-token
		member-token
		procedure-token

	token-identification :
		token-namespaceopt identifier # external-identifieropt

	token-namespace :
		TAG

	external-identifier :
		-
		preproc-token-list
The TAG qualifier is used to indicate that the internal name lies in the C tag namespace. This only makes sense for structure and union types. The external token name can be given by any sequence of preprocessing tokens. These tokens are not macro expanded. If no external name is given then the internal name is used. The special external name - is used to indicate that the token does not have an associated external name, and hence is local to the current translation unit. Such a local token must be defined. White space in the external name (other than at the start or end) is used to indicate that a TDF unique name should be used. The white space serves as a separator for the unique name components.

Expression tokens

Expression tokens are specified as follows:

	exp-token :
		EXP exp-storageopt : type-id :
		NAT
		INTEGER
representing an expression of the given type, a non-negative integer constant and a general integer constant, respectively. Each expression has an associated storage class:
	exp-storage :
		lvalue
		rvalue
		const
indicating whether it is an lvalue, an rvalue or a compile-time constant expression. An absent exp-storage is equivalent to rvalue. All expression tokens lie in the macro namespace; that is, they may potentially be defined as macros.
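
For example, an API description might contain expression token specifications along the following lines (a sketch; the particular names are illustrative rather than taken verbatim from any TenDRA API description):
	#pragma token EXP const : int : BUFSIZ #	// compile-time constant of type int
	#pragma token EXP lvalue : int : errno #	// assignable int lvalue
	#pragma token NAT block_size #			// hypothetical non-negative integer constant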

For backwards compatibility with the C producer, the directive:

	#pragma TenDRA++ rvalue token as const allow
causes rvalue tokens to be treated as const tokens.

Statement tokens

Statement tokens are specified as follows:

	statement-token :
		STATEMENT
All statement tokens lie in the macro namespace.

Type tokens

Type tokens are specified as follows:

	type-token :
		TYPE
		VARIETY
		VARIETY signed
		VARIETY unsigned
		FLOAT
		ARITHMETIC
		SCALAR
		CLASS
		STRUCT
		UNION
representing a generic type, an integral type, a signed integral type, an unsigned integral type, a floating point type, an arithmetic (integral or floating point) type, a scalar (arithmetic or pointer) type, a class type, a structure type and a union type respectively.
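
For example (illustrative specifications, not taken verbatim from any particular API description):
	#pragma token TYPE FILE #		// a generic type
	#pragma token VARIETY wchar_t #		// an integral type
	#pragma token STRUCT TAG tm #		// a structure type in the tag namespace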

Warning: Floating-point, arithmetic and scalar token types have not yet been implemented correctly in either the C or C++ producers.

Member tokens

Member tokens are specified as follows:

	member-token :
		MEMBER access-specifieropt member-type-id : type-id :
where an access-specifier of public is assumed if none is given. The member type is given by:
	member-type-id :
		type-id
		type-id % constant-expression
where % is used to denote bitfield members (since : is used as a separator). The second type denotes the structure or union the given member belongs to. Different types can have members with the same internal name, but the external token name must be unique. Note that only non-static data members can be represented in this form.

Two declarations for the same MEMBER token (including token definitions) should have the same type, however the directive:

	#pragma TenDRA++ incompatible member declaration allow
allows declarations with different types, provided these types have the same size and alignment requirements.
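
For example (a sketch; the second specification uses a hypothetical structure to show the bitfield form):
	#pragma token MEMBER int : struct tm : tm_sec #
	#pragma token MEMBER unsigned % 1 : struct flag_set : active #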

Procedure tokens

Procedure, or high-level, tokens are specified in one of three ways:

	procedure-token :
		general-procedure
		simple-procedure
		function-procedure
All procedure tokens (except ellipsis functions - see below) lie in the macro namespace. The most general form of procedure token specifies two sets of parameters. The bound parameters are those which are used in encoding the actual TDF output, and the program parameters are those which are specified in the program. The program parameters are expressed in terms of the bound parameters. A program parameter can be an expression token parameter, a statement token parameter, a member token parameter, a procedure token parameter or any type. The bound parameters are deduced from the program parameters by a similar process to that used in template argument deduction.
	general-procedure :
		PROC { bound-toksopt | prog-parsopt } token-introduction


	bound-toks :
		bound-token
		bound-token , bound-toks

	bound-token :
		token-introduction token-namespaceopt identifier

	prog-pars :
		program-parameter
		program-parameter , prog-pars

	program-parameter :
		EXP identifier
		STATEMENT identifier
		TYPE type-id
		MEMBER type-id : identifier
		PROC identifier

The simplest form of a general-procedure is one in which the prog-pars correspond precisely to the bound-toks. In this case the syntax:

	simple-procedure :
		PROC ( simple-toksopt ) token-introduction

	simple-toks :
		simple-token
		simple-token , simple-toks

	simple-token :
		token-introduction token-namespaceopt identifieropt
may be used. Note that the parameter names are optional.

A function token is specified as follows:

	function-procedure :
		FUNC type-id :
where the given type is a function type. This has two effects: firstly a function with the given type is declared; secondly, if the function type has the form:
	r ( p1, ...., pn )
a procedure token with sort:
	PROC ( EXP rvalue : p1 :, ...., EXP rvalue : pn : ) EXP rvalue : r :
is declared. For ellipsis function types only the function, not the token, is declared. Note that the token behaves like a macro definition of the corresponding function. Unless explicitly enclosed in a linkage specification, a function declared using a FUNC token has C linkage. Note that it is possible for two FUNC tokens to have the same internal name, because of function overloading, however external names must be unique.

The directive:

	#pragma TenDRA incompatible interface declaration allow
can be used to allow incompatible redeclarations of functions declared using FUNC tokens. The token declaration takes precedence.

Warning: Certain of the more complex forms of PROC tokens, such as tokens with PROC parameters, have not been implemented in either the C or C++ producers.


2.3.2. Token arguments

As mentioned above, the program parameters for a PROC token are those specified in the program itself. These arguments are expressed as a comma-separated list enclosed in brackets, the form of each argument being determined by the corresponding program parameter.

An EXP argument is an assignment expression. This must be an lvalue for lvalue tokens and a constant expression for const tokens. The argument is converted to the token type (for lvalue tokens this is essentially a conversion between the corresponding reference types). A NAT or INTEGER argument is an integer constant expression. In the former case this must be non-negative.

A STATEMENT argument is a statement. This statement should not contain any labels or any goto or return statements.

A type argument is a type identifier. This must name a type of the correct category for the corresponding token. For example, a VARIETY token requires an integral type.

A member argument must describe the offset of a member or nested member of the given structure or union type. The type of the member should agree with that of the MEMBER token. The general form of a member offset can be described in terms of member selectors and array indexes as follows:

	member-offset :
		::opt id-expression
		member-offset . ::opt id-expression
		member-offset [ constant-expression ]

A PROC argument is an identifier. This identifier must name a PROC token of the appropriate sort.


2.3.3. Defining tokens

Given a token specification of a syntactic object and a normal language definition of the same object (including macro definitions if the token lies in the macro namespace), the producers attempt to unify the two by defining the TDF token in terms of the given definition. Whether the token specification occurs before or after the language definition is immaterial. Unification also takes place in situations where, for example, two types are known to be compatible. Multiple consistent explicit token definitions are allowed by default when allowed by the language; this is controlled by the directive:

	#pragma TenDRA compatible token allow
The default unification behaviour may be modified using the directives:
	#pragma TenDRA no_def token-list
	#pragma TenDRA define token-list
	#pragma TenDRA reject token-list
or equivalently:
	#pragma no_def token-list
	#pragma define token-list
	#pragma ignore token-list
which set the state of the tokens given in token-list. A state of no_def means that no unification is attempted and that any attempt to explicitly define the token results in an error. A state of define means that unification takes place and that the token must be defined somewhere in the translation unit. A state of reject means that unification takes place as normal, but any resulting token definition is discarded and not output to the TDF capsule.

If a token with the state define is not defined, then the behaviour depends on the sort of the token. A FUNC token is implicitly defined in terms of its underlying function, such as:

	#define f( a1, ...., an )	( f ) ( a1, ...., an )
Other undefined tokens cause an error. This behaviour can be modified using the directives:
	#pragma TenDRA++ implicit token definition allow
	#pragma TenDRA++ no token definition allow
respectively.

The primitive operations, no_def, define and reject, can also be expressed using the context sensitive directive:

	#pragma TenDRA interface token-list
or equivalently:
	#pragma interface token-list
By default this is equivalent to no_def, but may be modified by inclusion using one of the directives:
	#pragma TenDRA extend header-name
	#pragma TenDRA implement header-name
or equivalently:
	#pragma extend interface header-name
	#pragma implement interface header-name
These are equivalent to:
	#include header-name
except that the form [....] is allowed as a header name. This is equivalent to <....> except that it starts the directory search after the point at which the including file was found, rather than at the start of the path (i.e. it is equivalent to the #include_next directive found in some preprocessors). The effect of the extend directive on the state of the interface directive is as follows:
	no_def -> no_def
	define -> reject
	reject -> reject
The effect of the implement directive is as follows:
	no_def -> define
	define -> define
	reject -> reject
That is to say, an implement directive will cause all the tokens in the given header to be defined and their definitions output. Any tokens included in this header by extend may be defined, but their definitions will not be output. This is precisely the behaviour which is required to ensure that each token is defined exactly once in an API library build.

The lists of tokens in the directives above are expressed in the form:

	token-list :
		token-id token-listopt
		# preproc-token-list
where a token-id represents an internal token name:
	token-id :
		token-namespaceopt identifier
		type-id . identifier
Note that member tokens are specified by means of both the member name and its parent type. In this type specifier, TAG, rather than class, struct or union, may be used in elaborated type specifiers for structure and union tokens. If the token-id names an overloaded function then the directive is applied to all FUNC tokens of that name. It is possible to be more selective using the # form which allows the external token name to be specified. Such an entry must be the last in a token-list.

A related directive has the form:

	#pragma TenDRA++ undef token token-list
which undefines all the given tokens so that they are no longer visible.

As noted above, a macro is only considered as a token definition if the token lies in the macro namespace. Tokens which are not in the macro namespace, such as types and members, cannot be defined using macros. Occasionally API implementations do define member selectors as macros in terms of other member selectors. Such a token needs to be explicitly defined using a directive of the form:

	#pragma TenDRA member definition type-id : identifier member-offset

where member-offset is as above.



TDF Token Register

TDF Token Register

January 1998



1 - Introduction
1.1 - Background
1.2 - Token Register Objectives
2 - Naming scheme
3 - Target dependency tokens
3.1 - Integer variety representations
3.2 - Floating variety representations
3.3 - Non-numeric representations
3.4 - Common conversion routines
4 - Basic mapping tokens
4.1 - C mapping tokens
4.2 - Fortran mapping tokens
5 - TDF Interface tokens
5.1 - Exception handling
5.2 - TDF Diagnostic Specification
5.3 - Accessing variable parameter lists
6 - Language Programming Interfaces
6.1 - The DRA C LPI
6.2 - The DRA C++ LPI
6.3 - The Etnoteam Fortran LPI
7 - Application Programming Interfaces
7.1 - ANSI C standard functions
7.2 - Common exceptional cases

1. Introduction

1.1. Background

TDF is an interface used for architecture neutral and programming language neutral representation of programs. It is used both within portable language specific compilation systems, and for architecture neutral distribution of compiled programs. For full details see TDF Specification, Issue 4.0 (Revision 1).

TDF tokens offer a general encapsulation and expansion mechanism which allows any implementation detail to be delayed to the most appropriate stage of program translation. This provides a means for encapsulating any target dependencies in a neutral form, with specific implementations defined through standard TDF features. This raises a natural opportunity for well understood sets of TDF tokens to be included along with TDF itself as an interface between TDF tools.

This first revision includes additional tokens for accessing variable parameter lists (see section 5.3), and a C mapping token to support the optional type long long int.

1.2. Token Register Objectives

As TDF tokens may be used to represent any piece of TDF, they may be used to supplement any TDF interface between software tools. However, that raises the issue of control authority for such an interface. In many cases, the interfaces may be considered to `belong' to a particular tool. In other cases, the names and specifications of tokens need to be recorded for common use.

This token register is used to record the names and specifications of tokens which may need to be assumed by more than one software tool. It also defines a naming scheme which should be used consistently to avoid ambiguity between tokens.

Five classes of tokens are identified:

  1. target dependency tokens, which are concerned with describing target architecture or translator detail;

  2. basic mapping tokens, which relate general language features to architecture detail;

  3. TDF interface tokens, which may be required to complete the specification of some TDF constructs;

  4. language programming interfaces (LPI) which may be specific to a particular producer;

  5. application programming interfaces (API).
These classes are discussed separately, in sections 3 to 7 below.


2. Naming scheme

A flat name space will suffice for TDF token names if producer writers adopt the simple constraints described here. TDF has separate provision for a hierarchic unique naming scheme, but that was intended for a specific purpose that has not yet been realised.

External names for program or application specific tokens should be confined to `simple names', which we define to mean that they consist only of letters, digits and underscore, the characters allowed in C identifiers. Normally there will be very few such external names, as tokens internal to a single capsule do not require to be named. All other token names will consist of some controlled prefix followed by a simple name, with the prefix identifying the control authority.

For API tokens, the prefix will consist of a sequence of simple names, each followed by a dot, where the first simple name is the name of the API as listed or referred to in section 7.

The prefix for producer specific and target dependency tokens will begin and end with characters that distinguish them from the above cases. However, common tools such as DISP, TNC and PL-TDF assume that token names contain only letters, digits, underscore, dot, and/or twiddle.

The following prefixes are currently reserved:

	~	TDF interface tokens as specified in section 5 below, and also LPI tokens specific to DRA's C producer.
	.~	Registered target dependency tokens as specified in section 3 below, and basic mapping tokens specified in section 4.
	~cpp.	LPI tokens specific to DRA's C++ producer, other than those it shares with the C producer.
	.Et~	LPI tokens specific to Etnoteam's Fortran77 producer.


3. Target dependency tokens

Target dependency tokens provide a common interface to simple constructs where the required detail for any specific architecture can be expressed within TDF, but the detail will be architecture specific. Every installer should have associated with it a capsule containing the installer specific definitions of all the tokens specified within this section 3.

Some of these tokens provide information about the integer and floating point variety representations supported by an installer, in a form that may be used by TDF analysis tools for architecture specific analysis, or by library generation tools when generating an architecture specific version of a library. Other target dependency tokens provide commonly required conversion routines.

It is recommended that these tokens should not be used directly within application programs. They are designed for use within LPI definitions, which can provide a more appropriate interface for applications.

3.1. Integer variety representations

Since TDF specifies integer representations to be twos-complement, the number of bits required to store an integer variety representation fully specifies that representation. The minimum or maximum signed or unsigned integer that can be represented within any variety representation can easily be determined from the number of bits.
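
For example, the bounds follow directly from the width (a minimal C sketch of the twos-complement relationship, not part of the register itself):

	#include <stdint.h>

	/* Bounds of a twos-complement integer of width w bits (1 <= w <= 63). */
	int64_t  signed_min(int w)   { return -((int64_t)1 << (w - 1)); }
	int64_t  signed_max(int w)   { return ((int64_t)1 << (w - 1)) - 1; }
	uint64_t unsigned_max(int w) { return ((uint64_t)1 << w) - 1; }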

3.1.1. .~rep_var_width

	w:	NAT
		-> NAT
If w lies within the range of VARIETY sizes supported by the associated installer, rep_var_width(w) will be the number of bits required to store values of VARIETY var_width(b, w), for any BOOL b.

If w is outside the range of VARIETY sizes supported by the associated installer, rep_var_width(w) will be 0.

3.1.2. .~rep_atomic_width

		-> NAT
.~rep_atomic_width will be the number of bits required to store values of some VARIETY v such that assign and assign_with_mode are atomic operations if the value assigned has SHAPE integer(v). The TDF specification guarantees existence of such a number.

3.2. Floating variety representations

Floating point representations are much more diverse than integers, but we may assume that each installer will support a finite set of distinct representations. For convenience in distinguishing between these representations within architecture specific TDF, the set of distinct representations supported by any specific installer are stated to be ordered into a sequence of non-decreasing memory size. An analysis tool can easily count through this sequence to determine the properties of all supported representations, starting at 1 and using .~rep_fv_width to test for the sequence end.
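
A C sketch of such an enumeration is given below; rep_fv_width stands for an application of the .~rep_fv_width token, and examine_representation for whatever analysis is required (both names are illustrative only):

	extern int  rep_fv_width(int n);           /* stands for .~rep_fv_width */
	extern void examine_representation(int n); /* architecture specific analysis */

	void walk_float_representations(void)
	{
		/* Sequence numbers start at 1; a width of 0 marks the end. */
		for (int n = 1; rep_fv_width(n) != 0; n++)
			examine_representation(n);
	}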

3.2.1. .~rep_fv

	n:	NAT
		-> FLOATING_VARIETY
.~rep_fv(n) will be the FLOATING_VARIETY whose representation is the nth of the sequence of supported floating point representations. n will lie within this range.

3.2.2. .~rep_fv_width

	n:	NAT
		-> NAT
If n lies within the sequence range of supported floating point representations, .~rep_fv_width(n) will be the number of bits required to store values of FLOATING_VARIETY .~rep_fv(n).

If n is outside the sequence range of supported floating point representations, .~rep_fv_width(n) will be 0.

3.2.3. .~rep_fv_radix

	n:	NAT
		-> NAT
.~rep_fv_radix(n) will be the radix used in the representation of values of FLOATING_VARIETY .~rep_fv(n).

n will lie within the sequence range of supported floating point representations.

3.2.4. .~rep_fv_mantissa

	n:	NAT
		-> NAT
.~rep_fv_mantissa(n) will be the number of base .~rep_fv_radix(n) digits in the mantissa representation of values of FLOATING_VARIETY .~rep_fv(n).

n will lie within the sequence range of supported floating point representations.

3.2.5. .~rep_fv_min_exp

	n:	NAT
		-> NAT
.~rep_fv_min_exp(n) will be the maximum integer m such that (.~rep_fv_radix(n))^(-m) is exactly representable (though not necessarily normalised) by the FLOATING_VARIETY .~rep_fv(n).

n will lie within the sequence range of supported floating point representations.

3.2.6. .~rep_fv_max_exp

	n:	NAT
		-> NAT
.~rep_fv_max_exp(n) will be the maximum integer m such that (.~rep_fv_radix(n))^m is exactly representable by the FLOATING_VARIETY .~rep_fv(n).

n will lie within the sequence range of supported floating point representations.

3.2.7. .~rep_fv_epsilon

	n:	NAT
		-> EXP FLOATING .~rep_fv(n)
.~rep_fv_epsilon(n) will be the smallest strictly positive real x such that (1.0 + x) is exactly representable by the FLOATING_VARIETY .~rep_fv(n).

n will lie within the sequence range of supported floating point representations.

3.2.8. .~rep_fv_min_val

	n:	NAT
		-> EXP FLOATING .~rep_fv(n)
.~rep_fv_min_val(n) will be the smallest strictly positive real number that is exactly representable (though not necessarily normalised) by the FLOATING_VARIETY .~rep_fv(n).

n will lie within the sequence range of supported floating point representations.

3.2.9. .~rep_fv_max_val

	n:	NAT
		-> EXP FLOATING .~rep_fv(n)
.~rep_fv_max_val(n) will be the largest real number that is exactly representable by the FLOATING_VARIETY .~rep_fv(n).

n will lie within the sequence range of supported floating point representations.

3.3. Non-numeric representations

3.3.1. .~ptr_width

		-> NAT
.~ptr_width will be the minimum .~rep_var_width(w) for any w such that any pointer to any alignment may be converted to an integer of VARIETY var_width(b,w), for some BOOL b, and back again without loss of information, using the conversions .~ptr_to_int and .~int_to_ptr (q.v.).

3.3.2. .~best_div

		-> NAT
.~best_div is 1 or 2 to indicate preference for class 1 or class 2 division and modulus (as defined in the TDF Specification). This token would be used in situations where either class is valid but must be used consistently.

3.3.3. .~little_endian

		-> BOOL
.~little_endian is a property of the relationship between different variety representations and arrays. If an array of a smaller variety can be mapped onto a larger variety, and .~little_endian is true, then smaller indices of the smaller variety array map onto smaller ranges of the larger variety. If .~little_endian is false, no such assertion can be made.

3.4. Common conversion routines

This subsection contains a set of conversion routines between values of different shapes, that are not required to have any specific meaning apart from reversability. If the storage space requirements for the two shapes are identical, the conversion can usually be achieved without change of representation. When that is the case, and if the two shapes can be stored at a common alignment, the conversion can simply be achieved by assignment via a common union, which will ensure the required alignment consistency.
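
The corresponding C idiom is assignment through a union (a sketch; it assumes float and unsigned int occupy the same number of bits and can share an alignment):

	/* Reversible reinterpretation of a float as an unsigned int, achieved
	 * by assigning into one member of a common union and reading the other. */
	unsigned int float_bits(float f)
	{
		union { float f; unsigned int u; } conv;
		conv.f = f;
		return conv.u;
	}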

3.4.1. .~ptr_to_ptr

	a1:	ALIGNMENT
	a2:	ALIGNMENT
	p:	EXP POINTER(a1)
		-> EXP POINTER(a2)
.~ptr_to_ptr converts pointers from one pointer shape to another.

If p is any pointer with alignment a1, then .~ptr_to_ptr (a2, a1, .~ptr_to_ptr(a1, a2, p)) shall result in the same pointer p, provided that the number of bits required to store a pointer with alignment a2 is not less than that required to store a pointer with alignment a1.

3.4.2. .~ptr_to_int

	a:	ALIGNMENT
	v:	VARIETY
	p:	EXP POINTER(a)
		-> EXP INTEGER(v)
.~ptr_to_int converts a pointer to an integer. The result is undefined if the VARIETY v is insufficient to distinguish between all possible distinct pointers p of alignment a.

3.4.3. .~int_to_ptr

	v:	VARIETY
	a:	ALIGNMENT
	i:	EXP INTEGER(v)
		-> EXP POINTER(a)
.~int_to_ptr converts an integer to a pointer. The result is undefined unless the integer i was obtained without modification from some pointer using .~ptr_to_int with the same variety and alignment arguments.

If p is any pointer with alignment a, and v is var_width(b, .~ptr_width) for some BOOL b, then .~int_to_ptr(v, a, .~ptr_to_int (a, v, p)) shall result in the same pointer p.
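
This mirrors the familiar C round trip through a sufficiently wide integer type (illustrative C only, not a token definition):

	#include <stdint.h>

	/* Round-trip a pointer through an integer wide enough to hold it,
	 * analogous to .~ptr_to_int followed by .~int_to_ptr. */
	int same_pointer(void *p)
	{
		uintptr_t i = (uintptr_t)p;   /* .~ptr_to_int */
		void *q = (void *)i;          /* .~int_to_ptr */
		return q == p;                /* required to hold */
	}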

3.4.4. .~f_to_ptr

	a:	ALIGNMENT
	fn:	EXP PROC
		-> EXP POINTER(a)
.~f_to_ptr converts a procedure to a pointer. The result is undefined except as required for consistency with .~ptr_to_f.

3.4.5. .~ptr_to_f

	a:	ALIGNMENT
	p:	EXP POINTER(a)
		-> EXP PROC
.~ptr_to_f converts a pointer to a procedure. The result is undefined unless the pointer p was obtained without modification from some procedure f using .~f_to_ptr(a, f). The same procedure f is delivered.


4. Basic mapping tokens

Basic mapping tokens provide target specific detail for specific language features that are defined to be target dependent. This detail need not be fixed for a particular target architecture, but needs to provide compatibility with any external library with which an application program is to be linked.

Tokens specific to the C and Fortran language families are included. Like the target dependency tokens, it is again recommended that these tokens should not be used directly within application programs. They are designed for use within LPI definitions, which can provide a more appropriate interface for applications.

Every operating system variant of an installer should have associated with it a capsule containing the definitions of all the tokens specified within this section 4.

4.1. C mapping tokens

4.1.1. .~char_width

		-> NAT
.~char_width is the number of bits required to store values of the representation VARIETY that corresponds to the C type char.

4.1.2. .~short_width

		-> NAT
.~short_width is the number of bits required to store values of the representation VARIETY that corresponds to the C type short int.

4.1.3. .~int_width

		-> NAT
.~int_width is the number of bits required to store values of the representation VARIETY that corresponds to the C type int.

4.1.4. .~long_width

		-> NAT
.~long_width is the number of bits required to store values of the representation VARIETY that corresponds to the C type long int.

4.1.5. .~longlong_width

		-> NAT
.~longlong_width is the number of bits required to store values of the representation VARIETY that corresponds to the C type long long int.

4.1.6. .~size_t_width

		-> NAT
.~size_t_width is the number of bits required to store values of the representation VARIETY that corresponds to the C type size_t. It will be the same as one of .~short_width, .~int_width, or .~long_width.

4.1.7. .~fl_rep

		-> NAT
.~fl_rep is the sequence number (see subsection 3.2) of the floating point representation to be used for values of C type float.

4.1.8. .~dbl_rep

		-> NAT
.~dbl_rep is the sequence number (see subsection 3.2) of the floating point representation to be used for values of C type double.

4.1.9. .~ldbl_rep

		-> NAT
.~ldbl_rep is the sequence number (see subsection 3.2) of the floating point representation to be used for values of C type long double.

4.1.10. .~pv_align

		-> ALIGNMENT
.~pv_align is the common alignment for all pointers that can be represented by the C generic pointer type void*. For architecture independence, this would have to be a union of several alignments, but for many installers it can be simplified to alignment(integer(var_width(false, .~char_width))).

4.1.11. .~min_struct_rep

		-> NAT
.~min_struct_rep is the number of bits required to store values of the smallest C integral type which shares the same alignment properties as a structured value whose members are all of that same integral type. It will be the same as one of .~char_width, .~short_width, .~int_width, or .~long_width.

4.1.12. .~char_is_signed

		-> BOOL
.~char_is_signed is true if the C type char is treated as signed, or false if it is unsigned.

4.1.13. .~bitfield_is_signed

		-> BOOL
.~bitfield_is_signed is true if bitfield members of structures in C are treated as signed, or false if unsigned.

4.2. Fortran mapping tokens

4.2.1. .~F_char_width

		-> NAT
.~F_char_width is the number of bits required to store values of the representation VARIETY that corresponds to the Fortran77 type CHARACTER.

In most cases, .~F_char_width is the same as .~char_width.

4.2.2. .~F_int_width

		-> NAT
.~F_int_width is the number of bits required to store values of the representation VARIETY that corresponds to the Fortran77 type INTEGER.

In most cases, .~F_int_width is the same as .~int_width.

4.2.3. .~F_fl_rep

		-> NAT
.~F_fl_rep is the sequence number (see subsection 3.2) of the floating point representation to be used for values of Fortran77 type REAL, with the constraint that .~rep_fv_width(.~F_fl_rep ) = .~F_int_width.

If this constraint cannot be met, .~F_fl_rep will be 0.

4.2.4. .~F_dbl_rep

		-> NAT
.~F_dbl_rep is the sequence number (see subsection 3.2) of the floating point representation to be used for values of Fortran77 type DOUBLE PRECISION, with the constraint that .~rep_fv_width( .~F_dbl_rep) = 2 * .~F_int_width.

If this constraint cannot be met, .~F_dbl_rep will be 0.


5. TDF Interface tokens

A very few specifically named tokens are referred to within the TDF specification, which are required to complete the ability to use certain TDF constructs. Responsibility for providing appropriate definitions for these tokens is indicated with the specifications below.

Similarly, a few tokens are specified within the TDF Diagnostic Specification.

5.1. Exception handling

5.1.1. ~Throw

	n:	NAT
		-> EXP BOTTOM
The EXP e defined as the body of this token will be evaluated on occurrence of any error whose ERROR_TREATMENT is trap. The type of error can be determined within e from the NAT n, which will be error_val(ec) for some ERROR_CODE ec. The token definition body e will typically consist of a long_jump to some previously set exception handler.

Exception handling using trap and ~Throw will usually be determined by producers for languages that specify their own exception handling semantics. Responsibility for the ~Throw token definition will therefore normally rest with producers, by including this token within the producer specific LPI.
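
A producer whose language maps exceptions onto setjmp/longjmp might define ~Throw along the lines of the following C analogue (a hedged sketch; the handler variable and the absence of any error decoding are illustrative only):

	#include <setjmp.h>

	/* Illustrative analogue of a ~Throw definition body: long_jump to a
	 * previously installed handler, passing the error value through. */
	static jmp_buf current_handler;   /* installed by the exception handling code */

	void throw_error(int error_val)
	{
		longjmp(current_handler, error_val);
	}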

5.1.2. ~Set_signal_handler

		-> EXP OFFSET (locals_alignment, locals_alignment)
~Set_signal_handler must be applied before any use of the ERROR_TREATMENT trap, to indicate the need for exception trapping. Responsibility for the ~Set_signal_handler token definition will rest with installers. Responsibility for applying it will normally rest with producers.

The resulting offset value will contain the amount of space beyond any stack limit, which must be reserved for use when handling a stack_overflow trap raised by exceeding that limit.

5.1.3. ~Sync_handler

		-> EXP TOP
~Sync_handler delays subsequent processing until any pending exceptions have been raised, as necessary to synchronise exception handler modification. It must be applied immediately prior to any action that modifies the effect of ~Throw, such as assignment to a variable holding an exception handler as a long_jump destination. Responsibility for the ~Sync_handler token definition will rest with installers. Responsibility for applying it will normally rest with producers.

5.2. TDF Diagnostic Specification

The TDF Diagnostic Specification is a separate document which describes an extension to TDF, optionally used to provide program diagnostic information that can be transformed by installers to the form required by popular platform-specific debuggers. This extension cannot be considered fully developed and is therefore not included as part of standard TDF. Its use for other than DRA's C producer has not been considered.

5.2.1. ~exp_to_source, ~diag_id_scope, ~diag_type_scope, ~diag_tag_scope

	bdy:	EXP
	... :	 ...
		-> EXP
Each of these four tokens has several arguments of which the first, bdy, is an EXP. In each case the default definition body, when no diagnostic information is required, is simply bdy. Note that this description is quite sufficient to enable installers to ignore any diagnostic information that may be included in produced TDF, without needing any further knowledge of the TDF Diagnostic Specification.

5.3. Accessing variable parameter lists

Installers should provide token definitions for the tokens listed in this section.

5.3.1. ~va_list

		-> SHAPE
This is the SHAPE of a variable capable of holding state information used for stepping through the anonymous parameters of a procedure created by make_proc.

5.3.2. ~__va_start

	p:	EXP POINTER var_param_alignment
		-> EXP ~va_list
If t is the TAG introduced by var_intro OPTION(TAGACC) in make_proc, then the token application ~__va_start(obtain_tag(t)) will provide the initial value for a local variable to be used for stepping through the anonymous parameters of the procedure, starting with the first actual parameter (if any) that does not have a corresponding entry in the make_proc params_intro list.

5.3.3. ~va_arg

	v:	EXP POINTER (alignment(~va_list))
	s:	SHAPE
		-> EXP s
If v is the variable initialised by ~__va_start (see above), then successive token applications ~va_arg(v,s) will deliver the anonymous parameter values in turn. The successive SHAPEs s must be the appropriate SHAPEs for the successive parameters.

5.3.4. ~va_end

	v:	EXP POINTER (alignment(~va_list))
		-> EXP TOP
If v is a variable initialised by ~__va_start, the token application ~va_end(v) indicates that no further use will be made of v.
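
The intended pattern mirrors the standard <stdarg.h> idiom in C (an illustrative sketch of the correspondence, not a token definition):

	#include <stdarg.h>

	/* Sum of 'count' anonymous int parameters, using the C analogues of
	 * ~va_list (va_list), ~__va_start (va_start), ~va_arg and ~va_end. */
	int sum_ints(int count, ...)
	{
		va_list ap;
		int total = 0;
		va_start(ap, count);
		for (int i = 0; i < count; i++)
			total += va_arg(ap, int);
		va_end(ap);
		return total;
	}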

5.3.5. ~next_caller_offset

	o1:	EXP OFFSET (fa,parameter_alignment(s1))
	s1:	SHAPE
	s2:	SHAPE
		-> EXP OFFSET (fa,parameter_alignment(s2))
~next_caller_offset is used to provide access to successive elements of the caller_params of an apply_general_proc, by delivering successive OFFSETs of their positions relative to the environment pointer created by that procedure application. Both the apply_general_proc and associated make_general_proc will include PROCPROPS var_callers.

o1 will be the OFFSET for a caller_params element of SHAPE s1, and will be derived either from env_offset for a TAG introduced by caller_intro of the make_general_proc , or from a previous application of ~next_caller_offset. s2 will be the SHAPE of the subsequent caller_params element, whose OFFSET is delivered. fa will include the set union of ALIGNMENTs appropriate to the make_general_proc (as specified by current_env).

5.3.6. ~next_callee_offset

	o1:	EXP OFFSET (fa,parameter_alignment(s1))
	s1:	SHAPE
	s2:	SHAPE
		-> EXP OFFSET (fa,parameter_alignment(s2))
~next_callee_offset is used to provide access to successive elements of the CALLEES of an apply_general_proc or tail_call, by delivering successive OFFSETs of their positions relative to the environment pointer created by that procedure application. Both the procedure application and associated make_general_proc will include PROCPROPS var_callees.

o1 will be the OFFSET for a CALLEES element of SHAPE s1, and will be derived either from env_offset for a TAG introduced by callee_intro of the make_general_proc, or from a previous application of ~next_callee_offset. s2 will be the SHAPE of the subsequent CALLEES element, whose OFFSET is delivered. fa will include the set union of ALIGNMENTs appropriate to the make_general_proc (as specified by current_env ).


6. Language Programming Interfaces

A Language Programming Interface (LPI) is here defined to mean a set of tokens, usually specific to a particular producer, which will encapsulate language features at a higher level than basic TDF constructs, more convenient for the producer to produce.

Responsibility for the specification of individual LPIs lies with the appropriate producer itself. Before an application can be installed on some target platform, the appropriate LPI token definitions must have been built for that platform. In this sense, the LPI can be considered as a primitive API, which is discussed in section 7.

The process by which the LPI token definition library or capsule is generated for any specific platform will vary according to the LPI, and responsibility for defining that process will also lie with the appropriate producer. Some LPIs, such as that associated with DRA's C producer, can be fully defined by architecture neutral TDF, using the tokens specified in sections 3 and 4 to encapsulate any target dependencies. When that is the case, the generation process can be fully automated. For other LPIs the process may be much less automated. In some cases where the source language implies a complex run-time system, this might even require a small amount of new code to be written for each platform.

Generally, the individual LPI tokens do not need to be specified in the token registry, provided they follow a registered naming scheme to ensure uniqueness (see section 2). In exceptional circumstances it may be necessary for some TDF tool to recognise individual LPI tokens explicitly by name. This will be the case when experimenting with potential extensions to TDF, in the field of parallelism for example. In other cases a TDF installer or other tool may recognise an LPI token by name rather than its definition by choice, for some unspecified advantage. We make a pragmatic choice in such cases whether to include such token specifications in the token registry. For widely used producers, we can assume availability of the LPI token specifications, or standard definitions, separately from the token register, but we should expect any such tokens to be specified within the register for all cases where significant advantage could be taken by an installer only if it recognises the token by name.

6.1. The DRA C LPI

DRA's C producer LPI is defined by an architecture neutral token definition capsule provided with the producer. Target specific detail is included only by use of the target dependency tokens and C mapping tokens specified in sections 3 and 4.1 respectively. Target specific versions of this capsule are obtained by transformation, using the `preprocessing' action of the TDF tool tnc, with definitions of the target dependency and C mapping tokens that are provided with the target installer. No special treatment is required for any of the C LPI tokens, though translation time can be slightly improved in a few cases if the names are recognised and standard token definition exercised explicitly within some installers.

The DRA C LPI does not include standard library features, for which the C language requires header files. The standard C library is one example of an API, discussed in section 7.

6.2. The DRA C++ LPI

The DRA C++ LPI extends the DRA C LPI, adding tokens for target specific C++ features not found in C. Again, standard library features are treated as an API.

6.3. The Etnoteam Fortran LPI

The details in this subsection are provisional, subject to confirmation of argument and result SORTs, and development of model token definitions.

The following tokens are named here in case any installers may be able to produce better code than could be achieved by normal token expansion. In particular, some installers may be able to inline standard function calls.

  • .Et~SQRT: square root of any floating variety, including complex.
  • .Et~EXP: exponential (e ** x) of any floating variety, including complex.
  • .Et~LOG: (natural) logarithm of any floating variety, including complex.
  • .Et~LOG_10: base 10 logarithm of any floating variety, including complex.
  • .Et~LOG_2: base 2 logarithm of any floating variety, including complex.
  • .Et~SIN: sine of any floating variety, including complex.
  • .Et~COS: cosine of any floating variety, including complex.
  • .Et~TAN: tangent of any floating variety, including complex.
  • .Et~ASIN: inverse sine of any floating variety, including complex.
  • .Et~ACOS: inverse cosine of any floating variety, including complex.
  • .Et~ATAN: inverse (one argument) tangent of any floating variety, including complex.
  • .Et~ATAN2: inverse (two arguments) tangent of any floating variety, excluding complex.
  • .Et~SINH: hyperbolic sine of any floating variety, including complex.
  • .Et~COSH: hyperbolic cosine of any floating variety, including complex.
  • .Et~TANH: hyperbolic tangent of any floating variety, including complex.
  • .Et~ASINH: inverse hyperbolic sine of any floating variety, including complex.
  • .Et~ACOSH: inverse hyperbolic cosine of any floating variety, including complex.
  • .Et~ATANH: inverse hyperbolic tangent of any floating variety, including complex.
  • .Et~MOD: floating point remainder of any floating variety, excluding complex.


7. Application Programming Interfaces

Application Programming Interfaces are typically specified with a C mapping, which defines the required contents for C header files which a portable C program must include by name to gain access to target specific implementations of an API library. The TDF approach to API specification includes using a #pragma token syntax within architecture neutral C header files, such that all implementation dependencies are encapsulated by API specific tokens. These API tokens are the TDF representation of the API. Both the API library and API token definitions are required before a TDF program using the API can be installed on any particular platform.

Platform specific definitions for API tokens are produced automatically, with few exceptions, for any platform with a conformant implementation of the API. This is achieved by a token library building process which analyses the architecture neutral header files for the API concerned, together with the platform specific header files that provide normal (non-TDF) C access to the API. The few exceptions occur where the platform specific header files have been written to make use of specific C compiler built-in features, typically recognised by identifiers with a prefix such as `__builtin_'. Such cases are very likely to require explicit recognition of the corresponding token name in TDF installers.

Generally, API token names and specifications are not detailed in this token register. The token specifications are clearly dependent on the associated API specifications. Authority for controlling the actual API token names, and the relationship between API tokens and the various API standardisation authorities, remain separate subjects of discussion.

Names and specifications are given or implied below for those API tokens which frequently require built-in support from installers, and for other cases where an installer may be able to produce better code than could be achieved by normal token expansion, for example by inlining standard function calls.

7.1. ANSI C standard functions

The set of tokens implied below all have the form:

7.1.1. ansi.header.function

	... :	 ...
		-> EXP
Tokens are defined for all cases where header is ctype or string or math or stdlib, and function is the name of a non-ellipsis function specified in the ANSI C standard library, declared within the corresponding header <header.h>. (Note that ellipsis functions, such as printf, cannot be represented as tokens since they may take a variable number of arguments.)

These tokens have arguments all of SORT EXP, whose number and shape, and token result shape, all correspond to the implementation shape of the named ANSI C standard library function parameters and result. For the few cases where the function is specified not to return (e.g. ansi.stdlib.abort), the result shape may be either TOP or BOTTOM.

7.2. Common exceptional cases

7.2.1. ansi.setjmp.setjmp

	jb:	EXP
		-> EXP
ansi.setjmp.setjmp is a token which has the semantics and argument and result implementation shapes corresponding to the ANSI C macro setjmp declared within <setjmp.h>.

7.2.2. ansi.setjmp.longjmp

	jb:	EXP
	v:	EXP
		-> EXP
ansi.setjmp.longjmp is a token which has the semantics and argument implementation shapes corresponding to the ANSI C macro longjmp declared within <setjmp.h>. The result shape may be either TOP or BOTTOM.
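
The semantics being tokenised are those of the usual C pattern (illustrative usage only):

	#include <setjmp.h>
	#include <stdio.h>

	static jmp_buf jb;

	static void fail(void)
	{
		longjmp(jb, 1);            /* ansi.setjmp.longjmp semantics */
	}

	int main(void)
	{
		if (setjmp(jb) == 0)       /* ansi.setjmp.setjmp semantics */
			fail();
		else
			printf("recovered\n");
		return 0;
	}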

7.2.3. ~alloca

	i:	EXP
		-> EXP
~alloca is a token which has the semantics and argument and result implementation shapes corresponding to the BSD specified function alloca.

7.2.4. ansi.stdarg.va_list, ansi.stdarg.__va_start, ansi.stdarg.va_arg, ansi.stdarg.va_end

These four tokens are identical to the Interface Tokens ~va_list, ~__va_start, ~va_arg and ~va_end respectively.




TDF Specification, Issue 4.0

January 1998



Preface
1 - Introduction
2 - Structure of TDF
3 - Describing the Structure
4 - Installer Behaviour
5 - Specification of TDF Constructs
6 - Supplementary UNIT
7 - Notes
8 - The bit encoding of TDF
Index

Preface

This is Issue 4.0 of the TDF Specification. TDF version 4.0 is not bitwise compatible with earlier versions.

Major changes from Issue 3.1

A new SORT for STRING is introduced having the same relationship to TDFSTRING as BOOL has to TDFBOOL. This is used in place of TDFSTRING in various 3.1 constructions.

These STRINGs are also used in modified tag and token definitions and declarations to provide extra consistency checks in the use of these tags or tokens, and also may be used as names external to the TDF system. For example, the signature of make_id_tagdec is now:

	t_intro:	TDFINT
	acc:		OPTION(ACCESS)
	signature:	OPTION(STRING)
	x:		SHAPE
		   -> TAGDEC
A new EXP constructor, initial_value, is introduced to allow dynamic initialisation of global tags.

These changes arise mainly from requirements of C++, but are clearly applicable elsewhere.

Magic numbers are introduced at the start of files containing TDF bitstream information.

The version 3.1 constructor set_stack_limit has had to be modified in the light of experience with platforms with ABIs which require upward-growing stacks or use disjoint frame stacks and alloca stacks.

Various other minor changes have been made to elucidate some rather pathological cases, e.g. make_nof must have at least one element. Also there are some cosmetic changes to improve consistency, e.g. the order of the arguments of token is now consistent with token_definition.

Notes on Revision 1

This Revision 1 of Issue 4.0 incorporates a number of corrections which have arisen where inconsistency or impracticability became evident when validating the OSF Research Institute's AVS (ANDF Validation Suite). Apart from minor textual corrections, the changes are:

  • Use of installer-defined TOKENs for accessing variable parameter lists - see the companion document TDF Token Register (Revision 1).

  • Tolerance of overflow necessary to allow simple implementation of complex multiply and divide.

  • Modified constraints on the arguments of shift_left, shift_right, rotate_left, rotate_right, make_dynamic_callees, make_var_tagdec, make_tokdec, make_tokdef, and user_info.

  • Modified constant evaluation constraints with respect to env_size and env_offset.

  • chain_extern no longer supported.

Changes under consideration but not included in Issue 4.0:

  • Tokenisation of the various LIST constructs.

  • Inclusion of the specification of run-time diagnostic information in the main specification. This is currently given as an appendix, as it is less mature than the main specification.




TDF Specification, Issue 4.0

January 1998



7.1 - Binding
7.2 - Character codes
7.3 - Constant evaluation
7.4 - Division and modulus
7.5 - Equality of EXPs
7.6 - Equality of SHAPEs
7.7 - Equality of ALIGNMENTS
7.8 - Exceptions and jumps
7.9 - Procedures
7.10 - Frames
7.11 - Lifetimes
7.12 - Alloca
7.13 - Memory Model
7.13.1 - Simple model
7.13.2 - Comparison of pointers and offsets
7.13.3 - Circular types in languages
7.13.4 - Special alignments
7.13.5 - Atomic assignment
7.14 - Order of evaluation
7.15 - Original pointers
7.16 - Overlapping
7.17 - Incomplete assignment
7.18 - Representing integers
7.19 - Overflow and Integers
7.20 - Representing floating point
7.21 - Floating point errors
7.22 - Rounding and floating point
7.23 - Floating point accuracy
7.24 - Representing bitfields
7.25 - Permitted limits
7.26 - Least Upper Bound
7.27 - Read-only areas
7.28 - Tag and Token signatures
7.29 - Dynamic initialisation

7. Notes


7.1. Binding

The following constructions introduce TAGs: identify, variable, make_proc, make_general_proc, make_id_tagdec, make_var_tagdec, common_tagdec.

During the evaluation of identify and variable a value, v, is produced which is bound to the TAG during the evaluation of an EXP or EXPs. The TAG is "in scope" for these EXPs. This means that in the EXP a use of the TAG is permissible and will refer to the declaration.

The make_proc and make_general_proc constructions introduce TAGs which are bound to the actual parameters on each call of the procedure. These TAGs are "in scope" for the body of the procedure.

If a make_proc or make_general_proc construction occurs in the body of another make_proc or make_general_proc, the TAGs of the inner procedure are not in scope in the outer procedure, nor are the TAGs of the outer in scope in the inner.

The apply_general_proc construction permits the introduction of TAGs whose scope is the postlude argument. These are bound to the values of caller parameters after the evaluation of the body of the procedure.

The make_id_tagdec, make_var_tagdec and common_tagdec constructions introduce TAGs which are "in scope" throughout all the tagdef UNITs. These TAGs may have values defined for them in the tagdef UNITs, or values may be supplied by linking.

The following constructions introduce LABELs: conditional, repeat, labelled.

The constructions themselves define EXPs for which these LABELs are "in scope". This means that in the EXPs a use of the LABEL is permissible and will refer to the introducing construction.

TAGs, LABELs and TOKENs (as TOKEN parameters) introduced in the body of a TOKEN definition are systematically renamed in their scope each time the TOKEN definition is applied. The scope will be completely included by the TOKEN definition.

Each of the values introduced in a UNIT will be named by a different TAG, and the labelling constructions will use different labels, so no visibility rules are needed. The set of TAGs and LABELs used in a simple UNIT are considered separately from those in another simple UNIT, so no question of visibility arises. The compound and link UNITs provide a method of relating the items in one simple UNIT to those in another, but this is through the intermediary of another set of TAGs and TOKENs at the CAPSULE level.


7.2. Character codes

TDF does not have a concept of characters. It transmits integers of various sizes. So if a producer wishes to communicate characters to an installer, it will usually have to do so by encoding them in some way as integers.

An ANSI C producer sending a TDF program to a set of normal C environments may well choose to encode its characters using the ASCII codes, an EBCDIC based producer transmitting to a known set of EBCDIC environments might use the code directly, and a wide character producer might likewise choose a specific encoding. For some programs this way of proceeding is necessary, because the codes are used both to represent characters and for arithmetic, so the particular encoding is enforced. In these cases it will not be possible to translate the characters to another encoding because the character codes will be used in the TDF as ordinary integers, which must not be translated.

Some producers may wish to transmit true characters, in the sense that something is needed to represent particular printing shapes and nothing else. These representations will have to be transformed into the correct character encoding on the target machine.

Probably the best way to do this is to use TOKENs. A fixed representation for the printing marks could be chosen in terms of integers and TOKENs introduced to represent the translation from these integers to local character codes, and from strings of integers to strings of local character codes. These definitions could be bound on the target machine and the installer should be capable of translating these constructions into efficient machine code. To make this a standard, unique TOKENs should be used.

But this raises the question, who chooses the fixed representation and the unique TOKENs and their specification? Clearly TDF provides a mechanism for performing the standardisation without itself defining a standard.

Here TDF gives rise to the need for extra standards, especially in the specification of globally named unique TOKENs.


7.3. Constant evaluation

Some constructions require an EXP argument which is "constant at install time". For an EXP to satisfy this condition it must be constructed according to the following rules after substitution of token definitions and selection of exp_cond branches.

If it contains obtain_tag then the tag will be introduced within the EXP, or defined with make_id_tagdef within the current capsule.

It may not contain any of the following constructions: apply_proc, apply_general_proc, assign_with_mode, contents_with_mode, continue, current_env, error_jump, goto_local_lv, make_local_lv, move_some, repeat, round_as_state.

Unless it is the EXP argument of a TAGDEF, a "constant at install time" may not contain env_offset or env_size.

Any use of contents or assign will be applied only to POINTERs derived from variable constructions.

If it contains labelled there will only be jumps to the LABELs from within starter, not from within any of the places.

Any use of obtain_tag defined with make_id_tagdef will occur after the end of the make_id_tagdef.

Note specifically that a constant EXP forming the defining value of a TAGDEF construct may contain env_offset and/or env_size.


7.4. Division and modulus

Two classes of division (D) and remainder (M) construct are defined. The two classes have the same definition if both operands have the same sign. Neither is defined if the second argument is zero.

Class 1:

	p D1 q = n
where:
	p = n*q + (p M1 q)
	sign(p M1 q) = sign(q)
	0 <= |p M1 q| < |q|

Class 2:

	p D2 q = n
where:
	p = n*q + (p M2 q)
	sign(p M2 q) = sign(p)
	0 <= |p M2 q| < |q|
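
In C terms, where / and % truncate towards zero and so behave like class 2 whenever the operand signs differ, the two classes can be computed as follows (a minimal sketch, assuming a non-zero divisor and no overflow):

	/* Class 2: the remainder takes the sign of the dividend p
	 * (this is what C's / and % deliver directly). */
	int div2(int p, int q) { return p / q; }
	int mod2(int p, int q) { return p % q; }

	/* Class 1: the remainder takes the sign of the divisor q. */
	int div1(int p, int q)
	{
		int n = p / q;
		if ((p % q != 0) && ((p < 0) != (q < 0)))
			n--;                  /* round towards minus infinity */
		return n;
	}
	int mod1(int p, int q) { return p - div1(p, q) * q; }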


7.5. Equality of EXPs

A definition of equality of EXPs would be a considerable part of a formal specification of TDF, and is not given here.


7.6. Equality of SHAPEs

Equality of SHAPEs is defined recursively:
  • Two SHAPEs are equal if they are both BOTTOM, or both TOP or both PROC.

  • Two SHAPEs are equal if they are both integer or both floating, or both bitfield, and the corresponding parameters are equal.

  • Two SHAPEs are equal if they are both NOF, the numbers of items are equal and the SHAPE parameters are equal.

  • Two OFFSETs or two POINTERs are equal if their ALIGNMENT parameters are pairwise equal.

  • Two COMPOUNDs are equal if their OFFSET EXPs are equal.

  • No other pairs of SHAPEs are equal.


7.7. Equality of ALIGNMENTs

Two ALIGNMENTs are equal if and only if they are equal sets.


7.8. Exceptions and jumps

TDF allows simply for labels and jumps within a procedure, by means of the conditional, labelled and repeat constructions, and the goto, case and various test constructions. But there are two more complex jumping situations.

First there is the jump, known to stay within a procedure, but to a computed destination. Many languages have discouraged this kind of construction, but it is still available in Cobol (implicitly), and it can be used to provide other facilities (see below). TDF allows it by means of local label values, which have SHAPE POINTER({code}). TDF is arranged so that this can usually be implemented as the address of the label. The goto_local_lv construction just jumps to the label.

The other kind of construction needed is the jump out of a procedure to a label which is still active, restoring the environment of the destination procedure: the long jump. Related to this is the notion of exception. Unfortunately long jumps and exceptions do not co-exist well. Exceptions are commonly organised so that any necessary destruction operations are performed as the stack of frames is traversed; long jumps commonly go directly to the destination. TDF must provide some facility which can express both of these concepts. Furthermore exceptions come in several different versions, according to how the exception handlers are discriminated and whether exception handling is initiated if there is no handler which will catch the exception.

Fortunately the normal implementations of these concepts provide a suggestion as to how they can be introduced into TDF. The local label value provides the destination address, the environment (produced by current_env) provides the stack frame for the destination, and the stack re-setting needed by the local label jumps themselves provides the necessary stack information. If more information is needed, such as which exception handlers are active, this can be created by producing the appropriate TDF.

So TDF takes the long jump as the basic construction, and its parameters are a local label value and an environment. Everything else can be built in terms of these.

The TDF arithmetic constructions allows one to specify a LABEL as destination if the result of the operation is exceptional. This is sufficient for the kind of explicit exception handling found in C++ and, in principle, could also be used to implement the kind of "automatic" exception detection and handling found in Ada, for example.

However many architectures have facilities for automatically trapping on exceptions without explicit testing. To take advantage of this, there is a trap ERROR_TREATMENT with associated ERROR_CODEs. The action taken on an exception with trap ERROR_TREATMENT will be to "throw" the ERROR_CODE. Since each language has its own idea of how to interpret the ERROR_CODE and handle exceptions, the onus is on the producer writer to describe how to throw an ERROR_CODE.

The producer writer must give a definition of a TOKEN ~Throw : NAT -> EXP where the NAT will be the error_val of some ERROR_CODE. The expansion of this token will be consistent with the interpretation of the relevant ERROR_CODE and the method of handling exceptions. Usually this will consist of decoding the ERROR_CODE and doing a long_jump on some globals set up by the procedure in which the exception handling takes place.

The translator writer will provide a parameterless EXP TOKEN, ~Set_signal_handler. This TOKEN will use ~Throw and must be applied before any possible exceptions. This implies that the definition of both ~Throw and ~Set_signal_handler must be bound before translation of any CAPSULE which uses them, presumably by linking with some TDF libraries.

These tokens are specified in more detail in the companion document, TDF Token Register.


7.9. Procedures

The var_intro of a make_proc, if present, may be used under one of two different circumstances. In one circumstance, the POINTER TAG provided by the var_intro is used to access the actual var_param of an apply_proc. If this is the case, all uses of apply_proc which have the effect of calling this procedure will have the var_param option present, and they will all have precisely the same number of params as params_intro in the make_proc. The body of the make_proc can access elements of the var_param by using OFFSET arithmetic relative to the POINTER TAG. This provides a method of supplying a variable number of parameters, by composing them into a compound value which is supplied as the var_param.

However, this has proved to be unsatisfactory for the implementation of variable number of parameters in C - one cannot choose the POINTER alignment of the TAG a priori in non-prototype calls.

An alternative circumstance for using var_intro is where all uses of apply_proc which have the effect of calling this procedure may have more params present than the number of params_intro, and none of them will have their var_param option present. The body of the make_proc can access the additional params by using installer-defined TOKENs specified in the companion document TDF Token Register, analogous to the use of variable argument lists in C. A local variable v of shape ~va_list must be initialised to ~__va_start(p), where p is the POINTER obtained from the var_intro. Successive elements of the params list can then be obtained by successive applications of ~va_arg(v,s) where s is the SHAPE of element obtained. Finally, ~va_end(v) completes the use of v.

The definition of caller parameters in general procedures addresses this difficulty in a different way, by describing the layout of caller parameters qualified by PROCPROPS var_callers. This allows both the call and the body to have closely associated views of the OFFSETs within a parameter set, regardless of whether or not the particular parameter has been named. The installer-defined TOKEN ~next_caller_offset provides access to successive caller parameters, by using OFFSETs relative to the current frame pointer current_env, adjusting for any differences there may be between the closely associated views. The caller_intro list of the make_general_proc must not be empty, so that the sequence of OFFSETs can start with an appropriate env_offset. Similar considerations apply to accessing within the callee parameters, using the installer-defined TOKEN ~next_callee_offset.

All uses of return, untidy_return and tail_call in a procedure will return values of the same SHAPE, and this will be the result_shape specified in all uses of apply_proc or apply_general_proc calling the procedure.

The use of untidy_return gives a generalisation of local_alloc. It extends the validity of pointers allocated by local_alloc within the immediately enclosing procedure into the calling procedure. The original space of these pointers may be invalidated by local_free just as if it had been generated by local_alloc in the calling procedure.

The PROCPROPS check_stack may be used to check that the limit set by set_stack_limit is not exceeded by the allocation of the static locals of a procedure body to be obeyed. If it is exceeded then the producer-defined TOKEN ~Throw: NAT -> EXP will be invoked as ~Throw(error_val(stack_overflow)). Note that this will not include any space generated by local_alloc; an explicit test is required to check these.

Any SHAPE is permitted as the result_shape in an apply_proc or apply_general_proc.


7.10. Frames

TDF states that while a particular procedure activation is current, it is possible to create a POINTER, by using current_env, which gives access to all the declared variables and identifications of the activation which are alive and which have been marked as visible. The construction env_offset gives the OFFSET of one of these relative to such a POINTER. These constructions may serve for several purposes.

One significant purpose is to implement such languages as Pascal which have procedures declared inside other procedures. One way of implementing this is by means of a "display", that is, a tuple of frame pointers of active procedures.

Another purpose is to find active variables satisfying some criterion in all the procedure activations. This is commonly required for garbage collection. TDF does not force the installer to implement a frame pointer register, since some machines do not work best in this way. Instead, a frame pointer is created only if required by current_env. The implication of this is that this sort of garbage collection needs the collaboration of the producer to create TDF which makes the correct calls on current_env and env_offset and place suitable values in known positions.

Programs compiled especially to provide good diagnostic information can also use these operations.

In general any program which wishes to manipulate the frames of procedures other than the current one can use current_env and env_offset to do so.

A frame consists of three components, the caller parameters, callee parameters and locals of the procedure involved. Since each component may have different internal methods of access within the frame, each has a different special frame alignment associated with pointers within them. These are callers_alignment, callees_alignment and locals_alignment. The POINTER produced by current_env will be some union of these special alignments depending on how the procedure was defined.

Each of these frame alignments is considered to contain any ALIGNMENT produced by alignment from any SHAPE. Note that this does not say that they are the set union of all such ALIGNMENTs. This is because the interpretation of pointer and offset operations (notably add_to_pointer) may be different depending on the implementation of the frames; they may involve extra indirections.

Accordingly, because of the constraints on add_to_ptr, an OFFSET produced by env_offset can only be added to a POINTER produced by current_env. It is a further constraint that such an OFFSET will only be added to a POINTER produced from current_env used on the procedure which declared the TAG.


7.11. Lifetimes

TAGs are bound to values during the evaluation of EXPs, which are specified by the construction which introduces the TAG. The evaluation of these EXPs is called the lifetime of the activation of the TAG.

Note that lifetime is a different concept from that of scope. For example, if the EXP contains the application of a procedure, the evaluation of the body of the procedure is within the lifetime of the TAG, but the TAG will not be in scope.

A similar concept applies to LABELs.


7.12. Alloca

The constructions involving alloca (last_local, local_alloc, local_free, local_free_all) as well as the untidy_return construction imply a stack-like implementation which is related to procedure calls. They may be implemented using the same stack as the procedure frames, if there is such a stack, or it may be more convenient to implement them separately. However note that if the alloca mechanism is implemented as a stack, this may be an upward or a downward growing stack.

The state of this notional stack is referred to here as the alloca state. The construction local_alloc creates a new space on the alloca stack, the size of this space being given by an OFFSET. In the special case that this OFFSET is zero, local_alloc in effect gives the current alloca state (normally a POINTER to the top of the stack).

A use of local_free_all returns the alloca state to what it was on entry to the current procedure.

The construction last_local gives a POINTER to the top item on the stack, but it is necessary to give the size of this (as an OFFSET) because this cannot be deduced if the stack is upward growing. This top item will be the whole of an item previously allocated with local_alloc.

The construction local_free returns the state of the alloca machine to what it was when its parameter POINTER was allocated. The OFFSET parameter will be the same value as that with which the POINTER was allocated.

The ALIGNMENT of the POINTER delivered by local_alloc is alloca_alignment. This shall include the set union of all the ALIGNMENTs which can be produced by alignment from any SHAPE.

The use of alloca_alignment arises so that the alloca stack can hold any kind of value. The sizes of spaces allocated must be rounded up to the appropriate ALIGNMENT. Since this includes all value ALIGNMENTs a value of any ALIGNMENT can be assigned into this space. Note that there is no necessary relation with the special frame alignments (see section 7.10) though they must both contain all the ALIGNMENTs which can be produced by alignment from any SHAPE.

Stack pushing is local_alloc. Stack popping can be performed by use of last_local and local_free. Remembering the state of the alloca stack and returning to it can be performed by using local_alloc with a zero OFFSET and local_free.

Note that stack pushing can also be achieved by the use of a procedure call with untidy_return.

A transfer of control to a local label by means of goto, goto_local_lv, any test construction or any error_jump will not change the alloca stack.

If an installer implements identify and variable by creating space on a stack when they come into existence, rather than doing the allocation for identify and variable at the start of a procedure activation, then it may have to consider making the alloca stack into a second stack.


7.13. Memory Model

The layout of data in memory is entirely determined by the calculation of OFFSETs relative to POINTERs. That is, it is determined by OFFSET arithmetic and the add_to_ptr construction.

A POINTER is parameterised by the ALIGNMENT of the data indicated. An ALIGNMENT is a set of all the different kinds of basic value which can be indicated by a POINTER. That is, it is a set chosen from all VARIETYs, all FLOATING_VARIETYs, all BITFIELD_VARIETYs, proc, code, pointer and offset. There are also three special ALIGNMENTs, frame_alignment, alloca_alignment and var_param_alignment.

The parameter of a POINTER will not consist entirely of BITFIELD_VARIETYs.

The implication of this is that the ALIGNMENT of all procedures is the same, the ALIGNMENT of all POINTERs is the same and the ALIGNMENT of all OFFSETs is the same.

At present this corresponds to the state of affairs for all machines. But it is certainly possible that, for example, 64-bit pointers might be aligned on 64-bit boundaries while 32-bit pointers are aligned on 32-bit boundaries. In this case it will become necessary to add different kinds of pointer to TDF. This will not present a problem, because, to use such pointers, similar changes will have to be made in languages to distinguish the kinds of pointer if they are to be mixed.

The difference between two POINTERs is measured by an OFFSET. Hence an OFFSET is parameterised by two ALIGNMENTs, that of the starting POINTER and that of the end POINTER. The ALIGNMENT set of the first must include the ALIGNMENT set of the second.

The parameters of an OFFSET may consist entirely of BITFIELD_VARIETYs.

The operations on OFFSETs are subject to various constraints on ALIGNMENTs. It is important not to read into offset arithmetic what is not there. Accordingly some rules of the algebra of OFFSETs are given below.

  • offset_add is associative.

  • offset_mult corresponds to repeated application of offset_add.

  • offset_max is commutative, associative and idempotent.

  • offset_add distributes over offset_max where they form legal expressions.

  • offset_test(prob, >=, a, b) continues if offset_max(a, b) = a.

7.13.1. Simple model

An example of the representation of OFFSET arithmetic is given below. This is not a definition, but only an example. In order to make this clear a machine with bit addressing is hypothesized. This machine is referred to as the simple model.

In this machine ALIGNMENTs will be represented by the number by which the bit address of data must be divisible. For example, 8-bit bytes might have an ALIGNMENT of 8, longs of 32 and doubles of 64. OFFSETs will be represented by the displacement in bits from a POINTER. POINTERs will be represented by the bit address of the data. Only one memory space will exist. Then in this example a possible conforming implementation would be as follows.

  • add_to_ptr is addition.

  • offset_add is addition.

  • offset_div and offset_div_by_int are exact division.

  • offset_max is maximum.

  • offset_mult is multiply.

  • offset_negate is negate.

  • offset_pad(a, x) is ((x + a - 1) / a) * a

  • offset_subtract is subtract.

  • offset_test is integer_test.

  • offset_zero is 0.

  • shape_offset(s) is the minimum number of bits needed to be moved to move a value of SHAPE s.

Note that these operations only exist where the constraints on the parameters are satisfied. Elsewhere the operations are undefined.
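
For illustration, the simple model can be transcribed almost directly into C, with POINTERs as bit addresses, OFFSETs as bit displacements and ALIGNMENTs as the divisors of bit addresses. This is a sketch of the example representation only; the names are invented and nothing here is part of the definition of TDF.

	#include <stdint.h>

	typedef uint64_t pointer_t;   /* bit address of the data           */
	typedef int64_t  offset_t;    /* displacement in bits              */
	typedef int64_t  align_t;     /* divisor which a bit address obeys */

	static pointer_t add_to_ptr_m(pointer_t p, offset_t x)      { return p + x; }
	static offset_t  offset_add_m(offset_t x, offset_t y)       { return x + y; }
	static offset_t  offset_subtract_m(offset_t x, offset_t y)  { return x - y; }
	static offset_t  offset_negate_m(offset_t x)                { return -x; }
	static offset_t  offset_mult_m(offset_t x, int64_t n)       { return x * n; }
	static offset_t  offset_max_m(offset_t x, offset_t y)       { return x > y ? x : y; }
	static offset_t  offset_div_by_int_m(offset_t x, int64_t n) { return x / n; } /* exact */

	/* offset_pad(a, x): round x up to the next multiple of the alignment a. */
	static offset_t offset_pad_m(align_t a, offset_t x)
	{
	    return ((x + a - 1) / a) * a;
	}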

All the computations in this representation are obvious, but there is one point to make concerning offset_max, which has the following arguments and result.

	arg1:		EXP OFFSET(x, y)
	arg2:		EXP OFFSET(z, y)
		   -> EXP OFFSET(unite_alignments(x, z), y)
The SHAPEs could have been chosen to be:

	arg1:		EXP OFFSET(x, y)
	arg2:		EXP OFFSET(z, t)
		   -> EXP OFFSET(unite_alignments(x, z),
				 intersect_alignments(y, t))
where unite_alignments is set union and intersect_alignments is set intersection. This would have expressed the most general reality. The representation of unite_alignments(x, z) is the maximum of the representations of x and z in the simple model. Unfortunately the representation of intersect_alignments(y, t) is not the minimum of the representations of y and t. In other words the simple model representation is not a homomorphism if intersect_alignments is used. Because the choice of representation in the installer is an important consideration the actual definition was chosen instead. It seems unlikely that this will affect practical programs significantly.

7.13.2. Comparison of pointers and offsets

Two POINTERs to the same ALIGNMENT, a, are equal if and only if the result of subtract_ptrs applied to them is equal to offset_zero(a).

The comparison of OFFSETs is reduced to the definition of offset_max and the equality of OFFSETs by the note in offset_test.

7.13.3. Circular types in languages

It is assumed that circular types in programming languages will always involve the SHAPEs PROC or POINTER(x) on the circular path in their TDF representation. Since the ALIGNMENT of POINTER is {pointer} and does not involve the ALIGNMENT of the thing pointed at, circular SHAPEs are not needed. The circularity is always broken in ALIGNMENT (or PROC).

7.13.4. Special alignments

There are seven special ALIGNMENTs. One of these is code_alignment, the ALIGNMENT of the POINTER delivered by make_local_lv.

The ALIGNMENT of a parameter of SHAPE s is given by parameter_alignment(s) which will always contain alignment(s).

The other five special ALIGNMENTs are alloca_alignment, callees_alignment, callers_alignment, locals_alignment and var_param_alignment. Each of these contains the set union of all the ALIGNMENTs which can be produced by alignment from any SHAPE. But they need not be equal to that set union, nor need there be any relation between them.

In particular they are not equal (in the sense of Equality of ALIGNMENTs).

Each of these five refers to the alignment of a different component of a frame.

Notice that pointers and offsets such as POINTER(callees_alignment(true)) and OFFSET(callees_alignment(true), x) etc. can have some special representation and that add_to_ptr and offset_add can operate correctly on these representations. However it is necessary that

	alignment(POINTER(A))={pointer}
for any ALIGNMENT A.

7.13.5. Atomic assignment

At least one VARIETY shall exist such that assign and assign_with_mode are atomic operations. This VARIETY shall be specified as part of the installer specification. It shall be capable of representing the numbers 0 to 127.

Note that it is not necessary for this to be the same VARIETY on each machine. Normal practice will be to use a TOKEN for this VARIETY and choose the definition of the TOKEN on the target machine.


7.14. Order of evaluation

The order of evaluation is specified in certain constructions in terms of equivalent effect with a canonical order of evaluation. These constructions are conditional, identify, labelled, repeat, sequence and variable. Let these be called the order-specifying constructions.

The constructions which change control also specify a canonical order. These are apply_proc, apply_general_proc, case, goto, goto_local_lv, long_jump, return, untidy_return, return_to_label, tail_call, the test constructions and all instructions containing the error_jump and trap ERROR_TREATMENTs.

The order of evaluation of the components of other constructions is as follows. The components may be evaluated in any order and with their components - down to the TDF leaf level - interleaved in any order. The constituents of the order-specifying constructions may also be interleaved in any order, but the order of the operations within an order-specifying construction shall be equivalent in effect to a canonical order.

Note that the rule specifying when error_jumps or traps are to be taken (error_jump) relaxes the strict rule that everything has to be "as if" completed by the end of certain constructions. Without this rule pipelines would have to stop at such points, in order to be sure of processing any errors. Since this is not normally needed, it would be an expensive requirement. Hence this rule. However a construction will be required to force errors to be processed in the cases where this is important.


7.15. Original pointers

Certain constructions are specified as producing original pointers. They allocate space to hold values and produce pointers indicating that new space. All other pointer values are derived pointers, which are produced from original pointers by a sequence of add_to_ptr operations. Counting original pointers as being derived from themselves, every pointer is derived from just one original pointer.

A null pointer is counted as an original pointer.

If procedures are called which come from outside the TDF world (such as calloc) it is part of their interface with TDF to state if they produce original pointers, and what is the lifetime of the pointer.

As a special case, original pointers can be produced by using current_env and env_offset (see current_env).

Note that:

	add_to_ptr(p, offset_add(q, r))
is equivalent to:
	add_to_ptr(add_to_ptr(p, q), r)
In the case that p is the result of current_env and q is the result of env_offset:
	add_to_ptr(p, q)
is defined to be an original pointer. For any such expression q will be produced by env_offset applied to a TAG introduced in the procedure in which current_env was used to make p.


7.16. Overlapping

In the case of move_some, or assign or assign_with_mode in which arg2 is a contents or contents_with_mode, it is possible that the source and destination of the transfer might overlap.

In this case, if the operation is move_some or assign_with_mode and the TRANSFER_MODE contains overlap, then the transfer shall be performed correctly, that is, as if the data were copied from the source to an independent place and then to the destination.

In all cases, if the source and destination do not overlap the transfer shall be performed correctly.

Otherwise the effect is undefined.


7.17. Incomplete assignment

If the arg2 component of an assign or assign_with_mode operation is left by means of a jump, the question arises as to what value is in the destination of the transfer.

If the TRANSFER_MODE complete is used, the destination shall be left unchanged if the arg2 component is left by means of a jump. If complete is not used and arg2 is left by a jump, the destination may be affected in any way.


7.18. Representing integers

Integer VARIETYs shall be represented by a range of integers which includes those specified by the given bounds. This representation shall be twos-complement.

If the lower bound of the VARIETY is non-negative, the representing range shall be from 0 to 2^(8n) - 1 for some n. n is called the number of bytes in the representation. The number of bits in the representation is 8n.

If the lower bound of the VARIETY is negative, the representing range shall be from -2^(8n-1) to 2^(8n-1) - 1 for some n. n is called the number of bytes in the representation. The number of bits in the representation is 8n.
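
For example, a hypothetical installer offering only 1, 2, 4 and 8 byte representations might choose the number of bytes, n, as in the following C sketch. The function and the particular set of sizes are assumptions of the sketch and are not required by this specification.

	#include <stdint.h>

	/* Smallest n in {1, 2, 4, 8} whose representing range contains
	   [lower, upper], following the rules above.  Returns 0 if the bounds
	   exceed this hypothetical installer's limits. */
	static unsigned bytes_for_variety(int64_t lower, int64_t upper)
	{
	    static const unsigned sizes[] = { 1, 2, 4, 8 };
	    for (unsigned i = 0; i < 4; i++) {
	        unsigned n = sizes[i];
	        if (lower >= 0) {
	            /* unsigned representation: 0 .. 2^(8n) - 1 */
	            uint64_t max = (n == 8) ? UINT64_MAX
	                                    : ((uint64_t)1 << (8 * n)) - 1;
	            if ((uint64_t)upper <= max)
	                return n;
	        } else {
	            /* signed representation: -2^(8n-1) .. 2^(8n-1) - 1 */
	            int64_t max = (n == 8) ? INT64_MAX
	                                   : ((int64_t)1 << (8 * n - 1)) - 1;
	            if (lower >= -max - 1 && upper <= max)
	                return n;
	        }
	    }
	    return 0;
	}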

Installers may limit the size of VARIETY that they implement. A statement of such limits shall be part of the specification of the installer. In no case may such limits be less than 64 bits, signed or unsigned.

It is intended that, at some future date, no such limit will be allowed.

Operations are performed in the representing VARIETY. If the result of an operation does not lie within the bounds of the stated VARIETY, but does lie in the representation, the value produced in that representation shall be as if the VARIETY had the lower and upper bounds of the representation. The implication of this is usually that a number in a VARIETY is represented by that same number in the representation.

If the bounds of a VARIETY, v, include those of a VARIETY, w, the representing VARIETY for v shall include or be equal to the representing VARIETY for w.

The representations of two VARIETYs of the form var_limits(0, 2^n - 1) and var_limits(-2^(n-1), 2^(n-1) - 1) shall have the same number of bits and the mapping of their ALIGNMENTs into the target alignment shall be the same.


7.19. Overflow and Integers

It is necessary first to define what overflow means for integer operations and second to specify what happens when it occurs. The intention of TDF is to permit the simplest possible implementation of common constructions on all common machines while allowing precise effects to be achieved, if necessary at extra cost.

Integer varieties may be represented in the computer by a range of integers which includes the bounds given for the variety. An arithmetic operation may therefore yield a result which is within the stated variety, or outside the stated variety but inside the range of representing values, or outside that range. Most machines provide instructions to detect the latter case; testing for the second case is possible but a little more costly.

In the first two cases the result is defined to be the value in the representation. Overflow occurs only in the third case.

If the ERROR_TREATMENT is impossible overflow will not occur. If it should happen to do so the effect of the operation is undefined.

If the ERROR_TREATMENT is error_jump a LABEL is provided to jump to if overflow occurs.

If the ERROR_TREATMENT is trap(overflow), a producer-defined TOKEN ~Throw: NAT -> EXP must be provided. On an overflow, the installer will arrange that ~Throw(error_val(overflow)) is evaluated.

The wrap ERROR_TREATMENT is provided so that a useful defined result may be produced in certain cases where it is usually easily available on most machines. This result is available on the assumption that machines use binary arithmetic for integers. This is certainly so at present, and there is no close prospect of other bases being used.

If a precise result is required further arithmetic and testing may be needed which the installer may be able to optimise away if the word lengths happen to suit the problem. In extreme cases it may be necessary to use a larger variety.
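
As an illustration of the result defined by wrap, the following C sketch models plus with the wrap ERROR_TREATMENT on a 32-bit representation; a real installer simply emits the ordinary machine addition. The reinterpretation of the high-order bit assumes a twos-complement representation, as the specification does.

	#include <stdint.h>

	/* plus with the wrap ERROR_TREATMENT on a 32-bit representation:
	   the mathematical sum is reduced modulo 2^32 and reinterpreted as a
	   twos-complement value. */
	static int32_t plus_wrap32(int32_t a, int32_t b)
	{
	    uint32_t r = (uint32_t)a + (uint32_t)b;  /* modulo 2^32 */
	    return (int32_t)r;                       /* twos-complement reinterpretation */
	}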


7.20. Representing floating point

FLOATING_VARIETYs shall be implemented by a representation which has at least the properties specified.

Installers may limit the size of FLOATING_VARIETY which they implement. A statement of such limits shall be part of the specification of an installer.

The limit may also permit or exclude infinities.

Any installer shall implement at least one FLOATING_VARIETY with the following properties (cf. IEEE doubles):

  • mantissa_digs shall not be less than 53.
  • min_exponent shall not be less than 1023.
  • max_exponent shall not be less than 1022.
Operations are performed and overflows detected in the representing FLOATING_VARIETY.

There shall be at least two FLOATING_VARIETYs, one occupying the same number of bytes and having the same alignment as a VARIETY representation, and one occupying twice as many bytes.


7.21. Floating point errors

The only permitted ERROR_TREATMENTs for operations delivering FLOATING_VARIETYs are impossible, error_jump and trap(overflow).

The kinds of floating point error which can occur depend on the machine architecture (especially whether it has IEEE floating point) and on the definitions in the ABI being obeyed.

Possible floating point errors depend on the state of the machine and may include overflow, divide by zero, underflow, invalid operation and inexact. The setting of this state is performed outside TDF (at present).

If an error_jump or trap is taken as the result of a floating point error the operations to test what kind of error it was are outside the TDF definition (at present).


7.22. Rounding and floating point

Each machine has a rounding state which shall be one of to_nearest, toward_larger, toward_smaller, toward_zero. For each operation delivering a FLOATING_VARIETY, except for make_floating, any rounding necessary shall be performed according to the rounding state.


7.23. Floating point accuracy

While it is understood that most implementations will use IEEE floating arithmetic operations, there are machines which use other formats and operations. It is intended that they should not be excluded from having TDF implementations.

For TDF to have reasonably consistent semantics across many platforms, one must have some minimum requirements on the accuracies of the results of the floating point operations defined in TDF. The provisional requirements sketched below would certainly be satisfied by an IEEE implementation.

Let @ be some primitive dyadic arithmetic operator and @' be its TDF floating-point implementation. Let F be some non-complex FLOATING_VARIETY and F' be a representational variety of F.

Condition 1:

If a, b and a @ b can all be represented exactly in F, then they will also be represented exactly in F'. Extending the '-notation in the obvious manner:

	(a @ b)' = (a' @' b')
This equality will also hold using the TDF equality test, i.e.:
	(a @ b)' =' (a' @' b')

Condition 2:

The operator @' is monotonic in the sense apposite to the operator @. For example, consider the operator +; if x is any number and a and b are as above:

	(x > b) => ((a' +' x') >=  (a + b)')
and:
	(x < b) =>  ((a' +' x') <= (a + b)')
and so on, reflecting the weakening of the ordering after the operation from > to >= and < to <=. Once again, the inequalities will hold for their TDF equivalents e.g., >=' and >'.

Similar conditions can be expressed for the monadic operations.

For the floating-point test operation, there are obvious analogues to both conditions. The weakening of the ordering in the monotonicity condition, however, may lead to surprising results, arising mainly from the uncertainty of the result of equality between floating numbers which cannot be represented exactly in F.

Accuracy requirements for complex FLOATING_VARIETYs could follow directly by considering the above conditions applied to real and imaginary parts independently. The following proviso is added for some complex operations however, to allow for possible intermediate error conditions. With floating_div, floating_mult and floating_power for complex FLOATING_VARIETYs, errors are guaranteed not to occur only if the square of the modulus of each argument is representable and the square of the modulus of the result is representable. Whenever these additional constraints are not met, the operation will either complete with the accuracy conditions above applying, or it will complete according to the ERROR_TREATMENT specified.


7.24. Representing bitfields

BITFIELD_VARIETYs specify a number of bits and shall be represented by exactly that number of bits in twos-complement notation. Producers may expect them to be packed as closely as possible.

Installers may limit the number of bits permitted in BITFIELD_VARIETYs. Such a limit shall be not less than 32 bits, signed or unsigned.

It is intended that, at some future date, no such limit will be allowed.

Some OFFSETs whose second parameter contains a BITFIELD alignment are subject to a constraint defined below. This constraint is referred to as variety_enclosed.

The intent of this constraint is to force BITFIELDs to be implemented (in memory) as being included in some properly aligned VARIETY value. The constraint applies to:

	x: offset(p, b)
and to:
	sh = bitfield(bfvar_bits(s, n))
where alignment(sh) is included in b. The constraint is as follows:

There will exist a VARIETY, v, and an OFFSET, r: offset(p, q), where v is in q, such that:

	offset_pad(b, r) <= x
and:
	offset_pad(b, r + sz(v)) >= offset_pad(b, x + sz(sh))
where the comparisons are in the sense of offset_test, + is offset_add and sz is shape_offset.
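
In the simple model of section 7.13.1 this constraint amounts to requiring that the bitfield lies within a span of sz(v) bits starting at a bit address properly aligned for v. A sketch of such a check is given below; the reduction to bit arithmetic and all of the names are illustrative only.

	#include <stdbool.h>
	#include <stdint.h>

	/* Is a bitfield of 'width' bits at bit offset 'x' enclosed in a value
	   of a VARIETY occupying 'v_bits' bits whose alignment (in bits) is
	   'v_align'?  It suffices to test the largest aligned r with r <= x. */
	static bool variety_enclosed(uint64_t x, uint64_t width,
	                             uint64_t v_bits, uint64_t v_align)
	{
	    uint64_t r = (x / v_align) * v_align;
	    return r + v_bits >= x + width;
	}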


7.25. Permitted limits

An installer may specify limits on the sizes of some of the data SHAPEs which it implements. In each case there is a minimum set of limits such that all installers shall implement at least the specified SHAPEs. Part of the description of an installer shall be the limits it imposes. Installers are encouraged not to impose limits if possible, though it is not expected that this will be feasible for floating point numbers.


7.26. Least Upper Bound

The LUB of two SHAPEs, a and b, is defined as follows (a small illustrative sketch in C is given after the list):
  • If a and b are equal shapes, then a.
  • If a is BOTTOM then b.
  • If b is BOTTOM then a.
  • Otherwise TOP.
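
A minimal C sketch of these rules, using a toy enumeration in place of a real structured SHAPE representation and its equality test:

	/* Toy SHAPE representation: a real installer uses a structured type
	   and a structural equality test in place of this enumeration. */
	typedef enum { S_BOTTOM, S_TOP, S_INTEGER, S_POINTER /* ... */ } shape;

	static shape shape_lub(shape a, shape b)
	{
	    if (a == b)        return a;   /* equal shapes */
	    if (a == S_BOTTOM) return b;
	    if (b == S_BOTTOM) return a;
	    return S_TOP;
	}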


7.27. Read-only areas

Consider three scenarios in increasingly static order:
  • Dynamic loading. A new module is loaded, initialising procedures are obeyed and the results of these are then marked as read-only.

  • Normal loading. An ld program is obeyed which produces various (possibly circular) structures which are put into an area which will be read-only when the program is obeyed.

  • Using ROM. Data structures are created (again possibly circular) and burnt into ROM for use by a separate program.
In each case a program is obeyed to create a structure, which is then frozen. The special case in which the data is, say, just a string is not sufficiently general.

This TDF specification takes the attitude that the use of read-only areas is a property of how TDF is used - a part of the installation process - and there should not be TDF constructions to say that some values in a CAPSULE are read-only. Such constructions could not be sufficiently general.


7.28. Tag and Token signatures

In a TDF program there will usually be references to TAGs which are not defined in TDF; their definitions are intended to be supplied by a host system in system specific libraries.

These TAGs will be declared (but not defined) in a TDF CAPSULE and will be specified by external linkages of the CAPSULE with EXTERNALs containing either TDFIDENTs or UNIQUEs. In previous versions of TDF, the external names required by system linking could only be derived from those EXTERNALs.

Version 4.0 gives an alternative method of constructing extra-TDF names. Each global TAG declaration can now contain a STRING signature field which may be used to derive the external name required by the system.

This addition is principally motivated by the various "name mangling" schemes of C++. The STRING signature can be constructed by concatenations and token expansions. Suitable usages of TOKENs can ensure that the particular form of name-mangling can be deferred to installation time and hence allow, at least conceptually, linking with different C++ libraries.

As well as TAG declarations, TAG definitions are allowed to have signatures. The restriction that the signature (if present) of a TAG definition be identical to that of its corresponding declaration could allow type checking across separately compiled CAPSULEs.

Similar considerations apply to TOKENs; although token names are totally internal to TDF, a signature would allow one to check that a TOKEN declared in one CAPSULE has the same "type" as its definition in another.


7.29. Dynamic initialisation

The dynamic initialisation of global variables is required for languages like C++. Prior to version 4.0, the only initialisations permissible were load-time ones; in particular, no procedure calls were allowed in forming the initialising value. Version 4.0 introduces the constructor initial_value to remedy this situation.

Several different implementation strategies could be considered for this. Basically, one must ensure that all the initial_value expressions are transformed into assignments to globals in some procedure. One might expect that there would be one such procedure invented for each CAPSULE and that somehow this procedure is called before the main program.

This raises the problem of how to name this procedure so that it can be identified as a special initialising procedure. Some UNIX linkers reserve a name like __init specially so that all instances of it from different modules can be called before the main procedure. Other cases may require a pre-pass on the .o files prior to system linking.
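
By way of analogy only, the following C fragment shows the effect an installer or producer might aim for on a toolchain which, like the __init convention mentioned above, runs designated functions before main (here using the GCC/Clang constructor attribute). The names capsule_init, x and f are invented for the sketch.

	extern int f(void);

	int x;   /* a global whose initial value needs a procedure call */

	/* The capsule's initial_value expressions are collected into one
	   initialising procedure, arranged to run before main. */
	__attribute__((constructor))
	static void capsule_init(void)
	{
	    x = f();
	}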


Part of the TenDRA Web.
Crown Copyright © 1998.

The bit encoding of TDF

TDF Specification, Issue 4.0

January 1998



8.1 - The Basic Encoding
8.2 - Fundamental encodings
8.2.1 - TDFINT
8.2.2 - TDFBOOL
8.2.3 - TDFSTRING
8.2.4 - TDFIDENT
8.3 - BITSTREAM
8.3.1 - BYTESTREAM
8.3.2 - BYTE_ALIGN
8.3.3 - Extendable integer encoding
8.4 - The TDF encoding
8.5 - File Formats

8. The bit encoding of TDF

This is a description of the encoding used for TDF.

Section 8.1 defines the basic level of encoding, in which integers consisting of a specified number of bits are appended to the sequence of bytes. Section 8.2 defines the second level of encoding, in which fundamental kinds of value are encoded in terms of integers of specified numbers of bits. Section 8.4 defines the third level, in which TDF is encoded using the previously defined concepts.


8.1. The Basic Encoding

TDF consists of a sequence of 8-bit bytes used to encode integers of a varying number of bits, from 1 to 32. These integers will be called basic integers.

TDF is encoded into bytes in increasing byte index, and within the byte the most significant end is filled before the least significant. Let the bits within a byte be numbered from 0 to 7, 0 denoting the least significant bit and 7 the most significant. Suppose that the bytes up to n-1 have been filled and that the next free bit in byte n is bit k. Then bits k+1 to 7 are full and bits 0 to k remain to be used. Now an integer of d bits is to be appended.

If d is less than or equal to k, the d bits will occupy bits k-d+1 to k of byte n, and the next free bit will be at bit k-d. Bit 0 of the integer will be at bit k-d+1 of the byte, and bit d-1 of the integer will be at bit k.

If d is equal to k+1, the d bits will occupy bits 0 to k of byte n and the next free bit will be bit 7 of byte n+1. Bit d-1 of the integer will be at bit k of the byte.

If d is greater than k+1, the most significant k+1 bits of the integer will be in byte n, with bit d-1 at bit k of the byte. The remaining d-k-1 least significant bits are then encoded into the bytes, starting at byte n+1, bit 7, using the same algorithm (i.e. recursively).
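
The algorithm above can be written directly as a routine appending a d-bit basic integer to a growing byte sequence. The following C sketch is illustrative only; the buffer is assumed to be zero-initialised and large enough, and the names are invented.

	#include <stddef.h>
	#include <stdint.h>

	/* A growing TDF byte sequence: 'bit' is the next free bit of byte 'n',
	   numbered 7 (most significant) down to 0, as above. */
	struct bitsink {
	    uint8_t *bytes;   /* zero-initialised buffer, assumed large enough */
	    size_t   n;       /* byte currently being filled                   */
	    int      bit;     /* next free bit, 7 down to 0                    */
	};

	/* Append the d least significant bits of 'value' (1 <= d <= 32),
	   most significant bit first. */
	static void put_bits(struct bitsink *s, unsigned d, uint32_t value)
	{
	    while (d > 0) {
	        unsigned k = (unsigned)s->bit;
	        unsigned take = (d <= k + 1) ? d : k + 1;       /* bits that fit here */
	        uint32_t top = (value >> (d - take)) & ((1u << take) - 1u);
	        s->bytes[s->n] |= (uint8_t)(top << (k + 1 - take));
	        d -= take;
	        if (take == k + 1) { s->n++; s->bit = 7; }
	        else               { s->bit = (int)(k - take); }
	    }
	}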


8.2. Fundamental encodings

This section describes the encoding of TDFINT, TDFBOOL, TDFSTRING, TDFIDENT, BITSTREAM, BYTESTREAM, BYTE_ALIGN and extendable integers.

8.2.1. TDFINT

TDFINT encodes non-negative integers of unbounded size. The encoding uses octal digits encoded in 4-bit basic integers. The most significant octal digit is encoded first, the least significant last. For all digits except the last the 4-bit integer is the value of the octal digit. For the last digit the 4-bit integer is the value of the octal digit plus 8.
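
The digit decomposition can be sketched in C as below, producing the sequence of 4-bit values which would then each be appended as a 4-bit basic integer (for instance with a bit writer such as the put_bits sketch above). The names and the 64-bit limit are assumptions of the sketch.

	#include <stdint.h>

	/* Decompose a non-negative integer into its TDFINT digits: octal
	   digits, most significant first, one per element of 'out', with 8
	   added to the last digit.  Returns the number of digits; 22 digits
	   suffice for 64-bit values. */
	static unsigned tdfint_digits(uint64_t value, uint8_t out[22])
	{
	    uint8_t rev[22];
	    unsigned n = 0;
	    do {                                 /* least significant digit first */
	        rev[n++] = (uint8_t)(value & 7);
	        value >>= 3;
	    } while (value != 0);
	    for (unsigned i = 0; i < n; i++)     /* reverse: most significant first */
	        out[i] = rev[n - 1 - i];
	    out[n - 1] += 8;                     /* mark the final digit */
	    return n;
	}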

8.2.2. TDFBOOL

TDFBOOL encodes a boolean, true or false. The encoding uses a 1-bit basic integer, with 1 encoding true and 0 encoding false.

8.2.3. TDFSTRING

TDFSTRING encodes a sequence containing n non-negative integers, each of k bits. The encoding consists of, first, a TDFINT giving the number of bits, k; second, a TDFINT giving the number of integers, n, which may be zero. Thirdly it contains n k-bit basic integers, giving the sequence of integers required, the first integer being first in this sequence.

8.2.4. TDFIDENT

TDFIDENT also encodes a sequence containing n non-negative integers. These integers will all consist of the same number of bits, which will be a multiple of 8. It is a property of the encoding of the other constructions that TDFIDENTs will start on either bit 7 or bit 3 of a byte and end on bit 7 or bit 3 of a byte. It thus has some alignment properties which are useful to permit fast copying of sections of TDF.

The encoding consists of, first, a TDFINT giving the number of bits, k; second, a TDFINT giving the number of integers, n, which may be zero. If the next free bit is not bit 7 of some byte, it is moved on to bit 7 of the next byte.

Thirdly it contains n k-bit integers.

If the next free bit is not bit 7 of some byte, it is moved on to bit 7 of the next byte.


8.3. BITSTREAM

It can be useful to be able to skip a TDF construction without reading through it. BITSTREAM provides a means of doing this.

A BITSTREAM encoding of X consists of a TDFINT giving the number of bits occupied by the encoding of X, followed by that encoding. Hence to skip over a BITSTREAM while decoding, one should read the TDFINT and then advance the bit index by that number of bits. To read the contents of a BITSTREAM encoding of X, one should read and ignore a TDFINT and then decode an X. There will be no spare bits at the end of the X, so reading can continue directly.

8.3.1. BYTESTREAM

It can be useful to be able to skip a TDF construction without reading through it. BYTESTREAM provides a means of doing this while remaining byte aligned, so facilitating copying the TDF. A BYTESTREAM will always start when the bit position is 3 or 7.

A BYTESTREAM encoding of X starts with a TDFINT giving a number, n. After this, if the current bit position is not bit 7 of some byte, it is moved to bit 7 of the next byte. The next n bytes are an encoding of X. There may be some spare bits left over at the end of X.

Hence to skip over a BYTESTREAM while decoding one should read a TDFINT, n, move to the next byte alignment (if the bit position is not 7) and advance the bit index over n bytes. To read a BYTESTREAM encoding of X one should read a TDFINT, n, and move to the next byte, b (if the bit position is not 7), and then decode an X. Finally the bit position should be moved to n bytes after b.
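
The two skipping operations can be sketched in C as follows, on top of a small bit reader and TDFINT decoder. All of the names are invented and the sketch omits end-of-buffer checking.

	#include <stddef.h>
	#include <stdint.h>

	/* Reader state: bit 7 is the first bit of each byte, as in section 8.1. */
	struct bitsource {
	    const uint8_t *bytes;
	    size_t n;      /* current byte                  */
	    int    bit;    /* next bit to read, 7 down to 0 */
	};

	static unsigned read_bits(struct bitsource *s, unsigned d)
	{
	    unsigned v = 0;
	    while (d-- > 0) {
	        v = (v << 1) | ((s->bytes[s->n] >> s->bit) & 1u);
	        if (s->bit-- == 0) { s->bit = 7; s->n++; }
	    }
	    return v;
	}

	/* TDFINT (section 8.2.1): octal digits in 4-bit basic integers, the
	   last digit marked by having 8 added to it. */
	static uint64_t read_tdfint(struct bitsource *s)
	{
	    uint64_t v = 0;
	    for (;;) {
	        unsigned d = read_bits(s, 4);
	        v = (v << 3) | (d & 7u);
	        if (d & 8u) return v;
	    }
	}

	/* Skip a BITSTREAM: a TDFINT bit count, then advance by that many bits. */
	static void skip_bitstream(struct bitsource *s)
	{
	    uint64_t bits = read_tdfint(s);
	    uint64_t pos = (uint64_t)s->n * 8 + (7 - (unsigned)s->bit) + bits;
	    s->n = (size_t)(pos / 8);
	    s->bit = 7 - (int)(pos % 8);
	}

	/* Skip a BYTESTREAM: a TDFINT byte count, realign to bit 7, then
	   advance by that many whole bytes. */
	static void skip_bytestream(struct bitsource *s)
	{
	    uint64_t nbytes = read_tdfint(s);
	    if (s->bit != 7) { s->bit = 7; s->n++; }
	    s->n += (size_t)nbytes;
	}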

8.3.2. BYTE_ALIGN

BYTE_ALIGN leaves the bit position alone if it is 7, and otherwise moves to bit 7 of the next byte.

8.3.3. Extendable integer encoding

A d-bit extendable integer encoding enables an integer greater than zero to be encoded, given a number of bits, d.

If the integer is between 1 and 2^d - 1 inclusive, a d-bit basic integer is encoded.

If the integer, i, is greater than or equal to 2^d, a d-bit basic integer encoding of zero is inserted and then i - 2^d + 1 is encoded as a d-bit extendable integer, and so on, recursively.
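
A C sketch of this encoding and its inverse, written against abstract emit and fetch functions which stand in for a real bit writer and reader; the names are invented.

	#include <stdint.h>

	/* Encode i (> 0) as a d-bit extendable integer: emit zeros, each
	   standing for 2^d - 1, until the remainder fits in d bits. */
	static void write_extendable(unsigned d, uint64_t i,
	                             void (*emit)(void *ctx, unsigned d, unsigned v),
	                             void *ctx)
	{
	    uint64_t limit = ((uint64_t)1 << d) - 1;   /* 2^d - 1 */
	    while (i > limit) {
	        emit(ctx, d, 0);
	        i -= limit;
	    }
	    emit(ctx, d, (unsigned)i);
	}

	/* Decode: each leading zero contributes 2^d - 1; the first non-zero
	   d-bit value completes the result. */
	static uint64_t read_extendable(unsigned d,
	                                unsigned (*get)(void *ctx, unsigned d),
	                                void *ctx)
	{
	    uint64_t value = 0;
	    for (;;) {
	        unsigned v = get(ctx, d);
	        if (v != 0)
	            return value + v;
	        value += ((uint64_t)1 << d) - 1;
	    }
	}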


8.4. The TDF encoding

The descriptions of SORTs and constructs contain encoding information which is interpreted as follows to define the TDF encoding.


8.5. File Formats

There may be various kinds of files which contain TDF bitstream information. Each will start with a 4-byte "magic-number" identifying the kind of file, followed by two TDFINTs giving the major and minor version numbers of the TDF involved.

A CAPSULE file will have a magic-number "TDFC". The encoding of the CAPSULE will be byte-aligned following the version numbers.

A TDF library file will have a magic-number "TDFL". These files are constructed by the TDF linker.

A TDF archive file will have a magic-number "TDFA".

Other file formats introduced should follow a similar pattern.

The TDF linker will refuse to link TDF files with different major version numbers. The resulting minor version number is the maximum of component minor version numbers.


Part of the TenDRA Web.
Crown Copyright © 1998.

Index for TDF Specification, Issue 4.0

TDF Specification, Issue 4.0

January 1998



Index

[A] [B] [C] [D]  [E] [F] [G] [H] [I] [J] [K] [L] [M]
[N] [O] [P] [Q] [R]  [S] [T] [U] [V] [W] [X] [Y] [Z]

A

abs
access [1] [2]
access_apply_token
access_cond
add_accesses
add_modes
add_procprops
add_to_ptr
alignment
   alloca
   alloca_alignment
   code
   frame
   var_param
alignment_apply_token
alignment_cond
alignment_sort
alloca
   alloca_alignment
alloca_alignment
al_tag
al_tagdef
al_tagdef_props
al_tag_apply_token
and
apply_general_proc
apply_proc
argument
   notation
assign
assignment
   atomic
assign_with_mode
atomic
   assignment

B

bfvar_apply_token
bfvar_bits
bfvar_cond
bitfield
bitfields
bitfield_assign
bitfield_assign_with_mode
bitfield_contents
bitfield_contents_with_mode
bitfield_variety
BITSTREAM
bitstream
bool
bool_apply_token
bool_cond
bottom
byte align
byte boundaries
BYTESTREAM
bytestream

C

callees
callees_alignment
callers_alignment
capsule
   introduction
capsule_link
case
caselim
chain_extern
change_bitfield_to_int
change_floating_variety
change_int_to_bitfield
change_variety
check_stack
code
   for characters
code_alignment
common_tagdec
common_tagdef
comparable
complete
complex_conjugate
complex_of_float
complex_parms
component
compound
computed_nat
computed_signed_nat
concat_nof
concat_string
conditional
constant
   evaluation
contents
contents_with_mode
continue
current_env

D

div0
div1
div2
division
   definition of kinds

E

encoding
   basic
   boundary
   extendable
   extendable integer
   number
   number of bits
   of lists
   of option
   of sorts
env_offset
env_size
equal
equality
   of ALIGNMENT
   of EXP
   of SHAPE
errors
   in floating point
error_code
error_jump
error_treatment
error_val
errt_apply_token
errt_cond
evaluation
   of constants
   order of
exp
exp_apply_token
exp_cond
extendable integer
   encoding
extendable
external
extern_link

F

fail_installer
false
floating
floating point
   errors
   representation
floating_abs
floating_div
floating_maximum
floating_minimum
floating_minus
floating_mult
floating_negate
floating_plus
floating_power
floating_test
floating_variety
float_int
float_of_complex
flvar_apply_token
flvar_cond
flvar_parms
foreign_sort
frame

G

goto
goto_local_lv
greater_than
greater_than_or_equal
group

I

identification
   linkable entity
   unit
identify
ignorable
imaginary_part
impossible
initial_value
inline
integer
integer extendable encoding
integer
   basic encoding
   overflow
integers
   representation of
integer_test
introduction
   of tags

L

label
   introduction
labelled
label_apply_token
last_local
less_than
less_than_or_equal
less_than_or_greater_than
lifetime
link
linkable entity
   identification
linkextern
linkinfo
linkinfo_props
links
list
   encoding
   notation
locals_alignment
local_alloc
local_alloc_check
local_free
local_free_all
long_jump
long_jump_access

M

make_al_tag
make_al_tagdef
make_al_tagdefs
make_callee_list
make_capsule
make_capsule_link
make_caselim
make_comment
make_complex
make_compound
make_dynamic_callees
make_extern_link
make_floating
make_general_proc
make_group
make_id_tagdec
make_id_tagdef
make_int
make_label
make_link
make_linkextern
make_linkinfos
make_links
make_local_lv
make_nat
make_nof
make_nof_int
make_null_local_lv
make_null_proc
make_null_ptr
make_otagexp
make_proc
make_signed_nat
make_stack_limit
make_string
make_tag
make_tagacc
make_tagdecs
make_tagdefs
make_tagshacc
make_tok
make_tokdec
make_tokdecs
make_tokdef
make_tokdefs
make_tokformals
make_top
make_unique
make_unit
make_value
make_var_tagdec
make_var_tagdef
make_version
make_versions
make_weak_defn
make_weak_symbol
maximum
memory
   model
   simple model
minimum
minus
modulus
   definition of kinds
move_some
mult

N

nat
nat_apply_token
nat_cond
negate
nil_access
nof
not
not_comparable
not_equal
not_greater_than
not_greater_than_or_equal
not_less_than
not_less_than_and_not_greater_than
not_less_than_or_equal
no_long_jump_dest
no_other_read
no_other_write
ntest
ntest_apply_token
ntest_cond
n_copies

O

obtain_al_tag
obtain_tag
of labels
offset
   arithmetic
offset_add
offset_div
offset_div_by_int
offset_max
offset_mult
offset_negate
offset_pad
offset_subtract
offset_test
offset_zero
option
   encoding
   notation
or
order
   of evaluation
original pointer
   creation [1] [2]
original
   pointers
otagexp
out_par
overflow
   integer
overlap
overlapping

P

parameter_alignment
plus
pointer
   arithmetic
pointers
   original
pointer_test
power
preserve
proc
procprops
procprops_apply_token
procprops_cond
proc_test
profile
PROPS

R

real_part
register
rem0
rem1
rem2
repeat
representation
   of floating point
   of integers
result
   notation
return
return_to_label
rotate
rotate_left
rotate_right
rounding
rounding_mode
rounding_mode_apply_token
rounding_mode_cond
round_as_state
round_with_mode

S

same_callees
sequence
set_stack_limit
shape
shape_apply_token
shape_cond
shape_offset
shift_left
shift_right
signed_nat
signed_nat_apply_token
signed_nat_cond
snat_from_nat
sort
   meaning of
sortname
Specification of TDF Constructs
stack_overflow
standard_access
standard_transfer_mode
static_name_def
string
string_apply_token
string_cond
string_extern
subtract_ptrs

T

tag
   introduction
tagacc
tagdec
tagdec_props
tagdef
tagdef_props
tagshacc
tag_apply_token
tail_call
TDF
   extending
TDFBOOL
tdfbool
TDFIDENT
tdfident
TDFINT
tdfint
tdfstring
tld [1] [2]
tokdec
tokdec_props
tokdef
tokdef_props
token
   introduction to
token_apply_token
token_definition
token_defn
tokformals
top
toward_larger
toward_smaller
toward_zero
to_nearest
transfer_mode
transfer_mode_apply_token
transfer_mode_cond
trap
trap_on_nil
true
types
   circular

U

unique
unique_extern
unit
   al_tagdef
   identification
   kinds of
   tagdec
   tagdef
   tokdec
   tokdef
unite_alignments
untidy
untidy_return
used_as_volatile
user_info
use_tokdef

V

variable
variety
var_apply_token
var_callees
var_callers
var_cond
var_limits
var_param_alignment
var_width
version
version_props
visible
volatile

W

wrap

X

xor


Part of the TenDRA Web.
Crown Copyright © 1998.

Introduction

TDF Specification, Issue 4.0

January 1998



1. Introduction

TDF is a porting technology and, as a result, it is a central part of a shrink-wrapping, distribution and installation technology. TDF has been chosen by the Open Software Foundation as the basis of its Architecture Neutral Distribution Format. It was developed by the United Kingdom's Defence Research Agency (DRA). TDF is not UNIX specific, although most of the implementation has been done on UNIX.

Software vendors, when they port their programs to several platforms, usually wish to take advantage of the particular features of each platform. That is, they wish the versions of their programs on each platform to be functionally equivalent, but not necessarily algorithmically identical. TDF is intended for porting in this sense. It is designed so that a program in its TDF form can be systematically modified when it arrives at the target platform to achieve the intended functionality and to use the algorithms and data structures which are appropriate and efficient for the target machine. A fully efficient program, specialised to each target, is a necessity if independent software vendors are to take up a porting technology.

These modifications are systematic because, on the source machine, programmers work with generalised declarations of the APIs they are using. The declarations express the requirements of the APIs without giving their implementation. The declarations are specified in terms of TDF's "tokens", and the TDF which is produced contains uses of these tokens. On each target machine the tokens are used as the basis for suitable substitutions and alterations.

Using TDF for porting places extra requirements on software vendors and API designers. Software vendors must write their programs scrupulously in terms of APIs and nothing more. API designers need to produce an interface which can be specialised to efficient data structures and constructions on all relevant machines.

TDF is neutral with respect to the set of languages which has been considered. The design of C, C++, Fortran and Pascal is quite conventional, in the sense that they are sufficiently similar for TDF constructions to be devised to represent them all. These TDF constructions can be chosen so that they are, in most cases, close to the language constructions. Other languages, such as Lisp, are likely to need a few extensions. To express novel language features TDF will probably have to be more seriously extended. But the time to do so is when the feature in question has achieved sufficient stability. Tokens can be used to express the constructs until the time is right. For example, there is a lack of consensus about the best constructions for parallel languages, so that at present TDF would either have to use low level constructions for parallelism or back what might turn out to be the wrong system. In other words it is not yet the time to make generalisations for parallelism as an intrinsic part of TDF.

TDF is neutral with respect to machine architectures. In designing TDF, the aim has been to retain the information which is needed to produce and optimise the machine code, while discarding identifier and syntactic information. So TDF has constructions which are closely related to typical language features and it has an abstract model of memory. We expect that programs expressed in the considered languages can be translated into code which is as efficient as that produced by native compilers for those languages.

Because of these porting features TDF supports shrink-wrapping, distribution and installation. Installation does not have to be left to the end-user; the production of executables can be done anywhere in the chain from software vendor, through dealer and network manager to the end-user.

This document provides English language specifications for each construct in the TDF format and some general notes on various aspects of TDF. It is intended for readers who are aware of the general background to TDF but require more detailed information.


Part of the TenDRA Web.
Crown Copyright © 1998.

Structure of TDF

TDF Specification, Issue 4.0

January 1998



2.1 - The Overall Structure
2.2 - Tokens
2.3 - Tags
2.4 - Extending the format

2. Structure of TDF

Each piece of TDF program is classified as being of a particular SORT. Some pieces of TDF are LABELs, some are TAGs, some are ERROR_TREATMENTs and so on (to list some of the more transparently named SORTs). The SORTs of the arguments and result of each construct of the TDF format are specified. For instance, plus is defined to have three arguments - an ERROR_TREATMENT and two EXPs (short for "expression") - and to produce an EXP; goto has a single LABEL argument and produces an EXP. The specification of the SORTs of the arguments and results of each construct constitutes the syntax of the TDF format. When TDF is represented as a parsed tree it is structured according to this syntax. When it is constructed and read it is in terms of this syntax.

2.1. The Overall Structure

A separable piece of TDF is called a CAPSULE. A producer generates a CAPSULE; the TDF linker links CAPSULEs together to form a CAPSULE; and the final translation process turns a CAPSULE into an object file.

The structure of capsules is designed so that the process of linking two or more capsules consists almost entirely of copying large byte-aligned sections of the source files into the destination file, without changing or even examining these sections. Only a small amount of interface information has to be modified and this is made easily accessible. The translation process only requires an extra indirection to account for this interface information, so it is also fast. The description of TDF at the capsule level is almost all about the organisation of the interface information.

There are three major kinds of entity which are used inside a capsule to name its constituents. The first are called tags; they are used to name the procedures, functions, values and variables which are the components of the program. The second are called tokens; they identify pieces of TDF which can be used for substitution - a little like macros. The third are the alignment tags, used to name alignments so that circular types can be described. Because these internal names are used for linking pieces of TDF together, they are collectively called linkable entities. The interface information relates these linkable entities to each other and to the world outside the capsule.

The most important part of a capsule, the part which contains the real information, consists of a sequence of groups of units. Each group contains units of the same kind, and all the units of the same kind are in the same group. The groups always occur in the same order, though it is not necessary for each kind to be present.



The order is as follows:

This organisation is imposed to help installers, by ensuring that the information needed to process a unit has been provided before that unit arrives. For example, the token definitions occur before any tag definition, so that, during translation, the tokens may be expanded as the tag definitions are being read (in a capsule which is ready for translation all tokens used must be defined, but this need not apply to an arbitrary capsule).

The tags and tokens in a capsule have to be related to the outside world. For example, there might be a tag standing for printf, used in the appropriate way inside the capsule. When an object file is produced from the capsule the identifier printf must occur in it, so that the system linker can associate it with the correct library procedure. In order to do this, the capsule has a table of tags at the capsule level, and a set of external links which provide external names for some of these tags.



In just the same way, there are tables of tokens and alignment tags at the capsule level, and external links for these as well.

The tags used inside a unit have to be related to these capsule tags, so that they can be properly named. A similar mechanism is used, with a table of tags at the unit level, and links between these and the capsule level tags.



Again the same technique is used for tokens and alignment tags.

It is also necessary for a tag used in one unit to refer to the same thing as a tag in another unit. To do this a tag at the capsule level is used, which may or may not have an external link.



The same technique is used for tokens and alignment tags.

So when the TDF linker is joining two capsules, it has to perform the following tasks:

  • It creates new sets of capsule level tags, tokens and alignment tags by identifying those which have the same external name, and otherwise creating different entries.

  • It similarly joins the external links, suppressing any names which are no longer to be external.

  • It produces new link tables for the units, so that the entities used inside the units are linked to the new positions in the capsule level tables.

  • It re-organises the units so that the correct order is achieved.
This can be done without looking into the interior of the units (except for the tld unit), simply copying the units into their new place.

During the process of installation the values associated with the linkable entities can be accessed by indexing into an array followed by one indirection. These are the kinds of object which in a programming language are referred to by using identifiers, which involves using hash tables for access. This is an example of a general principle of the design of TDF; speed is required in the linking and installing processes, if necessary at the expense of time in the production of TDF.

2.2. Tokens

Tokens are used (applied) in the TDF at the point where substitutions are to be made. Token definitions provide the substitutions; they usually reside on the target machine and are linked in there.

A typical token definition has parameters from various SORTs and produces a result of a given SORT. As an example of a simple token definition, written here in a C-like notation, consider the following.

	EXP ptr_add (EXP par0, EXP par1, SHAPE par2)
	{
	    add_to_ptr(
		par0,
		offset_mult(
		    offset_pad(
			alignment(par2),
			shape_offset(par2)),
		    par1))
	}
This defines the token, ptr_add, to produce something of SORT EXP. It has three parameters, of SORTs EXP, EXP and SHAPE. The add_to_ptr, offset_mult, offset_pad, alignment and shape_offset constructions are TDF constructions producing respectively an EXP, an EXP, an EXP, an ALIGNMENT and an EXP.

A typical use of this token is:

	ptr_add(
	    obtain_tag(tag41),
	    contents(integer(~signed_int), obtain_tag(tag62)),
	    integer(~char))
The effect of this use is to produce the TDF of the definition with par0, par1 and par2 substituted by the actual parameters.

There is no way of obtaining anything like a side-effect. A token without parameters is therefore just a constant.

Tokens can be used for various purposes. They are used to make the TDF shorter by using tokens for commonly used constructions (ptr_add is an example of this use). They are used to make target dependent substitutions (~char in the use of ptr_add is an example of this, since ~char may be signed or unsigned on the target).

A particularly important use is to provide definitions appropriate to the translation of a particular language. Another is to abstract those features which differ from one ABI to another. This kind of use requires that sets of tokens should be standardised for these purposes, since otherwise there will be a proliferation of such definitions.

2.3. Tags

Tags are used to identify the actual program components. They can be declared or defined. A declaration gives the SHAPE of a tag (a SHAPE is the TDF analogue of a type). A definition gives an EXP for the tag (an EXP describes how the value is to be made up).

2.4. Extending the format

TDF can be extended for two major reasons.

First, as part of the evolution of TDF, new features will from time to time be identified. It is highly desirable that these can be added without disturbing the current encoding, so that old TDF can still be installed by systems which recognise the new constructions. Such changes should only be made infrequently and with great care, for stability reasons, but nevertheless they must be allowed for in the design.

Second, it may be required to add extra information to TDF to permit special processing. TDF is a way of describing programs and it clearly may be used for other reasons than portability and distribution. In these uses it may be necessary to add extra information which is closely integrated with the program. Diagnostics and profiling can serve as examples. In these cases the extra kinds of information may not have been allowed for in the TDF encoding.

Some extension mechanisms are described below and related to these reasons:

  • The encoding of every SORT in TDF can be extended indefinitely (except for certain auxiliary SORTs). This mechanism should only be used for extending standard TDF to the next standard, since otherwise extensions made by different groups of people might conflict with each other. See Extendable integer encoding.

  • Basic TDF has three kinds of linkable entity and seven kinds of unit. It also contains a mechanism for extending these so that other information can be transmitted in a capsule and properly related to basic TDF. The rules for linking this extra information are also laid down. See make_capsule.

    If a new kind of unit is added, it can contain any information, but if it is to refer to the tags and tokens of other units it must use the linkable entities. Since new kinds of unit might need extra kinds of linkable entity, a method for adding these is also provided. All this works in a uniform way, with capsule level tables of the new entities, and external and internal links for them.

    If new kinds of unit are added, the order of groups must be the same in any capsules which are linked together. As an example of the use of this kind of extension, the diagnostic information is introduced in just this way. It uses two extra kinds of unit and one extra kind of linkable entity. The extra units need to refer to the tags in the other units, since these are the object of the diagnostic information. This mechanism can be used for both purposes.

  • The parameters of tokens are encoded in such a way that foreign information (that is, information which cannot be expressed in the TDF SORTs) can be supplied. This mechanism should only be used for the second purpose, though it could be used to experiment with extensions for future standards. See BITSTREAM.


Part of the TenDRA Web.
Crown Copyright © 1998.

Describing the Structure

TDF Specification, Issue 4.0

January 1998



3. Describing the Structure

The following examples show how TDF constructs are described in this document. The first is the construct floating:
	fv:		FLOATING_VARIETY
		   -> SHAPE
The construct's arguments (one in this case) precede the "->" and the result follows it. Each argument is shown as follows:
	name:	 	SORT
The name standing before the colon is for use in the accompanying English description within the specification. It has no other significance.

The example given above indicates that floating takes one argument. This argument, fv, is of SORT FLOATING_VARIETY. After the "->" comes the SORT of the result of floating. It is a SHAPE.

In the case of floating the formal description supplies the syntax and the accompanying English text supplies the semantics. However, in the case of some constructs it is convenient to specify more information in the formal section. For example, the specification of the construct floating_negate not only states that it has an EXP argument and an EXP result:

	flpt_err:	 ERROR_TREATMENT
	arg1:		EXP FLOATING(f)
		   -> EXP FLOATING(f)
it also supplies additional information about those EXPs. It specifies that these expressions will be floating point numbers of the same kind.

Some constructs' arguments are optional. This is denoted as follows (from apply_proc):

	result_shape:	SHAPE
	p:		EXP PROC
	params:		LIST(EXP)
	var_param:	OPTION(EXP)
		   -> EXP result_shape
var_param is an optional argument to the apply_proc construct shown above.

Some constructs take a varying number of arguments. params in the above construct is an example. These are denoted by LIST. There is a similar construction, SLIST, which differs only in having a different encoding.

Some constructs' results are governed by the values of their arguments. This is denoted by the "?" formation shown in the specification of the case construct shown below:

	exhaustive:	BOOL
	control:	EXP INTEGER(v)
	branches:	LIST(CASELIM)
		   -> EXP (exhaustive ? BOTTOM : TOP)
If exhaustive is true, the resulting EXP has the SHAPE BOTTOM; otherwise it is TOP.

Depending on a TDF-processing tool's purpose, not all of some constructs' arguments need necessarily be processed. For instance, installers do not need to process one of the arguments of the x_cond constructs (where x stands for a SORT, e.g. exp_cond). Secondly, standard tools might want to ignore embedded fragments of TDF adhering to some private standard. In these cases it is desirable for tools to be able to skip the irrelevant pieces of TDF. BITSTREAMs and BYTESTREAMs are formations which permit this. In the encoding they are prefaced with information about their length.

Some constructs' arguments are defined as being BITSTREAMs or BYTESTREAMs, even though the constructs specify them to be of a particular SORT. In these cases the argument's SORT is denoted as, for example, BITSTREAM FLOATING_VARIETY. This construct must have a FLOATING_VARIETY argument, but certain TDF-processing tools may benefit from being able to skip past the argument (which might itself be a very large piece of TDF) without having to read its content.

The nature of the UNITs in a GROUP is determined by unit identifications. These occur in make_capsule. The values used for unit identifications are specified in the text as follows:

Unit identification: some_name
where some_name might be tokdec, tokdef etc.

The kinds of linkable entity used are determined by linkable entity identifications. These occur in make_capsule. The values used for linkable entity identification are specified in the text as follows:

Linkable entity identification: some_name
where some_name might be tag, token etc.

The bit encodings are also specified in this document. The details are given in The bit encoding of TDF. This section describes the encoding in terms of information given with the descriptions of the SORTs and constructs.

With each SORT the number of bits used to encode the constructs is given in the following form:

Number of encoding bits: n
This number may be zero; if so the encoding is non-extendable. If it is non-zero the encoding may be extendable or non-extendable. This is specified in the following form:
Is coding extendable: yes/no
With each construct the number used to encode it is given in the following form:
Encoding number: n
If the number of encoding bits is zero, n will be zero.

There may be a requirement that a component of a construct should start on a byte boundary in the encoding. This is denoted by inserting BYTE_ALIGN before the component SORT.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdf/spec7.html: Installer Behaviour

TDF Specification, Issue 4.0

January 1998



4.1 - Definition of terms
4.2 - Properties of Installers

4. Installer Behaviour

4.1. Definition of terms

In this document the behaviour of TDF installers is described in a precise manner. Certain words are used with very specific meanings. These are:
  • "undefined": means that installers can perform any action, including refusing to translate the program. It can produce code with any effect, meaningful or meaningless.

  • "shall": when the phrase "P shall be done" (or similar phrases involving "shall") is used, every installer must perform P.

  • "should": when the phrase "P should be done" (or similar phrase involving "should") is used, installers are advised to perform P, and producer writers may assume it will be done if possible. This usage generally relates to optimisations which are recommended.

  • "will": when the phrase "P will be true" (or similar phrases involving "will") is used to describe the composition of a TDF construct, the installer may assume that P holds without having to check it. If, in fact, a producer has produced TDF for which P does not hold, the effect is undefined.

  • "target-defined": means that behaviour will be defined, but that it varies from one target machine to another. Each target installer shall define everything which is said to be "target-defined".

4.2. Properties of Installers

All installers must implement all of the constructions of TDF. There are some constructions where the installers may impose limits on the ranges of values which are implemented. In these cases the description of the installer must specify these limits.

Installers are not expected to check that the TDF they are processing is well-formed, nor that undefined constructs are absent. If the TDF is not well-formed any effect is permitted.

Installers shall only implement optimisations which are correct in all circumstances. This correctness can only be shown by demonstrating the equivalence of the transformed program to the original, using equivalences deducible from this specification or from the ordinary laws of arithmetic. No statements are made in this specification of the form "such and such an optimisation is permitted".

Fortran90 has a notion of mathematical equivalence which is not the same as TDF equivalence. It can be applied to transform programs provided parentheses in the text are not crossed. TDF does not acknowledge this concept. Such transformations would have to be applied in a context where the permitted changes are known.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdf/spec8.html: Specification of TDF Constructs

TDF Specification, Issue 4.0

January 1998



5.1 - ACCESS
5.1.1 - access_apply_token
5.1.2 - access_cond
5.1.3 - add_accesses
5.1.4 - constant
5.1.5 - long_jump_access
5.1.6 - no_other_read
5.1.7 - no_other_write
5.1.8 - out_par
5.1.9 - preserve
5.1.10 - register
5.1.11 - standard_access
5.1.12 - used_as_volatile
5.1.13 - visible
5.2 - AL_TAG
5.2.1 - al_tag_apply_token
5.2.2 - make_al_tag
5.3 - AL_TAGDEF
5.3.1 - make_al_tagdef
5.4 - AL_TAGDEF_PROPS
5.4.1 - make_al_tagdefs
5.5 - ALIGNMENT
5.5.1 - alignment_apply_token
5.5.2 - alignment_cond
5.5.3 - alignment
5.5.4 - alloca_alignment
5.5.5 - callees_alignment
5.5.6 - callers_alignment
5.5.7 - code_alignment
5.5.8 - locals_alignment
5.5.9 - obtain_al_tag
5.5.10 - parameter_alignment
5.5.11 - unite_alignments
5.5.12 - var_param_alignment
5.6 - BITFIELD_VARIETY
5.6.1 - bfvar_apply_token
5.6.2 - bfvar_cond
5.6.3 - bfvar_bits
5.7 - BITSTREAM
5.8 - BOOL
5.8.1 - bool_apply_token
5.8.2 - bool_cond
5.8.3 - false
5.8.4 - true
5.9 - BYTESTREAM
5.10 - CALLEES
5.10.1 - make_callee_list
5.10.2 - make_dynamic_callees
5.10.3 - same_callees
5.11 - CAPSULE
5.11.1 - make_capsule
5.12 - CAPSULE_LINK
5.12.1 - make_capsule_link
5.13 - CASELIM
5.13.1 - make_caselim
5.14 - ERROR_CODE
5.14.1 - nil_access
5.14.2 - overflow
5.14.3 - stack_overflow
5.15 - ERROR_TREATMENT
5.15.1 - errt_apply_token
5.15.2 - errt_cond
5.15.3 - continue
5.15.4 - error_jump
5.15.5 - trap
5.15.6 - wrap
5.15.7 - impossible
5.16 - EXP
5.16.1 - exp_apply_token
5.16.2 - exp_cond
5.16.3 - abs
5.16.4 - add_to_ptr
5.16.5 - and
5.16.6 - apply_proc
5.16.7 - apply_general_proc
5.16.8 - assign
5.16.9 - assign_with_mode
5.16.10 - bitfield_assign
5.16.11 - bitfield_assign_with_mode
5.16.12 - bitfield_contents
5.16.13 - bitfield_contents_with_mode
5.16.14 - case
5.16.15 - change_bitfield_to_int
5.16.16 - change_floating_variety
5.16.17 - change_variety
5.16.18 - change_int_to_bitfield
5.16.19 - complex_conjugate
5.16.20 - component
5.16.21 - concat_nof
5.16.22 - conditional
5.16.23 - contents
5.16.24 - contents_with_mode
5.16.25 - current_env
5.16.26 - div0
5.16.27 - div1
5.16.28 - div2
5.16.29 - env_offset
5.16.30 - env_size
5.16.31 - fail_installer
5.16.32 - float_int
5.16.33 - floating_abs
5.16.34 - floating_div
5.16.35 - floating_minus
5.16.36 - floating_maximum
5.16.37 - floating_minimum
5.16.38 - floating_mult
5.16.39 - floating_negate
5.16.40 - floating_plus
5.16.41 - floating_power
5.16.42 - floating_test
5.16.43 - goto
5.16.44 - goto_local_lv
5.16.45 - identify
5.16.46 - ignorable
5.16.47 - imaginary_part
5.16.48 - initial_value
5.16.49 - integer_test
5.16.50 - labelled
5.16.51 - last_local
5.16.52 - local_alloc
5.16.53 - local_alloc_check
5.16.54 - local_free
5.16.55 - local_free_all
5.16.56 - long_jump
5.16.57 - make_complex
5.16.58 - make_compound
5.16.59 - make_floating
5.16.60 - make_general_proc
5.16.61 - make_int
5.16.62 - make_local_lv
5.16.63 - make_nof
5.16.64 - make_nof_int
5.16.65 - make_null_local_lv
5.16.66 - make_null_proc
5.16.67 - make_null_ptr
5.16.68 - make_proc
5.16.69 - make_stack_limit
5.16.70 - make_top
5.16.71 - make_value
5.16.72 - maximum
5.16.73 - minimum
5.16.74 - minus
5.16.75 - move_some
5.16.76 - mult
5.16.77 - n_copies
5.16.78 - negate
5.16.79 - not
5.16.80 - obtain_tag
5.16.81 - offset_add
5.16.82 - offset_div
5.16.83 - offset_div_by_int
5.16.84 - offset_max
5.16.85 - offset_mult
5.16.86 - offset_negate
5.16.87 - offset_pad
5.16.88 - offset_subtract
5.16.89 - offset_test
5.16.90 - offset_zero
5.16.91 - or
5.16.92 - plus
5.16.93 - pointer_test
5.16.94 - power
5.16.95 - proc_test
5.16.96 - profile
5.16.97 - real_part
5.16.98 - rem0
5.16.99 - rem1
5.16.100 - rem2
5.16.101 - repeat
5.16.102 - return
5.16.103 - return_to_label
5.16.104 - round_with_mode
5.16.105 - rotate_left
5.16.106 - rotate_right
5.16.107 - sequence
5.16.108 - set_stack_limit
5.16.109 - shape_offset
5.16.110 - shift_left
5.16.111 - shift_right
5.16.112 - subtract_ptrs
5.16.113 - tail_call
5.16.114 - untidy_return
5.16.115 - variable
5.16.116 - xor
5.17 - EXTERNAL
5.17.1 - string_extern
5.17.2 - unique_extern
5.17.3 - chain_extern
5.18 - EXTERN_LINK
5.18.1 - make_extern_link
5.19 - FLOATING_VARIETY
5.19.1 - flvar_apply_token
5.19.2 - flvar_cond
5.19.3 - flvar_parms
5.19.4 - complex_parms
5.19.5 - float_of_complex
5.19.6 - complex_of_float
5.20 - GROUP
5.20.1 - make_group
5.21 - LABEL
5.21.1 - label_apply_token
5.21.2 - make_label
5.22 - LINK
5.22.1 - make_link
5.23 - LINKEXTERN
5.23.1 - make_linkextern
5.24 - LINKS
5.24.1 - make_links
5.25 - NAT
5.25.1 - nat_apply_token
5.25.2 - nat_cond
5.25.3 - computed_nat
5.25.4 - error_val
5.25.5 - make_nat
5.26 - NTEST
5.26.1 - ntest_apply_token
5.26.2 - ntest_cond
5.26.3 - equal
5.26.4 - greater_than
5.26.5 - greater_than_or_equal
5.26.6 - less_than
5.26.7 - less_than_or_equal
5.26.8 - not_equal
5.26.9 - not_greater_than
5.26.10 - not_greater_than_or_equal
5.26.11 - not_less_than
5.26.12 - not_less_than_or_equal
5.26.13 - less_than_or_greater_than
5.26.14 - not_less_than_and_not_greater_than
5.26.15 - comparable
5.26.16 - not_comparable
5.27 - OTAGEXP
5.27.1 - make_otagexp
5.28 - PROCPROPS
5.28.1 - procprops_apply_token
5.28.2 - procprops_cond
5.28.3 - add_procprops
5.28.4 - check_stack
5.28.5 - inline
5.28.6 - no_long_jump_dest
5.28.7 - untidy
5.28.8 - var_callees
5.28.9 - var_callers
5.29 - PROPS
5.30 - ROUNDING_MODE
5.30.1 - rounding_mode_apply_token
5.30.2 - rounding_mode_cond
5.30.3 - round_as_state
5.30.4 - to_nearest
5.30.5 - toward_larger
5.30.6 - toward_smaller
5.30.7 - toward_zero
5.31 - SHAPE
5.31.1 - shape_apply_token
5.31.2 - shape_cond
5.31.3 - bitfield
5.31.4 - bottom
5.31.5 - compound
5.31.6 - floating
5.31.7 - integer
5.31.8 - nof
5.31.9 - offset
5.31.10 - pointer
5.31.11 - proc
5.31.12 - top
5.32 - SIGNED_NAT
5.32.1 - signed_nat_apply_token
5.32.2 - signed_nat_cond
5.32.3 - computed_signed_nat
5.32.4 - make_signed_nat
5.32.5 - snat_from_nat
5.33 - SORTNAME
5.33.1 - access
5.33.2 - al_tag
5.33.3 - alignment_sort
5.33.4 - bitfield_variety
5.33.5 - bool
5.33.6 - error_treatment
5.33.7 - exp
5.33.8 - floating_variety
5.33.9 - foreign_sort
5.33.10 - label
5.33.11 - nat
5.33.12 - ntest
5.33.13 - procprops
5.33.14 - rounding_mode
5.33.15 - shape
5.33.16 - signed_nat
5.33.17 - string
5.33.18 - tag
5.33.19 - transfer_mode
5.33.20 - token
5.33.21 - variety
5.34 - STRING
5.34.1 - string_apply_token
5.34.2 - string_cond
5.34.3 - concat_string
5.34.4 - make_string
5.35 - TAG
5.35.1 - tag_apply_token
5.35.2 - make_tag
5.36 - TAGACC
5.36.1 - make_tagacc
5.37 - TAGDEC
5.37.1 - make_id_tagdec
5.37.2 - make_var_tagdec
5.37.3 - common_tagdec
5.38 - TAGDEC_PROPS
5.38.1 - make_tagdecs
5.39 - TAGDEF
5.39.1 - make_id_tagdef
5.39.2 - make_var_tagdef
5.39.3 - common_tagdef
5.40 - TAGDEF_PROPS
5.40.1 - make_tagdefs
5.41 - TAGSHACC
5.41.1 - make_tagshacc
5.42 - TDFBOOL
5.43 - TDFIDENT
5.44 - TDFINT
5.45 - TDFSTRING
5.46 - TOKDEC
5.46.1 - make_tokdec
5.47 - TOKDEC_PROPS
5.47.1 - make_tokdecs
5.48 - TOKDEF
5.48.1 - make_tokdef
5.49 - TOKDEF_PROPS
5.49.1 - make_tokdefs
5.50 - TOKEN
5.50.1 - token_apply_token
5.50.2 - make_tok
5.50.3 - use_tokdef
5.51 - TOKEN_DEFN
5.51.1 - token_definition
5.52 - TOKFORMALS
5.52.1 - make_tokformals
5.53 - TRANSFER_MODE
5.53.1 - transfer_mode_apply_token
5.53.2 - transfer_mode_cond
5.53.3 - add_modes
5.53.4 - overlap
5.53.5 - standard_transfer_mode
5.53.6 - trap_on_nil
5.53.7 - volatile
5.53.8 - complete
5.54 - UNIQUE
5.54.1 - make_unique
5.55 - UNIT
5.55.1 - make_unit
5.56 - VARIETY
5.56.1 - var_apply_token
5.56.2 - var_cond
5.56.3 - var_limits
5.56.4 - var_width
5.57 - VERSION_PROPS
5.57.1 - make_versions
5.58 - VERSION
5.58.1 - make_version
5.58.2 - user_info

5. Specification of TDF Constructs


5.1. ACCESS

Number of encoding bits: 4
Is coding extendable: yes

An ACCESS describes properties a variable or identity may have which may constrain or describe the ways in which the variable or identity is used.

Each construction which needs an ACCESS uses it in the form OPTION(ACCESS). If the option is absent the variable or identity has no special properties.

An ACCESS acts like a set of the values constant, long_jump_access, no_other_read, no_other_write, register, out_par, used_as_volatile, and visible. standard_access acts like the empty set. add_accesses is the set union operation.
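
Purely as an illustration of this set-like behaviour, an installer might represent an ACCESS internally as a bit set, with standard_access as the empty set and add_accesses as bitwise union. The names and representation below are invented for the sketch and are not prescribed by TDF:

	/* Illustrative sketch: one possible internal representation of an
	   ACCESS as a bit set.  The enumeration names are invented here;
	   further values could be added in the same way. */
	enum access_bit {
		ACC_CONSTANT         = 1 << 0,
		ACC_LONG_JUMP_ACCESS = 1 << 1,
		ACC_NO_OTHER_READ    = 1 << 2,
		ACC_NO_OTHER_WRITE   = 1 << 3,
		ACC_REGISTER         = 1 << 4,
		ACC_OUT_PAR          = 1 << 5,
		ACC_USED_AS_VOLATILE = 1 << 6,
		ACC_VISIBLE          = 1 << 7
	};

	typedef unsigned int access_set;

	static const access_set standard_access_set = 0;	/* the empty set */

	/* add_accesses behaves as set union, i.e. bitwise or. */
	static access_set add_accesses_set(access_set a1, access_set a2)
	{
		return a1 | a2;
	}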

5.1.1. access_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> ACCESS
The token is applied to the arguments encoded in the BITSTREAM token_args to give an ACCESS.

The notation param_sorts(token_value) is intended to mean the following. The token definition or token declaration for token_value gives the SORTs of its arguments in the SORTNAME component. The BITSTREAM in token_args consists of these SORTs in the given order. If no token declaration or definition exists in the CAPSULE, the BITSTREAM cannot be read.

5.1.2. access_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM ACCESS
	e2:		BITSTREAM ACCESS
		   -> ACCESS
control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.1.3. add_accesses

Encoding number: 3

	a1:		ACCESS
	a2:		ACCESS
		   -> ACCESS
A construction qualified with add_accesses has both ACCESS properties a1 and a2. This operation is associative and commutative.

5.1.4. constant

Encoding number: 4

		   -> ACCESS
Only a variable (not an identity) may be qualified with constant. A variable qualified with constant will retain its initialising value unchanged throughout its lifetime.
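
For example (an assumption about producer behaviour rather than a requirement of this specification), a C producer might give the constant property to the variable it generates for a const-qualified object, since that object keeps its initialising value for its whole lifetime:

	/* A C object whose corresponding TDF variable could reasonably be
	   qualified with constant: it retains its initialising value
	   throughout its lifetime. */
	static const int table_size = 128;

	int get_table_size(void)
	{
		return table_size;
	}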

5.1.5. long_jump_access

Encoding number: 5

		   -> ACCESS
An object must also have this property if it is to have a defined value when a long_jump returns to the procedure declaring the object.

5.1.6. no_other_read

Encoding number: 6

		   -> ACCESS
This property refers to a POINTER, p. It says that, within the lifetime of the declaration being qualified, there are no contents, contents_with_mode or move_some source accesses to any pointer not derived from p which overlap with any of the contents, contents_with_mode, assign, assign_with_mode or move_some accesses to pointers derived from p.

The POINTER being described is that obtained by applying obtain_tag to the TAG of the declaration. If the declaration is an identity, the SHAPE of the TAG will be a POINTER.

5.1.7. no_other_write

Encoding number: 7

		   -> ACCESS
This property refers to a POINTER, p. It says that, within the lifetime of the declaration being qualified, there are no assign, assign_with_mode or move_some destination accesses to any pointer not derived from p which overlap with any of the contents, contents_with_mode, assign, assign_with_mode or move_some accesses to pointers derived from p.

The POINTER being described is that obtained by applying obtain_tag to the TAG of the declaration. If the declaration is an identity, the SHAPE of the TAG will be a POINTER.
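
These two properties are loosely comparable to the guarantee expressed by C's restrict qualifier (standardised in C99); the fragment below is only an analogy and involves no claim about how any particular producer behaves:

	/* Illustrative analogy only.  Within copy_block every read of the
	   source storage goes through src and every write of the destination
	   storage goes through dst, so a producer could plausibly give the
	   TDF declarations for src and dst the no_other_read and
	   no_other_write properties respectively. */
	void copy_block(int *restrict dst, const int *restrict src, int n)
	{
		int i;
		for (i = 0; i < n; i++)
			dst[i] = src[i];
	}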

5.1.8. out_par

Encoding number: 8

		   -> ACCESS
An object qualified by out_par will be an output parameter in a make_general_proc construct. This will indicate that the final value of the parameter is required in the postlude part of an apply_general_proc of this procedure.

5.1.9. preserve

Encoding number: 9

		   -> ACCESS
This property refers to a global object. It says that the object will be included in the final program, whether or not all possible accesses to that object are optimised away; for example by inlining all possible uses of a procedure object.

5.1.10. register

Encoding number: 10

		   -> ACCESS
Indicates that an object with this property is frequently used. This can be taken as a recommendation to place it in a register.

5.1.11. standard_access

Encoding number: 11

		   -> ACCESS
An object qualified as having standard_access has normal (i.e. no special) access properties.

5.1.12. used_as_volatile

Encoding number: 12

		   -> ACCESS
An object qualified as having used_as_volatile will be used in a move_some, contents_with_mode or an assign_with_mode construct with TRANSFER_MODE volatile.

5.1.13. visible

Encoding number: 13

		   -> ACCESS
An object qualified as visible may be accessed when the procedure in which it is declared is not the current procedure. A TAG must have this property if it is to be used by env_offset.


5.2. AL_TAG

Number of encoding bits: 1
Is coding extendable: yes
Linkable entity identification: alignment

AL_TAGs name ALIGNMENTs. They are used so that circular definitions can be written in TDF. However, because of the definition of alignments, intrinsic circularities cannot occur.

For example, the following equation has a circular form x = alignment(pointer(alignment(x))) and it or a similar equation might occur in TDF. But since alignment(pointer(a)) is {pointer} for any a, this reduces to x = {pointer}.
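
A concrete source of such an equation (an illustration only, not part of the specification) is a self-referential C structure: its alignment refers, through a pointer member, to the alignment of the structure itself, but since the alignment of any pointer is just {pointer} the circularity disappears:

	/* A self-referential C type.  The ALIGNMENT equation for struct node
	   is circular in form, because of the struct node * member, but not
	   in substance: alignment(pointer(...)) is simply {pointer}, so the
	   equation reduces to the union of {pointer} and the alignment of int. */
	struct node {
		int value;
		struct node *next;
	};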

5.2.1. al_tag_apply_token

Encoding number: 2

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> AL_TAG
The token is applied to the arguments encoded in the BITSTREAM token_args to give an AL_TAG.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.2.2. make_al_tag

Encoding number: 1

	al_tagno:	TDFINT
		   -> AL_TAG
make_al_tag constructs an AL_TAG identified by al_tagno.


5.3. AL_TAGDEF

Number of encoding bits: 1
Is coding extendable: yes

An AL_TAGDEF gives the definition of an AL_TAG for incorporation into an AL_TAGDEF_PROPS.

5.3.1. make_al_tagdef

Encoding number: 1

	t:		TDFINT
	a:		ALIGNMENT
		   -> AL_TAGDEF
The AL_TAG identified by t is defined to stand for the ALIGNMENT a. All the AL_TAGDEFs in a CAPSULE must be considered together as a set of simultaneous equations defining ALIGNMENT values for the AL_TAGs. No order is imposed on the definitions.

In any particular CAPSULE the set of equations may be incomplete, but a CAPSULE which is being translated into code will have a set of equations which defines all the AL_TAGs which it uses.

The result of the evaluation of the control argument of any x_cond construction (e.g. alignment_cond) used in an AL_TAGDEF shall be independent of any AL_TAGs used in the control. Simultaneous equations defining ALIGNMENTs can then always be solved.

See Circular types in languages.


5.4. AL_TAGDEF_PROPS

Number of encoding bits: 0
Is coding extendable: no
Unit identification: aldef

5.4.1. make_al_tagdefs

Encoding number: 0

	no_labels:	TDFINT
	tds:		SLIST(AL_TAGDEF)
		   -> AL_TAGDEF_PROPS
no_labels is the number of local LABELs used in tds. tds is a list of AL_TAGDEFs which define the bindings for al_tags.


5.5. ALIGNMENT

Number of encoding bits: 4
Is coding extendable: yes

An ALIGNMENT gives information about the layout of data in memory and hence is a parameter for the POINTER and OFFSET SHAPEs (see Memory Model). This information consists of a set of elements.

The possible values of the elements in such a set are proc, code, pointer, offset, all VARIETYs, all FLOATING_VARIETYs and all BITFIELD_VARIETYs. The sets are written here as, for example, {pointer, proc} meaning the set containing pointer and proc.

In addition, there are "special" ALIGNMENTs alloca_alignment, callers_alignment, callees_alignment, locals_alignment and var_param_alignment. Each of these is considered to be a set which includes all of the "ordinary" ALIGNMENTs above.

There is a function, alignment, which can be applied to a SHAPE to give an ALIGNMENT (see the definition below). The interpretation of a POINTER to an ALIGNMENT, a, is that it can serve as a POINTER to any SHAPE, s, such that alignment(s) is a subset of the set a.

So given a POINTER({proc, pointer}) it is permitted to assign a PROC or a POINTER to it, or indeed a compound containing only PROCs and POINTERs. This permission is valid only in respect of the space being of the right kind; it may or may not be big enough for the data.

The most usual use for ALIGNMENT is to ensure that addresses of int values are aligned on 4-byte boundaries, float values are aligned on 4-byte boundaries, doubles on 8-byte boundaries etc., and whatever may be implied by the definitions of the machines and languages involved.

In the specification the phrase "a will include b" where a and b are ALIGNMENTs, means that the set b will be a subset of a (or equal to a).

5.5.1. alignment_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> ALIGNMENT
The token is applied to the arguments encoded in the BITSTREAM token_args to give an ALIGNMENT.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.5.2. alignment_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM ALIGNMENT
	e2:		BITSTREAM ALIGNMENT
		   -> ALIGNMENT
control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.5.3. alignment

Encoding number: 3

	sha:		SHAPE
		   -> ALIGNMENT
The alignment construct is defined as follows:
  • If sha is PROC then the resulting ALIGNMENT is {proc}.
  • If sha is INTEGER(v) then the resulting ALIGNMENT is {v}.
  • If sha is FLOATING(v) then the resulting ALIGNMENT is {v}.
  • If sha is BITFIELD(v) then the resulting ALIGNMENT is {v}.
  • If sha is TOP the resulting ALIGNMENT is {} - the empty set.
  • If sha is BOTTOM the resulting ALIGNMENT is undefined.
  • If sha is POINTER(x) the resulting ALIGNMENT is {pointer}.
  • If sha is OFFSET(x, y) the resulting ALIGNMENT is {offset}.
  • If sha is NOF(n, s) the resulting ALIGNMENT is alignment(s).
  • If sha is COMPOUND(EXP OFFSET(x, y)) then the resulting ALIGNMENT is x.

5.5.4. alloca_alignment

Encoding number: 4

		   -> ALIGNMENT
Delivers the ALIGNMENT of POINTERs produced from local_alloc.

5.5.5. callees_alignment

Encoding number: 5

	var:		BOOL
		   -> ALIGNMENT
If var is true the ALIGNMENT is that of callee parameters qualified by the PROCPROPS var_callees. If var is false, the ALIGNMENT is that of callee parameters not qualified by PROCPROPS var_callees.

Delivers the base ALIGNMENT of OFFSETs from a frame-pointer to a CALLEE parameter. Values of such OFFSETs can only be produced by env_offset applied to CALLEE parameters, or offset arithmetic operations applied to existing OFFSETs.

5.5.6. callers_alignment

Encoding number: 6

	var:		BOOL
		   -> ALIGNMENT
If var is true the ALIGNMENT is that of caller parameters qualified by the PROCPROPS var_callers. If var is false, the ALIGNMENT is that of caller parameters not qualified by PROCPROPS var_callers.

Delivers the base ALIGNMENT of OFFSETs from a frame-pointer to a CALLER parameter. Values of such OFFSETs can only be produced by env_offset applied to CALLER parameters, or offset arithmetic operations applied to existing OFFSETs.

5.5.7. code_alignment

Encoding number: 7

		   -> ALIGNMENT
Delivers {code}, the ALIGNMENT of the POINTER produced by make_local_lv.

5.5.8. locals_alignment

Encoding number: 8

		   -> ALIGNMENT
Delivers the base ALIGNMENT of OFFSETs from a frame-pointer to a value defined by variable or identify. Values of such OFFSETs can only be produced by env_offset applied to TAGs so defined, or offset arithmetic operations applied to existing OFFSETs.

5.5.9. obtain_al_tag

Encoding number: 9

	at:		AL_TAG
		   -> ALIGNMENT
obtain_al_tag produces the ALIGNMENT with which the AL_TAG at is bound.

5.5.10. parameter_alignment

Encoding number: 10

	sha:		SHAPE
		   -> ALIGNMENT
Delivers the ALIGNMENT of a procedure parameter whose SHAPE is sha.

5.5.11. unite_alignments

Encoding number: 11

	a1:		ALIGNMENT
	a2:		ALIGNMENT
		   -> ALIGNMENT
unite_alignments produces the alignment at which all the members of the ALIGNMENT sets a1 and a2 can be placed - in other words the ALIGNMENT set which is the union of a1 and a2.

5.5.12. var_param_alignment

Encoding number: 12

		   -> ALIGNMENT
Delivers the ALIGNMENT used in the var_param argument of make_proc.


5.6. BITFIELD_VARIETY

Number of encoding bits: 2
Is coding extendable: yes

These describe runtime bitfield values. The intention is that these values are usually kept in memory locations which need not be aligned on addressing boundaries.

There is no limit on the size of bitfield values in TDF, but an installer may specify limits. See Representing bitfields and Permitted limits.

5.6.1. bfvar_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> BITFIELD_VARIETY
The token is applied to the arguments encoded in the BITSTREAM token_args to give a BITFIELD_VARIETY.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.6.2. bfvar_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM BITFIELD_VARIETY
	e2:		BITSTREAM BITFIELD_VARIETY
		   -> BITFIELD_VARIETY
control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.6.3. bfvar_bits

Encoding number: 3

	issigned:	BOOL
	bits:		NAT
		   -> BITFIELD_VARIETY
bfvar_bits constructs a BITFIELD_VARIETY describing a pattern of bits bits. If issigned is true, the pattern is considered to be a twos-complement signed number: otherwise it is considered to be unsigned.
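
As an illustration (an assumption about how a C producer might use the construct, not a requirement), the bit-field members below could be described by bfvar_bits(true, 3) and bfvar_bits(false, 5) respectively:

	/* Possible C source for bitfield values: a signed 3-bit pattern and
	   an unsigned 5-bit pattern, corresponding in this illustration to
	   bfvar_bits(true, 3) and bfvar_bits(false, 5). */
	struct flags {
		signed int   level : 3;	/* twos-complement signed pattern */
		unsigned int mask  : 5;	/* unsigned pattern */
	};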


5.7. BITSTREAM

A BITSTREAM consists of an encoding of any number of bits. This encoding is such that any program reading TDF can determine how to skip over it. To read it meaningfully extra knowledge of what it represents may be needed.

A BITSTREAM is used, for example, to supply parameters in a TOKEN application. If there is a definition of this TOKEN available, this will provide the information needed to decode the bitstream.

See The TDF encoding.


5.8. BOOL

Number of encoding bits: 3
Is coding extendable: yes

A BOOL is a piece of TDF which can take two values, true or false.

5.8.1. bool_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> BOOL
The token is applied to the arguments encoded in the BITSTREAM token_args to give a BOOL.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.8.2. bool_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM BOOL
	e2:		BITSTREAM BOOL
		   -> BOOL
control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.8.3. false

Encoding number: 3

		   -> BOOL
false produces a false BOOL.

5.8.4. true

Encoding number: 4

		   -> BOOL
true produces a true BOOL.


5.9. BYTESTREAM

A BYTESTREAM is analogous to a BITSTREAM, but is encoded to permit fast copying.

See The TDF encoding.


5.10. CALLEES

Number of encoding bits: 2
Is coding extendable: yes

This is an auxiliary SORT used in calling procedures by apply_general_proc and tail_call to provide their actual callee parameters.

5.10.1. make_callee_list

Encoding number: 1

	args:		LIST(EXP)
		   -> CALLEES
The list of EXPs, args, is evaluated in any interleaved order and the resulting list of values forms the actual callee parameters of the call.

5.10.2. make_dynamic_callees

Encoding number: 2

	ptr:		EXP POINTER(x)
	sze:		EXP OFFSET(x, y)
		   -> CALLEES
The value of size sze pointed at by ptr forms the actual callee parameters of the call.

The CALLEES value is intended to refer to a sequence of zero or more callee parameters. x will include parameter_alignment(s) for each s that is the SHAPE of an intended callee parameter. The value addressed by ptr may be produced in one of two ways. It may be produced as a COMPOUND SHAPE value in the normal sense of a structure, whose successive elements will be used to generate the sequence of callee parameters. In this case, each element in the sequence of SHAPE s must additionally be padded to parameter_alignment(s). Alternatively, ptr may address the callee parameters of an already activated procedure, by referring to the first of the sequence. sze will be equivalent to shape_offset(c) where c is the COMPOUND SHAPE just described.

The call involved (i.e. apply_general_proc or tail_call) must have a var_callees PROCPROPS.

5.10.3. same_callees

Encoding number: 3

		   -> CALLEES
The callee parameters of the call are the same as those of the current procedure.


5.11. CAPSULE

Number of encoding bits: 0
Is coding extendable: no

A CAPSULE is an independent piece of TDF. There is only one construction, make_capsule.

5.11.1. make_capsule

Encoding number: 0

	prop_names:	SLIST(TDFIDENT)
	cap_linking:	SLIST(CAPSULE_LINK)
	ext_linkage:	SLIST(EXTERN_LINK)
	groups:		SLIST(GROUP)
		   -> CAPSULE
make_capsule brings together UNITs and linking and naming information. See The Overall Structure.

The elements of the list, prop_names, correspond one-to-one with the elements of the list, groups. The element of prop_names is the unit identification of all the UNITs in the corresponding GROUP. See PROPS. A CAPSULE need not contain all the kinds of UNIT.

It is intended that new kinds of PROPS with new unit identifications can be added to the standard in a purely additive fashion, either to form a new standard or for private purposes.

The elements of the list, cap_linking, correspond one-to-one with the elements of the list, ext_linkage. The element of cap_linking gives the linkable entity identification for all the LINKEXTERNs in the element of ext_linkage. It also gives the number of CAPSULE level linkable entities having that identification.

The elements of the list, cap_linking, also correspond one-to-one with the elements of the lists called local_vars in each of the make_unit constructions for the UNITs in groups. The element of local_vars gives the number of UNIT level linkable entities having the identification in the corresponding member of cap_linking.

It is intended that new kinds of linkable entity can be added to the standard in a purely additive fashion, either to form a new standard or for private purposes.

ext_linkage provides a list of lists of LINKEXTERNs. These LINKEXTERNs specify the associations between the names to be used outside the CAPSULE and the linkable entities by which the UNITs make objects available within the CAPSULE.

The list, groups, provides the non-linkage information of the CAPSULE.


5.12. CAPSULE_LINK

Number of encoding bits: 0
Is coding extendable: no

An auxiliary SORT which gives the number of linkable entities of a given kind at CAPSULE level. It is used only in make_capsule.

5.12.1. make_capsule_link

Encoding number: 0

	sn:		TDFIDENT
	n:		TDFINT
		   -> CAPSULE_LINK
n is the number of CAPSULE level linkable entities (numbered from 0 to n-1) of the kind given by sn. sn corresponds to the linkable entity identification.


5.13. CASELIM

Number of encoding bits: 0
Is coding extendable: no

An auxiliary SORT which provides lower and upper bounds and the LABEL destination for the case construction.

5.13.1. make_caselim

Encoding number: 0

	branch:		LABEL
	lower:		SIGNED_NAT
	upper:		SIGNED_NAT
		   -> CASELIM
Makes a triple of destination and limits. The case construction uses a list of CASELIMs. If the control variable of the case lies between lower and upper, control passes to branch.


5.14. ERROR_CODE

Number of encoding bits: 2
Is coding extendable: yes

5.14.1. nil_access

Encoding number: 1

		   -> ERROR_CODE
Delivers the ERROR_CODE arising from an attempt to access a nil pointer in an operation with TRANSFER_MODE trap_on_nil.

5.14.2. overflow

Encoding number: 2

		   -> ERROR_CODE
Delivers the ERROR_CODE arising from a numerical exceptional result in an operation with ERROR_TREATMENT trap(overflow).

5.14.3. stack_overflow

Encoding number: 3

		   -> ERROR_CODE
Delivers the ERROR_CODE arising from a stack overflow in the call of a procedure defined with PROCPROPS check_stack.


5.15. ERROR_TREATMENT

Number of encoding bits: 3
Is coding extendable: yes

These values describe the way to handle various forms of error which can occur during the evaluation of operations.

It is expected that additional ERROR_TREATMENTs will be needed.

5.15.1. errt_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> ERROR_TREATMENT
The token is applied to the arguments encoded in the BITSTREAM token_args to give an ERROR_TREATMENT.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.15.2. errt_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM ERROR_TREATMENT
	e2:		BITSTREAM ERROR_TREATMENT
		   -> ERROR_TREATMENT
control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.15.3. continue

Encoding number: 3

		   -> ERROR_TREATMENT
If an operation with a continue ERROR_TREATMENT causes an error, some value of the correct SHAPE shall be delivered. This value shall have the same properties as those specified in make_value.

5.15.4. error_jump

Encoding number: 4

	lab:		LABEL
		   -> ERROR_TREATMENT
error_jump produces an ERROR_TREATMENT which requires that control be passed to lab if it is invoked. lab will be in scope.

If a construction has an error_jump ERROR_TREATMENT and the jump is taken, the canonical order specifies only that the jump occurs after evaluating the construction. It is not specified how many further constructions are evaluated.

This rule implies that a further construction is needed to guarantee that errors have been processed. This is not yet included. The effect of nearby procedure calls or exits also needs definition.

5.15.5. trap

Encoding number: 5

	trap_list:	LIST(ERROR_CODE)
		   -> ERROR_TREATMENT
The list of ERROR_CODEs in trap_list specifies a set of possible exceptional behaviours. If any of these occur in a construction with ERROR_TREATMENT trap, the TDF exception handling is invoked (see section 7.8).

The observations on canonical ordering in error_jump apply equally here.

5.15.6. wrap

Encoding number: 6

		   -> ERROR_TREATMENT
wrap is an ERROR_TREATMENT which will only be used in constructions with integer operands and delivering EXP INTEGER(v) where either the lower bound of v is zero or the construction is not one of mult, power, div0, div1, div2, rem0, rem1, rem2. The result will be evaluated and any bits in the result lying outside the representing VARIETY will be discarded (see Representing integers).
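
The effect is the familiar modular behaviour of unsigned arithmetic in C. The fragment below is only an analogy, and assumes for the sake of the example that the representing VARIETY occupies 8 bits:

	/* Analogy for the wrap ERROR_TREATMENT, assuming an 8-bit
	   representing VARIETY: bits outside the representation are simply
	   discarded, as with C unsigned arithmetic. */
	#include <stdio.h>

	int main(void)
	{
		unsigned char a = 200, b = 100;
		unsigned char sum = (unsigned char)(a + b);	/* 300 wraps to 44 */

		printf("%u\n", (unsigned)sum);
		return 0;
	}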

5.15.7. impossible

Encoding number: 7

		   -> ERROR_TREATMENT
impossible is an ERROR_TREATMENT which means that this error will not occur in the construct concerned.

impossible is possibly a misnomer. If an error occurs the result is undefined.


5.16. EXP

Number of encoding bits: 7
Is coding extendable: yes

EXPs are pieces of TDF which are translated into program. EXP is by far the richest SORT. There are few primitive EXPs: most of the constructions take arguments which are a mixture of EXPs and other SORTs. There are constructs delivering EXPs that correspond to the declarations, program structure, procedure calls, assignments, pointer manipulation, arithmetic operations, tests etc. of programming languages.

5.16.1. exp_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> EXP x
The token is applied to the arguments encoded in the BITSTREAM token_args to give an EXP.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.16.2. exp_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM EXP x
	e2:		BITSTREAM EXP y
		   -> EXP (control ? x : y)
control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.16.3. abs

Encoding number: 3

	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
The absolute value of the result produced by arg1 is delivered.

If the result cannot be expressed in the VARIETY being used to represent v, an overflow error is caused and is handled in the way specified by ov_err.

5.16.4. add_to_ptr

Encoding number: 4

	arg1:		EXP POINTER(x)
	arg2:		EXP OFFSET(y, z)
		   -> EXP POINTER(z)
arg1 is evaluated, giving p, and arg2 is evaluated and the results are added to produce the answer. The result is derived from the pointer delivered by arg1. The intention is to produce a POINTER displaced from the argument POINTER by the given amount.

x will include y.

arg1 may deliver a null POINTER. In this case the result is derived from a null POINTER which counts as an original POINTER. Further OFFSETs may be added to the result, but the only other useful operation on the result of adding a number of OFFSETs to a null POINTER is to subtract_ptrs a null POINTER from it.

The result will be less than or equal (in the sense of pointer_test) to the result of applying add_to_ptr to the original pointer from which p is derived and the size of the space allocated for the original pointer.

In the simple representation of POINTER arithmetic (see Memory Model) add_to_ptr is represented by addition. The constraint "x includes y" ensures that no padding has to be inserted in this case.
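
In C terms (an informal analogy only, not a statement about any particular producer), add_to_ptr corresponds to adding an offset to a pointer, as in array indexing:

	/* Informal C analogy for add_to_ptr.  The addition below displaces an
	   original pointer by an offset of two ints; the example assumes that
	   base addresses an array of at least three ints, so the result stays
	   within the space from which base is derived, as the rules above
	   require. */
	int third_element(int *base)
	{
		int *p = base + 2;	/* analogous to add_to_ptr(base, 2 ints) */
		return *p;		/* analogous to contents */
	}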

5.16.5. and

Encoding number: 5

	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
The arguments are evaluated producing integer values of the same VARIETY, v. The result is the bitwise and of the two values in the representing VARIETY. The result is delivered with the same SHAPE as the arguments.

See Representing integers.

5.16.6. apply_proc

Encoding number: 6

	result_shape:	SHAPE
	p:		EXP PROC
	params:		LIST(EXP)
	var_param:	OPTION(EXP)
		   -> EXP result_shape
p, params and var_param (if present) are evaluated in any interleaved order. The procedure, p, is applied to the parameters. The result of the procedure call, which will have result_shape, is delivered as the result of the construction.

The canonical order of evaluation is as if the definition were in-lined. That is, the actual parameters are evaluated interleaved in any order and used to initialise variables which are identified by the formal parameters during the evaluation of the procedure body. When this is complete the body is evaluated. So apply_proc is evaluated like a variable construction, and obeys similar rules for order of evaluation.

If p delivers a null procedure the effect is undefined.

var_param is intended to communicate parameters which vary in SHAPE from call to call. Access to these parameters during the procedure is performed by using OFFSET arithmetic. Note that it is necessary to place these values on var_param_alignment because of the definition of make_proc.

The optional var_param should not be confused with variable argument lists in the C (<stdarg.h> or <varargs.h>) sense, which are communicated by extending the params list. This is discussed further in section 7.9. If the number of arguments in the params list differs from the number of elements in the params_intro of the corresponding make_proc, then var_param must not be present.

All calls to the same procedure will yield results of the same SHAPE.

For notes on the intended implementation of procedures see section 7.9.

5.16.7. apply_general_proc

Encoding number: 7

	result_shape:	SHAPE
	prcprops:	OPTION(PROCPROPS)
	p:		EXP PROC
	callers_intro:	LIST(OTAGEXP)
	callee_pars:	CALLEES
	postlude:	EXP TOP
		   -> EXP result_shape
p, callers_intro and callee_pars are evaluated in any order. The procedure, p, is applied to the parameters. The result of the procedure call, which will have result_shape, is delivered as the result of the construction.

If p delivers a null procedure the effect is undefined.

Any TAG introduced by an OTAGEXP in callers_intro is available in postlude which will be evaluated after the application.

postlude will not contain any local_allocs or calls of procedures with untidy returns. If prcprops includes untidy, postlude will be make_top.

The canonical order of evaluation is as if the definition of p were inlined in a manner dependent on prcprops.

If none of the PROCPROPS var_callers, var_callees and check_stack are present the inlining is as follows, supposing that P is the body of the definition of p:

Let Ri be the value of the EXP of the ith OTAGEXP in callers_intro and Ti be its TAG (if it is present). Let Ei be the ith value in callee_pars.
Let ri be the ith formal caller parameter TAG of p.
Let ei be the ith formal callee parameter TAG of p.

Each Ri is used to initialise a variable which is identified by ri; there will be exactly as many Ri as ri. The scope of these variable definitions is a sequence consisting of three components - the identification of a TAG res with the result of a binding of P, followed by a binding of postlude, followed by an obtain_tag of res giving the result of the inlined procedure call.

The binding of P consists of using each Ei to initialise a variable identified with ei; there will be exactly as many Ei as ei. The scope of these variable definitions is P modified so that the first return or untidy_return encountered in P gives the result of the binding. If it ends with a return, any space generated by local_allocs within the binding is freed (in the sense of local_free) at this point. If it ends with untidy_return, no freeing will take place.

The binding of postlude consists of identifying each Ti (if present) with the contents of the variable identified by ri. The scope of these identifications is postlude.

If the PROCPROPS var_callers is present, the inlining process is modified by:
A compound variable is constructed, initialised to the Ri in order; the alignment and padding of each individual Ri will be given by an exact application of parameter_alignment on the SHAPE of Ri. Each ri is then identified with a pointer to the copy of Ri within the compound variable; there will be at least as many Ri as ri. The evaluation then continues as above, with the scope of these identifications being the sequence.

If the PROCPROPS var_callees is present, the inlining process is modified by:
The binding of P is done by generating (as if by local_alloc) a pointer to space for a compound value constructed from each Ei in order (just as for var_callers). Each ei is identified with a pointer to the copy of Ei within the generated space; there will be at least as many ei as Ei. P is evaluated within the scope of these identifications as before. Note that the generation of space for these callee parameters is a local_alloc within the binding of P, and hence will not be freed if P ends with an untidy_return.

5.16.8. assign

Encoding number: 8

	arg1:		EXP POINTER(x)
	arg2:		EXP y
		   -> EXP TOP
The value produced by arg2 will be put in the space indicated by arg1.

x will include alignment(y).

y will not be a BITFIELD.

If the space which the pointer indicates does not lie wholly within the space indicated by the original pointer from which it is derived, the effect is undefined.

If the value delivered by arg1 is a null pointer the effect is undefined.

See Overlapping and Incomplete assignment.

The constraint "x will include alignment(y)" ensures in the simple memory model that no change is needed to the POINTER.

5.16.9. assign_with_mode

Encoding number: 9

	md:		TRANSFER_MODE
	arg1:		EXP POINTER(x)
	arg2:		EXP y
		   -> EXP TOP
The value produced by arg2 will be put in the space indicated by arg1. The assignment will be carried out as specified by the TRANSFER_MODE (q.v.).

If md consists of standard_transfer_mode only, then assign_with_mode is the same as assign.

x will include alignment(y).

y will not be a BITFIELD.

If the space which the pointer indicates does not lie wholly within the space indicated by the original pointer from which it is derived, the effect is undefined.

If the value delivered by arg1 is a null pointer the effect is undefined.

See Overlapping and Incomplete assignment.

5.16.10. bitfield_assign

Encoding number: 10

	arg1:		EXP POINTER(x)
	arg2:		EXP OFFSET(y, z)
	arg3:		EXP BITFIELD(v)
		   -> EXP TOP
The value delivered by arg3 is assigned at a displacement given by arg2 from the pointer delivered by arg1.

x will include y and z will include v.

arg2, BITFIELD(v) will be variety-enclosed (see section 7.24).

5.16.11. bitfield_assign_with_mode

Encoding number: 11

	md:		TRANSFER_MODE
	arg1:		EXP POINTER(x)
	arg2:		EXP OFFSET(y, z)
	arg3:		EXP BITFIELD(v)
		   -> EXP TOP
The value delivered by arg3 is assigned at a displacement given by arg2 from the pointer delivered by arg1. The assignment will be carried out as specified by the TRANSFER_MODE (q.v.).

If md consists of standard_transfer_mode only, then bitfield_assign_with_mode is the same as bitfield_assign.

x will include y and z will include v.

arg2, BITFIELD(v) will be variety-enclosed (see section 7.24).

5.16.12. bitfield_contents

Encoding number: 12

	v:		BITFIELD_VARIETY
	arg1:		EXP POINTER(x)
	arg2:		EXP OFFSET(y, z)
		   -> EXP BITFIELD(v)
The bitfield of BITFIELD_VARIETY v, located at the displacement delivered by arg2 from the pointer delivered by arg1 is extracted and delivered.

x will include y and z will include v.

arg2, BITFIELD(v) will be variety_enclosed (see section 7.24).

5.16.13. bitfield_contents_with_mode

Encoding number: 13

	md:		TRANSFER_MODE
	v:		BITFIELD_VARIETY
	arg1:		EXP POINTER(x)
	arg2:		EXP OFFSET(y, z)
		   -> EXP BITFIELD(v)
The bitfield of BITFIELD_VARIETY v, located at the displacement delivered by arg2 from the pointer delivered by arg1 is extracted and delivered. The operation will be carried out as specified by the TRANSFER_MODE (q.v.).

If md consists of standard_transfer_mode only, then bitfield_contents_with_mode is the same as bitfield_contents.

x will include y and z will include v.

arg2, BITFIELD(v) will be variety_enclosed (see section 7.24).

5.16.14. case

Encoding number: 14

	exhaustive:	BOOL
	control:	EXP INTEGER(v)
	branches:	LIST(CASELIM)
		   -> EXP (exhaustive ? BOTTOM : TOP)
control is evaluated to produce an integer value, c. Then c is tested to see if it lies inclusively between lower and upper, for each element of branches. If this test succeeds, control passes to the label branch belonging to that CASELIM (see section 5.13). If c lies between no pair, the construct delivers a value of SHAPE TOP. The order in which the comparisons are made is undefined.

The sets of SIGNED_NATs in branches will be disjoint.

If exhaustive is true the value delivered by control will lie between one of the lower/upper pairs.
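
Informally, and only as an illustration of the intent, a C switch statement maps naturally onto case: each case label (or range of labels) becomes a CASELIM, and when there is a default branch the producer can use a non-exhaustive case whose TOP result falls through to the default code:

	/* A C switch that could plausibly be expressed with the case
	   construct: each case label becomes a CASELIM with lower == upper,
	   and because of the default branch the TDF case need not be
	   exhaustive - control that matches no CASELIM falls through (the
	   TOP result) to the default code.  This is an illustration, not a
	   description of any particular producer. */
	int classify(int c)
	{
		switch (c) {
		case 0:  return 10;
		case 1:  return 20;
		case 2:  return 30;
		default: return -1;
		}
	}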

5.16.15. change_bitfield_to_int

Encoding number: 15

	v:		VARIETY
	arg1:		EXP BITFIELD(bv)
		   -> EXP INTEGER(v)
arg1 is evaluated and converted to an INTEGER(v).

If arg1 exceeds the bounds of v, the effect is undefined.

5.16.16. change_floating_variety

Encoding number: 16

	flpt_err:	ERROR_TREATMENT
	r:		FLOATING_VARIETY
	arg1:		EXP FLOATING(f)
		   -> EXP FLOATING(r)
arg1 is evaluated and will produce a floating point value, fp. The value fp is delivered, changed to the representation of the FLOATING_VARIETY r.

r and f will either both be real or both be complex.

If there is a floating point error it is handled by flpt_err.

See Floating point errors.

5.16.17. change_variety

Encoding number: 17

	ov_err:		ERROR_TREATMENT
	r:		VARIETY
	arg1:		EXP INTEGER(v)
		   -> EXP INTEGER(r)
arg1 is evaluated and will produce an integer value, a. The value a is delivered, changed to the representation of the VARIETY r.

If a is not contained in the VARIETY being used to represent r, an overflow occurs and is handled according to ov_err.

5.16.18. change_int_to_bitfield

Encoding number: 18

	bv:		BITFIELD_VARIETY
	arg1:		EXP INTEGER(v)
		   -> EXP BITFIELD(bv)
arg1 is evaluated and converted to a BITFIELD(bv).

If arg1 exceeds the bounds of bv, the effect is undefined.

5.16.19. complex_conjugate

Encoding number: 19

	c:		EXP FLOATING(cv)
		   -> EXP FLOATING(cv)
Delivers the complex conjugate of c.

cv will be a complex floating variety.

5.16.20. component

Encoding number: 20

	sha:		SHAPE
	arg1:		EXP COMPOUND(EXP OFFSET(x, y))
	arg2:		EXP OFFSET(x, alignment(sha))
		   -> EXP sha
arg1 is evaluated to produce a COMPOUND value. The component of this value at the OFFSET given by arg2 is delivered. This will have SHAPE sha.

arg2 will be a constant and non-negative (see Constant evaluation).

If sha is a BITFIELD then arg2, sha will be variety_enclosed (see section 7.24).

5.16.21. concat_nof

Encoding number: 21

	arg1:		EXP NOF(n, s)
	arg2:		EXP NOF(m, s)
		   -> EXP NOF(n+m, s)
arg1 and arg2 are evaluated and their results concatenated. In the result the components derived from arg1 will have lower indices than those derived from arg2.

5.16.22. conditional

Encoding number: 22

	altlab_intro:	LABEL
	first:		EXP x
	alt:		EXP z
		   -> EXP (x LUB z)
first is evaluated. If first produces a result, f, this value is delivered as the result of the whole construct, and alt is not evaluated.

If goto(altlab_intro) or any other jump (including long_jump) to altlab_intro is obeyed during the evaluation of first, then the evaluation of first will stop, alt will be evaluated and its result delivered as the result of the construction.

The lifetime of altlab_intro is the evaluation of first. altlab_intro will not be used within alt.

The actual order of evaluation of the constituents shall be indistinguishable in all observable effects (apart from time) from evaluating all the obeyed parts of first before any obeyed part of alt. Note that this specifically includes any defined error handling.

For LUB see Least Upper Bound.

5.16.23. contents

Encoding number: 23

	s:		SHAPE
	arg1:		EXP POINTER(x)
		   -> EXP s
A value of SHAPE s will be extracted from the start of the space indicated by the pointer, and this is delivered.

x will include alignment(s).

s will not be a BITFIELD.

If the space which the pointer indicates does not lie wholly within the space indicated by the original pointer from which it is derived, the effect is undefined.

If the value delivered by arg1 is a null pointer the effect is undefined.

The constraint "x will include alignment(s)" ensures in the simple memory model that no change is needed to the POINTER.

5.16.24. contents_with_mode

Encoding number: 24

	md:		TRANSFER_MODE
	s:		SHAPE
	arg1:		EXP POINTER(x)
		   -> EXP s
A value of SHAPE s will be extracted from the start of the space indicated by the pointer, and this is delivered. The operation will be carried out as specified by the TRANSFER_MODE (q.v.).

If md consists of standard_transfer_mode only, then contents_with_mode is the same as contents.

x will include alignment(s).

s will not be a BITFIELD.

If the space which the pointer indicates does not lie wholly within the space indicated by the original pointer from which it is derived, the effect is undefined.

If the value delivered by arg1 is a null pointer the effect is undefined.

5.16.25. current_env

Encoding number: 25

		   -> EXP POINTER(fa)
A value of SHAPE POINTER(fa) is created and delivered. It gives access to the variables, identities and parameters in the current procedure activation which are declared as having ACCESS visible.

If the immediately enclosing procedure is defined by make_general_proc, then fa is the set union of locals_alignment and the alignments of the kinds of parameters defined. That is to say, if there are caller parameters, then the alignment includes callers_alignment(x) where x is true if and only if the PROCPROPS var_callers is present; if there are callee parameters, the alignment includes callees_alignment(x) where x is true if and only if the PROCPROPS var_callees is present.

If the immediately enclosing procedure is defined by make_proc, then fa = { locals_alignment, callers_alignment(false) }.

If an OFFSET produced by env_offset is added to a POINTER produced by current_env from an activation of the procedure which contains the declaration of the TAG used by env_offset, then the result is an original POINTER, notwithstanding the normal rules for add_to_ptr (see Original pointers).

If an OFFSET produced by env_offset is added to such a pointer from an inappropriate procedure the effect is undefined.

5.16.26. div0

Encoding number: 26

	div_by_0_err:	ERROR_TREATMENT
	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. Either the value a D1 b or the value a D2 b is delivered as the result of the construct, with the same SHAPE as the arguments. Different occurrences of div0 in the same capsule can use D1 or D2 independently.

If b is zero a div_by_zero error occurs and is handled by div_by_0_err.

If b is not zero and the result cannot be expressed in the VARIETY being used to represent v an overflow occurs and is handled by ov_err.

Producers may assume that shifting and div0 by a constant which is a power of two yield equally good code.

See Division and modulus for the definitions of D1, D2, M1 and M2.

5.16.27. div1

Encoding number: 27

	div_by_0_err:	ERROR_TREATMENT
	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. The value a D1 b is delivered as the result of the construct, with the same SHAPE as the arguments.

If b is zero a div_by_zero error occurs and is handled by div_by_0_err.

If b is not zero and the result cannot be expressed in the VARIETY being used to represent v an overflow occurs and is handled by ov_err.

Producers may assume that shifting and div1 by a constant which is a power of two yield equally good code.

See Division and modulus for the definitions of D1, D2, M1 and M2.

5.16.28. div2

Encoding number: 28

	div_by_0_err:	ERROR_TREATMENT
	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. The value a D2 b is delivered as the result of the construct, with the same SHAPE as the arguments.

If b is zero a div_by_zero error occurs and is handled by div_by_0_err.

If b is not zero and the result cannot be expressed in the VARIETY being used to represent v an overflow occurs and is handled by ov_err.

Producers may assume that shifting and div2 by a constant which is a power of two yield equally good code if the lower bound of v is zero.

See Division and modulus for the definitions of D1, D2, M1 and M2.

5.16.29. env_offset

Encoding number: 29

	fa:		ALIGNMENT
	y:		ALIGNMENT
	t:		TAG x
		   -> EXP OFFSET(fa, y)
t will be the tag of a variable, identify or procedure parameter with the visible property within a procedure defined by make_general_proc or make_proc.

If it is defined in a make_general_proc, let P be its associated PROCPROPS; otherwise let P be the PROCPROPS {locals_alignment, callers_alignment(false)}.

If t is the TAG of a variable or identify, fa will contain locals_alignment; if it is a caller parameter fa will contain a callers_alignment(b) where b is true if and only if P contains var_callers; if it is a callee parameter fa will contain a callees_alignment(b) where b is true if and only if P contains var_callees.

If t is the TAG of a variable or parameter, the result is the OFFSET of its position, within any procedure environment which derives from the procedure containing the declaration of the variable or parameter, relative to its environment pointer. In this case x will be POINTER(y).

If t is the TAG of an identify, the result will be an OFFSET of space which holds the value. This pointer will not be used to alter the value. In this case y will be alignment(x).

See section 7.10.

5.16.30. env_size

Encoding number: 30

	proctag:	TAG PROC
		   -> EXP OFFSET(locals_alignment, {})
Delivers an OFFSET of a space sufficient to contain all the variables and identifications, explicit or implicit in the procedure identified by proctag. This will not include the space required for any local_allocs or procedure calls within the procedure.

proctag will be defined in the current CAPSULE by a TAGDEF identification of a make_proc or a make_general_proc.

5.16.31. fail_installer

Encoding number: 31

	message:	STRING(k, n)
		   -> EXP BOTTOM
Any attempt to use this operation to produce code will result in a failure of the installation process. message will give information about the reason for this failure which should be passed to the installation manager.

5.16.32. float_int

Encoding number: 32

	flpt_err:	ERROR_TREATMENT
	f:		FLOATING_VARIETY
	arg1:		EXP INTEGER(v)
		   -> EXP FLOATING(f)
arg1 is evaluated to produce an integer value, which is converted to the representation of f and delivered.

If f is complex the real part of the result will be derived from arg1 and the imaginary part will be zero.

If there is a floating point error it is handled by flpt_err. See Floating point errors.

5.16.33. floating_abs

Encoding number: 33

	flpt_err:	ERROR_TREATMENT
	arg1:		EXP FLOATING(f)
		   -> EXP FLOATING(f)
arg1 is evaluated and will produce a floating point value, a, of the FLOATING_VARIETY, f. The absolute value of a is delivered as the result of the construct, with the same SHAPE as the argument.

Though floating_abs cannot produce an overflow it can give an invalid operand exception which is handled by flpt_err.

f will not be complex.

See also Floating point accuracy.

5.16.34. floating_div

Encoding number: 34

	flpt_err:	ERROR_TREATMENT
	arg1:		EXP FLOATING(f)
	arg2:		EXP FLOATING(f)
		   -> EXP FLOATING(f)
arg1 and arg2 are evaluated and will produce floating point values, a and b, of the same FLOATING_VARIETY, f. The value a/b is delivered as the result of the construct, with the same SHAPE as the arguments.

If there is a floating point error it is handled by flpt_err. See Floating point errors.

See also Floating point accuracy.

5.16.35. floating_minus

Encoding number: 35

	flpt_err:	ERROR_TREATMENT
	arg1:		EXP FLOATING(f)
	arg2:		EXP FLOATING(f)
		   -> EXP FLOATING(f)
arg1 and arg2 are evaluated and will produce floating point values, a and b, of the same FLOATING_VARIETY, f. The value a-b is delivered as the result of the construct, with the same SHAPE as the arguments.

If there is a floating point error it is handled by flpt_err. See Floating point errors.

See also Floating point accuracy.

5.16.36. floating_maximum

Encoding number: 36

	flpt_err:	ERROR_TREATMENT
	arg1:		EXP FLOATING(f)
	arg2:		EXP FLOATING(f)
		   -> EXP FLOATING(f)
The maximum of the values delivered by arg1 and arg2 is the result. f will not be complex.

If arg1 and arg2 are incomparable, flpt_err will be invoked.

See also Floating point accuracy.

5.16.37. floating_minimum

Encoding number: 37

	flpt_err:	ERROR_TREATMENT
	arg1:		EXP FLOATING(f)
	arg2:		EXP FLOATING(f)
		   -> EXP FLOATING(f)
The minimum of the values delivered by arg1 and arg2 is the result. f will not be complex.

If arg1 and arg2 are incomparable, flpt_err will be invoked.

See also Floating point accuracy.

5.16.38. floating_mult

Encoding number: 38

	flpt_err:	ERROR_TREATMENT
	arg1:		LIST(EXP)
		   -> EXP FLOATING(f)
The arguments, arg1, are evaluated producing floating point values all of the same FLOATING_VARIETY, f. These values are multiplied in any order and the result of this multiplication is delivered as the result of the construct, with the same SHAPE as the arguments.

If there is a floating point error it is handled by flpt_err. See Floating point errors.

Note that separate floating_mult operations cannot in general be combined, because rounding errors need to be controlled. The reason for allowing floating_mult to take a variable number of arguments is to make it possible to specify that a number of multiplications can be re-ordered.

If arg1 contains one element the result is the value of that element. There will be at least one element in arg1.

See also Floating point accuracy.

5.16.39. floating_negate

Encoding number: 39

	flpt_err:	ERROR_TREATMENT
	arg1:		EXP FLOATING(f)
		   -> EXP FLOATING(f)
arg1 is evaluated and will produce a floating point value, a, of the FLOATING_VARIETY, f. The value -a is delivered as the result of the construct, with the same SHAPE as the argument.

Though floating_negate cannot produce an overflow it can give an invalid operand exception which is handled by flpt_err.

See also Floating point accuracy.

5.16.40. floating_plus

Encoding number: 40

	flpt_err:	ERROR_TREATMENT
	arg1:		LIST(EXP)
		   -> EXP FLOATING(f)
The arguments, arg1, are evaluated producing floating point values, all of the same FLOATING_VARIETY, f. These values are added in any order and the result of this addition is delivered as the result of the construct, with the same SHAPE as the arguments.

If there is a floating point error it is handled by flpt_err. See Floating point errors.

Note that separate floating_plus operations cannot in general be combined, because rounding errors need to be controlled. The reason for allowing floating_plus to take a variable number of arguments is to make it possible to specify that a number of additions can be re-ordered.

If arg1 contains one element the result is the value of that element. There will be at least one element in arg1.

See also Floating point accuracy.

5.16.41. floating_power

Encoding number: 41

	flpt_err:	ERROR_TREATMENT
	arg1:		EXP FLOATING(f)
	arg2:		EXP INTEGER(v)
		   -> EXP FLOATING(f)
The result of arg1 is raised to the power given by arg2.

If there is a floating point error it is handled by flpt_err. See Floating point errors.

See also Floating point accuracy.

5.16.42. floating_test

Encoding number: 42

	prob:		OPTION(NAT)
	flpt_err:	ERROR_TREATMENT
	nt:		NTEST
	dest:		LABEL
	arg1:		EXP FLOATING(f)
	arg2:		EXP FLOATING(f)
		   -> EXP TOP
arg1 and arg2 are evaluated and will produce floating point values, a and b, of the same FLOATING_VARIETY, f. These values are compared using nt.

If f is complex then nt will be equal or not_equal.

If a nt b, this construction yields TOP. Otherwise control passes to dest.

If prob is present, prob/100 gives the probability that control will continue to the next construct (ie. not pass to dest). If prob is absent this probability is unknown.

If there is a floating point error it is handled by flpt_err. See Floating point errors.

See also Floating point accuracy.

5.16.43. goto

Encoding number: 43

	dest:		LABEL
		   -> EXP BOTTOM
Control passes to the EXP labelled dest. This construct will only be used where dest is in scope.

5.16.44. goto_local_lv

Encoding number: 44

	arg1:		EXP POINTER({code})
		   -> EXP BOTTOM
arg1 is evaluated. The label from which the value delivered by arg1 was created will be within its lifetime and this construction will be obeyed in the same activation of the same procedure as the creation of the POINTER({code}) by make_local_lv. Control passes to this activation of this LABEL.

If arg1 delivers a null POINTER the effect is undefined.

5.16.45. identify

Encoding number: 45

	opt_access:	OPTION(ACCESS)
	name_intro:	TAG x
	definition:	EXP x
	body:		EXP y
		   -> EXP y
definition is evaluated to produce a value, v. Then body is evaluated. During this evaluation, v is bound to name_intro. This means that inside body an evaluation of obtain_tag(name_intro) will produce the value, v.

The value delivered by identify is that produced by body.

The TAG given for name_intro will not be reused within the current UNIT. No rules for the hiding of one TAG by another are given: this will not happen. The lifetime of name_intro is the evaluation of body.

If opt_access contains visible, it means that the value must not be aliased while the procedure containing this declaration is not the current procedure. Hence if there are any copies of this value they will need to be refreshed when the procedure is returned to. The easiest implementation when opt_access is visible may be to keep the value in memory, but this is not a necessary requirement.

The order in which the constituents of definition and body are evaluated shall be indistinguishable in all observable effects (apart from time) from completely evaluating definition before starting body. See the note about order in sequence.

5.16.46. ignorable

Encoding number: 46

	arg1:		EXP x
		   -> EXP x
If the result of this construction is discarded, arg1 need not be evaluated, though evaluation is permitted. If the result is used it is the result of arg1.

5.16.47. imaginary_part

Encoding number: 47

	arg1:		EXP c
		   -> EXP FLOATING (float_of_complex(c))
c will be complex. Delivers the imaginary part of the value produced by arg1.

5.16.48. initial_value

Encoding number: 48

	init:		EXP s
		   -> EXP s
Any tag used as an argument of an obtain_tag in init will be global or defined within init.

All labels used in init will be defined within init.

init will be evaluated once only before any procedure application, other than those involved in this or other initial_value constructions, but after all load-time constant initialisations of TAGDEFs. The result of this evaluation is the value of the construction.

The order of evaluation of the different initial_values in a program is undefined.

See section 7.29.

5.16.49. integer_test

Encoding number: 49

	prob:		OPTION(NAT)
	nt:		NTEST
	dest:		LABEL
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP TOP
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. These values are compared using nt.

If a nt b, this construction yields TOP. Otherwise control passes to dest.

If prob is present, prob/100 gives the probability that control will continue to the next construct (ie. not pass to dest). If prob is absent this probability is unknown.
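
As an analogy only (not part of TDF): an installer targeting a GCC-like compiler might turn a prob of, say, 90 into a branch prediction hint such as __builtin_expect, marking the fall-through path as the likely one. The function name test_example is invented for this sketch:

	/* Analogy only: the fall-through arm corresponds to the construction
	   yielding TOP, the other arm to control passing to dest. */
	int test_example(int a, int b)
	{
		if (__builtin_expect(a < b, 1))
			return 1;	/* likely: continue to the next construct */
		return 0;		/* unlikely: control would pass to dest */
	}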

5.16.50. labelled

Encoding number: 50

	labs_intro:	LIST(LABEL)
	starter:	EXP x
	places:		LIST(EXP)
		   -> EXP w
The lists labs_intro and places shall have the same number of elements.

To evaluate the construction starter is evaluated. If its evaluation runs to completion producing a value, then this is delivered as the result of the whole construction. If a goto to one of the LABELs in labs_intro, or any other jump to one of these LABELs, is evaluated, then the evaluation of starter stops and the corresponding element of places is evaluated. In the canonical ordering all the operations which are evaluated from starter are completed before any from an element of places is started. If the evaluation of that element of places produces a result, this is the result of the construction.

If a jump to any of the labs_intro is obeyed then evaluation continues similarly. Such jumping may continue indefinitely, but if any of the places terminates, then the value it produces is the value delivered by the construction.

The SHAPE w is the LUB of x and all the places. See Least Upper Bound.

The actual order of evaluation of the constituents shall be indistinguishable in all observable effects (apart from time) from that described above. Note that this specifically includes any defined error handling.

The lifetime of each of the LABELs in labs_intro, is the evaluation of starter and all the elements of places.
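
A rough analogy in C (illustrative only; the function name classify is invented): the code before the labels plays the role of starter, and each labelled piece of code plays the role of an element of places.

	/* Rough analogy only: if the "starter" code runs to completion its
	   value is the result; a jump to one of the labels abandons it and
	   the corresponding labelled code supplies the result instead. */
	int classify(int n)
	{
		if (n < 0)
			goto negative;		/* ~ jump to a labs_intro LABEL */
		if (n == 0)
			goto zero;
		return 1;			/* starter ran to completion */
	negative:
		return -1;			/* first element of places */
	zero:
		return 0;			/* second element of places */
	}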

5.16.51. last_local

Encoding number: 51

	x:		EXP OFFSET(y, z)
		   -> EXP POINTER(alloca_alignment)
If the last use of local_alloc in the current activation of the current procedure was after the last use of local_free or local_free_all, then the value returned is the last POINTER allocated with local_alloc.

If the last use of local_free in the current activation of the current procedure was after the last use of local_alloc, then the result is the POINTER last allocated which is still active.

The result POINTER will have been created by local_alloc with the value of its arg1 equal to the value of x.

If the last use of local_free_all in the current activation of the current procedure was after the last use of local_alloc, or if there has been no use of local_alloc in the current activation of the current procedure, then the result is undefined.

The ALIGNMENT, alloca_alignment, includes the set union of all the ALIGNMENTs which can be produced by alignment from any SHAPE. See Special alignments.

5.16.52. local_alloc

Encoding number: 52

	arg1:		EXP OFFSET(x, y)
		   -> EXP POINTER(alloca_alignment)
The arg1 expression is evaluated and space is allocated sufficient to hold a value of the given size. The result is an original pointer to this space.

x will not consist entirely of bitfield alignments.

The initial contents of the space are not specified.

This allocation is as if on the stack of the current procedure, and the lifetime of the pointer ends when the current activation of the current procedure ends with a return, return_to_label or tail_call or if there is a long jump out of the activation. Any use of the pointer thereafter is undefined. Note the specific exclusion of the procedure ending with untidy_return; in this case the calling procedure becomes the current activation.

The uses of local_alloc within the procedure are ordered dynamically as they occur, and this order affects the meaning of local_free and last_local.

arg1 may be a zero OFFSET. In this case suppose the result is p. Then a subsequent use, in the same activation of the procedure, of

local_free(offset_zero(alloca_alignment), p)

will return the alloca stack to the state it was in immediately before the use of local_alloc.

Note that if a procedure which uses local_alloc is inlined, it may be necessary to use local_free to get the correct semantics.

See also section 7.12.

5.16.53. local_alloc_check

Encoding number: 53

	arg1:		EXP OFFSET(x, y)
		   -> EXP POINTER(alloca_alignment)

If the OFFSET arg1 can be accommodated within the limit of the local_alloc stack (see section 5.16.108), the action is precisely the same as local_alloc.

If not, normal action is stopped and a TDF exception is raised with ERROR_CODE stack_overflow.

5.16.54. local_free

Encoding number: 54

	a:		EXP OFFSET(x, y)
	p:		EXP POINTER(alloca_alignment)
		   -> EXP TOP
The POINTER, p, will be an original pointer to space allocated by local_alloc within the current call of the current procedure. It and all spaces allocated after it by local_alloc will no longer be used. This POINTER will have been created by local_alloc with the value of its arg1 equal to the value of a.

Any subsequent use of pointers to the spaces no longer used will be undefined.

5.16.55. local_free_all

Encoding number: 55

		   -> EXP TOP
Every space allocated by local_alloc within the current call of the current procedure will no longer be used.

Any use of a pointer to space allocated before this operation within the current call of the current procedure is undefined.

Note that if a procedure which uses local_free_all is inlined, it may be necessary to use local_free to get the correct semantics.

5.16.56. long_jump

Encoding number: 56

	arg1:		EXP POINTER(fa)
	arg2:		EXP POINTER({code})
		   -> EXP BOTTOM
arg1 will be a pointer produced by an application of current_env in a currently active procedure.

The frame produced by arg1 is reinstated as the current procedure. This frame will still be active. Evaluation recommences at the label given by arg2. This operation will only be used during the lifetime of that label.

Only TAGs declared to have long_jump_access will be defined at the re-entry.

If arg2 delivers a null POINTER({code}) the effect is undefined.

5.16.57. make_complex

Encoding number: 57

	c:		FLOATING_VARIETY
	arg1:		EXP FLOATING(f)
	arg2:		EXP FLOATING(f)
		   -> EXP FLOATING(c)
c will be complex and derived from the same parameters as f.

Delivers a complex number with arg1 delivering the real part and arg2 the imaginary.

5.16.58. make_compound

Encoding number: 58

	arg1:		EXP OFFSET(base, y)
	arg2:		LIST(EXP)
		   -> EXP COMPOUND(arg1)
Let the ith component (i starts at one) of arg2 be x[i]. The list may be empty.

The components x[2 * k] are values which are to be placed at OFFSETs given by x[2 * k - 1]. These OFFSETs will be constants and non-negative.

The OFFSET x[2 * k - 1] will have the SHAPE OFFSET(zk, alignment(shape(x[2 * k]))), where shape gives the SHAPE of the component and base includes zk.

arg1 will be a constant non-negative OFFSET, see offset_pad.

The values x[2 * k - 1] will be such that the components when in place either do not overlap or exactly coincide, in the sense that the OFFSETs are equal and the values have the same SHAPE. If they coincide the corresponding values x[2 * k] will have VARIETY SHAPEs and will be ored together.

The SHAPE of an x[2 * k] component can be TOP. In this case the component is evaluated, but no value is placed at the corresponding OFFSET.

If x[2 * k] is a BITFIELD then x[2 * k - 1], shape(x[2 * k]) will be variety-enclosed (see section 7.24).
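
For orientation only (an analogy, not part of the definition): when TDF is produced from C, the OFFSET components correspond roughly to field offsets within an aggregate and the value components to the initialisers placed at those offsets. The struct and field names below are invented for this sketch.

	#include <stddef.h>
	#include <stdio.h>

	/* Analogy only: x[2k - 1] ~ the offset of a field, x[2k] ~ the value
	   placed there, arg1 ~ the overall size of the compound. */
	struct pair { int a; double b; };

	int main(void)
	{
		printf("offsets %zu and %zu, size %zu\n",
		       offsetof(struct pair, a), offsetof(struct pair, b),
		       sizeof(struct pair));
		return 0;
	}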

5.16.59. make_floating

Encoding number: 59

	f:		FLOATING_VARIETY
	rm:		ROUNDING_MODE
	negative:	BOOL
	mantissa:	STRING(k, n)
	base:		NAT
	exponent:	SIGNED_NAT
		   -> EXP FLOATING(f)
f will not be complex.

mantissa will be a STRING of 8-bit integers, each of which is either 46 or is greater than or equal to 48. Those values, c, which lie between 48 and 63 will represent the digit c-48. A decimal point is represented by 46.

The BOOL negative determines the sign of the result, if true the result will be negative, if false, positive.

A floating point number, mantissa * base^exponent, is created and rounded to the representation of f as specified by rm. rm will not be round_as_state. mantissa is read as a sequence of digits to base base and may contain one point symbol.

base will be one of the numbers 2, 4, 8, 10, 16. Note that in base 16 the digit 10 is represented by the character number 58 etc.

The result will lie in f.
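
As a worked illustration (a sketch only; decode_floating is an invented helper, and the normative rounding by rm and the range check against f are omitted): the string is read digit by digit in the given base, the point symbol fixes the fractional scaling, and the result is then scaled by base^exponent and signed.

	#include <math.h>
	#include <stdio.h>

	/* Sketch only: character 46 is the point; a character c >= 48
	   represents the digit c - 48. */
	static double decode_floating(const unsigned char *s, int n,
	                              int negative, int base, int exponent)
	{
		double value = 0.0;
		int frac_digits = 0, seen_point = 0;

		for (int i = 0; i < n; i++) {
			if (s[i] == 46) {
				seen_point = 1;
				continue;
			}
			value = value * base + (s[i] - 48);
			if (seen_point)
				frac_digits++;
		}
		value = value * pow(base, exponent - frac_digits);
		return negative ? -value : value;
	}

	int main(void)
	{
		unsigned char m[] = { 51, 46, 49, 52 };	/* "3.14" */
		printf("%g\n", decode_floating(m, 4, 0, 10, 0));	/* 3.14 */
		return 0;
	}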

5.16.60. make_general_proc

Encoding number: 60

	result_shape:	SHAPE
	prcprops:	OPTION(PROCPROPS)
	caller_intro:	LIST(TAGSHACC)
	callee_intro:	LIST(TAGSHACC)
	body:		EXP BOTTOM
		   -> EXP PROC
Evaluation of make_general_proc delivers a PROC. When this procedure is applied to parameters using apply_general_proc, space is allocated to hold the actual values of the parameters caller_intro and callee_intro. The values produced by the actual parameters are used to initialise these spaces. Then body is evaluated. During this evaluation the TAGs in caller_intro and callee_intro are bound to original POINTERs to these spaces. The lifetime of these TAGs is the evaluation of body.

The SHAPE of body will be BOTTOM. caller_intro and callee_intro may be empty.

The TAGs introduced in the parameters will not be reused within the current UNIT.

The SHAPEs in the parameters specify the SHAPE of the corresponding TAGs.

The OPTION(ACCESS) (in caller_intro and callee_intro) specifies the ACCESS properties of the corresponding parameter, just as for a variable declaration.

In body the only TAGs which may be used as an argument of obtain_tag are those which are declared by identify or variable constructions in body and which are in scope, or TAGs which are declared by make_id_tagdef, make_var_tagdef or common_tagdef or are in caller_intro or callee_intro. If a make_proc occurs in body its TAGs are not in scope.

The argument of every return or untidy_return construction in body will have SHAPE result_shape. Every apply_general_proc using the procedure will specify the SHAPE of its result to be result_shape.

The presence or absence of each of the PROCPROPS var_callers, var_callees, check_stack and untidy in prcprops will be reflected in every apply_general_proc or tail_call on this procedure.

The definition of the canonical ordering of the evaluation of apply_general_proc gives the definition of these PROCPROPS.

If prcprops contains check_stack, a TDF exception will be raised if the static space required for the procedure call (in the sense of env_size) would exceed the limit given by set_stack_limit.

If prcprops contains no_long_jump_dest, the body of the procedure will never contain the destination label of a long_jump.

For notes on the intended implementation of procedures see section 7.9.

5.16.61. make_int

Encoding number: 61

	v:		VARIETY
	value:		SIGNED_NAT
		   -> EXP INTEGER(v)
An integer value is delivered of which the value is given by value, and the VARIETY by v. The SIGNED_NAT value will lie between the bounds of v.

5.16.62. make_local_lv

Encoding number: 62

	lab:		LABEL
		   -> EXP POINTER({code})
A POINTER({code}) lv is created and delivered. It can be used as an argument to goto_local_lv or long_jump. If and when one of these is evaluated with lv as an argument, control will pass to lab.
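
A loose analogy only (a GNU C extension, not part of TDF, and the function name example is invented): labels-as-values with computed goto behave similarly, and are likewise valid only within the same activation of the same function.

	/* Loose analogy only: &&done plays the role of make_local_lv and
	   goto *p the role of goto_local_lv. */
	int example(int flag)
	{
		void *p = &&done;	/* ~ make_local_lv */
		if (flag)
			goto *p;	/* ~ goto_local_lv */
		return 0;
	done:
		return 1;
	}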

5.16.63. make_nof

Encoding number: 63

	arg1:		LIST(EXP)
		   -> EXP NOF(n, s)
Creates an array of n values of SHAPE s, containing the given values produced by evaluating the members of arg1 in the same order as they occur in the list.

n will not be zero.

5.16.64. make_nof_int

Encoding number: 64

	v:		VARIETY
	str:		STRING(k, n)
		   -> EXP NOF(n, INTEGER(v))
An NOF INTEGER is delivered. The conversions are carried out as if the elements of str were INTEGER(var_limits(0, 2^k - 1)). n may be zero.

5.16.65. make_null_local_lv

Encoding number: 65

		   -> EXP POINTER({code})
Makes a null POINTER({code}) which can be detected by pointer_test. The effect of goto_local_lv or long_jump applied to this value is undefined.

All null POINTER({code}) are equal to each other and unequal to any other POINTERs.

5.16.66. make_null_proc

Encoding number: 66

		   -> EXP PROC
A null PROC is created and delivered. The null PROC may be tested for by using proc_test. The effect of using it as the first argument of apply_proc is undefined.

All null PROC are equal to each other and unequal to any other PROC.

5.16.67. make_null_ptr

Encoding number: 67

	a:		ALIGNMENT
		   -> EXP POINTER(a)
A null POINTER(a) is created and delivered. The null POINTER may be tested for by pointer_test.

a will not include code.

All null POINTER(x) are equal to each other and unequal to any other POINTER(x).

5.16.68. make_proc

Encoding number: 68

	result_shape:	SHAPE
	params_intro:	LIST(TAGSHACC)
	var_intro:	OPTION(TAGACC)
	body:		EXP BOTTOM
		   -> EXP PROC
Evaluation of make_proc delivers a PROC. When this procedure is applied to parameters using apply_proc, space is allocated to hold the actual values of the parameters params_intro and var_intro (if present). The values produced by the actual parameters are used to initialise these spaces. Then body is evaluated. During this evaluation the TAGs in params_intro and var_intro are bound to original POINTERs to these spaces. The lifetime of these TAGs is the evaluation of body.

If var_intro is present, it may be used for one of two purposes, with different consequences for corresponding uses of apply_proc. See section 7.9. The ALIGNMENT, var_param_alignment, includes the set union of all the ALIGNMENTs which can be produced by alignment from any SHAPE. Note that var_intro does not contain an ACCESS component and so cannot be marked visible. Hence it is not a possible argument of env_offset. If present, var_intro is an original pointer.

The SHAPE of body will be BOTTOM. params_intro may be empty.

The TAGs introduced in the parameters will not be reused within the current UNIT.

The SHAPEs in the parameters specify the SHAPE of the corresponding TAGs.

The OPTION(ACCESS) (in params_intro) specifies the ACCESS properties of the corresponding parameter, just as for a variable declaration.

In body the only TAGs which may be used as an argument of obtain_tag are those which are declared by identify or variable constructions in body and which are in scope, or TAGs which are declared by make_id_tagdef, make_var_tagdef or common_tagdef or are in params_intro or var_intro. If a make_proc occurs in body its TAGs are not in scope.

The argument of every return construction in body will have SHAPE result_shape. Every apply_proc using the procedure will specify the SHAPE of its result to be result_shape.

For notes on the intended implementation of procedures see section 7.9.

5.16.69. make_stack_limit

Encoding number: 116

	stack_base:	EXP POINTER(fa)
	frame_size:	EXP OFFSET(locals_alignment, x)
	alloc_size:	EXP OFFSET(alloca_alignment, y)
		   -> EXP POINTER(fb)
This creates a POINTER suitable for use with set_stack_limit.

fa and fb will include locals_alignment and, if alloc_size is not the zero offset, will also contain alloca_alignment.

The result will be the same as if given by:
Assume stack_base is the current frame-pointer as given by current_env in a hypothetical procedure P with env_size equal to frame_size and which has generated alloc_size by a local_alloc. If P then calls Q, the result will be the same as that of a current_env performed immediately in the body of Q.
If the following construction is performed:
set_stack_limit(make_stack_limit(current_env, F, A))
the frame space and local_alloc space that would be available for use by this supposed call of Q will not be reused by procedure calls with check_stack or uses of local_alloc_check after the set_stack_limit. Any attempt to do so will raise a TDF exception, stack_overflow.

5.16.70. make_top

Encoding number: 69

		   -> EXP TOP
make_top delivers a value of SHAPE TOP (i.e. void).

5.16.71. make_value

Encoding number: 70

	s:		SHAPE
		   -> EXP s
This EXP creates some value with the representation of the SHAPE s. This value will have the correct size, but its representation is not specified. It can be assigned, be the result of a contents, a parameter or result of a procedure, or the result of any construction (like sequence) which delivers the value delivered by an internal EXP. But if it is used for arithmetic or as a POINTER for taking contents or add_to_ptr etc. the effect is undefined.

Installers will usually be able to implement this operation by producing no code.

Note that a floating point NaN is a possible value for this purpose.

The SHAPE s will not be BOTTOM.

5.16.72. maximum

Encoding number: 71

	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
The arguments will be evaluated and the maximum of the values delivered is the result.

5.16.73. minimum

Encoding number: 72

	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
The arguments will be evaluated and the minimum of the values delivered is the result.

5.16.74. minus

Encoding number: 73

	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. The difference a-b is delivered as the result of the construct, with the same SHAPE as the arguments.

If the result cannot be expressed in the VARIETY being used to represent v, an overflow error is caused and is handled in the way specified by ov_err.

5.16.75. move_some

Encoding number: 74

	md:		TRANSFER_MODE
	arg1:		EXP POINTER(x)
	arg2:		EXP POINTER(y)
	arg3:		EXP OFFSET(z, t)
		   -> EXP TOP
The arguments are evaluated to produce p1, p2, and sz respectively. A quantity of data measured by sz in the space indicated by p1 is moved to the space indicated by p2. The operation will be carried out as specified by the TRANSFER_MODE (q.v.).

x will include z and y will include z.

sz will be a non-negative OFFSET, see offset_pad.

If the spaces of size sz to which p1 and p2 point do not lie entirely within the spaces indicated by the original pointers from which they are derived, the effect of the operation is undefined.

If the value delivered by arg1 or arg2 is a null pointer the effect is undefined.

See Overlapping.

5.16.76. mult

Encoding number: 75

	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. The product a*b is delivered as the result of the construct, with the same SHAPE as the arguments.

If the result cannot be expressed in the VARIETY being used to represent v, an overflow error is caused and is handled in the way specified by ov_err.

5.16.77. n_copies

Encoding number: 76

	n:		NAT
	arg1:		EXP x
		   -> EXP NOF(n, x)
arg1 is evaluated and an NOF value is delivered which contains n copies of this value. n can be zero or one or greater.

Producers are encouraged to use n_copies to initialise arrays of known size.

5.16.78. negate

Encoding number: 77

	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 is evaluated and will produce an integer value, a. The value -a is delivered as the result of the construct, with the same SHAPE as the argument.

If the result cannot be expressed in the VARIETY being used to represent v, an overflow error is caused and is handled in the way specified by ov_err.

5.16.79. not

Encoding number: 78

	arg1:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
The argument is evaluated producing an integer value of VARIETY, v. The result is the bitwise not of this value in the representing VARIETY. The result is delivered as the result of the construct, with the same SHAPE as the argument.

See Representing integers.

5.16.80. obtain_tag

Encoding number: 79

	t:		TAG x
		   -> EXP x
The value with which the TAG t is bound is delivered. The SHAPE of the result is the SHAPE of the value with which the TAG is bound.

5.16.81. offset_add

Encoding number: 80

	arg1:		EXP OFFSET(x, y)
	arg2:		EXP OFFSET(z, t)
		   -> EXP OFFSET(x, t)
The two arguments deliver OFFSETs. The result is the sum of these OFFSETs, as an OFFSET.

y will include z.

The effect of the constraint "y will include z" is that, in the simple representation of pointer arithmetic, this operation can be represented by addition. offset_add can lose information, so that offset_subtract does not have the usual relation with it.

5.16.82. offset_div

Encoding number: 81

	v:		VARIETY
	arg1:		EXP OFFSET(x, x)
	arg2:		EXP OFFSET(x, x)
		   -> EXP INTEGER(v)
The two arguments deliver OFFSETs, a and b. The result is a/b, as an INTEGER of VARIETY, v. Division is interpreted in the same sense (with respect to remainder) as in div0.

The value produced by arg2 will be non-zero.

5.16.83. offset_div_by_int

Encoding number: 82

	arg1:		EXP OFFSET(x, x)
	arg2:		EXP INTEGER(v)
		   -> EXP OFFSET(x, x)
The result is the OFFSET produced by arg1 divided by arg2, as an OFFSET(x, x).

The value produced by arg2 will be greater than zero.

The following identity will apply for all A and n:

offset_mult(offset_div_by_int(A, n), n) = A

5.16.84. offset_max

Encoding number: 83

	arg1:		EXP OFFSET(x, y)
	arg2:		EXP OFFSET(z, y)
		   -> EXP OFFSET(unite_alignments(x, z), y)
The two arguments deliver OFFSETs. The result is the maximum of these OFFSETs, as an OFFSET.

See Comparison of pointers and offsets.

In the simple memory model this operation is represented by maximum. The constraint that the second ALIGNMENT parameters are both y is to permit the representation of OFFSETs in installers by a simple homomorphism.

5.16.85. offset_mult

Encoding number: 84

	arg1:		EXP OFFSET(x, x)
	arg2:		EXP INTEGER(v)
		   -> EXP OFFSET(x, x)
The first argument gives an OFFSET, off, and the second an integer, n. The result is the product of these, as an offset.

The result shall be equal to the result of adding off to offset_zero(x) n times using offset_add.

5.16.86. offset_negate

Encoding number: 85

	arg1:		EXP OFFSET(x, x)
		   -> EXP OFFSET(x, x)
The inverse of the argument is delivered.

In the simple memory model this can be represented by negate.

5.16.87. offset_pad

Encoding number: 86

	a:		ALIGNMENT
	arg1:		EXP OFFSET(z, t)
		   -> EXP OFFSET(unite_alignments(z, a), a)
arg1 is evaluated giving off. The next greater or equal OFFSET at which a value of ALIGNMENT a can be placed is delivered. That is, there shall not exist an OFFSET of the same SHAPE as the result which is greater than or equal to off and less than the result, in the sense of offset_test.

off will be a non-negative OFFSET, that is it will be greater than or equal to a zero OFFSET of the same SHAPE in the sense of offset_test.

In the simple memory model this operation can be represented by ((off + a - 1) / a) * a. In the simple model this is the only operation which is not represented by a simple corresponding integer operation.
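
In that simple model the computation is the familiar round-up-to-a-multiple idiom, sketched below (illustrative only; pad_offset is an invented helper and both quantities are taken to be byte counts):

	#include <stdio.h>

	/* Sketch only: round an offset up to the next multiple of an
	   alignment a, both measured in bytes. */
	static unsigned long pad_offset(unsigned long off, unsigned long a)
	{
		return ((off + a - 1) / a) * a;
	}

	int main(void)
	{
		printf("%lu\n", pad_offset(5, 4));	/* prints 8 */
		return 0;
	}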

5.16.88. offset_subtract

Encoding number: 87

	arg1:		EXP OFFSET(x, y)
	arg2:		EXP OFFSET(x, z)
		   -> EXP OFFSET(z, y)
The two arguments deliver offsets, p and q. The result is p-q, as an offset.

Note that x will include y, x will include z and z will include y, by the constraints on OFFSETs.

offset_subtract and offset_add do not have the conventional relationship because offset_add can lose information, which cannot be regenerated by offset_subtract.

5.16.89. offset_test

Encoding number: 88

	prob:		OPTION(NAT)
	nt:		NTEST
	dest:		LABEL
	arg1:		EXP OFFSET(x, y)
	arg2:		EXP OFFSET(x, y)
		   -> EXP TOP
arg1 and arg2 are evaluated and will produce offset values, a and b. These values are compared using nt.

If a nt b, this construction yields TOP. Otherwise control passes to dest.

If prob is present, prob/100 gives the probability that control will continue to the next construct (ie. not pass to dest). If prob is absent this probability is unknown.

a greater_than_or_equal b is equivalent to offset_max(a, b) = a, and similarly for the other comparisons.

In the simple memory model this can be represented by integer_test.

5.16.90. offset_zero

Encoding number: 89

	a:		ALIGNMENT
		   -> EXP OFFSET(a, a)
A zero offset of SHAPE OFFSET(a, a).

offset_pad(b, offset_zero(a)) is a zero offset of SHAPE OFFSET(unite_alignments(a, b), b).

5.16.91. or

Encoding number: 90

	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
The arguments are evaluated producing integer values of the same VARIETY, v. The result is the bitwise or of these two integers in the representing VARIETY. The result is delivered as the result of the construct, with the same SHAPE as the arguments.

See Representing integers.

5.16.92. plus

Encoding number: 91

	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. The sum a+b is delivered as the result of the construct, with the same SHAPE as the arguments.

If the result cannot be expressed in the VARIETY being used to represent v, an overflow error is caused and is handled in the way specified by ov_err.

5.16.93. pointer_test

Encoding number: 92

	prob:		OPTION(NAT)
	nt:		NTEST
	dest:		LABEL
	arg1:		EXP POINTER(x)
	arg2:		EXP POINTER(x)
		   -> EXP TOP
arg1 and arg2 are evaluated and will produce pointer values, a and b, which will be derived from the same original pointer. These values are compared using nt.

If a nt b, this construction yields TOP. Otherwise control passes to dest.

If prob is present, prob/100 gives the probability that control will continue to the next construct (ie. not pass to dest). If prob is absent this probability is unknown.

The effect of this construction is the same as:

offset_test(prob, nt, dest, subtract_ptrs(arg1 , arg2), offset_zero(x))

In the simple memory model this construction can be represented by integer_test.

5.16.94. power

Encoding number: 93

	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(w)
		   -> EXP INTEGER(v)
arg2 will be non-negative. The result is the value delivered by arg1 raised to the power given by arg2.

If the result cannot be expressed in the VARIETY being used to represent v, an overflow error is caused and is handled in the way specified by ov_err.

5.16.95. proc_test

Encoding number: 94

	prob:		OPTION(NAT)
	nt:		NTEST
	dest:		LABEL
	arg1:		EXP PROC
	arg2:		EXP PROC
		   -> EXP TOP
arg1 and arg2 are evaluated and will produce PROC values, a and b. These values are compared using nt. The only permitted values of nt are equal and not_equal.

If a nt b, this construction yields TOP. Otherwise control passes to dest.

If prob is present, prob/100 gives the probability that control will continue to the next construct (ie. not pass to dest). If prob is absent this probability is unknown.

Two PROCs are equal if they were produced by the same instantiation of make_proc or if they were both made with make_null_proc. Otherwise they are unequal.

5.16.96. profile

Encoding number: 95

	uses:		NAT
		   -> EXP TOP
The integer uses gives the number of times which this construct is expected to be evaluated.

All uses of profile in the same capsule are to the same scale. They will be mutually consistent.

5.16.97. real_part

Encoding number: 96

	arg1:		EXP c
		   -> EXP FLOATING (float_of_complex(c))
c will be complex. Delivers the real part of the value produced by arg1.

5.16.98. rem0

Encoding number: 97

	div_by_0_err:	ERROR_TREATMENT
	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. The value a M1 b or the value a M2 b is delivered as the result of the construct, with the same SHAPE as the arguments. Different occurrences of rem0 in the same capsule can use M1 or M2 independently.

The following equivalence shall hold:

	x = plus(mult(div0(x, y), y), rem0(x, y))
if all the ERROR_TREATMENTs are impossible, and x and y have no side effects.

If b is zero a div_by_zero error occurs and is handled by div_by_0_err.

If b is not zero and div0(a, b) cannot be expressed in the VARIETY being used to represent v an overflow may occur in which case it is handled by ov_err.

Producers may assume that suitable masking and rem0 by a power of two yield equally good code.

See Division and modulus for the definitions of D1, D2, M1 and M2.
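
The analogous identity holds for C's built-in operators, which pair truncating division with its matching remainder (an illustration only, not a statement about which of M1 or M2 an installer will choose):

	#include <assert.h>

	/* Illustration only: C pairs / and % so that x == (x/y)*y + x%y,
	   mirroring x = plus(mult(div0(x, y), y), rem0(x, y)). */
	int main(void)
	{
		int x = -7, y = 2;
		assert(x == (x / y) * y + x % y);
		return 0;
	}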

5.16.99. rem1

Encoding number: 98

	div_by_0_err:	ERROR_TREATMENT
	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. The value a M1 b is delivered as the result of the construct, with the same SHAPE as the arguments.

If b is zero a div_by_zero error occurs and is handled by div_by_0_err.

If b is not zero and div1(a, b) cannot be expressed in the VARIETY being used to represent v an overflow may occur, in which case it is handled by ov_err.

Producers may assume that suitable masking and rem1 by a power of two yield equally good code.

See Division and modulus for the definitions of D1, D2, M1 and M2.

5.16.100. rem2

Encoding number: 99

	div_by_0_err:	ERROR_TREATMENT
	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b, of the same VARIETY, v. The value a M2 b is delivered as the result of the construct, with the same SHAPE as the arguments.

If b is zero a div_by_zero error occurs and is handled by div_by_0_err.

If b is not zero and div2(a, b) cannot be expressed in the VARIETY being used to represent v an overflow may occur, in which case it is handled by ov_err.

Producers may assume that suitable masking and rem2 by a power of two yield equally good code if the lower bound of v is zero.

See Division and modulus for the definitions of D1, D2, M1 and M2.

5.16.101. repeat

Encoding number: 100

	replab_intro:	LABEL
	start:		EXP TOP
	body:		EXP y
		   -> EXP y
start is evaluated. Then body is evaluated.

If body produces a result, this is the result of the whole construction. However if a goto or any other jump to replab_intro is encountered during the evaluation then the current evaluation stops and body is evaluated again. In the canonical order all evaluated components are completely evaluated before any of the next iteration of body. The lifetime of replab_intro is the evaluation of body.

The actual order of evaluation of the constituents shall be indistinguishable in all observable effects (apart from time) from that described above. Note that this specifically includes any defined error handling.

5.16.102. return

Encoding number: 101

	arg1:		EXP x
		   -> EXP BOTTOM
arg1 is evaluated to produce a value, v. The evaluation of the immediately enclosing procedure ceases and v is delivered as the result of the procedure.

Since the return construct can never produce a value, the SHAPE of its result is BOTTOM.

All uses of return in the body of a make_proc or make_general_proc will have arg1 with the same SHAPE.

5.16.103. return_to_label

Encoding number: 102

	lab_val:	EXP POINTER code_alignment
		   -> EXP BOTTOM
lab_val will be a label value in the calling procedure.

The evaluation of the immediately enclosing procedure ceases and control is passed to the calling procedure at the label given by lab_val.

5.16.104. round_with_mode

Encoding number: 103

	flpt_err:	ERROR_TREATMENT
	mode:		ROUNDING_MODE
	r:		VARIETY
	arg1:		EXP FLOATING(f)
		   -> EXP INTEGER(r)
arg1 is evaluated to produce a floating point value, v. This is rounded to an integer of VARIETY, r, using the ROUNDING_MODE, mode. This is the result of the construction.

If f is complex the result is derived from the real part of arg1.

If there is a floating point error it is handled by flpt_err. See Floating point errors.

5.16.105. rotate_left

Encoding number: 104

	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(w)
		   -> EXP INTEGER(v)
The value delivered by arg1 is rotated left arg2 places.

arg2 will be non-negative and will be strictly less than the number of bits needed to represent v.

The use of this construct assumes knowledge of the representational variety of v.

5.16.106. rotate_right

Encoding number: 105

	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(w)
		   -> EXP INTEGER(v)
The value delivered by arg1 is rotated right arg2 places.

arg2 will be non-negative and will be strictly less than the number of bits needed to represent v.

The use of this construct assumes knowledge of the representational variety of v.
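
In a concrete representation the rotations are the usual shift-and-or idiom (a sketch only, assuming v is represented in exactly 32 bits; rot_left and rot_right are invented helpers):

	#include <stdint.h>
	#include <stdio.h>

	/* Sketch only: n is non-negative and strictly less than 32, as the
	   text requires, so only n == 0 needs a guard. */
	static uint32_t rot_left(uint32_t x, unsigned n)
	{
		return n == 0 ? x : (uint32_t)((x << n) | (x >> (32 - n)));
	}

	static uint32_t rot_right(uint32_t x, unsigned n)
	{
		return n == 0 ? x : (uint32_t)((x >> n) | (x << (32 - n)));
	}

	int main(void)
	{
		printf("%lx %lx\n",
		       (unsigned long)rot_left(0x80000001u, 1),
		       (unsigned long)rot_right(0x80000001u, 1));	/* 3 c0000000 */
		return 0;
	}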

5.16.107. sequence

Encoding number: 106

	statements:	LIST(EXP)
	result:		EXP x
		   -> EXP x
The statements are evaluated in the same order as the list, statements, and their results are discarded. Then result is evaluated and its result forms the result of the construction.

A canonical order is one in which all the components of each statement are completely evaluated before any component of the next statement is started. A similar constraint applies between the last statement and the result. The actual order in which the statements and their components are evaluated shall be indistinguishable in all observable effects (apart from time) from a canonical order.

Note that this specifically includes any defined error handling. However, if in any canonical order the effect of the program is undefined, the actual effect of the sequence is undefined.

Hence constructions with impossible error handlers may be performed before or after those with specified error handlers, if the resulting order is otherwise acceptable.

5.16.108. set_stack_limit

Encoding number: 107

	lim:		EXP POINTER({locals_alignment, alloca_alignment})
		   -> EXP TOP
set_stack_limit sets the limits of remaining free stack space to lim. This includes both the frame stack limit and the local_alloc stack limit. Note that, in implementations where the frame stack and local_alloc stack are distinct, this pointer will have a special representation, appropriate to its frame alignment. Thus the pointer should always be generated using make_stack_limit or its equivalent formation.

Any later apply_general_proc with PROCPROPS including check_stack up to the dynamically next set_stack_limit will check that the frame required for the procedure will be within the frame stack limit. If it is not, normal execution is stopped and a TDF exception with ERROR_CODE stack_overflow is raised.

Any later local_alloc_check will check that the locally allocated space required is within the local_alloc stack limit. If it is not, normal execution is stopped and a TDF exception with ERROR_CODE stack_overflow is raised.

5.16.109. shape_offset

Encoding number: 108

	s:		SHAPE
		   -> EXP OFFSET(alignment(s), {})
This construction delivers the "size" of a value of the given SHAPE.

Suppose that a value of SHAPE, s, is placed in a space indicated by a POINTER(x), p, where x includes alignment(s). Suppose that a value of SHAPE, t, where a is alignment(t) and x includes a, is placed at

add_to_ptr(p, offset_pad(a, shape_offset(s)))

Then the values shall not overlap. This shall be true for all legal s, x and t.

5.16.110. shift_left

Encoding number: 109

	ov_err:		ERROR_TREATMENT
	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(w)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b. The value a shifted left b places is delivered as the result of the construct, with the same SHAPE as a.

b will be non-negative and will be strictly less than the number of bits needed to represent v.

If the result cannot be expressed in the VARIETY being used to represent v, an overflow error is caused and is handled in the way specified by ov_err.

Producers may assume that shift_left and multiplication by a power of two yield equally efficient code.

5.16.111. shift_right

Encoding number: 110

	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(w)
		   -> EXP INTEGER(v)
arg1 and arg2 are evaluated and will produce integer values, a and b. The value a shifted right b places is delivered as the result of the construct, with the same SHAPE as arg1.

b will be non-negative and will be strictly less than the number of bits needed to represent v.

If the lower bound of v is negative, the sign will be propagated.

5.16.112. subtract_ptrs

Encoding number: 111

	arg1:		EXP POINTER(y)
	arg2:		EXP POINTER(x)
		   -> EXP OFFSET(x, y)
arg1 and arg2 are evaluated to produce pointers p1 and p2, both of which will be derived from the same original pointer. The result, r, is the OFFSET from p2 to p1.

Note that add_to_ptr(p2, r) = p1.
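
In the simple memory model (an analogy only) this is ordinary pointer difference, with add_to_ptr as the matching pointer addition:

	#include <stddef.h>
	#include <stdio.h>

	/* Analogy only: both pointers are derived from the same object,
	   and p2 + r == p1 corresponds to add_to_ptr(p2, r) = p1. */
	int main(void)
	{
		char buf[16];
		char *p2 = buf, *p1 = buf + 10;
		ptrdiff_t r = p1 - p2;		/* ~ subtract_ptrs(p1, p2) */
		printf("%d\n", p2 + r == p1);	/* prints 1 */
		return 0;
	}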

5.16.113. tail_call

Encoding number: 112

	prcprops:	OPTION(PROCPROPS)
	p:		EXP PROC
	callee_pars:	CALLEES
		   -> EXP BOTTOM
p is called in the sense of apply_general_proc with the caller parameters of the immediately enclosing proc and CALLEES given by callee_pars and PROCPROPS prcprops.

The result of the call is delivered as the result of the immediately enclosing proc in the sense of return. The SHAPE of the result of p will be identical to the SHAPE specified as the result of the immediately enclosing procedure.

No pointers to any callee parameters, variables, identifications or local allocations defined in the immediately enclosing procedure will be accessed either in the body of p or after the return.

The presence or absence of each of the PROCPROPS check_stack and untidy, in prcprops will be reflected in the PROCPROPS of the immediately enclosing procedure.

5.16.114. untidy_return

Encoding number: 113

	arg1:		EXP x
		   -> EXP BOTTOM
arg1 is evaluated to produce a value, v. The evaluation of the immediately enclosing procedure ceases and v is delivered as the result of the procedure, in such a manner that pointers to any callee parameters or local allocations are valid in the calling procedure.

untidy_return can only occur in a procedure defined by make_general_proc with PROCPROPS including untidy.

5.16.115. variable

Encoding number: 114

	opt_access:	OPTION(ACCESS)
	name_intro:	TAG POINTER(alignment(x))
	init:		EXP x
	body:		EXP y
		   -> EXP y
init is evaluated to produce a value, v. Space is allocated to hold a value of SHAPE x and this is initialised with v. Then body is evaluated. During this evaluation, an original POINTER pointing to the allocated space is bound to name_intro. This means that inside body an evaluation of obtain_tag(name_intro) will produce a POINTER to this space. The lifetime of name_intro is the evaluation of body.

The value delivered by variable is that produced by body.

If opt_access contains visible, it means that the contents of the space may be altered while the procedure containing this declaration is not the current procedure. Hence if there are any copies of this value they will need to be refreshed from the variable when the procedure is returned to. The easiest implementation when opt_access is visible may be to keep the value in memory, but this is not a necessary requirement.

The TAG given for name_intro will not be reused within the current UNIT. No rules for the hiding of one TAG by another are given: this will not happen.

The order in which the constituents of init and body are evaluated shall be indistinguishable in all observable effects (apart from time) from completely evaluating init before starting body. See the note about order in sequence.

When compiling languages which permit uninitialised variable declarations, make_value may be used to provide an initialisation.

5.16.116. xor

Encoding number: 115

	arg1:		EXP INTEGER(v)
	arg2:		EXP INTEGER(v)
		   -> EXP INTEGER(v)
The arguments are evaluated producing integer values of the same VARIETY, v. The result is the bitwise xor of these two integers in the representing VARIETY. The result is delivered as the result of the construct, with the same SHAPE as the arguments.

See Representing integers.


5.17. EXTERNAL

Number of encoding bits: 2
Is coding extendable: yes

An EXTERNAL defines the classes of external name available for connecting the internal names inside a CAPSULE to the world outside the CAPSULE.

5.17.1. string_extern

Encoding number: 1

	s:		BYTE_ALIGN TDFIDENT(n)
		   -> EXTERNAL
string_extern produces an EXTERNAL identified by the TDFIDENT s.

5.17.2. unique_extern

Encoding number: 2

	u:		BYTE_ALIGN UNIQUE
		   -> EXTERNAL
unique_extern produces an EXTERNAL identified by the UNIQUE u.

5.17.3. chain_extern

Encoding number: 3

	s:		BYTE_ALIGN TDFIDENT
	prev:		TDFINT
		   -> EXTERNAL
This construct is redundant and should not be used.


5.18. EXTERN_LINK

Number of encoding bits: 0
Is coding extendable: no

An auxiliary SORT providing a list of LINKEXTERN.

5.18.1. make_extern_link

Encoding number: 0

	el:		SLIST(LINKEXTERN)
		   -> EXTERN_LINK
make_capsule requires a SLIST(EXTERN_LINK) to express the links between the linkable entities and the named (by EXTERNALs) values outside the CAPSULE.


5.19. FLOATING_VARIETY

Number of encoding bits: 3
Is coding extendable: yes

These describe kinds of floating point number.

5.19.1. flvar_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> FLOATING_VARIETY
The token is applied to the arguments to give a FLOATING_VARIETY.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.19.2. flvar_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM FLOATING_VARIETY
	e2:		BITSTREAM FLOATING_VARIETY
		   -> FLOATING_VARIETY
The control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.19.3. flvar_parms

Encoding number: 3

	base:		NAT
	mantissa_digs:	NAT
	min_exponent:	NAT
	max_exponent:	NAT
		   -> FLOATING_VARIETY
base is the base in terms of which the remaining numbers are expressed. base will be a power of 2.

mantissa_digs is the required number of base digits, q, such that any number with q digits can be rounded to a floating point number of the variety and back again without any change to the q digits.

min_exponent is the negative of the required minimum integer such that base raised to that power can be represented as a non-zero floating point number in the FLOATING_VARIETY.

max_exponent is the required maximum integer such that base raised to that power can be represented in the FLOATING_VARIETY.

A TDF translator is required to make available a representing FLOATING_VARIETY such that, if only values within the given requirements are produced, no overflow error will occur. Where several such representative FLOATING_VARIETYs exist, the translator will choose one to minimise space requirements or maximise the speed of operations.

All numbers of the form ±M * base^(N+1-q) are required to be represented exactly, where M and N are integers such that
	base^(q-1) <= M < base^q
	-min_exponent <= N <= max_exponent

Zero will also be represented exactly in any FLOATING_VARIETY.
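
For orientation only: the four arguments broadly parallel the floating-point model parameters that C exposes through <float.h>, though the exact sign conventions of min_exponent and max_exponent are as defined above rather than C's. A hedged sketch for double:

	#include <float.h>
	#include <stdio.h>

	/* Hedged illustration: the radix, the number of base digits and the
	   exponent range of C's double, printed via <float.h>. */
	int main(void)
	{
		printf("radix            %d\n", FLT_RADIX);
		printf("mantissa digits  %d\n", DBL_MANT_DIG);
		printf("min exponent (C) %d\n", DBL_MIN_EXP);
		printf("max exponent (C) %d\n", DBL_MAX_EXP);
		return 0;
	}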

5.19.4. complex_parms

Encoding number: 4

	base:		NAT
	mantissa_digs:	NAT
	min_exponent:	NAT
	max_exponent:	NAT
		   -> FLOATING_VARIETY
A FLOATING_VARIETY described by complex_parms holds a complex number which is likely to be represented by its real and imaginary parts, each of which is as if defined by flvar_parms with the same arguments.

5.19.5. float_of_complex

Encoding number: 5

	csh:		SHAPE
		   -> FLOATING_VARIETY
csh will be a complex SHAPE.

Delivers the FLOATING_VARIETY required for the real (or imaginary) part of a complex SHAPE csh.

5.19.6. complex_of_float

Encoding number: 6

	fsh:		SHAPE
		   -> FLOATING_VARIETY
fsh will be a floating SHAPE.

Delivers the FLOATING_VARIETY required for a complex number whose real (and imaginary) parts have SHAPE fsh.


5.20. GROUP

Number of encoding bits: 0
Is coding extendable: no

A GROUP is a list of UNITs with the same unit identification.

5.20.1. make_group

Encoding number: 0

	us:		SLIST(UNIT)
		   -> GROUP
make_capsule contains a list of GROUPS. Each member of this list has a different unit identification deduced from the prop_name argument of make_capsule.


5.21. LABEL

Number of encoding bits: 1
Is coding extendable: yes

A LABEL marks an EXP in certain constructions, and is used in jump-like constructions to change the control to the labelled construction.

5.21.1. label_apply_token

Encoding number: 2

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> LABEL x
The token is applied to the arguments to give a LABEL.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.21.2. make_label

Encoding number: 1

	labelno:	TDFINT
		   -> LABEL
Labels are represented in TDF by integers, but they are not linkable. Hence the definition and all uses of a LABEL occur in the same UNIT.


5.22. LINK

Number of encoding bits: 0
Is coding extendable: no

A LINK expresses the connection between two variables of the same SORT.

5.22.1. make_link

Encoding number: 0

	unit_name:	TDFINT
	capsule_name:	TDFINT
		   -> LINK
A LINK defines a linkable entity, declared inside a UNIT as unit_name, to correspond to a CAPSULE linkable entity having the same linkable entity identification. The CAPSULE linkable entity is capsule_name.

A LINK is normally constructed by the TDF builder in the course of resolving sharing and name clashes when constructing a composite CAPSULE.


5.23. LINKEXTERN

Number of encoding bits: 0
Is coding extendable: no

A value of SORT LINKEXTERN expresses the connection between the name by which an object is known inside a CAPSULE and a name by which it is known outside.

5.23.1. make_linkextern

Encoding number: 0

	internal:	TDFINT
	ext:		EXTERNAL
		   -> LINKEXTERN
make_linkextern produces a LINKEXTERN connecting an object identified within a CAPSULE by a TAG, TOKEN, AL_TAG or any linkable entity constructed from internal, with an EXTERNAL, ext. The EXTERNAL is an identifier which linkers and similar programs can use.


5.24. LINKS

Number of encoding bits: 0
Is coding extendable: no

5.24.1. make_links

Encoding number: 0

	ls:		SLIST(LINK)
		   -> LINKS
make_unit uses a SLIST(LINKS) to define which linkable entities within a UNIT correspond to the CAPSULE linkable entities. Each LINK in a LINKS has the same linkable entity identification.


5.25. NAT

Number of encoding bits: 3
Is coding extendable: yes

These are non-negative integers of unlimited size.

5.25.1. nat_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> NAT
The token is applied to the arguments to give a NAT.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.25.2. nat_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM NAT
	e2:		BITSTREAM NAT
		   -> NAT
The control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.25.3. computed_nat

Encoding number: 3

	arg:		EXP INTEGER(v)
		   -> NAT
arg will be an install-time non-negative constant. The result is that constant.

5.25.4. error_val

Encoding number: 4

	err:		ERROR_CODE
		   -> NAT
Gives the NAT corresponding to the ERROR_CODE err. Each distinct ERROR_CODE will give a different NAT.

5.25.5. make_nat

Encoding number: 5

	n:		TDFINT
		   -> NAT
n is a non-negative integer of unbounded magnitude.


5.26. NTEST

Number of encoding bits: 4
Is coding extendable: yes

These describe the comparisons which are possible in the various test constructions. Note that greater_than is not necessarily the same as not_less_than_or_equal, since the result need not be defined (e.g. in IEEE floating point).
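
A small C sketch (plain ISO C, purely illustrative) of why these tests differ under IEEE arithmetic: with a NaN operand every ordered comparison is false, so "not (less than or equal)" holds where "greater than" does not, and "comparable" fails.

	/* Illustrative only: shows the distinction between greater_than and
	   not_less_than_or_equal when one operand is an IEEE NaN. */
	#include <stdio.h>
	#include <math.h>

	int main(void)
	{
	    double a = nan(""), b = 1.0;

	    printf("greater_than:            %d\n", a > b);           /* 0 */
	    printf("not_less_than_or_equal:  %d\n", !(a <= b));       /* 1 */
	    printf("comparable:              %d\n", a <= b || a > b); /* 0 */
	    return 0;
	}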

5.26.1. ntest_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> NTEST
The token is applied to the arguments to give a NTEST.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.26.2. ntest_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM NTEST
	e2:		BITSTREAM NTEST
		   -> NTEST
The control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.26.3. equal

Encoding number: 3

		   -> NTEST
Signifies "equal" test.

5.26.4. greater_than

Encoding number: 4

		   -> NTEST
Signifies "greater than" test.

5.26.5. greater_than_or_equal

Encoding number: 5

		   -> NTEST
Signifies "greater than or equal" test.

5.26.6. less_than

Encoding number: 6

		   -> NTEST
Signifies "less than" test.

5.26.7. less_than_or_equal

Encoding number: 7

		   -> NTEST
Signifies "less than or equal" test.

5.26.8. not_equal

Encoding number: 8

		   -> NTEST
Signifies "not equal" test.

5.26.9. not_greater_than

Encoding number: 9

		   -> NTEST
Signifies "not greater than" test.

5.26.10. not_greater_than_or_equal

Encoding number: 10

		   -> NTEST
Signifies "not (greater than or equal)" test.

5.26.11. not_less_than

Encoding number: 11

		   -> NTEST
Signifies "not less than" test.

5.26.12. not_less_than_or_equal

Encoding number: 12

		   -> NTEST
Signifies "not (less than or equal)" test.

5.26.13. less_than_or_greater_than

Encoding number: 13

		   -> NTEST
Signifies "less than or greater than" test.

5.26.14. not_less_than_and_not_greater_than

Encoding number: 14

		   -> NTEST
Signifies "not less than and not greater than" test.

5.26.15. comparable

Encoding number: 15

		   -> NTEST
Signifies "comparable" test.

For all operand SHAPEs other than FLOATING, this comparison is always true.

5.26.16. not_comparable

Encoding number: 16

		   -> NTEST
Signifies "not comparable" test.

For all operand SHAPEs other than FLOATING, this comparison is always false.


5.27. OTAGEXP

Number of encoding bits: 0
Is coding extendable: no

This is an auxiliary SORT used in apply_general_proc.

5.27.1. make_otagexp

Encoding number: 0

	tgopt:		OPTION(TAG x)
	e:		EXP x
		   -> OTAGEXP
e is evaluated and its value is the actual caller parameter. If tgopt is present, the TAG will be bound to the final value of the caller parameter in the postlude part of the apply_general_proc.


5.28. PROCPROPS

Number of encoding bits: 4
Is coding extendable: yes

PROCPROPS is a set of properties ascribed to procedure definitions and calls.

5.28.1. procprops_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> PROCPROPS
The token is applied to the arguments to give a PROCPROPS.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters in the order specified.

5.28.2. procprops_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM PROCPROPS
	e2:		BITSTREAM PROCPROPS
		   -> PROCPROPS
The control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.28.3. add_procprops

Encoding number: 3

	arg1:		PROCPROPS
	arg2:		PROCPROPS
		   -> PROCPROPS
Delivers the join of arg1 and arg2.

5.28.4. check_stack

Encoding number: 4

		   -> PROCPROPS
The procedure body is required to check for stack overflow.

5.28.5. inline

Encoding number: 5

		   -> PROCPROPS
The procedure body is a good candidate for inlining at its application.

5.28.6. no_long_jump_dest

Encoding number: 6

		   -> PROCPROPS
The procedure body will contain no label which is the destination of a long_jump.

5.28.7. untidy

Encoding number: 7

		   -> PROCPROPS
The procedure body may be exited using an untidy_return.

5.28.8. var_callees

Encoding number: 8

		   -> PROCPROPS
Applications of the procedure may have different numbers of actual callee parameters.

5.28.9. var_callers

Encoding number: 9

		   -> PROCPROPS
Applications of the procedure may have different numbers of actual caller parameters.


5.29. PROPS

A PROPS is an assemblage of program information. This standard offers various ways of constructing a PROPS - i.e. it defines kinds of information which it is useful to express. These are:
  • definitions of AL_TAGs standing for ALIGNMENTs;
  • declarations of TAGs standing for EXPs;
  • definitions of the EXPs for which TAGs stand;
  • declarations of TOKENs standing for pieces of TDF program;
  • definitions of the pieces of TDF program for which TOKENs stand;
  • linkage and naming information;
  • version information.
PROPS giving diagnostic information are described in a separate document.

The standard can be extended by the definition of new kinds of PROPS information and new PROPS constructs for expressing them; and private standards can define new kinds of information and corresponding constructs without disruption to adherents to the present standard.

Each GROUP of UNITs is identified by a unit identification - a TDFIDENT. All the UNITs in that GROUP are of the same kind.

In addition there is a tld UNIT, see The TDF encoding.


5.30. ROUNDING_MODE

Number of encoding bits: 3
Is coding extendable: yes

ROUNDING_MODE specifies the way rounding is to be performed in floating point arithmetic.

5.30.1. rounding_mode_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> ROUNDING_MODE
The token is applied to the arguments to give a ROUNDING_MODE.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.30.2. rounding_mode_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM ROUNDING_MODE
	e2:		BITSTREAM ROUNDING_MODE
		   -> ROUNDING_MODE
The control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.30.3. round_as_state

Encoding number: 3

		   -> ROUNDING_MODE
Round as specified by the current state of the machine.

5.30.4. to_nearest

Encoding number: 4

		   -> ROUNDING_MODE
Signifies rounding to nearest. The effect when the number lies half-way is not specified.

5.30.5. toward_larger

Encoding number: 5

		   -> ROUNDING_MODE
Signifies rounding toward next largest.

5.30.6. toward_smaller

Encoding number: 6

		   -> ROUNDING_MODE
Signifies rounding toward next smallest.

5.30.7. toward_zero

Encoding number: 7

		   -> ROUNDING_MODE
Signifies rounding toward zero.


5.31. SHAPE

Number of encoding bits: 4
Is coding extendable: yes

SHAPEs express symbolic size and representation information about run time values.

SHAPEs are constructed from primitive SHAPEs which describe values such as procedures and integers, and recursively from compound construction in terms of other SHAPEs.

5.31.1. shape_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> SHAPE
The token is applied to the arguments to give a SHAPE.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.31.2. shape_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM SHAPE
	e2:		BITSTREAM SHAPE
		   -> SHAPE
The control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.31.3. bitfield

Encoding number: 3

	bf_var:		BITFIELD_VARIETY
		   -> SHAPE
A BITFIELD is used to represent a pattern of bits which will be packed, provided that the variety_enclosed constraints are not violated (see section 7.24).

A BITFIELD_VARIETY specifies the number of bits and whether they are considered to be signed.

There are very few operations on BITFIELDs, which have to be converted to INTEGERs before arithmetic can be performed on them.

An installer may place a limit on the number of bits it implements. See Permitted limits.

5.31.4. bottom

Encoding number: 4

		   -> SHAPE
BOTTOM is the SHAPE which describes a piece of program which does not evaluate to any result. Examples include goto and return.

If BOTTOM is a parameter to any other SHAPE constructor, the result is BOTTOM.

5.31.5. compound

Encoding number: 5

	sz:		EXP OFFSET(x, y)
		   -> SHAPE
The SHAPE constructor COMPOUND describes cartesian products and unions.

The alignments x and y will be alignment(sx) and alignment(sy) for some SHAPEs sx and sy.

sz will evaluate to a constant, non-negative OFFSET (see offset_pad). The resulting SHAPE describes a value whose size is given by sz.

5.31.6. floating

Encoding number: 6

	fv:		FLOATING_VARIETY
		   -> SHAPE
Most of the floating point arithmetic operations, floating_plus, floating_minus etc., are defined to work in the same way on different kinds of floating point number. If these operations have more than one argument the arguments have to be of the same kind, and the result is of the same kind.

See Representing floating point.

An installer may limit the FLOATING_VARIETYs it can represent. A statement of any such limits shall be part of the specification of an installer. See Representing floating point.

5.31.7. integer

Encoding number: 7

	var:		VARIETY
		   -> SHAPE
The different kinds of INTEGER are distinguished by having different VARIETYs. A fundamental VARIETY (not a TOKEN or conditional) is represented by two SIGNED_NATs, respectively the lower and upper bounds (inclusive) of the set of values belonging to the VARIETY.

Most architectures require that dyadic integer arithmetic operations take arguments of the same size, and so TDF does likewise. Because TDF is completely architecture neutral and makes no assumptions about word length, this means that the VARIETYs of the two arguments must be identical. An example illustrates this. A piece of TDF which attempted to add two values whose SHAPEs were:

	INTEGER(0, 60000)  and  INTEGER(0, 30000)
would be undefined. The reason is that without knowledge of the target architecture's word length, it is impossible to guarantee that the two values are going to be represented in the same number of bytes. On a 16-bit machine they probably would, but not on a 15-bit machine. The only way to ensure that two INTEGERs are going to be represented in the same way in all machines is to stipulate that their VARIETYs are exactly the same.

When any construct delivering an INTEGER of a given VARIETY produces a result which is not representable in the space which an installer has chosen to represent that VARIETY, an integer overflow occurs. Whether it occurs in a particular case depends on the target, because the installers' decisions on representation are inherently target-defined.

A particular installer may limit the ranges of integers that it implements. See Representing integers.

5.31.8. nof

Encoding number: 8

	n:		NAT
	s:		SHAPE
		   -> SHAPE
The NOF constructor describes the SHAPE of a value consisting of an array of n values of the same SHAPE, s.

5.31.9. offset

Encoding number: 9

	arg1:		ALIGNMENT
	arg2:		ALIGNMENT
		   -> SHAPE
The SHAPE constructor OFFSET describes values which represent the differences between POINTERs, that is they measure offsets in memory. It should be emphasised that these are in general run-time values.

An OFFSET measures the displacement from the value indicated by a POINTER(arg1) to the value indicated by a POINTER(arg2). Such an offset is only defined if the POINTERs are derived from the same original POINTER.

An OFFSET may also measure the displacement from a POINTER to the start of a BITFIELD_VARIETY, or from the start of one BITFIELD_VARIETY to the start of another. Hence, unlike the argument of pointer, arg1 or arg2 may consist entirely of BITFIELD_VARIETYs.

The set arg1 will include the set arg2.

See Memory Model.

5.31.10. pointer

Encoding number: 10

	arg:		ALIGNMENT
		   -> SHAPE
A POINTER is a value which points to space allocated in a computer's memory. The POINTER constructor takes an ALIGNMENT argument. This argument will not consist entirely of BITFIELD_VARIETYs. See Memory Model.

5.31.11. proc

Encoding number: 11

		   -> SHAPE
PROC is the SHAPE which describes pieces of program.

5.31.12. top

Encoding number: 12

		   -> SHAPE
TOP is the SHAPE which describes pieces of program which return no useful value. assign is an example: it performs an assignment, but does not deliver any useful value.


5.32. SIGNED_NAT

Number of encoding bits: 3
Is coding extendable: yes

These are positive or negative integers of unbounded size.

5.32.1. signed_nat_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> SIGNED_NAT
The token is applied to the arguments to give a SIGNED_NAT.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.32.2. signed_nat_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM SIGNED_NAT
	e2:		BITSTREAM SIGNED_NAT
		   -> SIGNED_NAT
The control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.32.3. computed_signed_nat

Encoding number: 3

	arg:		EXP INTEGER(v)
		   -> SIGNED_NAT
arg will be an install-time constant. The result is that constant.

5.32.4. make_signed_nat

Encoding number: 4

	neg:		TDFBOOL
	n:		TDFINT
		   -> SIGNED_NAT
n is a non-negative integer of unbounded magnitude. The result is negative if and only if neg is true.

5.32.5. snat_from_nat

Encoding number: 5

	neg:		BOOL
	n:		NAT
		   -> SIGNED_NAT
The result is negated if and only if neg is true.


5.33. SORTNAME

Number of encoding bits: 5
Is coding extendable: yes

These are the names of the SORTs which can be parameters of TOKEN definitions.

5.33.1. access

Encoding number: 1

		   -> SORTNAME

5.33.2. al_tag

Encoding number: 2

		   -> SORTNAME

5.33.3. alignment_sort

Encoding number: 3

		   -> SORTNAME

5.33.4. bitfield_variety

Encoding number: 4

		   -> SORTNAME

5.33.5. bool

Encoding number: 5

		   -> SORTNAME

5.33.6. error_treatment

Encoding number: 6

		   -> SORTNAME

5.33.7. exp

Encoding number: 7

		   -> SORTNAME
The SORT of EXP.

5.33.8. floating_variety

Encoding number: 8

		   -> SORTNAME

5.33.9. foreign_sort

Encoding number: 9

	foreign_name:	STRING(k, n)
		   -> SORTNAME
This SORT enables unanticipated kinds of information to be placed in TDF.

5.33.10. label

Encoding number: 10

		   -> SORTNAME

5.33.11. nat

Encoding number: 11

		   -> SORTNAME

5.33.12. ntest

Encoding number: 12

		   -> SORTNAME

5.33.13. procprops

Encoding number: 13

		   -> SORTNAME

5.33.14. rounding_mode

Encoding number: 14

		   -> SORTNAME

5.33.15. shape

Encoding number: 15

		   -> SORTNAME

5.33.16. signed_nat

Encoding number: 16

		   -> SORTNAME

5.33.17. string

Encoding number: 17

		   -> SORTNAME

5.33.18. tag

Encoding number: 18

		   -> SORTNAME
The SORT of TAG.

5.33.19. transfer_mode

Encoding number: 19

		   -> SORTNAME

5.33.20. token

Encoding number: 20

	result:		SORTNAME
	params:		LIST(SORTNAME)
		   -> SORTNAME
The SORTNAME of a TOKEN. Note that it can have tokens as parameters, but not as its result.

5.33.21. variety

Encoding number: 21

		   -> SORTNAME

5.34. STRING

Number of encoding bits: 3
Is coding extendable: yes

5.34.1. string_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> STRING(k, n)
The token is applied to the arguments to give a STRING

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.34.2. string_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM STRING
	e2:		BITSTREAM STRING
		   -> STRING(k, n)
The control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.34.3. concat_string

Encoding number: 3

	arg1:		STRING(k, n)
	arg2:		STRING(k, m)
		   -> STRING(k, n+m)
Gives a STRING which is the concatenation of arg1 with arg2.

5.34.4. make_string

Encoding number: 4

	arg:		TDFSTRING(k, n)
		   -> STRING(k, n)
Delivers the STRING identical to the arg.


5.35. TAG

Number of encoding bits: 1
Is coding extendable: yes
Linkable entity identification: tag

These are used to name values and variables in the run time program.

5.35.1. tag_apply_token

Encoding number: 2

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> TAG x
The token is applied to the arguments to give a TAG.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.35.2. make_tag

Encoding number: 1

	tagno:		TDFINT
		   -> TAG x
make_tag produces a TAG identified by tagno.


5.36. TAGACC

Number of encoding bits: 0
Is coding extendable: no

Constructs a pair of a TAG and an OPTION(ACCESS) for use in make_proc.

5.36.1. make_tagacc

Encoding number: 0

	tg:		TAG POINTER var_param_alignment
	acc:		OPTION(ACCESS)
		   -> TAGACC
Constructs the pair for make_proc.


5.37. TAGDEC

Number of encoding bits: 2
Is coding extendable: yes

A TAGDEC declares a TAG for incorporation into a TAGDEC_PROPS.

5.37.1. make_id_tagdec

Encoding number: 1

	t_intro:	TDFINT
	acc:		OPTION(ACCESS)
	signature:	OPTION(STRING)
	x:		SHAPE
		   -> TAGDEC
A TAGDEC announcing that the TAG t_intro identifies an EXP of SHAPE x is constructed.

acc specifies the ACCESS properties of the TAG.

If there is a make_id_tagdec for a TAG then all other make_id_tagdec for the same TAG will specify the same SHAPE and there will be no make_var_tagdec or common_tagdec for the TAG.

If two make_id_tagdecs specify the same tag and both have signatures present, the strings will be identical. Possible uses of this signature argument are outlined in section 7.28.

5.37.2. make_var_tagdec

Encoding number: 2

	t_intro:	TDFINT
	acc:		OPTION(ACCESS)
	signature:	OPTION(STRING)
	x:		SHAPE
		   -> TAGDEC
A TAGDEC announcing that the TAG t_intro identifies an EXP of SHAPE POINTER(alignment (x)) is constructed.

acc specifies the ACCESS properties of the TAG.

If there is a make_var_tagdec for a TAG then all other make_var_tagdecs for the same TAG will specify SHAPEs with identical ALIGNMENT and there will be no make_id_tagdec or common_tagdec for the TAG.

If two make_var_tagdecs specify the same tag and both have signature present, the strings will be identical. Possible uses of this signature argument are outlined in section 7.28.

5.37.3. common_tagdec

Encoding number: 3

	t_intro:	TDFINT
	acc:		OPTION(ACCESS)
	signature:	OPTION(STRING)
	x:		SHAPE
		   -> TAGDEC
A TAGDEC announcing that the TAG t_intro identifies an EXP of SHAPE POINTER(alignment (x)) is constructed.

acc specifies the ACCESS properties of the TAG.

If there is a common_tagdec for a TAG then there will be no make_id_tagdec or make_var_tagdec for that TAG. If there is more than one common_tagdec for a TAG the one having the maximum SHAPE shall be taken to apply for the CAPSULE. Each pair of such SHAPEs will have a maximum. The maximum of two SHAPEs, a and b, is defined as follows:

  • If a is equal to b, the maximum is a.
  • If a and b are COMPOUND(x) and COMPOUND(y) respectively and a is an initial segment of b, then b is the maximum. Similarly if b is an initial segment of a then a is the maximum.
  • If a and b are NOF(n, x) and NOF(m, x) respectively and n is less than or equal to m, then b is the maximum. Similarly if m is less than or equal to n then a is the maximum.
  • Otherwise a and b have no maximum.
If two common_tagdecs specify the same tag and both have signatures present, the strings will be identical. Possible uses of this signature argument are outlined in section 7.28.


5.38. TAGDEC_PROPS

Number of encoding bits: 0
Is coding extendable: no
Unit identification: tagdec

5.38.1. make_tagdecs

Encoding number: 0

	no_labels:	TDFINT
	tds:		SLIST(TAGDEC)
		   -> TAGDEC_PROPS
no_labels is the number of local LABELs used in tds. tds is a list of TAGDECs which declare the SHAPEs associated with TAGs.


5.39. TAGDEF

Number of encoding bits: 2
Is coding extendable: yes

A value of SORT TAGDEF gives the definition of a TAG for incorporation into a TAGDEF_PROPS.

5.39.1. make_id_tagdef

Encoding number: 1

	t:		TDFINT
	signature:	OPTION(STRING)
	e:		EXP x
		   -> TAGDEF
make_id_tagdef produces a TAGDEF defining the TAG x constructed from the TDFINT, t. This TAG is defined to stand for the value delivered by e.

e will be a constant which can be evaluated at load_time or e will be some initial_value(E) (see section 5.16.48).

t will be declared in the CAPSULE using make_id_tagdec. If both the make_id_tagdec and make_id_tagdef have signatures present, the strings will be identical.

If x is PROC and the TAG represented by t is named externally via a CAPSULE_LINK, e will be some make_proc or make_general_proc.

There will not be more than one TAGDEF defining t in a CAPSULE.

5.39.2. make_var_tagdef

Encoding number: 2

	t:		TDFINT
	opt_access:	OPTION(ACCESS)
	signature:	OPTION(STRING)
	e:		EXP x
		   -> TAGDEF
make_var_tagdef produces a TAGDEF defining the TAG POINTER(alignment(x)) constructed from the TDFINT, t. This TAG stands for a variable which is initialised with the value delivered by e. The TAG is bound to an original pointer which has the evaluation of the program as its lifetime.

If opt_access contains visible, the meaning is that the variable may be used by agents external to the capsule, and so it must not be optimised away. If it contains constant, the initialising value will remain in it throughout the program.
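
As a rough C analogy only (not part of TDF), visible corresponds to an object which other translation units may reference and which therefore must be kept, and constant to an object whose initialising value never changes.

	/* Rough C analogy of the visible and constant ACCESS properties;
	   the identifiers below are invented for illustration. */
	int exported_counter = 0;    /* visible: may be used by agents outside the capsule */
	const int table_size = 128;  /* constant: the initialising value persists throughout */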

e will be a constant which can be evaluated at load_time or e will be some initial_value(e1) (see section 5.16.48).

t will be declared in the CAPSULE using make_var_tagdec. If both the make_var_tagdec and make_var_tagdef have signatures present, the strings will be identical.

There will not be more than one TAGDEF defining t in a CAPSULE.

5.39.3. common_tagdef

Encoding number: 3

	t:		TDFINT
	opt_access:	OPTION(ACCESS)
	signature:	OPTION(STRING)
	e:		EXP x
		   -> TAGDEF
common_tagdef produces a TAGDEF defining the TAG POINTER(alignment(x)) constructed from the TDFINT, t. This TAG stands for a variable which is initialised with the value delivered by e. The TAG is bound to an original pointer which has the evaluation of the program as its lifetime.

If opt_access contains visible, the meaning is that the variable may be used by agents external to the capsule, and so it must not be optimised away. If it contains constant, the initialising value will remain in it throughout the program.

e will be a constant evaluable at load_time or e will be some initial_value(E) (see section 5.16.48 ).

t will be declared in the CAPSULE using common_tagdec. If both the common_tagdec and common_tagdef have signatures present, the strings will be identical. Let the maximum SHAPE of these (see common_tagdec) be s.

There may be any number of common_tagdef definitions for t in a CAPSULE. Of the e parameters of these, one will be a maximum. This maximum definition is chosen as the definition of t. Its value of e will have SHAPE s.

The maximum of two common_tagdef EXPs, a and b, is defined as follows:

  • If a has the form make_value(s), b is the maximum.
  • If b has the form make_value(s), a is the maximum.
  • If a and b have SHAPE COMPOUND(x) and COMPOUND(y) respectively and the value produced by a is an initial segment of the value produced by b, then b is the maximum. Similarly if b is an initial segment of a then a is the maximum.
  • If a and b have SHAPE NOF(n, x) and NOF(m, x) respectively and the value produced by a is an initial segment of the value produced by b, then b is the maximum. Similarly if b is an initial segment of a then a is the maximum.
  • If the value produced by a is equal to the value produced by b the maximum is a.
  • Otherwise a and b have no maximum.

5.40. TAGDEF_PROPS

Number of encoding bits: 0
Is coding extendable: no
Unit identification: tagdef

5.40.1. make_tagdefs

Encoding number: 0

	no_labels:	TDFINT
	tds:		SLIST(TAGDEF)
		   -> TAGDEF_PROPS
no_labels is the number of local LABELs used in tds. tds is a list of TAGDEFs which give the EXPs which are the definitions of values associated with TAGs.


5.41. TAGSHACC

Number of encoding bits: 0
Is coding extendable: no

5.41.1. make_tagshacc

Encoding number: 0

	sha:		SHAPE
	opt_access:	OPTION(ACCESS)
	tg_intro:	TAG
		   -> TAGSHACC
This is an auxiliary construction to make the elements of params_intro in make_proc.


5.42. TDFBOOL

A TDFBOOL is the TDF encoding of a boolean. See Fundamental encoding.


5.43. TDFIDENT

A TDFIDENT(k, n) encodes a sequence of n unsigned integers of size k bits. k will be a multiple of 8. See Fundamental encoding.

This construction will not be used inside a BITSTREAM.


5.44. TDFINT

A TDFINT is the TDF encoding of an unbounded unsigned integer constant. See Fundamental encoding.


5.45. TDFSTRING

A TDFSTRING(k, n) encodes a sequence of n unsigned integers of size k bits. See Fundamental encoding.


5.46. TOKDEC

Number of encoding bits: 1
Is coding extendable: yes

A TOKDEC declares a TOKEN for incorporation into a UNIT.

5.46.1. make_tokdec

Encoding number: 1

	tok:		TDFINT
	signature:	OPTION(STRING)
	s:		SORTNAME
		   -> TOKDEC
The sort of the token tok is declared to be s. Note that s will always be a token SORT, with a list of parameter SORTs (possibly empty) and a result SORT.

If signature is present, it will be produced by make_string.

If two make_tokdecs specify the same token and both have signatures present, the strings will be identical. Possible uses of this signature argument are outlined in section 7.28.


5.47. TOKDEC_PROPS

Number of encoding bits: 0
Is coding extendable: no
Unit identification: tokdec

5.47.1. make_tokdecs

Encoding number: 0

	tds:		SLIST(TOKDEC)
		   -> TOKDEC_PROPS
tds is a list of TOKDECs which gives the sorts associated with TOKENs.


5.48. TOKDEF

Number of encoding bits: 1
Is coding extendable: yes

A TOKDEF gives the definition of a TOKEN for incorporation into a TOKDEF_PROPS.

5.48.1. make_tokdef

Encoding number: 1

	tok:		TDFINT
	signature:	OPTION(STRING)
	def:		BITSTREAM TOKEN_DEFN
		   -> TOKDEF
A TOKDEF is constructed which defines the TOKEN tok to stand for the fragment of TDF, body, which may be of any SORT with a SORTNAME, except for token. The SORT of the result, result_sort, is given by the first component of the BITSTREAM. See token_definition.

If signature is present, it will be produced by make_string.

tok may have been introduced by a make_tokdec. If both the make_tokdec and make_tokdef have signatures present, the strings will be identical.

At the application of this TOKEN actual pieces of TDF having SORT sn[i] are supplied to correspond to the tk[i]. The application denotes the piece of TDF obtained by substituting these actual parameters for the corresponding TOKENs within body.


5.49. TOKDEF_PROPS

Number of encoding bits: 0
Is coding extendable: no
Unit identification: tokdef

5.49.1. make_tokdefs

Encoding number: 0

	no_labels:	TDFINT
	tds:		SLIST(TOKDEF)
		   -> TOKDEF_PROPS
no_labels is the number of local LABELs used in tds. tds is a list of TOKDEFs which gives the definitions associated with TOKENs.


5.50. TOKEN

Number of encoding bits: 2
Is coding extendable: yes
Linkable entity identification: token

These are used to stand for functions evaluated at installation time. They are represented by TDFINTs.

5.50.1. token_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> TOKEN
The token is applied to the arguments to give a TOKEN.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.50.2. make_tok

Encoding number: 2

	tokno:		TDFINT
		   -> TOKEN
make_tok constructs a TOKEN identified by tokno.

5.50.3. use_tokdef

Encoding number: 3

	tdef:		BITSTREAM TOKEN_DEFN
		   -> TOKEN
tdef is used to supply the definition, as in make_tokdef. Note that TOKENs are only used in x_apply_token constructions.


5.51. TOKEN_DEFN

Number of encoding bits: 1
Is coding extendable: yes

An auxiliary SORT used in make_tokdef and use_tokdef.

5.51.1. token_definition

Encoding number: 1

	result_sort:	SORTNAME
	tok_params:	LIST(TOKFORMALS)
	body:		result_sort
		   -> TOKEN_DEFN
Makes a token definition. result_sort is the SORT of body. tok_params is a list of formal TOKENs and their SORTs. body is the definition, which can use the formal TOKENs defined in tok_params.

The effect of applying the definition of a TOKEN is as if the following sequence was obeyed.

First, the actual parameters (if any) are expanded to produce expressions of the appropriate SORTs. During this expansion all token applications in the actual parameters are expanded.

Second, the definition is copied, making fresh TAGs and LABELs where these are introduced in identify, variable, labelled, conditional, make_proc, make_general_proc and repeat constructions. Any other TAGs or LABELs used in body will be provided by the context (see below) of the TOKEN_DEFN or by the expansions of the actual parameters.

Third, the actual parameter expressions are substituted for the formal parameter tokens in tok_params to give the final result.
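
As a very rough analogy (using an ordinary C macro invented for illustration, not TDF), token application substitutes whole fragments for formal parameters, much as a parameterised macro does; unlike a C macro, however, the TDF definition is copied with fresh TAGs and LABELs, so names bound inside the body cannot capture names in the actual parameters.

	/* Illustrative C macro showing substitution of an actual argument for a
	   formal parameter.  Unlike a TDF TOKEN, a C macro does not rename the
	   names it binds, so capture can occur; TDF avoids this by making fresh
	   TAGs and LABELs when the definition is copied. */
	#include <stdio.h>

	#define TWICE(expr)  ((expr) + (expr))

	int main(void)
	{
	    int n = 3;
	    printf("%d\n", TWICE(n + 1));   /* expands to ((n + 1) + (n + 1)) == 8 */
	    return 0;
	}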

The context of a TOKEN_DEFN is the set of names (TOKENs, TAGs, LABELs, AL_TAGs etc.) "in scope" at the site of the TOKEN_DEFN.

Thus, in a make_tokdef, the context consists of the set of TOKENs defined in its tokdef UNIT, together with the set of linkable entities defined by the make_links of that UNIT. Note that this does not include LABELs and the only TAGs included are "global" ones.

In a use_tokdef, the context may be wider, since the site of the TOKEN_DEFN need not be in a tokdef UNIT; it may be an actual parameter of a token application. If this happens to be within an EXP, there may be TAGs or LABELs locally within scope; these will be in the context of the TOKEN_DEFN, together with the global names of the enclosing UNIT as before.

Previous versions of the specification limited token definitions to be non-recursive. There is no intrinsic reason for the limitation on recursive TOKENs. Since the UNIT structure implies different namespaces, there is very little implementation advantage to be gained from retaining the limitation.


5.52. TOKFORMALS

Number of encoding bits: 0
Is coding extendable: no

5.52.1. make_tokformals

Encoding number: 0

	sn:		SORTNAME
	tk:		TDFINT
		   -> TOKFORMALS
An auxiliary construction to make up the elements of the lists in token_definition.


5.53. TRANSFER_MODE

Number of encoding bits: 3
Is coding extendable: yes

A TRANSFER_MODE controls the operation of assign_with_mode, contents_with_mode and move_some.

A TRANSFER_MODE acts like a set of the values overlap, trap_on_nil, complete and volatile. The TRANSFER_MODE standard_transfer_mode acts like the empty set. add_modes acts like set union.
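
A minimal sketch in C (the enumeration and function below are invented, not a TDF API) of this set-like behaviour:

	/* Minimal sketch: TRANSFER_MODE modelled as a set of flags, with
	   standard_transfer_mode the empty set and add_modes set union
	   (associative and commutative, identity the empty set). */
	#include <stdio.h>

	enum transfer_mode {
	    STANDARD_TRANSFER_MODE = 0,   /* empty set */
	    TM_OVERLAP     = 1 << 0,
	    TM_TRAP_ON_NIL = 1 << 1,
	    TM_VOLATILE    = 1 << 2,
	    TM_COMPLETE    = 1 << 3
	};

	static enum transfer_mode add_modes(enum transfer_mode a, enum transfer_mode b)
	{
	    return (enum transfer_mode)(a | b);   /* set union */
	}

	int main(void)
	{
	    enum transfer_mode m = add_modes(TM_OVERLAP, TM_TRAP_ON_NIL);
	    printf("mode bits: %d\n", (int)m);    /* 3 */
	    return 0;
	}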

5.53.1. transfer_mode_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> TRANSFER_MODE
The token is applied to the arguments encoded in the BITSTREAM token_args to give a TRANSFER_MODE.

The notation param_sorts(token_value) is intended to mean the following. The token definition or token declaration for token_value gives the SORTs of its arguments in the SORTNAME component. The BITSTREAM in token_args consists of these SORTs in the given order. If no token declaration or definition exists in the CAPSULE, the BITSTREAM cannot be read.

5.53.2. transfer_mode_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM TRANSFER_MODE
	e2:		BITSTREAM TRANSFER_MODE
		   -> TRANSFER_MODE
control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.53.3. add_modes

Encoding number: 3

	md1:		TRANSFER_MODE
	md2:		TRANSFER_MODE
		   -> TRANSFER_MODE
A construction qualified by add_modes has both TRANSFER_MODES md1 and md2. If md1 is standard_transfer_mode then the result is md2 and symmetrically. This operation is associative and commutative.

5.53.4. overlap

Encoding number: 4

		   -> TRANSFER_MODE
If overlap is used to qualify a move_some or an assign_with_mode for which arg2 is a contents or contents_with_mode, then the source and destination might overlap. The transfer shall be made as if the data were copied from the source to an independent place and thence to the destination.

See Overlapping.
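
In C terms (purely as an illustration), a transfer qualified with overlap asks for memmove-like behaviour, whereas the default permits memcpy-like behaviour in which overlapping source and destination are undefined.

	/* Illustration in plain C: memmove behaves as if the data were copied
	   through an independent place, which is what overlap requires. */
	#include <string.h>
	#include <stdio.h>

	int main(void)
	{
	    char buf[] = "abcdef";

	    /* Source buf and destination buf + 2 overlap. */
	    memmove(buf + 2, buf, 4);
	    printf("%s\n", buf);   /* prints "ababcd" */
	    return 0;
	}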

5.53.5. standard_transfer_mode

Encoding number: 5

		   -> TRANSFER_MODE
This TRANSFER_MODE implies no special properties.

5.53.6. trap_on_nil

Encoding number: 6

		   -> TRANSFER_MODE
If trap_on_nil is used to qualify a contents_with_mode operation with a nil pointer argument, or an assign_with_mode whose arg1 is a nil pointer, or a move_some with either argument a nil pointer, the TDF exception nil_access is raised.

5.53.7. volatile

Encoding number: 7

		   -> TRANSFER_MODE
If volatile is used to qualify a construction it shall not be optimised away.

This is intended to implement ANSI C's volatile construction. In this use, any volatile identifier should be declared as a TAG with used_as_volatile ACCESS.

5.53.8. complete

Encoding number: 8

		   -> TRANSFER_MODE
A transfer qualified with complete shall leave the destination unchanged if the evaluation of the value transferred is left with a jump.


5.54. UNIQUE

Number of encoding bits: 0
Is coding extendable: no

These are used to provide world-wide unique names for TOKENs and TAGs.

This implies a registry for allocating UNIQUE values.

5.54.1. make_unique

Encoding number: 0

	text:		SLIST(TDFIDENT)
		   -> UNIQUE
Two UNIQUE values are equal if and only if they were constructed with equal arguments.


5.55. UNIT

Number of encoding bits: 0
Is coding extendable: no

A UNIT gathers together a PROPS and LINKs which relate the names by which objects are known inside the PROPS and names by which they are to be known across the whole of the enclosing CAPSULE.

5.55.1. make_unit

Encoding number: 0

	local_vars:	SLIST(TDFINT)
	lks:		SLIST(LINKS)
	properties:	BYTESTREAM PROPS
		   -> UNIT
local_vars gives the number of linkable entities of each kind. These numbers correspond (in the same order) to the variable sorts in cap_linking in make_capsule. The linkable entities of each kind will be represented by TDFINTs in the range 0 up to one less than the corresponding number.

lks gives the LINKs for each kind of entity in the same order as in local_vars.

The properties will be a PROPS of a form dictated by the unit identification, see make_capsule.

The length of lks will be either 0 or equal to the length of cap_linking in make_capsule.


5.56. VARIETY

Number of encoding bits: 2
Is coding extendable: yes

These describe the different kinds of integer which can occur at run time. The fundamental construction consists of a SIGNED_NAT for the lower bound of the range of possible values, and a SIGNED_NAT for the upper bound (inclusive at both ends).

There is no limitation on the magnitude of these bounds in TDF, but an installer may specify limits. See Representing integers.

5.56.1. var_apply_token

Encoding number: 1

	token_value:	TOKEN
	token_args:	BITSTREAM param_sorts(token_value)
		   -> VARIETY
The token is applied to the arguments to give a VARIETY.

If there is a definition for token_value in the CAPSULE then token_args is a BITSTREAM encoding of the SORTs of its parameters, in the order specified.

5.56.2. var_cond

Encoding number: 2

	control:	EXP INTEGER(v)
	e1:		BITSTREAM VARIETY
	e2:		BITSTREAM VARIETY
		   -> VARIETY
The control is evaluated. It will be a constant at install time under the constant evaluation rules. If it is non-zero, e1 is installed at this point and e2 is ignored and never processed. If control is zero then e2 is installed at this point and e1 is ignored and never processed.

5.56.3. var_limits

Encoding number: 3

	lower_bound:	SIGNED_NAT
	upper_bound:	SIGNED_NAT
		   -> VARIETY
lower_bound is the lower limit (inclusive) of the range of values which shall be representable in the resulting VARIETY, and upper_bound is the upper limit (inclusive).

5.56.4. var_width

Encoding number: 4

	signed_width:	BOOL
	width:		NAT
		   -> VARIETY
If signed_width is true then this construction is equivalent to var_limits(-2^(width-1), 2^(width-1) - 1). If signed_width is false then this construction is equivalent to var_limits(0, 2^width - 1).
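
A small C sketch (host arithmetic only; a TDF NAT is unbounded, so this is restricted to widths that fit a 64-bit host integer) of the bounds which var_width denotes:

	/* Illustrative only: computes the var_limits bounds implied by var_width.
	   Restricted to width <= 62 so the shifts stay within a 64-bit integer. */
	#include <stdio.h>

	static void var_width_bounds(int signed_width, unsigned width,
	                             long long *lo, long long *hi)
	{
	    if (signed_width) {
	        *lo = -(1LL << (width - 1));        /* -2^(width-1)     */
	        *hi =  (1LL << (width - 1)) - 1;    /*  2^(width-1) - 1 */
	    } else {
	        *lo = 0;
	        *hi = (1LL << width) - 1;           /*  2^width - 1     */
	    }
	}

	int main(void)
	{
	    long long lo, hi;

	    var_width_bounds(1, 8, &lo, &hi);
	    printf("signed, width 8:    %lld .. %lld\n", lo, hi);   /* -128 .. 127 */

	    var_width_bounds(0, 16, &lo, &hi);
	    printf("unsigned, width 16: %lld .. %lld\n", lo, hi);   /* 0 .. 65535 */
	    return 0;
	}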


5.57. VERSION_PROPS

Number of encoding bits: 0
Is coding extendable: no
Unit identification: versions

This UNIT gives information about version numbers and user information.

5.57.1. make_versions

Encoding number: 0

	version_info:	SLIST(VERSION)
		   -> VERSION_PROPS
Contains version information.


5.58. VERSION

Number of encoding bits: 1
Is coding extendable: yes

5.58.1. make_version

Encoding number: 1

	major_version:	TDFINT
	minor_version:	TDFINT
		   -> VERSION
The major and minor version numbers of the TDF used. An increase in minor version number means an extension of facilities, an increase in major version number means an incompatible change. TDF with the same major number but a lower minor number than the installer shall install correctly.

For TDF conforming to this specification the major number will be 4 and the minor number will be 0.

Every CAPSULE will contain at least one make_version construct.

5.58.2. user_info

Encoding number: 2

	information:	STRING(k, n)
		   -> VERSION
This is (usually character) information included in the TDF for labelling purposes.

information will be produced by make_string.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdf/spec9.html100644 1750 1750 7475 6466607530 17356 0ustar brooniebroonie Supplementary UNIT

TDF Specification, Issue 4.0

January 1998

next section previous section current document TenDRA home page document index


6.1 - LINKINFO_PROPS
6.1.1 - make_linkinfos
6.2 - LINKINFO
6.2.1 - static_name_def
6.2.2 - make_comment
6.2.3 - make_weak_defn
6.2.4 - make_weak_symbol

6. Supplementary UNIT


6.1. LINKINFO_PROPS

Number of encoding bits: 0
Is coding extendable: no
Unit identification: linkinfo

This is an additional UNIT which gives extra information about linking.

6.1.1. make_linkinfos

Encoding number: 0

	no_labels:	TDFINT
	tds:		SLIST(LINKINFO)
		   -> LINKINFO_PROPS
Makes the UNIT.


6.2. LINKINFO

Number of encoding bits: 2
Is coding extendable: yes

6.2.1. static_name_def

Encoding number: 1

	assexp:		EXP POINTER x
	id:		TDFSTRING(k, n)
		   -> LINKINFO
assexp will be an obtain_tag construction which refers to a TAG which is defined with make_id_tagdef, make_var_tagdef or common_tagdef. This TAG will not be linked to an EXTERNAL.

The name id shall be used (but not exported, i.e. static) to identify the definition for subsequent linking.

This construction is likely to be needed for profiling, so that useful names appear for statically defined objects. It may also be needed when C++ is translated into C, in order to identify global initialisers.

6.2.2. make_comment

Encoding number: 2

	n:		TDFSTRING(k, n)
		   -> LINKINFO
n shall be incorporated into the object file as a comment, if this facility exists. Otherwise the construct is ignored.

6.2.3. make_weak_defn

Encoding number: 3

	namer:		EXP POINTER x
	val:		EXP POINTER y
		   -> LINKINFO
namer and val will be obtain_tag constructions which refer to TAGs which are defined with make_id_tagdef, make_var_tagdef or common_tagdef. They shall be made synonymous.

6.2.4. make_weak_symbol

Encoding number: 4

	id:		TDFSTRING(k, n)
	val:		EXP POINTER x
		   -> LINKINFO
val will be an obtain_tag construction which refers to a TAG which is defined with make_id_tagdef, make_var_tagdef or common_tagdef.

This TAG shall be made weak (in the same sense as in the SVID ABI Symbol Table), and id shall be synonymous with it.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/ 40755 1750 1750 0 6507734505 15475 5ustar brooniebroonietendra-doc-4.1.2.orig/doc/tdfc/table1.gif100644 1750 1750 24334 6466607534 17464 0ustar brooniebroonieGIF87aˆO€ÿÿÿ,ˆOþŒ©Ëí£œ´Ú‹³Þ¼û†âH–扦êʶî ÇòL×öçúÎ÷þ ‡Ä¢ñˆL*—̦ó J§ÔªõŠÍj·Ü®÷ ‹Çä²ùŒN«×ì¶û ËçôºýŽÏë÷ü¾ÿ(8HXhxˆ˜¨¸ÈØèø)9IYiy‰™©¹ÉÙéù *:JZjzŠšªºÊÚêú +;K[k{‹›«»ËÛëû ,N^n~Žž®¾ÎÞîþ/?O_oŸ¯¿Ïßïÿ0 À" <ˆ0¡Â… :|1¢Ä‰+Z¼ˆ1£Æþ;zü2¤È‘Ñ<‰2¥Ê•,[º| 3¦L—&gÚ¼‰3§Î<{ú|™Æà;AŸ ­IÔ]Ò3K™eZphQgOÍT]×´LVeWÉtM·uLXd_Å”=76LZcgÁ´-·öK\bo½Ô7·K^aw¹ô ·wK``µÎ’0@Ô ‹× VØÁA1D–ÑøÀä•5oŽÐ™Äa,£­4ŽšYBj“¯F@ôõÐNÓV¶å¥ºï(mº·Þ¬±+ 7Ûš·ä ·A¯‘»9ìç ¤^»”×=7=Šú¸Þälóï}Òîå3k÷mB÷æîæÃO?ÿôótÈëþ?wMžzŠáG u¼·x†E[yí=hŸQ4§_š]8àƒðÙ·aqf˜„ Èæš†¦â‰ë™¸w,vÈqfxÚ‹)¢€à„ "݇ìÉ("úçŸw%¶'Ô‘[±gÙ $zvcŠ+‚8 Šbc‹/2©!”(ÖŸ”Æ·ã‚A*ÁÝ”KÆX¦`äÉ–XÊè#dœ™¥–ªi!ŒZ9e˜ûÙeZš8㕈ŽYBŽB¶y…{…òùg¢L‘'f˜çÕÉ¢’” jžyúøiŸžî¹¥¥Pbhª«fÚ§Ÿ›‚z¤‘Úœzz‡¤£OÜ6_øñZ%±‘Uõþž¨lº§k€U&`zÁ^¥’ÏJ;"‘êj$®øê¸Ó(‹¹mxÛ‚¸L¨¹F¸ë¼°‹&ºÙÈ;¾H1go‚úó/—BoS5U3±°S ;Õ¯ŽÓõð2 ±5÷°±(ñq2ÿfTÆ¿šLÍÈ:¨üIÈB¸ pÅ\¡¼îO6ߌsÎ:ïÌsÏÕBåsÐBMtÑFÔÑJ/ÍtÓN?]Ò¾JÉ,2Í5³ì Ö³•¬µ 0³E5YV/ñu1]CÇuPcCs6 mgR6eFÛ¥d†r¹ò;¾ÂXú™m¹LÂÚ{ÄÍñjVÒxu±¸œ¡ñþCŠ«j-ꊛ¨Ö­E9ˆóðŸrŒÚúéÕEÞyg¡{ ùê-¾.ûàßéù¼¬û«¶å¨›¹³yÒýlðÛFX¡|‚2‰¬µtKKû»:»à›S—ØwæyIâ´Â Û«sÅ}ß+‡™c= _9[ª‡¿W^ªò£ hݬÎj!œrʹiŽûÝ=ëm.Nü»ò—:@µTÌkЗL•¨÷¡¯w³ûÝ÷T¬ÆÁfN3Ú”mduA=}PSùñÒùÀg˜ØHƒ© ›—Áò…/†; £D% ªhKƒb˜áÊ@*忀AÜà˜NĽ BPuƒ¡ràä›a‘0}¢S¡þžø#Àa®|ß±ô(†q.§Ò”«r²ú€]:òªGÀIͯcTöäE0©‰‰Sˆ‹éŒ¸<*Æ1uBܳ*%&ù„qUKìÐ*Õ bA«Œã— û¦<Êð23<á®ÇAd]nXÙå¡5¾K†¨‚øËße;òIŠVÍâb¶vU¼ ÞnY’¤O´¾¸F GªtR1ºß¨Ñ‡„Á25ÇÒ-™ÏLc1§·kâ(lÒÔf0¥Öo^'màd‡8%GN I%š «f¾ÜIn²ž–\票BOåÓlò4Û>åvÎS7éÔ ÔŠÐ„*t¡7f8þûI—&n ƒè0’¹2‰òÅ¢|Ѩ2)J êŸ"G ÃÑ``4+ÍZJ sR–ÆÔ¥ì„XI“$×ýñŠüÁ›&ÇIÛ´ Tävb‰B@†j§ƒ´ÛLëuÓŸ^n•ŒÁ"¿5¯Y©†ª‚ßݎꜫþ´ª«{&èÈöT¯MµIdíaMªÔÁqUHVT"X‘Š§Çœu{5XZ¡³V_vË€ž<ŸýT$JðÕG†VE”öþèJáõp×bÎôLØÕëz–b!XT÷…xÿ3g#ø×–sQd­éÌ÷8Æ–QŒ”œ-!´(7Òvq­Â!eJ;¼»†Ktâ"[þë§Îo·ÇŠäs‹À‰:”‘ ÄaÿHeÕÞÚNœBn¸êi†zLb%YZ×k‰”Aªá§`Å*Bõ6ÚË!ØÝÔötº®uíq‡ˆF4ÚötåEãf¹¦8FðB¼=¥o9–Þ ®k€O’c»KXåZ˜©„ì­¬ÜúQþÂwÄf´®ýÜÁ:Fñµ¥ºmý e`5xÖe]‚é*Ô66VÛÝ1vk·Ã®˜> täd•ôƒ–Îfn¿_e{™TTÞ'§©Ü”É‹·Ÿöx\Æ·Ô+0V¾JNáŠeø¼ìËÑJY—’moš#ëTŦÌ»¬œ~1ã­;ƒ"ªsUžÕ þdµÒæÏžà3[Õ'!-û!Ѐ4¡9ah½1óÑV¡4¢# é—þ‚Ñn³t,0ÝV›Š,šÎš§ý|êW€º65èªáVê^pz³ÆÅ«·JÒ¤1t׼m‘Q£ãÖ iµWRí bË5׫U‡²Ó[µg;ÕžV1¶XÝ i×NÔÍ&uM-†m2i[ãæ)¸Ë½ß)W©ê^ön‚µTêÕ!´bîi1ɈK¬^Ýæ÷}²kk•Ãpemmü=TC. 
sæÙÏÛë[Ãv¨u JGæbÛ±ÇÛ¬qþÖmQ¼ÐÄ«h2însÓÁâÎ l*¥ÔåöB9Å‘Û-­CþööÉ÷¤¸Êãú²oŠçÍ…syõåRþÐánÔTã÷cßòUzlñ¨^H©v‘ÆÜ<‹Qa)tÓ1êTç¢Yܳ3ä4<ñÖËñ“ýÁNU’Òícº{Úb½&{—ð9՗࣠2’Ç,lµ¬U©Úåp+åäáÛ± Gð›Þ(¾÷ìBM“ŽçNI-‚ÞÃ'®4Û1>ÈÆg¿ŒŠ(ŠH C{hŒ$y8šGÄYwº^dŒ5J6 ‘¤Ÿ@4¢·‘r„D`ÚAÊ’x¤¥| ¡‚·¥%×a·çDßè¡aj[°•=Â×|5¹^ž¹TõY BùˆÓ7~›H•»ù„²¥§xø›šXRÜwfbKSX~} ¨z•> L"*þ›ß@›Œ™©Þ°©u¨Wœð©EšhšŸªB7ª€Pª*µª¢H‚='‚<¸€´:„{Yš‹ ª§ú‚¼jªÚ ¯ S)™ÇŠ¬Éª¬"!¬Ü@¬›«â£šnÁšn­–ð¬¦æ«A˜ª°€šت­²v­Æy«Õ¶ƒ®¶›Ò¡h ©nVXpôé›J™”ÁI£»DIJn功=1PD\ò%œ÷š>§p•@+Ä¢‘c8›W‡¶ÙTïg{K£ÚY°æh°Fˆic «,›8y7äS±£¤¦g¸žD£Jª„š±+­ùÙ…þ¸_‘:‹æ tºjßz–ÜÕ†Îõcíþæ]‚¸=w$¥W¥}¦SÄx²5y^úŠÍBÕ˜Hf±§}FZŠDÛùEgÚ¤ÉlÊŠ^W¨4{‰iC,$‹`›•"[VBY¡ª¶þ:¤±xyb«~D¶Ca “w ’éX¶U瑽÷j<Û:e˜‘Vbaµ_k¡ ÖD-«¯7Ö´¹©?æE¸vC§´’ä|7ˤ79jîI<á×N¶>GiZAYe³ëªZx©õ:uOé§\6u¥ä¨¾¨®ûf‚Á¯ÀX‡Œ[Ž¸Ú„Íš ú¢Àš™ð È;¶º„ݪjÅ›»³Š®z9‚愽ɦ½L‹á›m㋱Îf¾þ¬@½Y®Ï«¼„©«¨ ½ÅʼÚ0®ˆ¾ïº¬ýë¿ÿ Àû;£å[¿Ðº­EXÀÚz¿÷²¾«Ð¾[9­rQ­r!ÀÈÔÀªðÀ8Áv1ÁvQÁŒ¿1øÁŠ–™qñVe©…i»¹ö: Êä¼lÁR5†1L2^èœÿø³›¤*Ç;je[€«•/c+ï{hKx¿¸±d8qwž¨KÄ#«fˆÅ|¼%•Z"»€'g¥H²q² ³NJ…©0—¢ŠYN™³ú‰Vrë ƒ§(„‹ 'vr|¡ _1zÛ+Œcœc¤¢ö^ùz°%ö3 ¢Ïu @ú½¹çŽjçYìIþȾÈcŠ›Ç[V±ç£7ž›|c’«È%ä¼ÁÇ÷ÕH*öÆY+^cÊ`D ä¥ñæ²:îv.«šð1 íSý.Y½,áð_nÔäÞð¡úðd¼ðѾ¼»ÎæižÒö>¬mî1¯¾$/Ó&ï¬( o®/çÀ«gÿ°{þ-]"OéÀ:Ë(è!‡kƒmèÌó÷îóVÎè8LPDŸÂ:OÃI] ¯jé[†éVüÙ~›\¬Îêí(Ï4Îã}ô'_¿CçÓ\È(iîßÍŸ¼HëŠJǸÎȹ®ë§®˜½ÎdJLݱ=^¯>GöfápÏí¼)ëôÞíáNç/ñùýÜÀ^¢ÖøØnÆå‘›c®÷`¸"óýÊø5¯Cß~Þ®îÝê´\+ÞÈΠ„d3I~µõnRŸ2øþEú»üþã½ûïxW©ÍG•@i©Â‹•d÷»šêj·²?.0SJ/λÊ/ ; ÒÄëòÛ@ýu©ò¨®ñ!_üô{üÁZöþ/Ïò<}ýøËü-³ýyßýòkz7 ÿñ/ÿP#ýí’þ…¶þÆßþ.=þØÿ@p0u¹ýa”@qÖ›wÿÁP ®Ñ<¹]ÙÖ}% žW™¾ñ\ÇÕÝŸz?ᨱ‘ˆc’ÙtŽ‚Ï]TZµ¢–WÞAÛõ>©ßVX\6'²ç†YÜ ²ξÍÕ1¼ÁgøÍðHlâó~ AÒìþÆJ3Ø$¥¨$7£:Õ(¹(_F'+º1;çJ'U`匂fYuð<ÑLI@)êŠn•ŒP†sVqk=#ý-¡9•ŽDŸ±/µ£/´­;±Û¾‘ÁRÕ÷~¹Íå¦þÅÅÇûÏÇ÷²í»á½ëë}Ê7DÙ2 ®¬¤P ¬„ ÿðƒØ£š¯‰©õ‰„1žÅŠ¢zÜxq™Býú])õna9‘!71”U‘I”Õ¢,÷ÑÆž-‰4$£Ím ÍÈâK§Á,VÒÔ†d£#ÿñŠŠ´d“ëi´u¯W Þ,Q=ƱiD«½RüékÊ-è†úëh4dR­´4VRû”-`©}¨*å…õoV‰cÒU“ò™P?gnÉ8\ß¿)?> ¼¹ñ1¸ŠC"4o…¢Jéz¶©ïpTÂG†±3·°hØZO玼h2ÌÂ4ÅÊ)“ùmßuU›‚mRøþç:Eb—±·íDv[ÇàÆÏ4þ\‡–ÜÒ{Òä1Í*Ïý9¼L4]»ê“uAô×ó‹=Šâ™i}ø!Áy@3ïºd5ðÂsí¾¤(¬PÃ.ªC\2Ü0¢K Q<ÑDLq OAäºQ^4‘ÅþT¤1.Y̱4+ãqDoÒB!‰LGVŽ,2<EdRÉ'EH’8(©D°Ê®ÌrÅ+¥ÔÒEStÒË1{|²K2³I5ÑlsG%ÏtÓ 1+¤SÎ1ã´ìN"íl­Ï=¹Ì2O@3a³ÎMTÑEmÔÑG!TÒI)­ÔÒK1ÍTÓM9íÔÓOAmTPC þÔÔRuüÓCRQ¥rÐ2^mÕVý¤UÖ"cý"×['ÜU[yÒW-† 6` RÕØTO]vIdK,ÖÙ<¤­¢Úi]P6Zlï¼6nWõÖ‰,\‹C¨Ê ¡\xÔÍ  ëÆÁZ_æ—R¨%:nókë-·î½ÃÙ¦(†¯6ÔßfÞ"¸ú=È–z#Þ‘bÄô$ÊáŠú ¸CÅõ/_€p3Ï1xqCŸúPFQª—¹C<Ìàû8°8îY^5[b±,ï=WQ- q.ø®Q!®y+¶¦ûw³îœ«Zà÷š›iê–Ëz8jM£ÎÊŽ¬¡ê¶Òî9™þ÷(ål|ÙÉkÒÎÅi⊲®kþÒZùbªëÚÏ;ýF ;ö` ünÅ$c8]Ã`kæ2½V<»ÀWKîÿ2§é7¸Ó,¿¾ÿ»*Ê÷ò[lÙ îÙ¬º5.!ÏÇnÇÜ^v¯œ¨ÎwØ6·Ë2ïÃGKMuÒ“ÛûËØ=ŽmÔh‹[úÔgsëzçÚ=lå!Ÿrà`×ݵÁRÓZ®è±OÞ7¦ÝÞf®LŸ¸þYÇRïÀéq,^—É¡2•Íl7ñ!íæ£´œé,a#zÝ–f¾³M/\×ð¨(mAÚ`È·û ÃyQ³J4->Ï„daPDxÁ`­Z%þl¡øªôB‘Ù0U<,Ý ½¤ÃÕQ„P#" ¡$Ä#‰K>\¢®øÄ99±yJ”"»uÅ*Q‹¸Šb­EE]‰ŒãúbÇEFb…ŠmtãáG9ΑŽu´ã5ÅE4¶ëŒ{4’×hE?Ö ƒ U‚HCú@¹X¤½y¬H>Ò9¤¤É&yE^Ò`Yä$ 3I~†ž$e¯ôxJR•3¥e.!ê‹eQÚÏ@‹¢ƒô“xúGŠPÖj˜oè´^q"ã´Bm$3f½ÃlrKÊÌU2ÍM©Ü‡pù£¯å9qbÒ0†¼Œ¹È¯dþfr©°mú,ÅæXʶppN?ôº<ø‡Àk ÈpMÚWäÒÏfö„‚l /¹‰C`­€øØ'Åê?惂 Ø=@ºVã©OÀ6ºØo9x AËÙ›ôåNtÕá'Co¶‡Š“§ÉÒe3Sê“”æF¨N‘ÎÛø²6tÅÄnBÍɰHºÅÖQ”/AeÜÜŒç8è´înËÓéì€zœZ¥hXÚ)-NòËžj´vÂC©Q ³”}øŒ dŠÊiEuŒv-^UW»¯5kµi[XøRäñ‹zœ£ÝR*˜Œä v•¿‹å_Õ—>í)‡±u5ìù »ºÎ‰’šÔžþê2»ÌÔz‡.ä[Ö`*SÂ!l}IݘãrZ¬t†O5 úf{›• Wy™^/b¢Êσ =Íwì™­XJè ù ?È4޾,‹“ëyº{•ü4×5ŒÑ´3±ð¤´,ÝQ`šPñ6—= Ò Tßµ‡ út½y.ß1ºÕ$á#ù:Å.µ·‡sbå±  Ë=%øW fðŒ hÊCøY^ƒ‡[ Ï$†ÀM1‰æEçâÂ<1Š_ …³8ƶ¬q+o\É“6Ç;ì1)¬Í»²ÅÝ2;5|dHÙ~J&[’Œ…"oU¶ò•±œe-o™Ë]îT€þ£¬Š‡™SÞ0™ì*4»×Ìióšy7b8Ÿ‹É ðœí]ë}1¹üZ]š¾7Õ6þ½&pÍr‹áŒÇÁ¦Óɲ¯n·m[á6îH\vµ+–E(Z?NŒµå4m¡•xÊ;m…´Ö¨³+n_‹3®æ¶ÝÆz÷öâ-jÐÒU5>õàZkïïÞËQÎÃHYÆá„6“6yÑ̉ßÌ„¸îj-#ÇĆ6¦¼ º·írËþÈ¡WOº&S¾Ç¼ÿÑá´¢ïÞ$@÷½ãÇð ÷Þ;Â3£Ï‹‡¶šéÄÏ5ò¥´dåÝxÌcñò›ç¸á=?D(‡~­f |˜'OOÒÞU§rêi½ú‚^ö’}íÎ'/ïž÷½÷ýï|àk÷„¼}í±^(Ø_qùï,~îußþü'J¿è²O¾iŸ/ÊÁcþúõ̾öo}×ßóû®Ü~å»Éò¿áü‘OÿéòG])+¾æ“"¥Í®ëõ6ºé¶óÚäý\ì´JHëæé’Jí¤ôOùÎ×¶ NCÐȦêL(ÚŒõ¤©²€)îžéåMÄH&À hnô¢Ÿà 1âÏ@¤§ÜœÀÖ+ÿòF˜>(¡nðÝœé½ÂK»–…Lì´²'ÝŽg¸Âj¶4G¥h®ì–nö®þ¸g¹Êm¨ÚísZn¦ðµ²IÆOº ®4f¬Æé«âãù¯^"Pæº0]È…,€² 
£.í3æmæ0(?K݆þCè–)njË!$.∠W9NÈådG༠äÆ)™øÉšdkuÚÙ\íÓÌ-4¬Px(çµÜª‰NÖÜméÞê M‘ÜÔ† ™# Íf$~b¢»\ªúúÊ›^ìXðªPëãíüG¾è 7ê9.QõÄiíÌë–䪄ꨃ¿–±íŽú„d úH‘ô°ñÌ$P¤ýðíf¨é¬aP¯Ý ‰òÖQiý²E³ÑU ILÝ‘i¯óÊOK RmÌ ÝËûŽ …Mø$r")²"-ò"-’øU,’–ú±…ø !2"7 þÑJ’ÆNœÂoõVR’ZRÕR2Ðbòif’Õ^’?’zò”F’oï¢m7æ¯ uâCô¥â‚¨—ŠQÓ¬—4È™l©Ýí'-Ò /.„_‘YÂR´ÒÐÁfm ]’Äh°iý±ÿêQr>KÇÊlR×*“£<0.#Ê/•.öæñ ÊwQpþ«hÞŸ¸K¡öé»jcÙjæ¿®«ÀÌNõ2Ã>ØHì“;ª “F¿ìÃÛxŒ1·œ+¶†¶0$¸ˆéÔgÿ|4ku)1q´ÄìÿkK®eZ«è î%”‹¬ˆ“ßp' Sþ¤¬ vÄ08˜k±¦'‡î°xëªx|êêÑR"2Ó,0÷Ln>l“w§6u3¿0ÃâðÐ{Èh(S³­ã™îk;#S;‹;Ǧ1Ç2åwFmî*ÖvpêRí8³‡+ýê·–§¸Ö+[á˜6‘#t>gª.ޏ®7°q}«+óŠD÷R™h3>¥° Ç7T6Çg#–S8´C+ô> 3·+Û°‹½3fšÑ¼Š4õ‡?¼®=èŽÍSÙÐ'®”4FMMån€¨16 ³‹§ÆÔJéKÃN&ß2'³I ¿Ï>‰,$™p'•JÍNÄC›þŒMéÒLHRO%OûNþ”•Ôj²Ð5óµMòMÕbäc‚Ò# uRÕ0Q÷ N SÁ¥RòRÕQåÌRM #Q5UUuUYµU'E#Gõ/}ÄSMTcuÔÜ4û8•ün5;2TaµWïW!ÕV…4ÏvÕ 50ýT(#Õïš5AKX×”HmÑ_x3£`TïœÒLŸ2*דkÖ)Ò|P–uÉ´òé¥DÐ[Ç’·äF\óRV3mPÐ^³N5‡UƨÕõuú4ä6šví_Ó’Tý5`±/:K01S@ræ6p¬>ç1ç4Úl“aÿLZO®31>êîrþÄΕA€BêÕoM¥“ÝHãbó¬Â°PK$y³,qt7óDƒ›Êªhc6…ë->è)¦ðât(]Ùì §ccb†=Ë+cÎ=™4cÝ=Ÿì<5ˆíä°Ù3hŽpFs}Šó«”Hi?Lý°|`QäÆÐ »ÃêzÓYÖ,ËU%ÆAq±Ê°\åÖC¦„m¥L{–0Bôh[³wè‡q—ðSh±J1tF»çvªÆïp!7}4;<(H½ïZ×…ì~ñH½4•<«Oy31–»èõŸ¸”듃˜²xP¶LÍôí¬KLu.Z7sZéP¡ÕX÷x‰µþŠ`U ÷—·gµPƒµYAµX¯÷X³WW‰w¢¦—yùÄzY6|ï–’w{…µ{Ÿ/zó‘V.YçÌ}mÌUí÷~ñ7õ·ËÔ·WÙ·øèw!Íw,¯‘|£s€uóê ¸õàW…äÎ$X0Ç7}Ëw€ÿ÷"Ø'˜nxÍ6ØÇ¼Öi¦‹F:Xô *\)V3ó g¥ë ƒè{ .ÒE«pÅÐ}¦©˜.#[ Œ+)XÈ‚¸Âøõ|…X=oÆEßõ7;p¯°_XЉ˜eÒ;_0Ó‘&Jýè27vˆu¶x…¶h![˜G×…½ÎI›¦pe¸ÚJ¬ˆî­.þpß0wOø‡ç²79u%Gs†ö{`S8›ÊÞ–ó,~v‘qe6±nTtyÑ.kª ^—< xŒÈ#y¦àóm;ù1q«þæßr”Š8CQîpœö°èÊ’KO‹ê¶rØŒQÜt´k±ãÒB–w<-ñlã¸lÖµ ãÖ¶6Ë91÷±ö¸†5ÍŽÅ8f-)”~@4> íms˜kš›‡WyÁ”î–Ô3ß°K¤wŸ†ŸvâLP‚|Yj¾ó ÊÇÕ¿š‹Õy…wœû÷V3ùŠ™ò&8¯tžz z‰1øƒÑ,„³ò o9¡­õ€šN=O¢;´oe ãÅœ'¤a²¡g9£sµ}Mš¢¶yYZoö7¦ez¦iº¦)åTm:§uz§yÚUqº§:¨…z¨µì¥ú¨‘:©•z©™º©ú©¡:ª¥zª©ºª­úª±:«µz«¹º«½ú«Á:¬Åz¬Éº¬Íú¬Ñ:­Õz­Ùº­Ýú­á:®åz®éº®íú®ñ:¯õz¯ùº¯ýú¯;°{° »° û°;±{±»±û±!;²%{²)»²-û²1;³5{³9»³=û³A;´E{´I»´Mû´Q;µU{µY»µ]ûµa;¶e{¶i{ ;tendra-doc-4.1.2.orig/doc/tdfc/table2.gif100644 1750 1750 5531 6466607534 17443 0ustar brooniebroonieGIF87ašo€ÿÿÿ,šoþŒ©Ëí£œ´Ú‹³Þ¼û†âH–扦êʶî ÇòL×öçúÎ÷þ ‡Ä¢ñˆL*—̦ó J§ÔªõŠÍj·Ü®÷ ‹Çä²ùŒN«×ì¶û ËçôºýŽÏë÷ü¾ÿ(8HXhxˆ˜¨¸ÈØèø)9IYiy‰™©¹ÉÙéù *:JZjzŠšªºÊÚêú +;K[k{‹›«»ËÛëû ,N^n~ž†¾ÎÞîþ/?ï­NŸ¯¿Ïß`ÿÏY@3Ò>X¦à…Í’a˜0ÚAƒ % ¤HÐbCþŒ.И¤šHNëñ¡È$+¬DÐ’’G‹1©¤sSÁK—5kJšЧ”œar*¤“&Q¤”þ;YÉ$Ñ*S¿Ü”ˆë¬#c*ÙlSK-«N1‹r¥CšIÙ6ø:¶iÐLÊ -—« Ü‚õË2l\z¬¥pbµAåŠu Xgc±nñ–Œ8‹bÍ/2E:õsÖ§› I¥kµ—Ò¾,/vÍök©d_am7/Û¹U§å½›£J່Ÿõ½¹0ãC•¿žX‘yÝÅ…ãtLz^ìX¨ëÒþÄ{säË›?žžõdàˆŸÎ½õú¢ñ{µoò~ûýYûþ—ä__pÑÍwLJüç^€Õ È  Ö'eFÕƒãAŸW=arZ[·YŸP¾µá%&ÕˆúIæažÆ€h]‘¦b'Îh â§eZ9tÁ5e?Å8S©åˆÄU.)X’Fe6"¹ÕVLÒ°#Fù˜_C9˜”[ÂqcVêÐ¥=–éc˜Z’(§Õ…e•žY„“-¶8c ,½x×]|ºqš™‡ÆÐækòG U²Õ(‹ Ñh™¾ri›ñi+jŸ“Æ2ꥱª*©úÐ*¦§Âòj±z:+§‘®–«+µòp+©é Kl±Æ»þ+1¿î¬ª½Ššlrˆ2mÍÂú,+Õê˜-«Ýºí×’²í ãûmmá>§SH3© !Š{®›%o•Ò¡–å–p.³#Ú#¾yf¶¤u :(‘£íËFš•Ü LT™ge“1¼†Ã_q½;HfÈé^¯”óv'œö~Œ°„› qFvÙI1„INŒ±¼"ÊUS(’„þÜo‰ê›3E[;ò)I3Ú´³ÄÑ&ËÓ0,Í­ÕdÑ›¢ÖQqý¡×2ݛؓ˜½ÖæF-0Êb¨Ý Ú*À]ݛȂÝаM.Ù{ xà‚ÎŽß Êløw‰s¼‘Ûôá§ã`èþ ä'PµåhÎ/ßèR›q'ŒtÙëˆç·aö•çy»Îfg#¤~·ƒuÆÜÑG¥ª¨†žº1$^é~2âíÒÌrO!ãüõ }©æWË®&i)+ó\Ðçe»Ä¦»=dá//ËAŸÝý¸G|òõ> &ócß›òqۇΘöú/<æÐ 7ß–Þ}Ïx­"(LIm#`î;BÜOiÌtȾ‹HNR4 {Ö8 ^§ƒ… a¤÷ð"dÏD¸Êp†4Ta lˆ¾ òÊ„ƒàáp˜¶ z‡¢ € "ºK‰@¢ð^˜ &úAŠp"þ ¬è*òD‹ЋvŒ8­ ‹&¸§S·#‚‘K#;#÷¸¤F7F/dâF¾Ý½ÈE„š"¥€G#®àÈzú*äIøx¹5îí]jÊ”é“Öry*ã#MÉI)y0ƒS¯ÈØ9  „’ûÙûV:÷ÝÑ’,B)_9HÄŒR}¡<aôx±ÑR—™¤‘¯¿8yá›ÓŠYFEÊ€/TŸûI?ñÑ,Hc¥.G 3‘]ïb7ó2ñ'G ч…K ”Ìú£÷xG#x!DÓ^Ëf§ÌÌu‘)äç ¿|N6pi¢÷É3?Š36\S;þ==vµYEItÑ0B‘G•CG[©CemÔKÝZH¥uAv¤†,m©KË3ÒªÅÔQEÓL5UÒfp¢ºi“r:?žr0¥;Ué2j꡵„]“¥"¤Á6u|ªR¿è´*&”¦¶Ü_RóXOñÑs©¹q^•i&ÉJª÷”¦)ÃRPÌÀU“•Œœ÷n)¤is­{Þ1²9iÓŸŒ%^ÁtG¼=TÌ4æ,#ã×, Ö¤ï¼+)ÇWŸ5FL¾ÅTùÄÈv±•ñ[Ï÷UsÂïSÒb½dÖBÕ³H|-·EÖÕøT§¢š1·T£Ž0…Ï;­¹vþ;Õ܆V¨;$êpOºäšÊ¸:“®iû6å^Q»Y´îlëAè¾æ¥ä-¯yÝá]x1÷7Ø}\zQqZðz@¾|¯°¨›†ør7™øU‡}¥ÖÞÉí·–¡; xÑ6P¸Ùµ xúk«ÿb˰ö«WQ ïn• ¤/)MÚsÁÃÌfý®«Ù¹ž²}Ü$1òËM2zø‡ŒQñ.-{bjÉu|—åjŠ+ÉaZjÇÕ-ŠEhN {Hž¥5H{òV¿N• æàƒêêÀ]©H¹¶Ìö‘°­š¬ˆ+“Y@ëE©Û,.4_fÍK:¯ïŒgn˜yÈt¶Ÿ˜›Ûþç\¨ÎÁ4ƒ)ºgA˜ 
‰ŽÝ¢Òh¦õ'.np¸\CÈUº¨éà àVx)³}t„‹›WEP¸’ü-ñ¦Yi®ôتHÞd7£ YQH¥L}„!–mLÍWçZǺû¦’}\ëú¹š•þl.ƒí¦M'vµfÕ2Ÿ_}>JOùׯ¶¥#+NÂj:^n®µ©ƒírº§Nà›×=æÂÐòæôsë}ïyÔ]~·½œç€ <Ï‘þ„·Ýëï|ÿÑí^õQ ¾¶s¿ âqü3{~œ*òðàNZ‰©q4bœÖ:êtçBþF~‡¹¸|¼°Ð  •=’S¬ ©ÊXo^Èþ¹²I­tʆ9ÊmXB»|{-º,­GÚù¼Å™l&çv­×g“ÜÞ¥µ1ö¦‰I;[eK71ßšÔ¶~70öðŒ‹â0u¶bã¬ÖkÎucóÒ±Ît§Ø!Zì]Ò÷ìK$òˆ‡NwµoóÄOªÞUÉ–U}ª/{¶ ùnP ë/Êõ¹ZçÙëêAÄæ-·|¶AÓáh~^ΨùÀ¶ŽÇû„×8¹«_yËõâ*ß" ÖBú’c–öú¶2Åë–ûðÖ¾xÌP=½‹ÿ{6J¼(É_¤ÅÕ½pv¿Î¯¾õ[ÚüÊe_Ÿ ÿöôgz’.ßÉáöóãlü}÷ýç—èúËbøþ’ú obe{Â:Ôïït•QÉ!;&G±§qö$x%[e'kž—D˜|Mõ€¬btÕLŸ¥'nÕbGÂKnAw·vX“•jÈx8€$ˆv~‡X†7x´Htôç}×j‚w=ò’W7¶i'ƒ'jñdGþƒV&(e0@¼}ݶ}5¶„ýô+ãÇ(OèhMßÇ8ÈW~Ȧ…Xènø6ƒIX é'-×g†g88†kȆ顆m‡q¨Ï@‡uh‡wˆ‡y¨‡{ȇ}臈(ˆƒHˆ…hˆ‡ˆˆ‰¨ˆ‹Èˆèˆ‰‘(‰“H‰•h‰—ˆ‰™¨‰›ÈC‰è‰ŸŠ¡(Š£HŠ¥hЧˆŠ©¨Š«ÈŠ­èŠ¯‹±(‹³H‹µh‹·ˆ‹¹¨‹»È‹½è‹¿ŒÁ(ŒÃHŒÅhŒP;tendra-doc-4.1.2.orig/doc/tdfc/table3.gif100644 1750 1750 5531 6466607534 17444 0ustar brooniebroonieGIF87ašo€ÿÿÿ,šoþŒ©Ëí£œ´Ú‹³Þ¼û†âH–扦êʶî ÇòL×öçúÎ÷þ ‡Ä¢ñˆL*—̦ó J§ÔªõŠÍj·Ü®÷ ‹Çä²ùŒN«×ì¶û ËçôºýŽÏë÷ü¾ÿ(8HXhxˆ˜¨¸ÈØèø)9IYiy‰™©¹ÉÙéù *:JZjzŠšªºÊÚêú +;K[k{‹›«»ËÛëû ,N^n~ž†¾ÎÞîþ/?ï­NŸ¯¿Ïß`ÿÏY@3Ò>X¦à…Í’a˜0ÚAƒ % ¤HÐbCþŒ.И¤šHNëñ¡È$+¬DÐ’’G‹1©¤sSÁK—5kJšЧ”œar*¤“&Q¤”þ;YÉ$Ñ*S¿Ü”ˆë¬#c*ÙlSK-«N1‹r¥CšIÙ6ø:¶iÐLÊ -—« Ü‚õË2l\z¬¥pbµAåŠu Xgc±nñ–Œ8‹bÍ/2E:õsÖ§› I¥kµ—Ò¾,/vÍök©d_am7/Û¹U§å½›£J່Ÿõ½¹0ãC•¿žX‘yÝÅ…ãtLz^ìX¨ëÒþÄ{säË›?žžõdàˆŸÎ½õú¢ñ{µoò~ûýYûþ—ä__pÑÍwLJüç^€Õ È  Ö'eFÕƒãAŸW=arZ[·YŸP¾µá%&ÕˆúIæažÆ€h]‘¦b'Îh â§eZ9tÁ5e?Å8S©åˆÄU.)X’Fe6"¹ÕVLÒ°#Fù˜_C9˜”[ÂqcVêÐ¥=–éc˜Z’(§Õ…e•žY„“-¶8c ,½x×]|ºqš™‡ÆÐækòG U²Õ(‹ Ñh™¾ri›ñi+jŸ“Æ2ꥱª*©úÐ*¦§Âòj±z:+§‘®–«+µòp+©é Kl±Æ»þ+1¿î¬ª½Ššlrˆ2mÍÂú,+Õê˜-«Ýºí×’²í ãûmmá>§SH3© !Š{®›%o•Ò¡–å–p.³#Ú#¾yf¶¤u :(‘£íËFš•Ü LT™ge“1¼†Ã_q½;HfÈé^¯”óv'œö~Œ°„› qFvÙI1„INŒ±¼"ÊUS(’„þÜo‰ê›3E[;ò)I3Ú´³ÄÑ&ËÓ0,Í­ÕdÑ›¢ÖQqý¡×2ݛؓ˜½ÖæF-0Êb¨Ý Ú*À]ݛȂÝаM.Ù{ xà‚ÎŽß Êløw‰s¼‘Ûôá§ã`èþ ä'PµåhÎ/ßèR›q'ŒtÙëˆç·aö•çy»Îfg#¤~·ƒuÆÜÑG¥ª¨†žº1$^é~2âíÒÌrO!ãüõ }©æWË®&i)+ó\Ðçe»Ä¦»=dá//ËAŸÝý¸G|òõ> &ócß›òqۇΘöú/<æÐ 7ß–Þ}Ïx­"(LIm#`î;BÜOiÌtȾ‹HNR4 {Ö8 ^§ƒ… a¤÷ð"dÏD¸Êp†4Ta lˆ¾ òÊ„ƒàáp˜¶ z‡¢ € "ºK‰@¢ð^˜ &úAŠp"þ ¬è*òD‹ЋvŒ8­ ‹&¸§S·#‚‘K#;#÷¸¤F7F/dâF¾Ý½ÈE„š"¥€G#®àÈzú*äIøx¹5îí]jÊ”é“Öry*ã#MÉI)y0ƒS¯ÈØ9  „’ûÙûV:÷ÝÑ’,B)_9HÄŒR}¡<aôx±ÑR—™¤‘¯¿8yá›ÓŠYFEÊ€/TŸûI?ñÑ,Hc¥.G 3‘]ïb7ó2ñ'G ч…K ”Ìú£÷xG#x!DÓ^Ëf§ÌÌu‘)äç ¿|N6pi¢÷É3?Š36\S;þ==vµYEItÑ0B‘G•CG[©CemÔKÝZH¥uAv¤†,m©KË3ÒªÅÔQEÓL5UÒfp¢ºi“r:?žr0¥;Ué2j꡵„]“¥"¤Á6u|ªR¿è´*&”¦¶Ü_RóXOñÑs©¹q^•i&ÉJª÷”¦)ÃRPÌÀU“•Œœ÷n)¤is­{Þ1²9iÓŸŒ%^ÁtG¼=TÌ4æ,#ã×, Ö¤ï¼+)ÇWŸ5FL¾ÅTùÄÈv±•ñ[Ï÷UsÂïSÒb½dÖBÕ³H|-·EÖÕøT§¢š1·T£Ž0…Ï;­¹vþ;Õ܆V¨;$êpOºäšÊ¸:“®iû6å^Q»Y´îlëAè¾æ¥ä-¯yÝá]x1÷7Ø}\zQqZðz@¾|¯°¨›†ør7™øU‡}¥ÖÞÉí·–¡; xÑ6P¸Ùµ xúk«ÿb˰ö«WQ ïn• ¤/)MÚsÁÃÌfý®«Ù¹ž²}Ü$1òËM2zø‡ŒQñ.-{bjÉu|—åjŠ+ÉaZjÇÕ-ŠEhN {Hž¥5H{òV¿N• æàƒêêÀ]©H¹¶Ìö‘°­š¬ˆ+“Y@ëE©Û,.4_fÍK:¯ïŒgn˜yÈt¶Ÿ˜›Ûþç\¨ÎÁ4ƒ)ºgA˜ ‰ŽÝ¢Òh¦õ'.np¸\CÈUº¨éà àVx)³}t„‹›WEP¸’ü-ñ¦Yi®ôتHÞd7£ YQH¥L}„!–mLÍWçZǺû¦’}\ëú¹š•þl.ƒí¦M'vµfÕ2Ÿ_}>JOùׯ¶¥#+NÂj:^n®µ©ƒírº§Nà›×=æÂÐòæôsë}ïyÔ]~·½œç€ <Ï‘þ„·Ýëï|ÿÑí^õQ ¾¶s¿ âqü3{~œ*òðàNZ‰©q4bœÖ:êtçBþF~‡¹¸|¼°Ð  •=’S¬ ©ÊXo^Èþ¹²I­tʆ9ÊmXB»|{-º,­GÚù¼Å™l&çv­×g“ÜÞ¥µ1ö¦‰I;[eK71ßšÔ¶~70öðŒ‹â0u¶bã¬ÖkÎucóÒ±Ît§Ø!Zì]Ò÷ìK$òˆ‡NwµoóÄOªÞUÉ–U}ª/{¶ ùnP ë/Êõ¹ZçÙëêAÄæ-·|¶AÓáh~^ΨùÀ¶ŽÇû„×8¹«_yËõâ*ß" ÖBú’c–öú¶2Åë–ûðÖ¾xÌP=½‹ÿ{6J¼(É_¤ÅÕ½pv¿Î¯¾õ[ÚüÊe_Ÿ ÿöôgz’.ßÉáöóãlü}÷ýç—èúËbøþ’ú obe{Â:Ôïït•QÉ!;&G±§qö$x%[e'kž—D˜|Mõ€¬btÕLŸ¥'nÕbGÂKnAw·vX“•jÈx8€$ˆv~‡X†7x´Htôç}×j‚w=ò’W7¶i'ƒ'jñdGþƒV&(e0@¼}ݶ}5¶„ýô+ãÇ(OèhMßÇ8ÈW~Ȧ…Xènø6ƒIX é'-×g†g88†kȆ顆m‡q¨Ï@‡uh‡wˆ‡y¨‡{ȇ}臈(ˆƒHˆ…hˆ‡ˆˆ‰¨ˆ‹Èˆèˆ‰‘(‰“H‰•h‰—ˆ‰™¨‰›ÈC‰è‰ŸŠ¡(Š£HŠ¥hЧˆŠ©¨Š«ÈŠ­èŠ¯‹±(‹³H‹µh‹·ˆ‹¹¨‹»È‹½è‹¿ŒÁ(ŒÃHŒÅhŒP;tendra-doc-4.1.2.orig/doc/tdfc/table4.gif100644 1750 1750 11014 6466607534 17456 0ustar brooniebroonieGIF87a1$€ÿÿÿ,1$þŒ©Ëí£œ´Ú‹³Þ¼û†âH–扦êʶî ÇòL×öçúÎ÷þ ‡Ä¢ñˆL*—̦ó J§ÔªõŠÍj·Ü®÷ ‹Çä²ùŒN«×ì¶û ËçôºýŽÏë÷ü¾ÿ(8HXhxˆ˜¨¸ÈØèø)9IYiy‰™©¹ÉÙéù *:JZjzŠšªºÊÚêú +;K[k{‹›«»ËÛëû ,N^n~Žž®¾ÎÞîþŸ 0OÀ0Ÿ@Q¯À_ÏŸ€ ÚC`pŸ@ 
ú*¬‘PDÃxGìé;¸ãþ…jÌ÷q£Ç‘"†4Id€‰O>`)ÁG._ZœÙ'Å 7mªô ³dG“‰ E(ð$¾ ?^X¨3CTK§b°ºóGR–üŒvyS©N™\5δzcZ¡]¿‚ ;ÒbÓ•pÉÚ]Iw«Ý°x‘öÌ‹×mïVõû6«‹½fÿîý9Ö%ÔÁzAÖDÉñ°WÊ}‘ÕÛX-cÌon¦KÚïäª[û¶ÕüzâÒÄŠWpnzmf’„%ä|¬Í²ŽßýÇ·ç£Æÿ¢Î-åsœÔy§üœ²9v£%±Öþ”;ÙéÕ÷î7mcä—¡{ÆM|èxÀsï²Fì::yúsþ¥óî_zä‰Vœeß±Ztö9'`dýx¤ÞuMíÆvq¹Ç tšˆ_kãÍ—We°‰’‡Ö¹¢v÷¡¸á$Ö›ˆA¹¥Úo”Å8!ú·–y7–—r–­F ~) #{@Ò¶c“G6¹"Ž.ÂâCíM‰¥rPe–^Z)Q—_ŽIf™fž‰fšj®Éf›n"r%1ÍIgvÞ‰gžzîÉgŸ~þ h ‚Jh¡†Šh¢ŠÚ™Æ¢Ž> i¤’NJi¥–^Š© hdÊi§ž~ j¨¢ŽJé¦b¾ùDœ[0«¨Z¡ª®~1ë«SÄšE­]èj+¸bÁ몧öªÄ¯W+þë°Ä"a,¬ÊVì²ÅšÚ,´ÏJKDµTD 쵆˜vŒ±§-Üë-!BšËJ³ ÒùΖ;Hf\¶û”Z!ЫeºQÈû­Oòg©̯ îZwÁD 7°Ã 'œª¿ì¶ßuÚí×™Lb²0}c¸å|²Ø]ÉâZ\1Å`ü!a´<Ä™„œOiavr~=ê[£oËòË¿¦TQ&³…ÈùþSB9OídQ_¯ËM ]|'Ó|"Ç@W}óÓT{Œö×ÖsÚOÀµq«‹£ÁÆvðce;_€²yaŠvÿwÑ[~â¨hÍÄܾ*Þ5ã‹SþK¹(ޝ"y±'q9¶=dÎùæÌŠî9D•›Ñyé:€>:ë‘®º ®‘zã°ÇÃìEÔ¾ï¸Ó {¶· üïž^†ïÆÇ‹<¤>}ôÒOO}õ}Rk}öÚoÏ}÷Þ_Æ÷âO~ùæsÚüòÓ†?üÊ«¿Xúb¼ÿñì/ýõ«€?Oíçÿ¿ý½H~­  %B@0Ìjn³3Ù–ØC¸hmB#×eú羡P 9‡Ã V¢24üÅ k$ô`* {ðMK6S¡Ä&§ä@»áDµ£mpg|JFNˆBº0&ôUÖÃÂñ4$^Gš¿]4<RþhÆÖ³·i±=çJšÔªÈ¤AQc c£˜¤š½ð‰ £ÓD+™ðs6dQFä•”©d„ÓáIø 2Bí>Rãl¤6?FÀ`KË "?Æ$>±cy4Oƒ¾Æ!Ä€¾iyÈÇŽeH`bå KXžá ¬G› ‡Bi¯µ1lùQÏ+Í"Émd¼Ä[&!bÀJ¦’½<ÛiÔdîR‘[\¤˜¼ØÌôÌòjYŒå"ÿ˜ýì²>¹´$“E$Í‘*Áô«ÂÆÇäR”Ü1äÝ*ÄLP§ C$äzéIŸå“p±ÒfÛ<9˜·Erº¤¦7y)Íq¶D¡«KeBþü9Å(bÓFEºO@ Mô´¤žX»â”·d15$-XÙ(6ZÔ7‚c9ÉÉÐRÇv€”§éXxMâtlX™Ž4SÀd“&X§ˆzJmñJ\Z”cN»©Ä_µy/Eݳ`°ª¦*krÓjCƒzÀ«$V^ «TÆê¤š^÷CëZwVM–U¨s}« ­W`æÕ®æó[õþc@Yµ<t¿ -?¯ÑWQt–VFsõ¯…­´¥/iR‰™ÐÞ´µ$í9PóMÔNåô =M4R/KÕÜEµ©è×ʾš‰®îjMœË#V»C× áõ?ú«^yþ‡@é§}ú7%ü7/¸uè} 8€Xzè"øiHvH{å÷:Æ·$"5vêF©[bø‡*#x+‘¡`Îf‚"È´“J°÷|4ƒ (~õVpû¶ƒã¦‚¶V‚;8#Û7u1XCèeRE„ßñƒæ&Ò„§ˆƒî7…l…Ë÷„Lh„Nt…ìR…Qö…`˜…ê7†OW†x†;åum¸†Vxnø†pÓ…îswˆ‡yHXuXsQøpsèb(‡€ØWiØ€„ho‚xvˆH‡aØŒènƒ¨ˆhy‹hˆè‡·„Ë“þ‰”uÙŠ6HZh;Ä‚…sk›H¤Ès¾áŠ5rŠØ6'Ƈþ“D”hVçY¬8U:ȃ¨è‚—ŋÄf6‹¿aÇhVÃh~ºˆ‹ÎøŒªÅŒH‰2‡–X•hw—ȈÓ(ƒ[X„Ž˜€ÙX{ØhŽä¸CÛ(‰è(x‡ÈˆèGÈŽòØ£‡÷ˆùX=µ(rñè…ó˜Ž×¨Ž™bøŽ„èvŽ\ÇÂD9‰ ÉIäâ€i‹©€ £Šð׿T4 ‰ éÖ{~¤Œg’"é2$y -‰ZT¸‘!9“(™`GfX '“…6’0Y =©a@Bþ d{f(Žsh”[†”I©†Kù†MIWD T‰qVùX™OYZh“þÆ•å•cÉYúˆ–i©––ñf–𥕻֖Gù”o)Zq)}sé”e©—Ué•혃ˆ‚´(•+˜Š!ø…véZ‡ù‘n¢˜ÇSƒ©q}™•Ά—Ýð˜×ÕI?u†™Ù^$™‘F™]éeûˆž9e®ø{L9šdY—­y–‰šëu™“›p)›·y—¹Y˜@È—½é„¼yãøšÀ)…¿9œˆœëˆtkéœÏ /gœW8›q&œÌ –ËyŽ9Õygµ){º¹˜×¹Ùâ ™¡xšè©™ØF‰þÞ‰aØÕ˜oŸ}–Dà‰™ìù™IUõ\ó©:þYTZ:úhø9xú™šä9 :‘ J› Êõ¨jžìh /B ¡¡ÖY¡jH™¡Ñµ¡Ø2¢÷¡ÚœÑÉ¢-ê¢wÒ¡‰u¢k† ç0£oV£ù£ñ™£åp£$¡*šÅ™œ$˜¢Zu=Ê ?ZgÉöž;jŸN÷¤Ó©‰NЉPúŸVÚX: Z \z Rz¥Tê‰Ú5¥Ej˜bº¥dZŠAŠ¤Ø™LºgJ:rhn¢å‰¡`ª¡t*v:hGš§º§lÚŠx:¤pÚŸ|J¢/ê¨ú¢Œ*@€zhˆª”hÚþ™’J£–•˜:†”šh~ Úh¢ ¤:i5i¨½(KR¢¨©8ºQg!\ˆªB“#†5.¬ê©ÇYo-hªÅ«£†‚SX¬¥“b¸:«@Zš³!"¸¬ÀØtç¥pãw­­&¨‰ª§èØ­Ìũgz®)¢Ñڤ媮áJŽãÚkîj…­ ­öÚ:ª¯ûš–ìʉþz§ßz©éº¦¿ú‡DJ°b(°š° °Š°ð:°J¯[©P3¬Ô ¯¿z‹¯Ôˆ±k°UúC; [Œ§ª%+² KœRc²Ò€²Á¨²‹ú±ßH¯ ;²X³ªú?þë‹A˲¥º³Ø³ÖZ´©º®9+ k®‹³I{¨Pû®%ǯY«µw¸´ê´ÿØ´Tëªa ³Ê±„:¨úµ y´1[ž¡¹h]k¬«i"´D·3û w»ªLó¥k[‘w§·ËÀ·/•Ÿ*·Ìj¦+¶ÄXOƒ« …ÛƒßT°ek¤g ¢àJ’û¬ç ¸iµõÚ¸þǹÛZ¨£ ²˜+¤»¹‰‹­ªû¦R‹®²›[[k»·»r®K,¥ ·Ò¨»äºªb«µ•„ŽÁ<‚”nE?ýt`ºÚ‚‘p°llxX!Áõ]”Ã2fÃ8™vÌOŒa«1œ“‰0ÃpÈ<Ø¿JÒÄg¥«% ŠÆÛ‘âk³œó¯Î{’°›¶«;µÌœÊë¶hû²´+ºdkÆÂK´Zì’mk¶`œ˜VŒl¸KÇu,XrÜ&Iܺl|“\¬¹™»²|ì“Ö“xì¿hLƬ«¶¨«³JÅÿÛŧ ‘d8ü”†¼&îõ€•,Èn9D±aš³š{ÉÈEç^`“Ož·¾¤9ÊAWʶt^c]½Ûž)Ú6\S¬¤ÉHÈÍPÊÕäÅ6m rŒpÁ:P1œ{¹,‰»Ì z¼Èp|°ˆ È »Æ«Ì¶büÆiì°›L—ÒìÅQ»ÇÖü—» vlÎçL>‚ÎëÌÎÖ£Îí Ïñ,*ãLÏõlÏ÷ŒÏù¬ÏûÌÏýìÏÿ Ð-ÐMÐmÐÐ ­Ð ÍÐ íÐ Ñ-ÑMÑmÑÑ­ÑÍÑíÑ Ò!-Ò#MÒ%mÒ'Ò)­Ò+ÍÒ-íÒ/ Ó1-Ó3MÓ5mÓ7Ó9­Ò;tendra-doc-4.1.2.orig/doc/tdfc/table5.ps100644 1750 1750 231164 6466607534 17366 0ustar brooniebroonie%!PS-Adobe-3.0 %%BoundingBox: (atend) %%Pages: (atend) %%PageOrder: (atend) %%DocumentFonts: (atend) %%Creator: Frame 4.0 %%DocumentData: Clean7Bit %%EndComments %%BeginProlog % % Frame ps_prolog 4.0, for use with Frame 4.0 products % This ps_prolog file is Copyright (c) 1986-1993 Frame Technology % Corporation. All rights reserved. 
%%BeginProlog
% Frame ps_prolog 4.0, for use with Frame 4.0 products
% This ps_prolog file is Copyright (c) 1986-1993 Frame Technology Corporation.
% All rights reserved. This ps_prolog file may be freely copied and distributed
% in conjunction with documents created using FrameMaker, FrameBuilder and
% FrameViewer as long as this copyright notice is preserved.
% [Remainder of the standard Frame 4.0 PostScript prolog omitted: generic
%  printer setup covering colour separation, halftone screens, pattern fills,
%  font re-encoding, paper size selection and bitmap imaging procedures common
%  to all FrameMaker print output; it carries no tdfc-specific content.]
%%EndProlog
%%BeginSetup
(4.0) FMVERSION
1.20 1.20 0 0 10000 10000 0 1 3 FMDOCUMENT
0 0 /Times-Roman FMFONTDEFINE
32 FMFILLS
% [Fill pattern definitions omitted.]
%%EndSetup
%%Page: "75" 1
%%BeginPaperSize: A4
%%EndPaperSize
% [Rendered page content omitted. Page 75 holds the first part of the tdfc check
%  table, with columns Check, Xs, Xp, Xw, Xc, Xa and Xt and one row per check:
%  value of __STDC__; ISO C rules for integer literals; ISO C rules for integer
%  promotions; assignments as conditional control statements; bitfield overflow;
%  block level static function; character escape overflow; sign of char;
%  char * as generic pointer; complete struct/union analysis; conditional
%  lvalues; constant conditional control statements; conversion analysis
%  (int -> enum implicit, int -> int explicit, int -> int implicit,
%  int <-> pointer explicit, int <-> pointer implicit, pointer -> pointer
%  explicit, pointer -> pointer implicit); directive used as macro argument;
%  directive assert; directive file; directive ident; directive unassert;
%  directive weak; discard analysis (function return, static, value); '$' used
%  as character; enum switch analysis; extra , at end of enum lists; extra ...
%  in function prototypes; extra ; after external declarations; extra ; after
%  conditional statements; extra int types for bitfields; extra macro
%  definitions. Each cell holds 1/0, Y/N, E, W, +/- or is blank.]
%%EndPage: "75" 1
%%Page: "76" 2
% [Page 76 continues the table with rows for: extra type name definitions;
%  fall into case; forward enum declarations; function pointer <-> pointer
%  conversions; ... as argument in function call; unify the tag namespace;
%  implicit function declaration; implicit int type for external declaration;
%  implicit int type for function return; incompatible interface definition;
%  incompatible linkage; incompatible promoted function argument; incompatible
%  type qualifier; incompatible void return; incomplete type used as object
%  type; indented # directive; indented directive after #; initialization of
%  automatic struct/union; integer operator analysis; integer overflow
%  analysis; linkage resolution (external, internal); long long type
%  (implemented as long, implemented as long long); nested comment analysis;
%  the raw PostScript for the remaining rows continues below.]
0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (no directive/new line after identi\336er) 87.21 237.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 237.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 237.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 237.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 237.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (no external declarations present) 87.21 215.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 215.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 215.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 215.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 215.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (no identi\336er after #) 87.21 193.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 193.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 193.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 193.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 193.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (no new line at end of \336le) 87.21 171.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 171.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 171.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 171.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 171.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 450.04 171.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (operator precedence analysis) 87.21 149.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 149.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (prototype use) 87.21 127.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (prototype analysis) 87.21 113.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 113.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 113.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 382.01 113.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 450.04 113.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 484.05 127.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (weak prototype use) 87.21 91.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (weak prototype analysis) 87.21 77.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 77.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 77.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 77.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (name length limit on identi\336ers) 87.21 55.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (text after directive) 87.21 33.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 33.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 33.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 33.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 33.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Check) 179.26 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xs) 318.32 823.03 T 0 0 0 1 
0 0 0 K 0 0 0 1 0 0 0 K (Xp) 351.67 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xw) 384.35 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xc) 420.04 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xa) 454.05 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xt) 489.06 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 81.21 837.53 81.21 26.53 2 L V 3 H 0 Z N 307.98 840.53 307.98 23.53 2 L V 0.5 H N 341.99 840.53 341.99 23.53 2 L V N 376.01 840.53 376.01 23.53 2 L V N 410.02 840.53 410.02 23.53 2 L V N 444.04 840.53 444.04 23.53 2 L V N 478.05 840.53 478.05 23.53 2 L V N 512.07 837.53 512.07 26.53 2 L V 3 H N 79.71 839.03 513.57 839.03 2 L V N 79.71 813.03 513.57 813.03 2 L V 2 H N 79.71 791.03 513.57 791.03 2 L V 0.5 H N 79.71 769.03 513.57 769.03 2 L V N 79.71 747.03 513.57 747.03 2 L V N 79.71 725.03 513.57 725.03 2 L V N 79.71 703.03 513.57 703.03 2 L V N 79.71 681.03 513.57 681.03 2 L V N 79.71 659.03 513.57 659.03 2 L V N 79.71 637.03 513.57 637.03 2 L V N 79.71 615.03 513.57 615.03 2 L V N 79.71 593.03 513.57 593.03 2 L V N 79.71 571.03 513.57 571.03 2 L V N 79.71 549.03 513.57 549.03 2 L V N 79.71 527.03 513.57 527.03 2 L V N 79.71 505.03 513.57 505.03 2 L V N 79.71 483.03 513.57 483.03 2 L V N 79.71 461.03 513.57 461.03 2 L V N 79.71 439.03 513.57 439.03 2 L V N 79.71 417.03 513.57 417.03 2 L V N 79.71 395.03 513.57 395.03 2 L V N 79.71 373.03 513.57 373.03 2 L V N 79.71 323.03 513.57 323.03 2 L V N 79.71 273.03 513.57 273.03 2 L V N 79.71 251.03 513.57 251.03 2 L V N 79.71 229.03 513.57 229.03 2 L V N 79.71 207.03 513.57 207.03 2 L V N 79.71 185.03 513.57 185.03 2 L V N 79.71 163.03 513.57 163.03 2 L V N 79.71 141.03 513.57 141.03 2 L V N 79.71 105.03 513.57 105.03 2 L V N 79.71 69.03 513.57 69.03 2 L V N 79.71 47.03 513.57 47.03 2 L V N 79.71 25.03 513.57 25.03 2 L V 3 H N 0 0 0 1 0 0 0 K FMENDPAGE %%EndPage: "76" 2 %%Page: "77" 3 595.3 841.9 0 FMBEGINPAGE [0 0 0 1 0 0 0] [ 0 1 1 0 1 0 0] [ 0 1 1 0 1 0 0] [ 1 0 1 0 0 1 0] [ 1 1 0 0 0 0 1] [ 1 0 0 0 0 1 1] [ 0 1 0 0 1 0 1] [ 0 0 1 0 1 1 0] 8 FrameSetSepColors FrameNoSep 81.21 499.03 512.07 505.03 C 0 -158.1 1000 841.9 C 0 0 0 1 0 0 0 K 0 10 Q 0 X 0 0 0 1 0 0 0 K (a.) 
95.38 492.37 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (a plain char is of unspeci\336ed sign) 109.55 492.37 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 12 Q (printf/scanf string checking) 87.21 799.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 799.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 799.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 799.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (unify external linkage) 87.21 777.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 777.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 777.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 777.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 777.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (unify incompatible string literal) 87.21 755.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 755.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 755.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 755.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 755.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 450.04 755.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (unknown directive) 87.21 733.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 733.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 733.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 733.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 733.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (unknown escape) 87.21 711.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 711.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 711.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 711.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 711.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 450.04 711.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 484.05 711.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (unknown pragma) 87.21 689.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 313.98 689.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 347.99 689.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 689.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 450.04 689.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 484.05 689.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (unnamed struct/union type) 87.21 667.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 667.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 667.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 382.01 667.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 667.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 450.04 667.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (unmatched quotes) 87.21 645.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 313.98 645.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 645.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (unreachable code) 87.21 623.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 313.98 623.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 347.99 623.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 623.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (variable analysis) 87.21 601.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 313.98 601.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 601.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (variable 
hiding analysis) 87.21 579.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 579.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (variable initialization) 87.21 557.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 557.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 557.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 382.01 557.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 557.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (weak macro equality) 87.21 535.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 313.98 535.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 347.99 535.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 535.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (E) 416.02 535.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (writeable string literal) 87.21 513.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 313.98 513.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (W) 382.01 513.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Check) 179.26 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xs) 318.32 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xp) 351.67 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xw) 384.35 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xc) 420.04 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xa) 454.05 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K (Xt) 489.06 823.03 T 0 0 0 1 0 0 0 K 0 0 0 1 0 0 0 K 81.21 837.53 81.21 506.53 2 L V 3 H 0 Z N 307.98 840.53 307.98 503.53 2 L V 0.5 H N 341.99 840.53 341.99 503.53 2 L V N 376.01 840.53 376.01 503.53 2 L V N 410.02 840.53 410.02 503.53 2 L V N 444.04 840.53 444.04 503.53 2 L V N 478.05 840.53 478.05 503.53 2 L V N 512.07 837.53 512.07 506.53 2 L V 3 H N 79.71 839.03 513.57 839.03 2 L V N 79.71 813.03 513.57 813.03 2 L V 2 H N 79.71 791.03 513.57 791.03 2 L V 0.5 H N 79.71 769.03 513.57 769.03 2 L V N 79.71 747.03 513.57 747.03 2 L V N 79.71 725.03 513.57 725.03 2 L V N 79.71 703.03 513.57 703.03 2 L V N 79.71 681.03 513.57 681.03 2 L V N 79.71 659.03 513.57 659.03 2 L V N 79.71 637.03 513.57 637.03 2 L V N 79.71 615.03 513.57 615.03 2 L V N 79.71 593.03 513.57 593.03 2 L V N 79.71 571.03 513.57 571.03 2 L V N 79.71 549.03 513.57 549.03 2 L V N 79.71 527.03 513.57 527.03 2 L V N 79.71 505.03 513.57 505.03 2 L V 3 H N 0 0 0 1 0 0 0 K FMENDPAGE %%EndPage: "77" 3 %%Trailer %%BoundingBox: 0 0 595.3 841.9 %%PageOrder: Ascend %%Pages: 3 %%DocumentFonts: Times-Roman %%EOF tendra-doc-4.1.2.orig/doc/tdfc/tdfc1.html100644 1750 1750 14374 6466607535 17520 0ustar brooniebroonie C Checker Reference Manual

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


1 - Introduction
1.1 - Background
1.2 - The C static checker
1.3 - About this document
2 - Configuring the Checker
3 - Type Checking
4 - Integral Types
5 - Data Flow and Variable Analysis
6 - Preprocessing checks
7 - Dialect Features
8 - Common Errors
9 - Symbol Table Dump
10 - Conditional Compilation
11 - References
Annexes
A - Compilation Modes
B - Command Line Options for Portability Checking
C - Integral Type Specification
D - Summary of the pragma statements
E - Dump format specification
F - The Token Syntax
G - API checking
H - Specifying conversions using the token syntax

1 Introduction


1.1 Background

The C program static checker was originally developed as a programming tool to aid the construction of portable programs using the Application Programming Interface (API) model of software portability. The principle underlying this approach is:
If a program is written to conform to an abstract API specification, then that program will be portable to any machine which implements the API specification correctly.
The tool was designed to address the problem of the lack of separation between an API specification and an API implementation and as such was considered as a compiler for an abstract machine.

This approach gave the tool an unusually powerful basis for static checking of C programs, and a large amount of development work has resulted in the production of the TenDRA C static checker (tchk). The terms TenDRA C checker and tchk are used interchangeably in this document.


1.2 The C static checker

The C static checker is a powerful and flexible tool which can perform a number of static checks on C programs, including:

  • strict interface checking. In particular, the checker can analyse programs against abstract APIs to check their conformance to the specification. Abstract versions of most standard APIs are provided with the tool; alternatively users can define their own abstract APIs using the syntax described in Annex G;

  • checking of integer sizes, overflows and implicit integer conversions including potential 64-bit problems, against a 16 bit or 32 bit architecture profile;

  • strict ISO C standard checking, plus configurable support for many non-ISO dialect features;

  • extensive type checking, including prototype-style checking for traditionally defined functions, conversion checking, type checking on printf and scanf style argument strings and type checking between translation units;

  • variable analysis, including detection of unused variables, use of uninitialised variables, dependencies on order of evaluation in expressions and detection of unused function returns, computed values and static variables;

  • detection of unused header files;

  • configurable tests for detecting many other common programming errors;

  • complete standard API usage analysis;

  • several built-in checking environments plus support for user-defined checking profiles.


1.3 About this document

This document is designed as a reference manual detailing the features of the C static checker. It contains eleven chapters (including this introduction) and eight annexes.

  • Chapter 2: Configuring the Checker describes the built-in checking modes and the design of customised environments;

  • Chapters 3-8: Type Checking, Integral Types, Data Flow and Variable Analysis, Preprocessing Checks, ISO C and Other Dialects and Common Errors respectively;

  • Chapter 9: The Symbol Table Dump deals with the detection of unused header files, type checking across translation units and complete standard API usage analysis;

  • Chapter 10: Conditional Compilation describes the checker's approach to conditional compilation;

  • Chapter 11: References lists the references used in the production of this document;

  • Annex A: Checking Modes gives a description of the built-in environments;

  • Annex B: Command Line Options lists the command line checking options;

  • Annex C: Specifying Integral Types describes the built-in integer modes and the methods for customising them;

  • Annex D: Pragma Syntax Specification;

  • Annex E: Symbol Table Dump Specification;

  • Annex F: Token Syntax describes the methods and syntax used to produce abstract APIs;

  • Annex G: Abstract API Manipulation gives details of the ways in which TenDRA abstract APIs may be extended, combined or overridden by local declarations;

  • Annex H: Specifying Conversions with Tokens.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc10.html100644 1750 1750 51550 6466607534 17574 0ustar brooniebroonie C Checker Reference Manual: Dialect Features

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


7.1 - Introduction
7.2 - Resolving linkage problems
7.3 - Identifier linkage
7.4 - Implicit integer types
7.5 - Bitfield types
7.6 - Extra type definitions
7.7 - Static block level functions
7.8 - Incomplete array element types
7.9 - Forward enumeration declarations
7.10 - Untagged compound types
7.11 - External volatility
7.12 - Identifier name length
7.13 - Ellipsis in function calls
7.14 - Conditional lvalues
7.15 - Unifying the tag name space
7.16 - Initialisation of compound types
7.17 - Variable initialisation
7.18 - Escape sequences
7.19 - $ in identifier names
7.20 - Writeable string literals
7.21 - Concatenation of character string literals and wide character string literals
7.22 - Nested comments
7.23 - Empty source files
7.24 - Extra commas
7.25 - Extra semicolons
7.26 - Compatibility with C++ to TDF producer

7 Dialect Features


7.1 Introduction

This chapter describes the capabilities of the TenDRA C checker for enforcing the ISO C standard as well as features for detecting areas left undefined by the standard. It also lists the non-ISO dialect features supported by the checker in order to provide compatibility with older versions of C and allow the use of third-party source which may contain non-standard constructs.


7.2 Resolving linkage problems

Often the way that identifier names are resolved can alter the semantics of a program. For example, in:

	void f () {
		{
			extern void g ();
			g ( 3 );
		}
		g ( 7 );
	}
the external declaration of g is only in scope in the inner block of f. Thus, at the second call of g, it is not in scope, and so is inferred to have declaration:

	extern int g ();
(see 3.4). This conflicts with the previous declaration of g which, although not in scope, has been registered in the external namespace. The pragma:

	#pragma TenDRA unify external linkage on
modifies the algorithm for resolving external linkage by searching the external namespace before inferring a declaration. In the example above, this results in the second use of g being resolved to the previous external declaration. The on can be replaced by warning to give a warning when such resolutions are detected, or off to switch this feature off.

Another linkage problem, which is left undefined in the ISO C standard, is illustrated by the following program:

	int f () {
		extern int g ();
		return ( g () );
	}
	
	static int g ()
	{
		return ( 0 );
	}
Is the external function g (the declaration of which would be inferred if it were omitted) the same as the static function g? Of course, had the order of the two functions been reversed, there would be no doubt that they were; however, in the given case it is undefined. By default, the linkage is resolved externally, so that the two uses of g are not identified. However, the checker can be made to resolve the linkage internally, so that the two uses of g are identified. The resolution algorithm can be set using:

	#pragma TenDRA linkage resolution : action
where action can be one of:

  1. (internal) on

  2. (internal) warning

  3. (external) on

  4. (external) warning

  5. off

depending on whether the linkage resolution is internal, external, or default, and whether a warning message is required. The most useful behaviour is to issue a warning for all such occurrences (by setting action to (internal) warning, for example) so that the programmer can be alerted to clarify what was intended.


7.3 Identifier linkage

The ISO C standard, section 6.1.2.2, states that "if, within a translation unit, an identifier appears with both internal and external linkage, the behaviour is undefined". By default, the checker silently declares the variable with external linkage. The check to detect variables which are redeclared with incompatible linkage is controlled using:

	#pragma TenDRA incompatible linkage permit
where permit may be allow (default mode), warning (warn about incompatible linkage) or disallow (raise errors for redeclarations with incompatible linkage).
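
As a minimal sketch of the situation this check reports (the identifier count is invented for illustration), consider:

	#pragma TenDRA incompatible linkage warning
	extern int count;	/* external linkage */
	static int count;	/* same identifier with internal linkage: undefined by ISO C */
With the warning setting shown, the redeclaration is reported rather than being silently given external linkage.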


7.4 Implicit integer types

Older C dialects allow external variables to be specified without a type, the type int being inferred. Thus, for example:

	a, b;
is equivalent to:

	int a, b;
By default these inferred declarations are not permitted, though tchk's behaviour can be modified using:

	#pragma TenDRA implicit int type for external declaration permit
where permit is allow, warning or disallow.

A more common feature, allowed by the ISO C standard, but considered bad style by some, is the inference of an int return type for functions defined in the form:

	f ( int n ) {
		....
	}
the checker's treatment of such functions can be determined using:

	#pragma TenDRA implicit int type for function return permit

where permit can be allow, warning or disallow.


7.5 Bitfield types

The ISO C standard only allows signed int, unsigned int and their equivalent types as type specifiers in bitfields. Using the default checking profile, tchk raises errors for other integral types used as type specifiers in bitfields. This behaviour may be modified using the pragma:

	#pragma TenDRA extra int bitfield type permit
Permit is one of allow (no errors raised), warning (allow non-int bitfields through with a warning) or disallow (raise errors for non-int bitfields).

If non-int bitfields are allowed, the bitfield is treated as if it had been declared with an int type of the same signedness as the given type. The use of the type char as a bitfield type still generally causes an error, since whether a plain char is treated as signed or unsigned is implementation-dependent. The pragma:

	#pragma TenDRA character set-sign
where set-sign is signed, unsigned or either, can be used to specify the signedness of a plain char bitfield. If set-sign is signed or unsigned, the bitfield is treated as though it were declared signed char or unsigned char respectively. If set-sign is either, the sign of the bitfield is target-dependent and the use of a plain char bitfield causes an error.
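
For example, assuming non-int bitfields are only warned about and plain char is taken as signed (the structure and member names below are invented), the following declaration is accepted:

	#pragma TenDRA extra int bitfield type warning
	#pragma TenDRA character signed
	struct flags {
		short a : 3;	/* non-int bitfield: warned, then treated as signed int */
		char b : 2;	/* plain char bitfield: treated as signed char */
	};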


7.6 Extra type definitions

In accordance with the ISO C standard, in default mode tchk does not allow a type to be defined more than once using a typedef. The pragma:

	#pragma TenDRA extra type definition permit
may be used to change this behaviour, where permit is allow (redefinitions are silently accepted, provided they are consistent), warning or disallow.
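
A small illustration (the type name count_t is invented) of a consistent redefinition that the pragma permits:

	#pragma TenDRA extra type definition allow
	typedef unsigned long count_t;
	typedef unsigned long count_t;	/* consistent redefinition: accepted silently */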


7.7 Static block level functions

The ISO C standard (Section 6.5.1) states that the declaration of an identifier for a function that has block scope shall have no explicit storage-class specifier other than extern. By default, tchk raises an error for declarations which do not conform to this rule. The behaviour can be modified using:

	#pragma TenDRA block function static permit
where permit is allow (accept block scope function declarations with other storage-class specifiers), disallow or warning.
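
As a sketch (the function names are invented), the kind of declaration affected is:

	#pragma TenDRA block function static warning
	static int helper ( int n ) {
		return ( n + 1 );
	}
	int f ( int n ) {
		static int helper ( int );	/* block scope, storage class other than extern: warned */
		return ( helper ( n ) );
	}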


7.8 Incomplete array element types

The ISO C standard (Section 6.1.2.5) states that an incomplete type, e.g. an undefined structure or union type, is not an object type and that array elements must be of object type. The default behaviour of the checker causes errors when incomplete types are used to specify array element types. The pragma:

	#pragma TenDRA incomplete type as object type permit
can be used to alter the treatment of array declarations with incomplete element types. Permit is one of allow, disallow or warning as usual.
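
A minimal sketch of such a declaration (the names are invented for illustration):

	#pragma TenDRA incomplete type as object type warning
	struct undefined;			/* incomplete structure type */
	extern struct undefined table [ 10 ];	/* array with incomplete element type: warned */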


7.9 Forward enumeration declarations

The ISO C Standard (Section 6.5.2.3) states that the first introduction of an enumeration tag shall declare the constants associated with that tag. This rule is enforced by the checker in default mode; however, it can be relaxed using the pragma:

	#pragma TenDRA forward enum declaration permit
where replacing permit by allow permits the declaration and use of an enumeration tag before the declaration of its associated enumeration constants. A disallow variant which restores the default behaviour is also available.
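
For illustration (the tag colour and its constants are invented, and exactly which uses of the incomplete enumeration a dialect accepts before completion may vary), a forward declaration of this kind becomes acceptable:

	#pragma TenDRA forward enum declaration allow
	enum colour;				/* tag introduced before its constants */
	extern enum colour *current;		/* use of the incomplete enumeration */
	enum colour { red, green, blue };	/* constants declared later */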


7.10 Untagged compound types

The ISO C standard states that a declaration must declare at least a declarator, a tag or the members of an enumeration. The checker detects such declarations and, by default, raises an error. The severity of the errors can be altered by:

	#pragma TenDRA unknown struct/union permit
where permit may be allow, which allows code such as:

	struct {int i; int j;};
through without errors (statements such as this occur in some system headers) or disallow to restore the default behaviour.


7.11 External volatility

The inclusion of the pragma:

	#pragma TenDRA external volatile_t
instructs the checker thereafter to treat any object declared with external linkage (ISO C standard Section 6.1.2.2) as if it were volatile (ISO C standard Section 6.5.3). This was a feature of some traditional C dialects. In the default mode, objects with external linkage are only treated as volatile if they were declared with the volatile type qualifier.
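
A minimal sketch (the identifier device_status is invented):

	#pragma TenDRA external volatile_t
	extern int device_status;	/* treated as if it were declared volatile int */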


7.12 Identifier name length

Under the ISO C standard rules on identifier name length, an implementation is only required to treat the first 31 characters of an internal name and the first 6 characters of an external name as significant. The TenDRA C checker provides a facility for users to specify the maximum number of characters allowed in an identifier name, to prevent unexpected results when the application is moved to a new implementation. The limit is set using:

	#pragma TenDRA set name limit integer_constant
There is currently no distinction made between external and internal names for length checking. Identifier name lengths are not checked in the default mode.
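
For example (the limit of 6 and the identifier below are chosen only for illustration):

	#pragma TenDRA set name limit 6
	extern int a_rather_long_name;	/* more than 6 significant characters: reported once the limit is set */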


7.13 Ellipsis in function calls

An ellipsis is not an identifier and should not be used in a function call, even if, as in the program below, the function prototype contains an ellipsis:

	int f(int a,...) {
		return 1; }
	int main() {
		int x, y;
		x=f(y ,...);
		return 1;
	}
In default mode the checker raises an error if an ellipsis is used as a parameter in a function call. The severity of this error can be modified by using:

	#pragma TenDRA ident ... permit
If permit is replaced by allow the ellipsis is ignored, if warning is used tchk produces a warning and if disallow is used the default behaviour is restored.


7.14 Conditional lvalues

The ? operator cannot normally be used to define an lvalue, so that for example, the program:

	struct s {int a, b; };
	void f (int n,struct s *s1,struct s *s2) {
		( n ? s1: s2)->a = 0;
	}
is not allowed in ISO C. The pragma:

	#pragma TenDRA conditional lvalue allow
allows conditional lvalues if:

  1. Both options of the conditional operator have compatible compound types;

  2. Both options of the conditional are lvalues.

(there is also a disallow variant, but warning is not permitted in this case).


7.15 Unifying the tag name space

Each object in the tag name space is associated with a classification (struct, union or enum) of the type to which it refers. If such a tag is used, it must be preceded by the correct classification, otherwise the checker produces an error by default. However, the pragma:

	#pragma TenDRA ignore struct/union/enum tag status
may be used to change the severity of the error. The options for status are: on (allows a tag to be used with any of the three classifications, the correct classification being inferred from the type definition), warning or off.
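
As an illustration (the tag u and function g are invented), with the pragma set to on the wrong classification below is accepted, the correct classification being inferred from the type definition:

	#pragma TenDRA ignore struct/union/enum tag on
	union u {
		int i;
		float f;
	};
	void g ( struct u *p );		/* wrong classification for tag u: taken to mean union u */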


7.16 Initialisation of compound types

Many older C dialects do not allow the initialisation of automatic variables of compound type. Thus, for example:

	void f () {
		struct {
			int a;
			int b;
		} x = { 3, 2 };
	}
would not be allowed by some older compilers, although by default tchk does not raise any errors since the code is legal according to the ISO C standard. The checker's behaviour may be changed using:

	#pragma TenDRA initialization of struct/union (auto) permit
where permit is allow, warning or disallow. This feature is particularly useful when developing a program which is intended to be compiled with a compiler which does not support automatic compound initialisations.


7.17 Variable initialisation

The ISO C standard (Section 6.5.7) states that all expressions in an initialiser for an object that has static storage duration or in an initialiser-list for an object that has aggregate or union type shall be constant expressions. The pragma:

	#pragma TenDRA variable initialization permit
may be used to allow non-constant initialisers if permit is replaced by allow. The other option for permit is disallow which restores the default behaviour of flagging non-constant initialisers for objects of static storage duration as errors.
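
A small sketch (the names f and start are invented) of an initialiser the pragma permits:

	#pragma TenDRA variable initialization allow
	extern int f ( void );
	static int start = f ( );	/* non-constant initialiser for an object of static storage duration: accepted */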


7.18 Escape sequences

The ISO C standard specifies a small set of escape sequences in strings, for example \n as newline. Unknown escape sequences lead to an error in the default mode; however, the severity of the error may be altered using:

	#pragma TenDRA unknown escape permit
where permit is allow (silently replaces the unknown escape sequence, \z say, by z), warning or disallow.
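
For instance (the string and variable below are invented), with the warning setting:

	#pragma TenDRA unknown escape warning
	const char *path = "dir\zfile";	/* \z is not a standard escape: warned and replaced by z */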


7.19 $ in identifier names

The ISO C standard (Section 6.1) states that the use of the character $ in identifier names is illegal. The pragma:

	#pragma TenDRA dollar as ident allow
can be used to allow such identifiers, which by default are flagged as errors. There is also a disallow variant which restores the default behaviour.
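
For example (the identifier is invented):

	#pragma TenDRA dollar as ident allow
	extern int total$cost;		/* $ in an identifier: accepted under the pragma */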


7.20 Writeable string literals

The ISO C standard, section 6.1.4, states that "if the program attempts to modify a string literal of either form, the behaviour is undefined". Assignments to string literals of the form:

	"abc"='3';
always result in errors. Other attempts to modify members of string literals, e.g.

	"abc"[1]='3';
are permitted in the default checking mode. This behaviour can be changed using:

	#pragma TenDRA writeable string literal permit
where permit may be allow, warning or disallow.


7.21 Concatenation of character string literals and wide character string literals

The ISO C standard, section 6.1.4, states that if a character string literal is adjacent to a wide character string literal, the behaviour is undefined. By default, this is flagged as an error by the checker. If the pragma:

	#pragma TenDRA unify incompatible string literal permit
is used, with permit set to allow or warning the character string literal is converted to a wide character string literal and the strings are concatenated, although in the warning case a warning is output. The disallow version of the pragma restores the default behaviour.
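
A minimal sketch (the variable s and the strings are invented):

	#include <stddef.h>
	#pragma TenDRA unify incompatible string literal warning
	wchar_t *s = L"hello, " "world";	/* narrow literal converted to wide and concatenated, with a warning */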


7.22 Nested comments

The occurrence of the `/*' characters inside a C comment, i.e. text surrounded by the `/*' and `*/' symbols, is usually a mistake and can lead to a comment being terminated unexpectedly. By default such nested comments are processed silently; however, an error or warning can be produced by setting:

	#pragma TenDRA nested comment analysis status
with status as on or warning. If status is off the default behaviour is restored.
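
As an illustration, the comment below contains a second opening sequence, which the analysis reports (the declaration of x is only there to give the file some content):

	#pragma TenDRA nested comment analysis warning
	/* outer comment /* a second comment opener inside a comment is reported */
	int x;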


7.23 Empty source files

The ISO standard states that each source file should contain at least one declaration or definition. Source files which contain no external declarations or definitions are flagged as errors by the checker in default mode. The severity of the error may be altered using:

	#pragma TenDRA no external declaration permit
where the options for permit are allow (no errors raised), warning or disallow.


7.24 Extra commas

The ISO C standard does not allow extra commas in enumeration type declarations e.g.

	enum e {red, orange, yellow,};
The extra comma at the end of the declaration is flagged as an error by default, but this behaviour may be changed by using:

	#pragma TenDRA extra , permit
where permit has the usual allow, disallow and warning options.


7.25 Extra semicolons

Some dialects of C allow extra semicolons at the external declaration and definition level in contravention of the ISO C standard. For example, the program:

	int f () {
		return ( 0 );
	};
is not ISO compliant. The checker enforces the ISO rules by default, but the errors raised may be reduced to warning or suppressed entirely using:

	#pragma TenDRA extra ; permit
with permit as warning or allow. The disallow option restores the default behaviour.


7.26 Compatibility with C++ to TDF producer

In the interests of compatibility between the C checker and the new C++ checker, all pragmas beginning:

	#pragma TenDRA ++
are silently ignored by tchk.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc11.html100644 1750 1750 11564 6466607534 17576 0ustar brooniebroonie C Checker Reference Manual: Common Errors

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


8.1 - Introduction
8.2 - Enumerations controlling switch statements
8.3 - Incomplete structures and unions
8.4 - Variable shadowing
8.5 - Floating point equality

8 Common Errors


8.1 Introduction

Tchk is capable of performing a number of checks for common programming mistakes. This chapter describes these checks and controlling pragmas.


8.2 Enumerations controlling switch statements

Enumerations are commonly used as control expressions in switch statements. When case labels for some of the enumeration constants belonging to the enumeration type do not exist and there is no default label, the switch statement has no effect for certain possible values of the control expression. Checks to detect such switch statements are controlled by:

	#pragma TenDRA enum switch analysis status
where status is on (raise an error), warning (produce a warning), or off (the default mode when no errors are produced).
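
A small sketch (the enumeration and function are invented) of a switch the analysis reports:

	#pragma TenDRA enum switch analysis warning
	enum colour { red, green, blue };
	int describe ( enum colour c ) {
		switch ( c ) {
			case red: return ( 0 );
			case green: return ( 1 );
		}	/* no case for blue and no default label: reported */
		return ( -1 );
	}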


8.3 Incomplete structures and unions

ISO C allows for structures or unions to be declared but not defined, provided they are not used in a context where it is necessary to know the complete structure. For example:

	struct tag *p;
is allowed, despite the fact that struct tag is incomplete. The TenDRA C checker has an option to detect such incomplete structures or unions, controlled by:

	#pragma TenDRA complete struct/union analysis status
where status is on to give an error as an incomplete structure or union is detected, warning to give a warning, or off to disable the check.

The check can also be controlled by passing the command-line option -X:complete_struct=state to tchk, where state is check, warn or dont.

The only place where the checker can actually detect that a structure or union is incomplete is at the end of the source file. This is because it is possible to complete a structure after it has been used. For example, in:

	struct tag *p;
	struct tag {
		int a;
		int b;
	};
struct tag is complete despite the fact that it was incomplete in the definition of p.


8.4 Variable shadowing

It is quite legal in C to have a variable in an inner scope with the same name as a variable in an outer scope. These variables are distinct: whilst in the inner scope, the declaration in the outer scope is not visible; it is "shadowed" by the local variable of the same name. Confusion can arise if this was not what the programmer intended. The checker can therefore be configured to detect shadowing in three cases: a local variable shadowing a global variable, a local variable shadowing a local variable with a wider scope, and a local variable shadowing a typedef name. The check is controlled by:

	#pragma TenDRA variable hiding analysis status
If status is on an error is raised when a local variable that shadows another variable is declared, if warning is used the error is replaced by a warning and the off option restores the default behaviour (shadowing is permitted and no errors are produced).
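
For instance (the names are invented), the declaration in f below shadows the file-scope n:

	#pragma TenDRA variable hiding analysis warning
	int n = 0;		/* file-scope variable */
	int f ( void ) {
		int n = 1;	/* local variable shadowing the global n: reported */
		return ( n );
	}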


8.5 Floating point equality

Due to the rounding errors that occur in the handling of floating point values, comparison for equality between two floating point values is a hazardous and unpredictable operation. Tests for equality of two floating point numbers are controlled by:

	#pragma TenDRA floating equality permit
where permit is allow, warning or disallow. By default the check is switched off.
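
A minimal sketch (the function name is invented) of a comparison the check reports:

	#pragma TenDRA floating equality warning
	int same ( double a, double b ) {
		return ( a == b );	/* direct floating point equality comparison: reported */
	}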


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc12.html100644 1750 1750 13401 6466607534 17567 0ustar brooniebroonie C Checker Reference Manual: Symbol Table Dump

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


9.1 - Introduction
9.2 - Unused headers
9.3 - Error processing
9.4 - API usage analysis
9.5 - Intermodular checks

9 Symbol Table Dump


9.1 Introduction

Tchk produces an extra output file, called a dump output file, for each translation unit processed. This file is in the form given by the symbol table output specification in Annex E, and contains information about the objects declared, defined or used within an application. Each object encountered during processing is assigned a unique reference number allowing uses of the object to be traced back to the declaration and definition of the object.

In the default mode only external declaration and definition information is written to each dump file. The amount of information output may be increased by passing the -sym[cehklsu] command line option to tchk. Any combination of the optional flags enclosed by [] may be used and the effect of each flag is described below:

  • no flags external declarations, definitions only;

  • c string and character literals output;

  • e errors incorporated into dump output;

  • e_only only errors output;

  • h included headers output;

  • k keywords output;

  • l local variables output;

  • s scopes output;

  • u variable usage output.

The dump information is currently used for four main purposes: detecting included header files from which nothing is used within the translation unit; production of lint-like error output; API usage analysis and type checking between translation units.


9.2 Unused headers

Header files which are included, but from which nothing is used within the source files comprising the translation unit, might just as well not have been included. Tchk can detect top-level include files which are unnecessary by analysing the dump output for the file. This check is enabled by passing the -Wd,-H command line flag to tchk. Errors are written to stderr in a simple ascii form by default, or to the unified dump file in dump format if the -D command line option is used.


9.3 Error processing

By default the error messages generated by the checker are written in a simple ascii form to stderr. If instead, the errors are written to the dump file using the -sym:e option mentioned above, an alternative lint-like error output may be generated by processing the dump files. The lint-like errors are enabled by passing the -Ycompact flag to tchk.


9.4 API usage analysis

Analysis performed on the set of dump files produced for an entire application can detect the objects, types, etc. from external APIs which are used by the application. The API usage analysis is enabled by passing one or more -api_checkAPI flags to tchk where API may be any of the standard APIs listed in section 2.1. The -api_check_outFILE flag may be used to direct the API analysis information to the file FILE (by default it is written to stdout). The APIs used to perform API usage analysis may be different from those used to process the application. Annex G.8 contains details of the methods used to perform the API usage analysis.


9.5 Intermodular checks

All the checks discussed in earlier chapters have been concerned with a single source file. However, tchk also contains a linking phase in which it is able to perform intermodular checks (i.e. checks between source files). In the linking phase, the dump files generated from each translation unit processed are combined into a single dump file containing information on all external objects within the application, and type consistency checks are applied to ensure that the definitions and declarations of each object are consistent and that external objects and functions have at most one definition.

The amount of information about an object stored in a dump file depends on the compilation mode used to produce that file. For example, if extra prototype checks are enabled (see section 3.3), the dump file contains any information inferred about a function from its traditional style definition or from applications of that function. For example, if one file contains:

	extern void f () ;
	void g ()
	{
		f ( "hello" ) ;
	}
and another contained:

	void f ( n ) int n ;
	{
		return ;
	}
then the inferred prototype:

	void f WEAK ( char * ) ;
from the call of f would be included in the first dump file, whereas the weak prototype deduced from the definition of f:

	void f WEAK ( int ) ;
would be included in the second. When these two dump files are linked, the inconsistency is discovered and an error is reported.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc13.html100644 1750 1750 6142 6466607534 17554 0ustar brooniebroonie C Checker Reference Manual: Conditional Compilation

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


10.1 - Conditional Compilation

10 Conditional Compilation


10.1 Conditional Compilation

Tchk generally treats conditional compilation in the same way as other compilers and checkers. For example, consider:

	#if expr
	.... /* First branch */
	#else
	.... /* Second branch */
	#endif
the expression, expr, is evaluated: if it is non-zero the first branch of the conditional is processed; if it is zero the second branch is processed instead.

Sometimes, however, tchk may be unable to evaluate the expression statically because of the abstract types and expressions which arise from the minimum integer range assumptions or the abstract standard headers used by the tool (see target-dependent types in section 4.5). For example, consider the following ISO compliant program:

	#include <stdio.h>
	#include <limits.h>
	int main () {
	#if ( CHAR_MIN == 0 )
		puts ("char is unsigned");
	#else
		puts ("char is signed");
	#endif
		return ( 0 );
	}
The TenDRA representation of the ISO API merely states that CHAR_MIN - the least value which fits into a char - is a target dependent integral constant. Hence, whether or not it equals zero is again target dependent, so the checker needs to maintain both branches. By contrast, any conventional compiler is compiling to a particular target machine on which CHAR_MIN is a specific integral constant. It can therefore always determine which branch of the conditional it should compile.

In order to allow both branches to be maintained in these cases, it has been necessary for tchk to impose certain restrictions on the form of the conditional branches and the positions in which such target-dependent conditionals may occur. These may be summarised as:

  • Target-dependent conditionals may not appear at the outer level. If the checker encounters a target-dependent conditional at the outer level an error is produced. In order to continue checking in the rest of the file an arbitrary assumption must be made about which branch of the conditional to process; tchk assumes that the conditional is true and the first branch is used;

  • The branches of allowable target-dependent conditionals may not contain declarations or definitions.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc14.html100644 1750 1750 2352 6466607534 17554 0ustar brooniebroonie C Checker Reference Manual: References

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


11 References

  1. Tdfc: The C to TDF Producer Issue 1.0 (DRA/CIS3/OSSG/TR/95/102/1.0)

  2. The C to TDF Producer Issue 2.1.0 (June 1993)

  3. TCheck, The TenDRA Program Checker (DRA/CIS/CSE2/TR/94/44/1.2, November 1994)

  4. Tcc Users Guide (DRA/CIS/CSE2/TR/94/48/1.2, June 1994)

  5. Implementation of ISO/IEC 9899 : 1990, Programming languages - C

  6. tspec - An API Specification Tool (DRA/CIS/CSE2/94/48/2.1, September 1994)


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc15.html100644 1750 1750 7141 6466607535 17557 0ustar brooniebroonie C Checker Reference Manual: Compilation Modes

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


A.1 - Base modes
A.2 - nepc and not_ansi modes

A Compilation Modes


A.1 Base modes

The Xs, Xp, Xw, Xc, Xa and Xt modes are mutually incompatible and should not be used together. All other built-in mode combinations are allowed and, of course, any built-in mode can be combined with user-defined modes as described in section 2.2.

Checks marked with E are enabled to produce an error and checks marked with W are enabled to produce a warning. A blank entry implies that the check is disabled.




A.2 nepc and not_ansi modes

These modes modify the base environment.

The nepc environment switches off most of the extra portability checking. It is specified by passing the -nepc option to tcc.

Printf String Checking OFF

Pragma Profile:

	#pragma TenDRA conversion analysis off
	#pragma TenDRA weak prototype analysis off
	#pragma TenDRA compatible type : char * == void * : allow
	#pragma TenDRA function pointer as pointer allow
	#pragma TenDRA character escape overflow allow
	#pragma TenDRA no nline after file end allow
	#pragma TenDRA bitfield overflow allow

The not_ansi environment provides support for a range of non-ansi dialect features. It is specified by passing the -not_ansi option to tcc.

Pragma Profile:

	#pragma TenDRA linkage resolution : (internal) on
	#pragma TenDRA unify external linkage on
	#pragma TenDRA directive assert allow
	#pragma TenDRA directive file allow
	#pragma TenDRA directive ident allow
	#pragma TenDRA directive unassert allow
	#pragma TenDRA directive weak allow
	#pragma TenDRA compatible type : char * == void * : allow
	#pragma TenDRA conditional lvalue allow
	#pragma TenDRA extra ; allow
	#pragma TenDRA extra bitfield int type allow
	#pragma TenDRA extra type definition allow
	#pragma TenDRA ignore struct/union/enum tag on
	#pragma TenDRA implicit int type for external declaration allow
	#pragma TenDRA implicit int type for function return allow
	#pragma TenDRA no external declaration allow
	#pragma TenDRA text after directive allow
	#pragma TenDRA unknown escape allow
	#pragma TenDRA unknown pragma allow
	#pragma TenDRA weak macro equality allow
	#pragma TenDRA extra ... allow
	#pragma TenDRA extra , allow
	#pragma TenDRA incomplete type as object type allow
	#pragma TenDRA dollar as ident allow
	#pragma TenDRA variable initialization allow
	#pragma TenDRA extra macro definition allow
	#pragma TenDRA incompatible type qualifier allow
	#pragma TenDRA no directive/nline after ident allow
	#pragma TenDRA unknown directive allow
	#pragma TenDRA no ident after # allow
	#pragma TenDRA block function static allow
	#pragma TenDRA unknown struct/union allow


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc16.html100644 1750 1750 1717 6466607535 17563 0ustar brooniebroonie C Checker Reference Manual: Command Line Options for Portability Checking

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


B Command Line Options for Portability Checking

where status can be check, warn or dont.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc17.html100644 1750 1750 15576 6466607535 17614 0ustar brooniebroonie C Checker Reference Manual: Integral Type Specification

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


C.1 - Specifying integer literal types
C.2 - The Portability Table

C Integral Type Specification


C.1 Specifying integer literal types

The integer literal pragmas are used to define the method of computing the type of an integer literal. Integer literals cannot be used in a program unless the class to which they belong has been described using an integer literal pragma. Each built-in checking mode includes some integer literal pragmas describing the semantics appropriate for that mode. If these built-in modes are inappropriate, then the user must describe the semantics using the pragma below:

	#pragma integer literal literal_class lit_class_type_list
The literal_class identifies the type of literal integer involved. The possibilities are:

  • decimal
  • octal
  • hexadecimal
Each of these types can optionally be followed by unsigned and/or long to specify an unsigned and/or long type respectively.

The values of the integer literals of any particular class are divided into contiguous sub-ranges specified by the lit_class_type_list which takes the form below:

    lit_class_type_list :
	    * int_type_spec
	    integer_constant int_type_spec | lit_class_type_list
    int_type_spec :
	    : type_name
	    * warning-opt : identifier
	    ** :
The first integer constant, i1 say, identifies the range [0,i1], the second, i2 say, identifies the range [i1+1,i2]. The symbol * specifies the unlimited range upwards from the last integer constant. Each integer constant must be strictly greater than its predecessor.

Associated with each sub-range is an int_type_spec which is either a type, a procedure token identifier with an optional warning (see G.9) or a failure. For each sub-range:

  • If the int_type_spec is a type name, then it must be an integral type and specifies the type associated with literals in that sub-range.

  • If the int_type_spec is an identifier, then the type of integer is computed by a procedure token of that name which takes the integer value as a parameter and delivers its type. The procedure token must have been declared previously as
    	#pragma token PROC ( VARIETY ) VARIETY
    
    Since the type of the integer is computed by a procedure token which may be implemented differently on different targets, there is the option of producing a warning whenever the token is applied.

  • If the int_type_spec is **, then any integer literal lying in the associated sub-range will cause the checker to raise an error.

For example:

	#pragma integer literal decimal 0x7fff : int | 0x7fffffff : long | * : unsigned long
divides unsuffixed decimal literals into three ranges: literals in the range [0, 0x7fff] are of type int, literals in the range [0x8000, 0x7fffffff] are of type long, and the remainder are of type unsigned long.

There are four pre-defined procedure tokens supplied with the compiler which are used in the startup files to provide the default specification for integer literals:

  • ~lit_int is the external identification of a token that returns the integer type according to the rules of ISO C for an unsuffixed decimal;

  • ~lit_hex is the external identification of a token that returns the integer type according to the rules of ISO C for an unsuffixed hexadecimal;

  • ~lit_unsigned is the external identification of a token that returns the integer type according to the rules of ISO C for integers suffixed by U only;

  • ~lit_long is the external identification of a token that returns the integer type according to the rules of ISO C for integers suffixed by L only.


C.2 The Portability Table

The portability table is used by the checker to describe the minimum assumptions about the representation of the integral types. It contains information on the minimum integer sizes and the minimum range of values that can be represented by each integer type.

Two built-in portability tables are provided. The default reflects the minimal requirements laid down in the ISO C standard. The 32-bit portability table (specified by the passing the -Y32bit option to tchk) reflects the implementation on most modern 32 bit machines. These tables are shown below.

ISO/ANSI Minimum Requirements Portability Table (default)

32 bit Portability table (specified by -Y32bit option)

The decimal integer associated with each of char_bits, short_bits , int_bits and long_bits gives the minimum number of bits in the representation of each integer type on all target machines. For example, if int_bits is set to 32 the compiler will perform its checks in the knowledge that the program will not be used on a machine whose int types are represented by 16 bits although they might be represented by 32 or 64 bits.

The minimum integer ranges are deduced from the minimum integer sizes as follows. Suppose b is the minimum number of bits that will be used to represent a certain integral type, then:

  • For unsigned integer types the minimum range is [0, 2^b - 1];

  • For signed integer types if signed_range is maximum the minimum range is [-2^(b-1), 2^(b-1) - 1]. Otherwise, if signed_range is symmetric the minimum range is [-(2^(b-1) - 1), 2^(b-1) - 1];

  • For the type char which is not specified as signed or unsigned, if char_type is signed then char is treated as signed, if char_type is unsigned then char is treated as unsigned, and if char_type is either, the minimum range of char is the intersection of the minimum ranges of signed char and unsigned char.
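For example, with b set to 16 and signed_range set to symmetric, the minimum range assumed for a signed type is [-(2^15 - 1), 2^15 - 1], i.e. [-32767, 32767]; with b set to 32 and signed_range set to maximum it is [-2^31, 2^31 - 1], i.e. [-2147483648, 2147483647].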



C Checker Reference Manual: Summary of the pragma statements

C Checker Reference Manual

January 1998



D Summary of the pragma statements

  1. pragma_syntax:
    	#pragma TenDRA tendra_pragma
    	#pragma token token_pragma
    	#pragma token_operation
    	#pragma integer_pragma
    
  2. tendra_pragma:
    	begin ==>
    	begin name environment identifier 
    	declaration block identifier begin ==>
    	declaration block end ==>
    	directory name use environment identifier
    	use environment identifier
    	end ==>
    	analysis_spec
    	function_pars ==>
    	keyword identifier for keyword_spec
    	type identifier for type_spec
    	check_pragma
    	variable_pragma
    	dialect_pragma
    
  3. analysis_spec:
    	complete struct/union analysis state ==> 
    	conversion conv_list allow ==>
    	conversion analysis conversion_spec ==>
    	discard analysis discard_spec ==>
    	enum switch analysis state ==>
    	fall into case permit ==>
    	function pointer as pointer permit ==>
    	integer operator analysis state ==>
    	integer overflow analysis state ==>
    	nested comment analysis state ==>
    	operator precedence analysis state ==>
    	unreachable code permit ==>
    	variable analysis state ==>
    	variable hiding analysis state ==>
    	weak prototype analysis state ==>
    
  4. conversion_spec:
    	empty
    	( int-int ) ==>
    	( int-int explicit ) ==>
    	( int-int implicit ) ==>
    	( int-enum implicit) ==>
    	(enum-int implicit) ==>
    	( int-pointer ) ==>
    	( int-pointer explicit ) ==>
    	(int-pointer implicit ) ==>
    	( pointer-int ) ==>
    	( pointer-int explicit ) ==>
    	( pointer-int implicit ) ==>
    	( pointer-pointer ) ==>
    	( pointer-pointer explicit ) ==>
    	( pointer-pointer implicit ) ==>
    
  5. discard_spec:
    	empty
    	( function return ) ==>
    	( static ) ==>
    	( value ) ==>
    	
  6. function_pars:
    	argument type_name as type_name ==>
    	argument type_name as ... ==>
    	extra ... permit ==>
    
  7. keyword_spec:
    	discard value ==>
    	discard variable ==>
    	exhaustive ==>
    	fall into case ==>
    	set ==>
    	set reachable ==>
    	set unreachable ==>
    	type representation ==>
    	weak ==>
    
  8. type_spec:
    	bottom ==>
    	... printf ==>
    	... scanf ==>
    
  9. check_pragma:
    	implicit function declaration state ==>
    	incompatible interface declaration permit ==>
    
    	incompatible void return permit ==>
    
  10. variable_pragma:
    	discard identifier separator ==> 
    	preserve identifier_list ==>
    	set identifier separator ==>
    	suspend static identifier_list ==>
    	exhaustive ==>
    
  11. separator:
    	;
    	,
    
  12. identifier_list:
    	identifier
    	identifier identifier_list
    
  13. dialect_pragma:
    	++ ==>
    	assignment as bool permit ==>
    	bitfield overflow permit ==>
    	block function static permit ==>
    	character set_sign ==>
    	character escape overflow permit ==>
    	compatible type : char * == void * : permit ==> 
    	conditional lvalue dallow ==>
    	const conditional permit ==>
    	dollar as ident dallow ==>
    	directive pp_directive pp_spec ==>
    	directive as macro argument permit ==>
    	external volatile_t ==>
    	extra ; permit ==>
    	extra ; after conditional permit ==>
    	extra , permit ==>
    	extra bitfield int type permit ==>
    	extra macro definition dallow ==>
    	extra type definition permit ==>
    	forward enum declaration dallow ==>
    	floating point equality permit ==>
    	ident ... permit ==>
    	ignore struct/union/enum tag status ==> 
    	implicit int type for external declaration permit ==>
    	implicit int type for function return permit ==> 
    	includes depth integral_constant ==>
    	incompatible linkage permit ==>
    	incompatible promoted function argument dallow ==>
    	incompatible type qualifier dallow ==>
    	incomplete type as object type permit ==> 
    	indented # directive permit ==>
    	initialization of struct/union (auto) permit ==> 
    	linkage resolution : linkage_spec ==>
    	longlong type permit ==>
    	no directive/nline after ident permit ==> 
    	no external declaration permit ==>
    	no ident after # permit ==>
    	no nline after file end permit ==>
    	prototype permit ==>
    	prototype (weak) permit ==>
    	set longlong type : type_name ==>
    	set name limit integer_constant ==>
    	set size_t : type_name ==>
    	text after directive permit ==>
    	unify external linkage status ==>
    	unify incompatible string literal permit ==> 
    	unknown escape permit ==>
    	unknown pragma permit ==>
    	unknown struct/union dallow ==>
    	unknown directive permit ==>
    	unmatched quote permit ==>
    	variable initialization dallow ==>
    	weak macro equality permit ==>
    	writeable string literal permit ==>
    
  14. set_sign:
    	signed
    	unsign
    	either
    
  15. pp_directive:
    	file
    	ident
    	assert
    	unassert
    	weak
    
  16. pp_spec:
    	allow
    	warning
    	(ignore) allow
    	(ignore) warning
    
  17. linkage_spec:
    	(internal) on
    	(internal) warning
    	(external) on
    	(external) warning
    	off
    
  18. state:
    	on
    	warning
    	off
    
  19. permit:
    	allow
    	warning
    	disallow
    
  20. dallow:
    	allow
    	disallow
    
  21. token_pragma:
    	ARITHMETIC ==>
    	DEFINE MEMBER
    	EXP ==>
    	FUNC ==>
    	MEMBER ==>
    	NAT ==>
    	PROC ==>
    	STATEMENT ==>
    	STRUCT ==>
    	TYPE ==>
    	UNION ==>
    	VARIETY ==>
    
  22. token_operation:
    	define
    	no_def
    	extend
    	ignore
    	implement
    	interface
    	promote ==>
    
  23. integer_pragma:
    	integer literal lit_class_type_list ==>
    
  24. lit_class_type_list:
    	*int_type_spec
    	integer_constant int_type_spec | lit_class_type_list
    
  25. int_type_spec:
    	type_name
    	*warningopt : identifier
    	** :
    



C Checker Reference Manual: Dump format specification

C Checker Reference Manual

January 1998



E.1 - Basics
E.2 - Dump commands
E.3 - API information
E.4 - Base definitions
E.5 - Error commands
E.6 - File commands
E.7 - Identifier commands
E.8 - Scope commands
E.9 - String command
E.10 - Templates
E.11 - Token sort information
E.12 - Type information

E Dump format specification


E.1 Basics

	digit :	one of 0 1 2 3 4 5 6 7 8 9
	digit-sequence :
		digit
		digit-sequence digit
	number :
		digit-sequence
	string :
		<characters>
		&digit-sequence<characters>
	location :
		number number number string string
		number number number string *
		number number number *
		number number *
		number *
		*

E.2 Dump commands

	dump-file :
		command-listopt
	command-list :
		command
		command command-list
	command :
		B base-definition	base class graph
		error-command
		file-command
		I identifier-command	implicit declarations etc.
		identifier-command
		scope-command
		O identifier identifier	overriding virtual function
		P type : type	promotion type specifier
		string-command
		V number number string	version number
		X api-info	external token name
		Z template-info	template instance

E.3 API information

	api-info :
		identifier key identifier string

E.4 Base definitions

	virtual :
		V
	base-class :
		number = virtualopt accessopt type-name
		number
	base-list :
		base-graph
		base-graph base-list
	base-graph :
		base-class
		base-class ( base-list )
	base-definition :
		identifier-key number base-graph
	base-number :
		number : type-name

E.5 Error commands

	error-command :
		EA error-argument	error argument
		EC error-info	continuation error
		EF location error-info	fatal error
		EI location error-info	internal error
		ES location error-info	serious error
		EW location error-info	warning
	error-info :
		error-name number number
	error-name :
		number = string
		number
	error-argument :
		B base-number
		C scope-identifier
		E exp
		H hashid
		I identifier
		L location
		N nat
		S string
		T type
		V number
		V -number

E.6 File commands

	file-command :
		FD number = string stringopt	inclusion directory
		FE location	file end
		FIA location string	file include with <>
		FIE location string	include end-up
		FIN location string	file include with []
		FIQ location string	file include with ""
		FIR location	resume file
		FIS location string	include startup
		FS location directory	file start
	directory :
		number
		*

E.7 Identifier commands

	identifier-command :
		C identifier-info	call identifier
		D identifier-info type-info	define identifier
		L identifier-info	use identifier
		M identifier-info type-info	declare identifier
		Q identifier-info	end identifier definition
		T identifier-info type-info	tentatively define identifier
		U identifier-info	undefine identifier
		W identifier-info type-info	weak prototype
	identifier-info :
		identifier-key location identifier
	identifier-key :
		CD	static data member
		CF function-key	member function
		CM	data member
		CS function-key	static member function
		CV function-key	virtual member function
		E	enumerator
		FB function-key	builtin function
		FE function-key	external function
		FS function-key	static function
		K	keyword
		L	label
		MB	built-in macro
		MF	function-like macro
		MO	object-like macro
		NA	namespace alias
		NN	namespace name
		TA	type alias
		TC	class tag
		TE	enum tag
		TS	struct tag
		TU	union tag
		VA	automatic variable
		VE	extern variable
		VP	function parameter
		VS	static variable
		XF	procedure token
		XO	object token
		XP	token parameter
		XT	template parameter
	function-key :
		empty
		C function-key	C linkage
		I function-key	inline
	identifier :
		number = hashid accessopt scope-identifier
		number
	hashid :
		string	simple name
		C type	constructor
		D type	destructor
		O string	operator
		T type	conversion
	access :
		B	protected
		N	public
		P	private

E.8 Scope commands

	scope-command :
		SE scope-key location identifier	end scope
		SS scope-key location identifier	start scope
	scope-key :
		B	block scope
		CC	conditional scope
		N	namespace scope
		CF	false conditional scope
		CT	true conditional scope
		D	other declarative scope
		H	header scope
		S	class scope
	scope-identifier :
		identifier
		*

E.9 String command

	string-command :
		A location string	string literal
		AC location string	character literal
		ACL location string	wide character literal
		AL location string	wide string literal

E.10 Templates

	specialisation-info :
		token-application
		*
	template-info :
		identifier-key identifier token-application specialisation-info

E.11 Token sort information

	sort :
		ZEC type-info	constant expression
		ZEL type-info	lvalue expression
		ZER type-info	rvalue expression
		ZF type-info	function
		ZM type-info : type-name	member
		ZN	integral constant
		ZPS parameter-list-opt : sort	procedure type ()
		ZPG parameter-list-opt ; parameter-list-opt : sort	procedure type {}
		ZS	statement
		ZTA	arithmetic type
		ZTF	floating type
		ZTI	integral type
		ZTO	opaque type
		ZTP	scalar type
		ZTS	structure type
		ZTt parameter-list-opt :	template type
		ZTTS	structure tag 
		ZTTU	union tag
		ZTU	union type
		ZUF number	function macro
		ZUO	object macro
	exp :
		nat
	member :
		identifier
		string
	statement :
		exp
	token-argument :
		C identifier	template argument
		E exp	expression argument
		F identifier	function argument
		M member	member argument
		N nat	integer constant argument
		S statement	statement argument
		T type-info	type argument
	token-argument-list :
		token-argument
		token-argument , token-argument-list
	token-application :
		T identifier , token-argument-list :

E.12 Type information

	type-info :
		scope-identifier	for namespace alias
		sort	for token, macro etc.
		type	for variable etc.
		type identifier-opt	for overloaded function
	type :
		qualifieropt unqualified-type
	qualifier :
		C	const
		V	volatile
		CV	const volatile
	unqualified-type :
		type-name
		token-application
		c	char
		s	short
		i	int
		l	long
		x	long long
		f	float
		d	double
		r	long double
		v	void
		b	bool
		w	wchar_t
		Sc	signed char
		Uc	unsigned char
		Us	unsigned short
		Ui	unsigned int
		Ul	unsigned long
		Ux	unsigned long long
		u	bottom
		y	ptrdiff_t
		z	size_t
		a type : type	arithmetic type
		n nat	literal type
		p type	promoted type
		t parameter-listopt : type	template type
		A natopt : type	array type
		B nat : type	bitfield type
		F type parameter-types	function type
		M type-name : type	pointer to member type
		P type	pointer type
		R type	reference type
		W type parameter-types	weak function type
		Q string	quoted type
		*	unknown type
	parameter-types :
		: exceptionopt qualifieropt :	no parameters
		. exceptionopt qualifieropt :	ellipsis
		. exceptionopt qualifieropt .	unknown
		, type parameter-types
	exception :
		( exception-listopt )
	exception-list :
		type
		type, exception-list
	parameter-list :
		identifier
		identifier , parameter-list
	type-name :
		identifier
	nat :
		+number
		-number
		string
		identifier
		token-application
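As an informal reading of the grammar above (not verbatim dump output, which may differ in layout), the type encodings compose as in the following examples:

	Cc	const char
	PCc	const char *, i.e. a pointer type whose dependent type is const char
	PFi,c::	int (*)(char), i.e. a pointer to a function type with result int and a
		single char parameter; the trailing :: closes the parameter list with no
		exception or qualifier information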


C Checker Reference Manual: The Token Syntax

C Checker Reference Manual

January 1998



F.1 - Introduction
F.2 - Program construction using TDF
F.3 - The token syntax
F.4 - Token identification
F.5 - Expression tokens
F.6 - Statement tokens
F.7 - Type tokens
F.7.1 - General type tokens
F.7.2 - Integral type tokens
F.7.3 - Arithmetic type tokens
F.7.4 - Compound type tokens
F.7.5 - Type token compatibility, definitions etc.
F.8 - Selector tokens
F.9 - Procedure tokens
F.9.1 - General procedure tokens
F.9.2 - Simple procedure tokens
F.9.3 - Function procedure tokens
F.9.4 - Defining procedure tokens
F.10 - Tokens and APIs

F The Token Syntax


F.1 Introduction

The token syntax is used to introduce references to program constructs such as types, expressions etc. that can be defined in other compilation modules. This can be thought of as a generalisation of function prototyping which is used to introduce references to functions defined elsewhere. The references introduced by the token syntax are called tokens because they are tokens for the program constructs that they reference. The token syntax is said to specify a token interface.

It is expected that the general user will have little direct contact with the token syntax, instead using the abstract standard headers provided or using the tspec tool [Ref. 5] to generate their own token interface header files automatically. However, it may occasionally be necessary to use the raw power of the token syntax directly.

As an example of the power of the token syntax consider the program below:

	#pragma token TYPE FILE#
	#pragma token EXP rvalue:FILE *:stderr#
	int fprintf(FILE *, const char *, ...);
	void f(void) {
		fprintf(stderr,"hello world\n");
	}
The first line of the program introduces a token, FILE, for a type. By using its identification, FILE, this token can be used wherever a type could have been used throughout the rest of the program. The compiler can then compile this program to TDF (the abstract TenDRA machine) even though it contains an undefined type. This is fundamental to the construction of portable software, where the developer cannot assume the definitions of various types as they may be different on different machines.

The second line of the example, which introduces a token for an expression, is somewhat more complicated. In order to make use of an expression, it is necessary to know its type and whether or not it is an lvalue (i.e. whether or not it can be assigned to). As can be seen from the example however, it is not necessary to know the exact type of the expression because a token can be used to represent its type.

The TenDRA compiler makes no assumptions about the possible definitions of tokens and will raise an error if a program requires information about an undefined token. In this way many errors resulting from inadvertent use of a definition present on the developer's system can be detected. For example, developers often assume that the type FILE will be implemented by a structure type when in fact the ISO C standard permits the implementation of FILE by any type. In the program above, any attempt to access members of stderr would cause the compiler to raise an error.


F.2 Program construction using TDF

Traditional program construction using the C language has two phases: compilation and linking.

In the compilation phase the source text written in the C language is mapped to an object code format. This object code is generally not complete in itself and must be linked with other program segments such as definitions from the system libraries.

When tokens are involved there is an extra stage in the construction process where undefined tokens in one program segment are linked with their definitions in another program segment. To summarise, program construction using TDF and the TenDRA tools has four basic operations:

  1. Source file compilation to TDF. The TDF produced may be incomplete in the sense that it may contain undefined tokens;

  2. TDF linking. The undefined tokens represented in TDF are linked to their definitions in other compilation modules or libraries. During linking, tokens with the same identifier are treated as the same token;

  3. TDF translation. This is the conversion of TDF into standard object file format for a particular machine system. The program is still incomplete at this stage in the sense that it may contain undefined library functions;

  4. Object file linking. This corresponds directly to standard object file linking.


F.3 The token syntax

The token syntax is an extension to the ISO C standard language to allow the use of tokens to represent program constructs. Tokens can be used either in place of, or as well as, the definitions required by a program. In the latter case, the tokens exist merely to enforce correct definitions and usage of the objects they reference. However it should be noted that the presence of a token introduction can alter the semantics of a program (examples are given in F.5 Expression tokens). The semantics have been altered to force programs to respect token interfaces where they would otherwise fail to do so.

The token syntax takes the following basic form:

	#pragma token token-introduction token-identification
It is introduced as a pragma to allow other compilers to ignore it, though if tokens are being used to replace the definitions needed by a program, ignoring these pragmas will generally cause the compilation to fail.

The token-introduction defines the kind of token being introduced along with any additional information associated with that kind of token. Currently there are five kinds of token that can be introduced, corresponding approximately to expressions, statements, type-names, member designators and function-like macros.

The token-identification provides the means of referring to the token, both internally within the program and externally for TDF linking purposes.


F.4 Token identification

The syntax for the token-identification is as follows:

	token identification:
		name-spaceopt identifier # external-identifieropt

	name-space: 
		TAG
There is a default name space associated with each kind of token and internal identifiers for tokens generally reside in these default name spaces. The ISO C standard describes the five name spaces as being:

  1. The label space, in which all label identifiers reside;

  2. The tag space, in which structure, union and enumeration tags reside;

  3. The member name space, in which structure and union member selectors reside;

  4. The macro name space, in which all macro definitions reside. Token identifiers in the macro name space have no definition and so are not expanded. However, they behave as macros in all other respects;

  5. The ordinary name space in which all other identifiers reside.

The exception is compound type-token identifiers (see F.7.4 Compound type tokens) which by default reside in the ordinary name space but can be forced to reside in the tag name space by setting the optional name-space to be TAG.

The first identifier of the token-identification provides the internal identification of the token. This is the name used to identify the token within the program. It must be followed by a #.

All further preprocessing tokens until the end of the line are treated as part of the external-identifier with non-empty white space sequences being replaced by a single space. The external-identifier specifies the external identification of the token which is used for TDF linking. External token identifications reside in their own name space which is distinct from the external name space for functions and objects. This means that it is possible to have both a function and a token with the same external identification. If the external-identifier is omitted it is assumed that the internal and external identifications are the same.
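For example (the external name used here is purely illustrative), a type token may be given an external identification that differs from its internal one:

	#pragma token TYPE FILE # ansi.stdio.FILE

Within the program the token is referred to as FILE, while for TDF linking purposes it is identified as ansi.stdio.FILE. Omitting everything after the # would make the external identification FILE as well.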


F.5 Expression tokens

There are various properties associated with expression tokens which are used to determine the operations that may be performed upon them.

  • Designation is the classification of the value delivered by evaluating the expression. The three possible designations, implied by the ISO C standard, section 6.3, are:
    • value - expression describes the computation of a value;

    • object - expression designates a variable which may have an associated type qualifier giving the access conditions;

    • function designation - expression designates a function.

  • Type specifies the type of the expression ignoring any type qualification;

  • Constancy is the property of being a constant expression as specified in the ISO C Standard section 6.4.

The syntax for introducing expression tokens is:
	exp-token:
		EXP exp-storage : type-name :
		NAT
	exp-storage:
		rvalue
		lvalue
		const
Expression tokens can be introduced using either the EXP or NAT token introductions. Expression tokens introduced using NAT are constant value designations of type int i.e. they reference constant integer expressions. All other expression tokens are assumed to be non-constant and are introduced using EXP.

  • The exp-storage is either lvalue or rvalue. If it is lvalue, then the token is an object designation without type qualification. If it is rvalue then the token is either a value or a function designation depending on whether or not its type is a function type.

  • The type-name is the type of the expression to which the token refers.

All internal expression token identifiers must reside in the macro name space and this is consequently the default name space for such identifiers. Hence the optional name-space, TAG, should not be present in an EXP token introduction. Although the use of an expression token after it has been introduced is very similar to that of an ordinary identifier, as it resides in the macro name space, it has the additional properties listed below:

  • expression tokens cannot be hidden by using an inner scope;

  • with respect to #ifdef, expression tokens are defined;

  • the scope of expression tokens can be terminated by #undef.
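A small sketch of these properties (the token name is illustrative):

	#pragma token EXP rvalue : int : buf_len#
	#ifdef buf_len
	/* reached: with respect to #ifdef, the token buf_len is defined */
	#endif
	#undef buf_len		/* the scope of the token buf_len ends here */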

In order to make use of tokenised expressions, a new symbol, exp-token-name , has been introduced at translation phase seven of the syntax analysis as defined in the ISO C standard. When an expression token identifier is encountered by the preprocessor, an exp-token-name symbol is passed through to the syntax analyser. An exp-token-name provides information about an expression token in the same way that a typedef-name provides information about a type introduced using a typedef. This symbol can only occur as part of a primary-expression (ISO C standard section 6.3.1) and the expression resulting from the use of exp-token-name will have the type, designation and constancy specified in the token introduction. As an example, consider the pragma:

	#pragma token EXP rvalue : int : x#
This introduces a token for an expression which is a value designation of type int with internal and external name x.

Expression tokens can either be defined using #define statements or by using externals. They can also be resolved as a result of applying the type-resolution or assignment-resolution operators (see F.7.5 Type token compatibility, definitions etc.). Expression token definitions are subject to the following constraints:

  • if the exp-token-name refers to a constant expression (i.e. it was introduced using the NAT token introduction), then the defining expression must also be a constant expression as expressed in the ISO C standard, section 6.4;

  • if the exp-token-name refers to an lvalue expression, then the defining expression must also designate an object and the type of the expression token must be resolvable to the type of the defining expression. All the type qualifiers of the defining expression must appear in the object designation of the token introduction;

  • if the exp-token-name refers to an expression that has function designation, then the type of the expression token must be resolvable to the type of the defining expression.

The program below provides two examples of the violation of the second constraint.

	#pragma token EXP lvalue : int : i#
	extern short k;
	#define i 6
	#define i k
The expression token i is an object designation of type int. The first violation occurs because the expression, 6, does not designate an object. The second violation is because the type of the token expression, i, is int which cannot be resolved to the type short.

If the exp-token-name refers to an expression that designates a value, then the defining expression is converted, as if by assignment, to the type of the expression token using the assignment-resolution operator (see F.7.5 Type token compatibility, definitions etc.). With all other designations the defining expression is left unchanged. In both cases the resulting expression is used as the definition of the expression token. This can subtly alter the semantics of a program. Consider the program:

	#pragma token EXP rvalue:long:li#
	#define li 6
	int f() {
		return sizeof(li);
	}
The definition of the token li causes the expression, 6, to be converted to long (this is essential to separate the use of li from its definition). The function, f, then returns sizeof(long). If the token introduction was absent however f would return sizeof(int).

Although they look similar, expression token definitions using #defines are not quite the same as macro definitions. A macro can be defined by any preprocessing tokens which are then computed in phase 3 of translation as defined in the ISO C standard, whereas tokens are defined by assignment-expressions which are computed in phase 7. One of the consequences of this is illustrated by the program below:

	#pragma token EXP rvalue:int :X#
	#define X M+3
	#define M sizeof(int)
	int f(int x) { return (x+X); }
If the token introduction of X is absent, the program above will compile as, at the time the definition of X is interpreted (when evaluating x+X), both M and X are in scope. When the token introduction is present the compilation will fail as the definition of X, being part of translation phase 7, is interpreted when it is encountered and at this stage M is not defined. This can be rectified by reversing the order of the definitions of X and M or by bracketing the definition of X. i.e.

	#define X (M+3)
Conversely consider:

	#pragma token EXP rvalue:int:X#
	#define M sizeof(int)
	#define X M+3
	#undef M
	int M(int x) { return (x+X); }
The definition of X is computed on line 3 when M is in scope, not on line 5 where it is used. Token definitions can be used in this way to relieve some of the pressures on name spaces by undefining macros that are only used in token definitions. This facility should be used with care as it may not be a straightforward matter to convert the program back to a conventional C program.

Expression tokens can also be defined by declaring the exp-token-name that references the token to be an object with external linkage e.g.

	#pragma token EXP lvalue:int:x#
	extern int x;
The semantics of this program are effectively the same as the semantics of:
	#pragma token EXP lvalue:int:x#
	extern int _x;
	#define x _x

F.6 Statement tokens

The syntax for introducing a statement token is simply the keyword STATEMENT, as in the example below:

	#pragma token STATEMENT init_globs#
	int g(int);
	int f(int x) { init_globs return g(x);}
Internal statement token identifiers reside in the macro name space. The optional name space, TAG, should not appear in statement token introductions.

The use of statement tokens is analogous to the use of expression tokens (see F.5 Expression tokens). A new symbol, stat-token-name, has been introduced into the syntax analysis at phase 7 of translation as defined in the ISO C standard. This token is passed through to the syntax analyser whenever the preprocessor encounters an identifier referring to a statement token. A stat-token-name can only occur as part of the statement syntax (ISO C standard, section 6.6).

As with expression tokens, statement tokens are defined using #define statements. An example of this is shown below:

	#pragma token STATEMENT i_globs#
	#define i_globs {int i=x;x=3;}
The constraints on the definition of statement tokens are:

  • the use of labels is forbidden unless the definition of the statement token occurs at the outer level (i.e. outside of any compound statement forming a function definition);

  • the use of return within the defining statement is not allowed.

The semantics of the defining statement are precisely the same as the semantics of a compound statement forming the definition of a function with no parameters and void result. The definition of statement tokens carries the same implications for phases of translation as the definition of expression tokens (see F.5 Expression tokens).


F.7 Type tokens

Type tokens are used to introduce references to types. The ISO C standard, section 6.1.2.5, identifies the following classification of types:

  • the type char;
  • signed integral types;
  • unsigned integral types;
  • floating types;
  • character types;
  • enumeration types;
  • array types;
  • structure types;
  • union types;
  • function types;
  • pointer types;
These types fall into the following broader type classifications:

  • integral types - consisting of the signed integral types, the unsigned integral types and the type char;

  • arithmetic types - consisting of the integral types and the floating types;

  • scalar types - consisting of the arithmetic types and the pointer types;

  • aggregate types - consisting of the structure and array types;

  • derived types - consisting of array, structure, union, function and pointer types;

  • derived declarator types - consisting of array, function and pointer types.

The classification of a type determines which operations are permitted on objects of that type. For example, the ! operator can only be applied to objects of scalar type. In order to reflect this, there are several type token introductions which can be used to classify the type to be referenced, so that the compiler can perform semantic checking on the application of operators. The possible type token introductions are:

	type-token:
		TYPE
		VARIETY
		ARITHMETIC
		STRUCT
		UNION

F.7.1 General type tokens

The most general type token introduction is TYPE. This introduces a type of unknown classification which can be defined to be any C type. Only a few generic operations can be applied to such type tokens, since the semantics must be defined for all possible substituted types. Assignment and function argument passing are effectively generic operations, apart from the treatment of array types. For example, according to the ISO C standard, even assignment is not permitted if the left operand has array type and we might therefore expect assignment of general token types to be illegal. Tokens introduced using the TYPE token introduction can thus be regarded as representing non-array types with extensions to represent array types provided by applying non-array semantics as described below.

Once general type tokens have been introduced, they can be used to construct derived declarator types in the same way as conventional type declarators. For example:

	#pragma token TYPE t_t#
	#pragma token TYPE t_p#
	#pragma token NAT n#
	typedef t_t *ptr_type; /* introduces pointer type */
	typedef t_t fn_type(t_p);/*introduces function type */
	typedef t_t arr_type[n];/*introduces array type */
The only standard conversion that can be performed on an object of general token type is the lvalue conversion (ISO C standard section 6.2). Lvalue conversion of an object with general token type is defined to return the item stored in the object. The semantics of lvalue conversion are thus fundamentally altered by the presence of a token introduction. If type t_t is defined to be an array type the lvalue conversion of an object of type t_t will deliver a pointer to the first array element. If, however, t_t is defined to be a general token type, which is later defined to be an array type, lvalue conversion on an object of type t_t will deliver the components of the array.

This definition of lvalue conversion for general token types is used to allow objects of general tokenised types to be assigned to and passed as arguments to functions. The extensions to the semantics of function argument passing and assignment are as follows: if the type token is defined to be an array then the components of the array are assigned and passed as arguments to the function call; in all other cases the assignment and function call are the same as if the defining type had been used directly.
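A minimal sketch of these extended semantics (all names are illustrative):

	#pragma token TYPE t_t#
	void g(t_t);
	void f(t_t *p, t_t v) {
		*p = v;	/* assignment of a general token type */
		g(v);	/* argument passing of a general token type; if t_t is later
			   defined to be an array type, the components of the array
			   are assigned and passed rather than a pointer to them */
	}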

The default name space for the internal identifiers for general type tokens is the ordinary name space and all such identifiers must reside in this name space. The local identifier behaves exactly as if it had been introduced with a typedef statement and is thus treated as a typedef-name by the syntax analyser.

F.7.2 Integral type tokens

The token introduction VARIETY is used to introduce a token representing an integral type. A token introduced in this way can only be defined as an integral type and can be used wherever an integral type is valid.

Values which have integral tokenised types can be converted to any scalar type (see F.7 Type tokens). Similarly values with any scalar type can be converted to a value with a tokenised integral type. The semantics of these conversions are exactly the same as if the type defining the token were used directly. Consider:

	#pragma token VARIETY i_t#
	short f(void) {
		i_t x_i = 5;
		return x_i;
	}
	short g(void) {
		long x_i = 5;
		return x_i;
	}
Within the function f there are two conversions: the value, 5, of type int, is converted to i_t, the tokenised integral type, and a value of tokenised integral type i_t is converted to a value of type short. If the type i_t were defined to be long then the function f would be exactly equivalent to the function g.

The usual arithmetic conversions described in the ISO C standard (section 6.3.1.5) are defined on integral type tokens and are applied where required by the ISO C standard.

The integral promotions are defined according to the rules introduced in Chapter 4. These promotions are first applied to the integral type token and then the usual arithmetic conversions are applied to the resulting type.

As with general type tokens, integral type tokens can only reside in their default name space, the ordinary name space (the optional name-space, TAG, cannot be specified in the token introduction). They also behave as though they had been introduced using a typedef statement.

F.7.3 Arithmetic type tokens

The token introduction ARITHMETIC introduces an arithmetic type token. In theory, such tokens can be defined by any arithmetic type, but the current implementation of the compiler only permits them to be defined by integral types. These type tokens are thus exactly equivalent to the integral type tokens introduced using the token introduction VARIETY.

F.7.4 Compound type tokens

For the purposes of this document, a compound type is a type describing objects which have components that are accessible via member selectors. All structure and union types are thus compound types, but, unlike structure and union types in C, compound types do not necessarily have an ordering on their member selectors. In particular, this means that some objects of compound type cannot be initialised with an initialiser-list (see ISO C standard section 6.5.7).

Compound type tokens are introduced using either the STRUCT or UNION token introductions. A compound type token can be defined by any compound type, regardless of the introduction used. It is expected, however, that programmers will use STRUCT for compound types with non-overlapping member selectors and UNION for compound types with overlapping member selectors. The compound type token introduction does not specify the member selectors of the compound type - these are added later (see F.8 Selector tokens).

Values and objects with tokenised compound types can be used anywhere that a structure and union type can be used.

Internal identifiers of compound type tokens can reside in either the ordinary name space or the tag name space. The default is the ordinary name space; identifiers placed in the ordinary name space behave as if the type had been declared using a typedef statement. If the identifier, id say, is placed in the tag name space, it is as if the type had been declared as struct id or union id. Examples of the introduction and use of compound type tokens are shown below:

	#pragma token STRUCT n_t#
	#pragma token STRUCT TAG s_t#
	#pragma token UNION TAG u_t#
	
	void f() {
		n_t x1;
		struct n_t x2;		/* Illegal,n_t not in the tag name space */
		s_t x3;			/* Illegal,s_t not in the ordinary name space*/
		struct s_t x4; 
		union u_t x5;
	}

F.7.5 Type token compatibility, definitions etc.

A type represented by an undefined type token is incompatible (ISO C standard section 6.1.3.6) with all other types except for itself. A type represented by a defined type token is compatible with everything that is compatible with its definition.

Type tokens can only be defined by using one of the operations known as type-resolution and assignment-resolution. Note that, as type token identifiers do not reside in the macro name space, they cannot be defined using #define statements.

Type-resolution operates on two types and is essentially identical to the operation of type compatibility (ISO C standard section 6.1.3.6) with one major exception. In the case where an undefined type token is found to be incompatible with the type with which it is being compared, the type token is defined to be the type with which it is being compared, thereby making them compatible.

The ISO C standard prohibits the repeated use of typedef statements to define a type. However, in order to allow type resolution, the compiler allows types to be consistently redefined using multiple typedef statements if:

  • there is a resolution of the two types;
  • as a result of the resolution, at least one token is defined.
As an example, consider the program below:

	#pragma token TYPE t_t#
	typedef t_t *ptr_t_t;
	typedef int **ptr_t_t;
The second definition of ptr_t_t causes a resolution of the types t_t * and int **. The rules of type compatibility state that two pointers are compatible if their dependent types are compatible, thus type resolution results in the definition of t_t as int *.

Type-resolution can also result in the definition of other tokens. The program below results in the expression token N being defined as (4*sizeof(int)).

	#pragma token EXP rvalue:int:N#
	typedef int arr[N];
	typedef int arr[4*sizeof(int)];
The type-resolution operator is not symmetric; a resolution of two types, A and B say, is an attempt to resolve type A to type B. Thus only the undefined tokens of A can be defined as a result of applying the type-resolution operator. In the examples above, if the typedefs were reversed, no type-resolution would take place and the types would be incompatible.

Assignment-resolution is similar to type-resolution but it occurs when converting an object of one type to another type for the purposes of assignment. Suppose the conversion is not possible and the type to which the object is being converted is an undefined token type. If the token can be defined in such a way that the conversion is possible, then that token will be suitably defined. If there is more than one possible definition, the definition causing no conversion will be chosen.
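For example, in the sketch below (the names are illustrative) the assignment causes the undefined token type t_t to be resolved; of the possible definitions, int is chosen as the one requiring no conversion:

	#pragma token TYPE t_t#
	t_t x;
	void f(void) {
		x = 5;
	}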


F.8 Selector tokens

The use of selector tokens is the primary method of adding member selectors to compound type tokens. (The only other method is to define the compound type token to be a particular structure or union type.) The introduction of new selector tokens can occur at any point in a program and they can thus be used to add new member selectors to existing compound types.

The syntax for introducing member selector tokens is as follows:

	selector-token:
		MEMBER selector-type-name : type-name :
	selector-type-name:
		type-name
		type-name % constant-expression
The selector-type-name specifies the type of the object selected by the selector token. If the selector-type-name is a plain type-name, the member selector token has that type. If the selector-type-name consists of a type-name and a constant-expression separated by a % sign, the member selector token refers to a bitfield of type type-name and width constant-expression . The second type-name gives the compound type to which the member selector belongs. For example:

	#pragma token STRUCT TAG s_t#
	#pragma token MEMBER char*: struct s_t:s_t_mem#
introduces a compound token type, s_t, which has a member selector, s_t_mem, which selects an object of type char*.

Internal identifiers of member selector tokens can only reside in the member name space of the compound type to which they belong. Clearly this is also the default name space for such identifiers.

When structure or union types are declared, according to the ISO C standard there is an implied ordering on the member selectors. In particular this means that:

  • during initialisation with an initialiser-list the identified members of a structure are initialised in the order in which they were declared. The first identified member of a union is initialised;

  • the addresses of structure members will increase in the order in which they were declared.

The member selectors introduced as selector tokens are not related to any other member selectors until they are defined. There is thus no ordering on the undefined tokenised member selectors of a compound type. If a compound type has only undefined token selectors, it cannot be initialised with an initialiser-list. There will be an ordering on the defined members of a compound type and in this case, the compound type can be initialised automatically.

The decision to allow unordered member selectors has been taken deliberately in order to separate the decision of which members belong to a structure from that of where such member components lie within the structure. This makes it possible to represent extensions to APIs which require extra member selectors to be added to existing compound types.

As an example of the use of token member selectors, consider the structure lconv specified in the ISO C Standard library (section 7.4.3.1). The standard does not specify all the members of struct lconv or the order in which they appear. This type cannot be represented naturally by existing C types, but can be described by the token syntax.

There are two methods for defining selector tokens, one explicit and one implicit. As selector token identifiers do not reside in the macro name space they cannot be defined using #define statements.

Suppose A is an undefined compound token type and mem is an undefined selector token for A. If A is later defined to be the compound type B and B has a member selector with identifier mem then A.mem is defined to be B.mem providing the type of A.mem can be resolved to the type of B.mem. This is known as implicit selector token definition.

In the program shown below the redefinition of the compound type s_t causes the token for the selector mem_x to be implicitly defined to be the second member of struct s_tag. The consequential type resolution leads to the token type t_t being defined to be int.

	#pragma token TYPE t_t#
	#pragma token STRUCT s_t#
	#pragma token MEMBER t_t : s_t : mem_x#
	#pragma token MEMBER t_t : s_t : mem_y#
	struct s_tag { int a, mem_x, b; };
	typedef struct s_tag s_t;
Explicit selector token definition takes place using the pragma:

	#pragma DEFINE MEMBER type-name identifier : member-designator


	member-designator:
		identifier
		identifier . member-designator
The type-name specifies the compound type to which the selector belongs.

The identifier provides the identification of the member selector within that compound type.

The member-designator provides the definition of the selector token. It must identify a selector of a compound type.

If the member-designator is an identifier, then the identifier must be a member of the compound type specified by the type-name. If the member-designator is an identifier, id say, followed by a further member-designator, M say, then:

  • the identifier id must be a member identifying a selector of the compound type specified by type-name;

  • the type of the selector identified by id must have compound type, C say;

  • the member-designator M must identify a member selector of the compound type C.

As with implicit selector token definitions, the type of the selector token must be resolved to the type of the selector identified by the member-designator.

In the example shown below, the selector token mem is defined to be the second member of struct s which in turn is the second member of struct s_t.

	#pragma token STRUCT s_t#
	#pragma token MEMBER int : s_t : mem#
	typedef struct {int x; struct {char y; int z;} s; } s_t;
	#pragma DEFINE MEMBER s_t : mem s.z

F.9 Procedure tokens

Consider the macro SWAP defined below:

	#define SWAP(T,A,B) { \
		T x; \
		x=B; \
		B=A; \
		A=x; \
	}
SWAP can be thought of as a statement that is parameterised by a type and two expressions.

Procedure tokens are based on this concept of parameterisation. Procedure tokens reference program constructs that are parameterised by other program constructs.

There are three methods of introducing procedure tokens. These are described in the sections below.

F.9.1 General procedure tokens

The syntax for introducing a general procedure token is:

	general procedure:
		PROC { bound-toksopt | prog-parsopt } token-introduction

	simple procedure:
		PROC ( bound-toksopt ) token-introduction


	bound-toks:
		bound-token
		bound-token, bound-toks

	bound-token:
		token-introduction name-spaceopt identifier


	prog-pars:
		program-parameter
		program-parameter, prog-pars

	program parameter:
		EXP identifier
		STATEMENT identifier
		TYPE type-name-identifier
		MEMBER type-name-identifier : identifier
The final token-introduction specifies the kind of program construct being parameterised. In the current implementation of the compiler, only expressions and statements may be parameterised. The internal procedure token identifier is placed in the default name space of the program construct which it parameterises. For example, the internal identifier of a procedure token parameterising an expression would be placed in the macro name space.

The bound-toks are the bound token dependencies which describe the program constructs upon which the procedure token depends. These should not be confused with the parameters of the token. The procedure token introduced in:

	#pragma token PROC {TYPE t,EXP rvalue:t**:e|EXP e} EXP rvalue:t:dderef#
is intended to represent a double dereference and depends upon the type of the expression to be dereferenced and upon the expression itself but takes only one argument, namely the expression, from which both dependencies can be deduced.

The bound token dependencies are introduced in exactly the same way as the tokens described in the previous sections with the identifier corresponding to the internal identification of the token. No external identification is allowed. The scope of these local identifiers terminates at the end of the procedure token introduction, and whilst in scope, they hide all other identifiers in the same name space. Such tokens are referred to as "bound" because they are local to the procedure token.

Once a bound token dependency has been introduced, it can be used throughout the rest of the procedure token introduction in the construction of other components.

The prog-pars are the program parameters. They describe the parameters with which the procedure token is called. The bound token dependencies are deduced from these program parameters.

Each program parameter is introduced with a keyword expressing the kind of program construct that it represents. The keywords are as follows:

  • EXP . The parameter is an expression and the identifier following EXP must be the identification of a bound token for an expression. When the procedure token is called, the corresponding parameter must be an assignment-expression and is treated as the definition of the bound token, thereby providing definitions for all dependencies relating to that token. For example, the call of the procedure token dderef, introduced above, in the code below:

    	char f(char **c_ptr_ptr){
    		return dderef(c_ptr_ptr);
    	}
    
    causes the expression, e, to be defined to be c_ptr_ptr thus resolving the type t** to be char **. The type t is hence defined to be char, also providing the type of the expression obtained by the application of the procedure token dderef;

  • STATEMENT. The parameter is a statement. Its semantics correspond directly to those of EXP;

  • TYPE. The parameter is a type. When the procedure token is applied, the corresponding argument must be a type-name. The parameter type is resolved to the argument type in order to define any related dependencies;

  • MEMBER. The parameter is a member selector. The type-name specifies the composite type to which the member selector belongs and the identifier is the identification of the member selector. When the procedure token is applied, the corresponding argument must be a member-designator of the compound type.

Currently PROC tokens cannot be passed as program parameters.

F.9.2 Simple procedure tokens

In cases where there is a direct, one-to-one correspondence between the bound token dependencies and the program parameters a simpler form of procedure token introduction is available.

Consider the two procedure token introductions below, corresponding to the macro SWAP described earlier.

	/* General procedure introduction */
	#pragma token PROC{TYPE t,EXP lvalue:t:e1,EXP lvalue:t:e2 | \
		TYPE t,EXP e1,EXP e2 } STATEMENT SWAP#
	/* Simple procedure introduction */
	#pragma token PROC(TYPE t,EXP lvalue:t:,EXP lvalue:t: ) STATEMENT SWAP#
The simple-token syntax is similar to the bound-token syntax, but it also introduces a program parameter for each bound token. The bound token introduced by the simple-token syntax is defined as though it had been introduced with the bound-token syntax. If the final identifier is omitted, then no name space can be specified, the bound token is not identified and in effect there is a local hidden identifier.

F.9.3 Function procedure tokens

One of the commonest uses of simple procedure tokens is to represent function in-lining. In this case, the procedure token represents the in-lining of the function, with the function parameters being the program arguments of the procedure token call, and the program construct resulting from the call of the procedure token being the corresponding in-lining of the function. This is a direct parallel to the use of macros to represent functions.

The syntax for introducing function procedure tokens is:

	function-procedure:
		FUNC type-name :
The type-name must be a prototyped function type. The pragma results in the declaration of a function of that type with external linkage and the introduction of a procedure token suitable for an in-lining of the function. (If an ellipsis is present in the prototyped function type, it is used in the function declaration but not in the procedure token introduction.) Every parameter type and result type is mapped onto the token introduction:

	EXP rvalue:
The example below:

	#pragma token FUNC int(int): putchar#
declares a function, putchar, which returns an int and takes an int as its argument, and introduces a procedure token suitable for in-lining putchar. Note that:

	#undef putchar
will remove the procedure token but not the underlying function.

F.9.4 Defining procedure tokens

All procedure tokens are defined by the same mechanism. Since simple and function procedure tokens can be transformed into general procedure tokens, the definition will be explained in terms of general procedure tokens.

The syntax for defining procedure tokens is given below and is based upon the standard parameterised macro definition. However, as in the definitions of expressions and statements, the #defines of procedure token identifiers are evaluated in phase 7 of translation as described in the ISO C standard.

	#define identifier ( id-listopt ) assignment-expression

	#define identifier ( id-listopt ) statement


	id-list:
		identifier
		identifier, id-list
The id-list must correspond directly to the program parameters of the procedure token introduction. There must be precisely one identifier for each program parameter. These identifiers are used to identify the program parameters of the procedure token being defined and have a scope that terminates at the end of the procedure token definition. They are placed in the default name spaces for the kinds of program constructs which they identify.

None of the bound token dependencies can be defined during the evaluation of the definition of the procedure token since they are effectively provided by the arguments of the procedure token each time it is called. To illustrate this, consider the example below based on the dderef token used earlier.

	#pragma token PROC{TYPE t, EXP rvalue:t**:e|EXP e}EXP rvalue:t:dderef#
	#define dderef(A) (**(A))
The identifiers t and e are not in scope during the definition, being merely local identifiers for use in the procedure token introduction. The only identifier in scope is A. A identifies an expression token which is an rvalue whose type is a pointer to a pointer to a type token. The expression token and the type token are provided by the arguments at the time of calling the procedure token.

Again, the presence of a procedure token introduction can alter the semantics of a program. Consider the program below.

	#pragma token PROC (TYPE t, EXP lvalue:t:, EXP lvalue:t:) STATEMENT SWAP#
	#define SWAP(T,A,B)\
		{T x; x=B; B=A; A=x;}
	void f(int x, int y) {
		SWAP(int, x, y)
	}
The definition and call of the procedure token are extremely straightforward. However, if the procedure token introduction is absent, the swap does not take place because x refers to the variable in the inner scope.

Function procedure tokens are introduced with tentative implicit definitions, defining them to be direct calls of the functions they reference and effectively removing the in-lining capability. If a genuine definition is found later in the compilation, it overrides the tentative definition. An example of a tentative definition is shown below:

	#pragma token FUNC int(int, long) : func#
	#define func(A, B) (func) (A, B)

F.10 Tokens and APIs

In Chapter 1 we mentioned that one of the main problems in writing portable software is the lack of separation between specification and implementation of APIs. The TenDRA technology uses the token syntax described in the previous sections to provide an abstract description of an API specification. Collections of tokens representing APIs are called "interfaces". Tchk can compile programs with these interfaces in order to check applications against API specifications independently of any particular implementation that may be present on the developer's machine.

In order to produce executable code, definitions of the interface tokens must be provided on all target machines. This is done by compiling the interfaces with the system headers and libraries.

When developing applications, programmers must ensure that they do not accidentally define a token expressing an API. Implementers of APIs, however, do not want to inadvertently fail to define a token expressing that API. Token definition states have been introduced to enable programmers to instruct the compiler to check that tokens are defined when and only when they wish them to be. This is fundamental to the separation of programs into portable and unportable parts.

When tokens are first introduced, they are in the free state. This means that the token can be defined or left undefined and if the token is defined during compilation, its definition will be output as TDF.

Once a token has been given a valid definition, its definition state moves to defined. Tokens may only be defined once. Any attempt to define a token in the defined state is flagged as an error.

There are three more token definition states which may be set by the programmer. These are as follows:

  • Indefinable - the token is not defined and must not be defined. Any attempt to define the token will cause an error. When compiling applications, interface tokens should be in the indefinable state. It is not possible for a token to move from the state of defined to indefinable;

  • Committed - the token must be defined during the compilation. If no definition is found the compiler will raise an error. Interface tokens should be in the committed state when being compiled with the system headers and libraries to provide definitions;

  • Ignored - any definition of the token that is assigned during the compilation of the program will not be output as TDF;

These token definition states are set using the pragmas:

	#pragma token-op token-id-listopt

	token-op:
		define
		no_def
		ignore
		interface

	token-id-list:	
		TAGopt identifier dot-listopt token-id-listopt

	dot-list:
		. member-designator
The token-id-list is the list of tokens to which the definition state applies. The tokens in the token-id-list are identified by an identifier, optionally preceded by TAG. If TAG is present, the identifier refers to the tag name space, otherwise the macro and ordinary name spaces are searched for the identifier. If there is no dot-list present, the identifier must refer to a token. If the dot-list is present, the identifier must refer to a compound type and the member-designator must identify a member selector token of that compound type.
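
As a sketch of these forms (the identifiers below are merely illustrative), the first directive marks the token FILE as indefinable, the second commits the structure tag token lconv, and the third ignores the member selector tokens of div_t:

	#pragma no_def FILE
	#pragma define TAG lconv
	#pragma ignore div_t.quot div_t.rem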

The token-op specifies the definition state to be associated with the tokens in the token-id-list. There are three literal operators and one context dependent operator, as follows:

  1. no_def causes the token state to move to indefinable.

  2. define causes the token state to move to committed;

  3. ignore causes the token state to move to ignored;

  4. interface is the context dependent operator and is used when describing extensions to existing APIs.

As an example of an extension API, consider the POSIX stdio.h. This is an extension of the ANSI stdio.h and uses the same tokens to represent the common part of the interface. When compiling applications, nothing can be assumed about the implementation of the ANSI tokens accessed via the POSIX API so they should be in the indefinable state. When the POSIX tokens are being implemented, however, the ANSI implementations can be assumed. The ANSI tokens are then in the ignored state. (Since the definitions of these tokens will have been output already during the definition of the ANSI interface, they should not be output again.)

The interface operator has a variable interpretation to allow the correct definition state to be associated with these `base-API tokens'. The compiler associates a compilation state with each file it processes. These compilation states determine the interpretation of the interface operator within that file.

The default compilation state is the standard state. In this state the interface operator is interpreted as the no_def operator. This is the standard state for compiling applications in the presence of APIs;

Files included using:

	#include header
have the same compilation state as the file from which they were included.

The implementation compilation state is associated with files included using:

	#pragma implement interface header
In this context the interface operator is interpreted as the define operator.

Including a file using:

	#pragma extend interface header
causes the compilation state to be extension unless the file from which it was included was in the standard state, in which case the compilation state is the standard state. In the extension state the interface operator is interpreted as the ignore operator.
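
As a sketch only (the include path and the token shown are invented for illustration and do not reflect the actual TenDRA headers), an extension API description might take the form:

	/* POSIX description of stdio.h - illustrative only */
	#pragma extend interface <ansi/stdio.h>
	#pragma token FUNC int ( FILE * ) : fileno#
	#pragma interface fileno

When such a file is processed in the standard state (for example, when included by an application), the interface operator makes fileno indefinable; in the implementation state it commits fileno to be defined; and the ANSI tokens reached through the extend interface inclusion are placed in the ignored state when the POSIX description itself is being implemented.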


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc21.html100644 1750 1750 50066 6466607534 17577 0ustar brooniebroonie C Checker Reference Manual: API checking

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


G.1 - Introduction
G.2 - Specifying APIs to tcc
G.3 - API Checking Examples
G.4 - Redeclaring Objects in APIs
G.5 - Defining Objects in APIs
G.6 - Stepping Outside an API
G.7 - Using the System Headers
G.8 - Abstract API headers and API usage analysis

G API checking


G.1 Introduction

The token syntax described in the previous annex provides the means of describing an API specification independently of any particular implementation of the API. Every object in the API specification is described using the appropriate #pragma token statement. These statements are arranged in TenDRA header files corresponding to the headers comprising the API. Each API consists of a separate set of header files. For example, if the ANSI API is used, the statement:

	#include <sys/types.h>
will lead to a "header not found" error, whereas the header will be found in the POSIX API.

Where relationships exist between APIs these have been made explicit in the headers. For example, the POSIX version of stdio.h consists of the ANSI version plus some extra objects. This is implemented by making the TenDRA header describing the POSIX version of stdio.h include the ANSI version of stdio.h.


G.2 Specifying APIs to tcc

The API against which a program is to be checked is specified to tchk by means of a command-line option of the form -Yapi where api is the API name. For example, ANSI X3.159 is specified by -Yansi (this is the default API) and POSIX 1003.1 is specified by -Yposix (for a full list of the supported APIs see Chapter 2).

Extension APIs, such as X11, require special attention. The API for a program is never just X11, but X11 plus some base API, for example, X11 plus POSIX or X11 plus XPG3. These composite APIs may be specified by, for example, passing the options -Yposix -Yx5_lib (in that order) to tcc to specify POSIX 1003.1 plus X11 (Release 5) Xlib. The rule is that base APIs, such as POSIX, override the existing API, whereas extension APIs, such as X11, extend it. The command-line option -info causes tcc to print the API currently in use. For example:

	> tcc -Yposix -Yx5_lib -info file.c
will result in the message:

	tcc: Information: API is X11 Release 5 Xlib plus POSIX (1003.1).

G.3 API Checking Examples

As an example of the TenDRA compiler's API checking capacities, consider the following program which prints the names and inode numbers of all the files in the current directory:

	#include <stdio.h>
	#include <sys/types.h>
	#include <dirent.h>
	int main ()
	{
		DIR *d = opendir ( "." );
		struct dirent *e;
		if (d = NULL) return ( 1 );
		while(e=readdir(d),e!=NULL) 
		{
			printf ( "%s %lu\n", e->d_name, e->d_ino );
		}
		closedir ( d );
		return ( 0 );
	}
A first attempted compilation using strict checking:

	> tcc -Xs a.c 
results in messages to the effect that the headers <sys/types.h> and <dirent.h> cannot be found, plus a number of consequential errors. This is because tcc is checking the program against the default API, that is against the ANSI API, and the program is certainly not ANSI compliant. It does look as if it might be POSIX compliant however, so a second attempted compilation might be:

	> tcc -Xs -Yposix a.c
This results in one error and three warnings. Dealing with the warnings first, the returns of the calls of printf and closedir are being discarded and the variable d has been set and not used. The discarded function returns are deliberate, so they can be made explicit by casting them to void. The discarded assignment to d requires a little more thought - it is due to the mistyping d = NULL instead of d == NULL on line 9. The error is more interesting. In full the error message reads:

	"a.c":11
		printf ( "%s %lu\n", e->d_name, e->d_ino!!!! );
	Error:ISO[6.5.2.1]{ANSI[3.5.2.1]}: The identifier 'd_ino' is not a member of 
	'struct/union posix.dirent.dirent'.
	ISO[6.3.2.3]{ANSI[3.3.2.3]}: The second operand of '->' must be a member of 
	the struct/union pointed to by the first.
That is, struct dirent does not have a field called d_ino. In fact this is true; while the d_name field of struct dirent is specified in POSIX, the d_ino field is an XPG3 extension (This example shows that the TenDRA representation of APIs is able to differentiate between APIs at a very fine level). Therefore a third attempted compilation might be:

	> tcc -Xs -Yxpg3 a.c
This leads to another error message concerning the printf statement, that the types unsigned long and (the promotion of) ino_t are incompatible. This is due to a mismatch between the printf format string "%lu" and the type of e->d_ino. POSIX only says that ino_t is an arithmetic type, not a specific type like unsigned long. The TenDRA representation of POSIX reflects this abstract nature of ino_t, so that the potential portability error is detected. In fact it is impossible to give a printf string which works for all possible implementations of ino_t. The best that can be done is to cast e->d_ino to some fixed type like unsigned long and print that.

Hence the corrected, XPG3 conformant program reads:

	#include <stdio.h>
	#include <sys/types.h>
	#include <dirent.h>
	int main ()
	{
		DIR *d = opendir ( "." );
		struct dirent *e;
		if ( d == NULL ) return (1);
		while(e=readdir(d),e!=NULL)
		{
			( void ) printf ( "%s %lu\n", e->d_name,( unsigned long ) e->d_ino );
		}
		( void ) closedir ( d );
		return ( 0 );
	}

G.4 Redeclaring Objects in APIs

Of course, it is possible to redeclare the functions declared in the TenDRA API descriptions within the program, provided they are consistent. However, what constitutes a consistent redeclaration in the fully abstract TenDRA machine is not as straightforward as it might seem; an interesting example is malloc in the ANSI API. This is defined by the prototype:

	void *malloc ( size_t ); 
where size_t is a target dependent unsigned integral type. The redeclaration:

	void *malloc (); 
is only correct if size_t is its own integral promotion, and therefore is not correct in general.

Since it is not always desirable to remove these redeclarations (some machines may not have all the
necessary functions declared in their system headers) the TenDRA compiler has a facility to accept inconsistent redeclarations of API functions which can be enabled by using the pragma:

	#pragma TenDRA incompatible interface declaration allow
This pragma suppresses the consistency checking of re-declarations of API functions. Replacing allow by warning causes a warning to be printed. In both cases the TenDRA API description of the function takes precedence. The normal behaviour of flagging inconsistent redeclarations as errors can be restored by replacing allow by disallow in the pragma above. (There are also equivalent command-line options to tcc of the form -X:interface_decl=status, where status can be check, warn or dont.)
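
For example, a sketch of how the pragma might be used to tolerate the malloc redeclaration discussed above:

	#include <stdlib.h>
	#pragma TenDRA incompatible interface declaration warning
	void *malloc ();

Here the redeclaration is reported as a warning only, and the TenDRA API description of malloc continues to take precedence.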


G.5 Defining Objects in APIs

Since the program API is meant to define the interface between what the program defines and what the target machine defines, the TenDRA compiler normally raises an error if any attempt is made to define an object from the API in the program itself. A subtle example of this is given by compiling the program:

	#include <errno.h>
	extern int errno;
with the ANSI API. ANSI states that errno is an assignable lvalue of type int, and the TenDRA
description of the API therefore states precisely that. The declaration of errno as an extern int is therefore an inconsistent specification of errno, but a consistent implementation. Accepting the lesser of two evils, the error reported is therefore that an attempt has been made to define errno despite the fact that it is part of the API.

Note that if this same program is compiled using the POSIX API, in which errno is explicitly specified to be an extern int, the program merely contains a consistent redeclaration of errno and so does not raise an error.

The neatest workaround for the ANSI case, which preserves the declaration for those machines which need it, is as follows: if errno is anything other than an extern int it must be defined by a macro. Therefore:

	#include <errno.h>
	#ifndef errno
	extern int errno;
	#endif
should always work.

In most other examples, the definitions are more obvious. For example, a programmer might provide a memory allocator containing versions of malloc, free etc.:

	#include <stdlib.h>
	void *malloc ( size_t sz )
	{
		....
	}
	void free ( void *ptr )
	{
		....
	}
If this is deliberate then the TenDRA compiler needs to be told to ignore the API definitions of these objects and to use those provided instead. This is done by listing the objects to be ignored using the pragma:

	#pragma ignore malloc free ....
(also see section G.10). This should be placed between the API specification and the object definitions. The provided definitions are checked for conformance with the API specifications. There are special forms of this pragma to enable field selectors and objects in the tag namespace to be defined. For example, if we wish to provide a definition of the type div_t from stdlib.h we need to ignore three objects - the type itself and its two field selectors - quot and rem. The definition would therefore take the form:

	#include <stdlib.h>
	#pragma ignore div_t div_t.quot div_t.rem
	typedef struct {
		int quot;
		int rem;
	} div_t;
Similarly if we wish to define struct lconv from locale.h the definition would take the form:

	#include <locale.h>
	#pragma ignore TAG lconv TAG lconv.decimal_point 
	....
	struct lconv {
		char *decimal_point;
		....
	};
to take into account that lconv lies in the tag name space. By defining objects in the API in this way, we are actually constructing a less general version of the API. This will potentially restrict the portability of the resultant program, and so should not be done without good reason.


G.6 Stepping Outside an API

Using the TenDRA compiler to check a program against a standard API will only be effective if the appropriate API description is available to the program being tested (just as a program can only be compiled on a conventional machine if the program API is implemented on that machine). What can be done for a program whose API is not supported depends on the degree to which the program API differs from an existing TenDRA API description. If the program API is POSIX with a small extension, say, then it may be possible to express that extension to the TenDRA compiler. For large unsupported program APIs it may be possible to use the system headers on a particular machine to allow for partial program checking (see section G.7).

For small API extensions the ideal method would be to use the token syntax described in Annex F to express the program API to the TenDRA compiler; however, this is not currently encouraged because the syntax of such API descriptions is not yet firmly fixed. For the time being it may be possible to use C to express much of the information the TenDRA compiler needs to check the program. For example, POSIX specifies that sys/stat.h contains a number of macros, S_ISDIR, S_ISREG, and so on, which are used to test whether a file is a directory, a regular file, etc. Suppose that a program is basically POSIX conformant, but uses the additional macro S_ISLNK to test whether the file is a symbolic link (this is in COSE and AES, but not POSIX). A proper TenDRA description of S_ISLNK would contain the information that it is a macro taking a mode_t and returning an int; however, for checking purposes it is sufficient to merely give the types. This can be done by pretending that S_ISLNK is a function:

	#ifdef __TenDRA__
	/* For TenDRA checking purposes only */
	extern int S_ISLNK ( mode_t ); 
	/* actually a macro */
	#endif
More complex examples might require an object in the API to be defined in order to provide more information about it (see G.5). For example, suppose that a program is basically ANSI compliant, but assumes that FILE is a structure with a field file_no of type int (representing the file number), rather than a generic type. This might be expressed by:

	#ifdef __TenDRA__
	/* For TenDRA checking purposes only */
	#pragma ignore FILE
	typedef struct {
	/* there may be other fields here */
		int file_no;
	/* there may be other fields here */
	} FILE;
	#endif
The methods of API description above are what might be called "example implementations" rather than the "abstract implementations" of the actual TenDRA API descriptions. They should only be used as a last resort, when there is no alternative way of expressing the program within a standard API. For example, there may be no need to access the file_no field of a FILE directly, since POSIX provides a function, fileno, for this purpose. Extending an API in general reduces the number of potential target machines for the corresponding program.


G.7 Using the System Headers

One possibility if a program API is not supported by the TenDRA compiler is to use the set of system headers on the particular machine on which tcc happens to be running. Of course, this means that the API checking facilities of the TenDRA compiler will not be effective, but it is possible that the other program checking aspects will be of use.

The system headers are not, and indeed are not intended to be, portable. A simple-minded approach to portability checking with the system headers could lead to more portability problems being found in the system headers than in the program itself. A more sophisticated approach involves applying different compilation modes to the system headers and to the program. The program itself can be checked very rigorously, while the system headers have very lax checks applied.

This could be done directly, by putting a wrapper around each system header describing the mode to be applied to that header. However the mechanism of named compilation modes (see 2.2) provides an alternative solution. In addition to the normal -Idir command-line option, tcc also supports the option -Nname:dir, which is identical except that it also associates the identifier name with the directory dir. Once a directory has been named in this way, the name can be used in a directive:

	#pragma TenDRA directory name use environment mode
which tells tcc to apply the named compilation mode, mode, to any files included from the directory, name. This is the mechanism used to specify the checks to be applied to the system headers.

The system headers may be specified to tcc using the -Ysystem command-line option. This specifies /usr/include as the directory to search for headers and passes a system start-up file to tcc. This system start-up file contains any macro definitions which are necessary for tcc to navigate the system headers correctly, plus a description of the compilation mode to be used in compiling the system headers.

In fact, before searching /usr/include, tcc searches another directory for system headers. This is intended to hold modified versions of any system headers which cause particular problems or require extra information. For example:

  • A version of stdio.h is provided for all systems, which contains the declarations of printf and similar functions necessary for tcc to apply its printf-string checks (see 3.3.2).

  • A version of stdlib.h is provided for all systems which includes the declarations of exit and similar functions necessary for tcc to apply its flow analysis correctly (see 5.7).

  • Versions of stdarg.h and varargs.h which work with tcc are provided for all systems. Most systems' own versions of these headers rely on built-in functions which are recognised by cc (but not by tcc) to deal with variable argument lists.

The user can also use this directory to modify any system headers which cause problems. For example, not all system headers declare all the functions they should, so it might be desirable to add these declarations.

It should be noted that the system headers and the TenDRA API headers do not mix well. Both are parts of coherent systems of header files, and unless the intersection is very small, it is not usually possible to combine parts of these systems sensibly.

Even a separation, such as compiling some modules of a program using a TenDRA API description and others using the system headers, can lead to problems in the intermodular linking phase (see Chapter 9). There will almost certainly be type inconsistency errors since the TenDRA headers and the system headers will have different representations of the same object.


G.8 Abstract API headers and API usage analysis

The abstract standard headers provided with the tool are the basis for the API usage analysis checking on dump files described in Chapter 9. The declarations in each abstract header file are enclosed by the following pragmas:

	#pragma TenDRA declaration block API_name begin
	#pragma TenDRA declaration block end
API_name has a standard form e.g. api__ansi__stdio for stdio.h in the ANSI API.
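
As a sketch (the token declaration shown is illustrative rather than the actual contents of the abstract header), the ANSI description of stdio.h would be bracketed along the following lines:

	#pragma TenDRA declaration block api__ansi__stdio begin
	#pragma token FUNC int ( const char *, ... ) : printf#
	#pragma TenDRA declaration block end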

This information is output in the dump format as the start and end of a header scope, i.e.

	SSH	position	ref_no = <API_name>
	SEH	position	ref_no
The first occurrence of each identifier in the dump output contains scope information; in the case of an identifier declared in the abstract headers, this scope information will normally refer to a header scope. Since each use of the identifier can be traced back to its declaration, this provides a means of tracking API usage within the application when the abstract headers are used. The disadvantages of this method are that only APIs for which abstract headers are available can be used, and that objects which are not part of the standard APIs are not available; if an application requires such an identifier (or indeed attempts to use a standard API identifier for which the appropriate header has not been included), the resulting errors may distort or even completely halt the dump output, resulting in incomplete or incorrect analysis.

The second method of API analysis allows compilation of the application against the system headers, thereby overcoming the problems of non-standard API usage mentioned above. The dump of the application can be scanned to determine the identifiers which are used but not defined within the application itself. These identifiers form the program's external API with the system headers and libraries, and can be compared with API reference information, provided by dump output files produced from the abstract standard headers, to determine the application's API usage.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc22.html100644 1750 1750 10643 6466607535 17576 0ustar brooniebroonie C Checker Reference Manual: Specifying conversions using the token syntax

C Checker Reference Manual

January 1998

next section
previous section current document TenDRA home page document index


H.1 - Introduction
H.2 - User-defined conversions
H.3 - Specifying integer promotions

H Specifying conversions using the token syntax


H.1 Introduction

The token syntax described in Annex F can be used to fine-tune the conversion analysis and integer range checks described in section 3.2.2 and chapter 4 respectively.


H.2 User-defined conversions

In the example:

	#pragma token TYPE IP#
	#pragma no_def IP
	#pragma token PROC{ TYPE t ,EXP rvalue : t* : e | EXP e}EXP rvalue:IP:p_to_ip#
	#pragma TenDRA conversion p_to_ip allow
	void f(void){
		IP i, *pip;
		i=pip;
	}
the conversion from type pointer to IP to IP would normally result in an error. However the pragma:

	#pragma TenDRA conversion conv_list allow
allows users to define their own conversions between types that would otherwise be incompatible. Each identifier in the conv_list must be a local identifier for a PROC token (see
F.9 Procedure tokens), taking exactly one argument which must be an expression. The procedure must also deliver an expression and both the parameter and the result must be rvalues. When attempting the conversion of a value from one type to another, either by assignment or casting, if that conversion would not normally be permitted, then, for each token introduced as a conversion token by the pragma above:

  1. an attempt is made to resolve the type of the token result to the type to which the value is being converted;

  2. if the result is resolved and the value to be converted is a suitable argument for the token procedure, the token procedure is applied to implement the conversion.

If no suitable conversion token can be found, an error will be raised.


H.3 Specifying integer promotions

Whenever an integral type is used, e.g. in a call to a traditionally defined function, its promotion must be computed. If no promotion type has been specified, the compiler will raise an error. Although programs will generally use the default rules provided in the built-in compilation modes, the TenDRA compiler also allows programmers to specify their own set of integer promotion rules. Such promotions can be split into two categories: literal promotions and computed promotions.

Literal promotions. The promoted type is supplied directly using the pragma:

	#pragma promote type-name : type-name
The second type-name specifies the promoted type of the first type-name. Both types must be integral types.
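
For example, the following sketch specifies that unsigned short promotes to unsigned int, as under the traditional unsigned-preserving rules, rather than to int:

	#pragma promote unsigned short : unsigned int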

Computed promotions. The pragma:

	#pragma compute promote proc-token
allows the programmer to provide the identification of a procedure token (see
F.9 Procedure tokens) for computing the promotion type of any integral type. This token is then called whenever the promotion of a type without a literal promotion is required. The procedure token proc-token must be declared as:

	#pragma token PROC ( VARIETY ) VARIETY proc-token #
The TenDRA technology provides two pre-defined procedure tokens, identified by ~promote and ~sign-promote. These procedures perform integer promotions according to the ISO/ANSI promotion rules or according to the traditional signed promotion rules respectively. The built-in compilation environments (see chapter 2) use these tokens to specify the appropriate set of integer promotion rules.
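
For example, a minimal sketch of supplying a user-defined promotion procedure (the identifier my_promote is hypothetical and its token definition would have to be provided elsewhere):

	#pragma token PROC ( VARIETY ) VARIETY my_promote #
	#pragma compute promote my_promote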


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc4.html100644 1750 1750 11641 6466607534 17514 0ustar brooniebroonie C Checker Reference Manual: Introduction

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


1.1 - Background
1.2 - The C static checker
1.3 - About this document

1 Introduction


1.1 Background

The C program static checker was originally developed as a programming tool to aid the construction of portable programs using the Application Programming Interface (API) model of software portability; the principle underlying this approach being:
If a program is written to conform to an abstract API specification, then that program will be portable to any machine which implements the API specification correctly.
The tool was designed to address the problem of the lack of separation between an API specification and an API implementation and as such was considered as a compiler for an abstract machine.

This approach gave the tool an unusually powerful basis for static checking of C programs and a large amount of development work has resulted in the production of the TenDRA C static checker (tchk). The terms TenDRA C checker and tchk are used interchangeably in this document.


1.2 The C static checker

The C static checker is a powerful and flexible tool which can perform a number of static checks on C programs, including:

  • strict interface checking. In particular, the checker can analyse programs against abstract APIs to check their conformance to the specification. Abstract versions of most standard APIs are provided with the tool; alternatively users can define their own abstract APIs using the syntax described in Annex G;

  • checking of integer sizes, overflows and implicit integer conversions including potential 64-bit problems, against a 16 bit or 32 bit architecture profile;

  • strict ISO C standard checking, plus configurable support for many non-ISO dialect features;

  • extensive type checking, including prototype-style checking for traditionally defined functions, conversion checking, type checking on printf and scanf style argument strings and type checking between translation units;

  • variable analysis, including detection of unused variables, use of uninitialised variables, dependencies on order of evaluation in expressions and detection of unused function returns, computed values and static variables;

  • detection of unused header files;

  • configurable tests for detecting many other common programming errors;

  • complete standard API usage analysis;

  • several built-in checking environments plus support for user-defined checking profiles.


1.3 About this document

This document is designed as a reference manual detailing the features of the C static checker. It contains eleven chapters (including this introduction) and nine annexes.

  • Chapter 2: Configuring the Checker describes the built-in checking modes and the design of customised environments;

  • Chapters 3-8: Type Checking, Integral Types, Data Flow and Variable Analysis, Preprocessing Checks, ISO C and Other Dialects and Common Errors respectively;

  • Chapter 9: The Symbol Table Dump deals with the detection of unused header files, type checking across translation units and complete standard API usage analysis;

  • Chapter 10: Conditional Compilation describes the checker's approach to conditional compilation;

  • Chapter 11: References lists the references used in the production of this document;

  • Annex A: Checking Modes gives a description of the built-in environments;

  • Annex B: Command Line Options lists the command line checking options;

  • Annex C: Specifying Integral Types describes the built-in integer modes and the methods for customising them;

  • Annex D: Pragma Syntax Specification;

  • Annex E: Symbol Table Dump Specification;

  • Annex F: Token Syntax describes the methods and syntax used to produce abstract APIs;

  • Annex G: Abstract API Manipulation gives details of the ways in which TenDRA abstract APIs may be extended, combined or overridden by local declarations;

  • Annex H: Specifying Conversions with Tokens.


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc5.html100644 1750 1750 27417 6466607534 17525 0ustar brooniebroonie C Checker Reference Manual: Configuring the Checker

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


2.1 - Introduction
2.1.1 - Built-in checking profiles
2.1.2 - Minimum integer ranges
2.1.3 - API selection
2.1.4 - Individual command line checking options
2.1.5 - Construct a customised checking environment
2.2 - Scoping checking profiles

2 Configuring the Checker


2.1 Introduction

There are several methods available for configuring the checker, most of which are selected by using the relevant command line option. More detailed customisation may require special #pragma statements to be incorporated into the source code to be analysed (this commonly takes the form of a startup file). The configuration options generally act independently of one another and, unless explicitly forbidden in the descriptions below, they may be combined in any way.

2.1.1 Built-in checking profiles

Six standard checking profiles are provided with the tool and are held as a set of startup files which are automatically included in each C source file. A brief description of each profile is given below; for complete descriptions see Annex A.

  • Xs ( strict checks ) denotes strict ISO standard C with most extra checks enabled as warnings;

  • Xp ( partial checks ) denotes strict ISO standard C with some extra checks enabled;

  • Xc ( conformance ) denotes strict ISO standard C with no extra checks enabled;

  • Xw ( warning mode ) represents a `warning' oriented compilation mode. Many non-ISO standard C features are permitted with a warning. Extra checks are performed to produce warnings.

  • Xa ( `standard-ish' C ) denotes ISO standard C with syntactic relaxation and no extra checks;

  • Xt ( traditional C ) denotes traditional ( Kernighan and Ritchie ) C with no extra checks.

( The modes Xc, Xa, and Xt are meant to roughly correspond to the modes found on some System V compilers. )

The default checking environment is Xc, other environments are specified by passing the name of the environment to the checker as a command line flag, e.g. the -Xs flag specifies that the Xs environment is to be used. These environments are the base checking modes and may not be combined: if more than one base mode is specified, only the final base mode is actually used - the earlier ones are ignored.

There are also two "add-on" checking profiles, called nepc (no extra portability checks) and not_ansi, which may be used to complement any base mode. The "add-on" modes may alter the status of checks set in the base mode. The nepc mode switches off many of the checks relating to portability issues and may be specified by passing the -nepc command line option to tchk. The not_ansi mode supports a raft of non-ISO features and is specified using the -not_ansi command line flag.
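
For example, a strict base mode can be combined with an add-on profile on the command line:

	> tcc -Xs -nepc file.c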

2.1.2 Minimum integer ranges

By default the checker assumes that all integer ranges conform to the minimum ranges prescribed by the ISO C standard, i.e. char contains at least 8 bits, short and int contain at least 16 bits and long contains at least 32 bits. If the -Y32bit flag is passed to the checker it assumes that integers conform to the minimum ranges commonly found on most 32 bit machines, i.e. int contains at least 32 bits and int is strictly larger than short so that the integral promotion of unsigned short is int under the ISO C standard integer promotion rules.

2.1.3 API selection

By default, programs are checked against the standard ISO C API as specified in the ISO C standard Chapter 7. Other APIs are specified by passing the -Yapi-name flag to tchk, where api-name is one of the API designators listed below. APIs fall into two categories: base APIs and extension APIs. If more than one base API is specified to tchk, only the last one is used for checking; the others are ignored. Extension APIs, however, may be used in addition to any suitable base API.

The base APIs available are:

  • ansi ANSI X3.159;

  • iso ISO MSE 9899:1990(Amendment 1:1993(E));

  • posix POSIX 1003.1;

  • posix2 POSIX 1003.2;

  • xpg3 X/Open Portability Guide 3;

  • xpg4 X/Open Portability Guide 4;

  • cose COSE 1170;

  • svid3 System V Interface Definition 3rd Edition;

  • aes AES Revision A;

  • system System headers as main API.

and the extension APIs are:

  • bsd_extn BSD-like extension for use with POSIX etc.;

  • x5_lib X11 (Release 5) Library;

  • x5_t X11 (Release 5) Intrinsics Toolkit;

  • x5_mu X11 (Release 5) Miscellaneous Utilities;

  • x5_aw X11 (Release 5) Athena Widgets;

  • x5_mit X11 (Release 5) MIT Implementation;

  • x5_proto X11 (Release 5) Protocol Extension;

  • x5_ext X11 (Release 5) Extensions;

  • x5_private X11 (Release 5) private headers (otherwise protected );

  • motif Motif 1.1;

  • system+ System headers as last resort API. Search the system headers only for those objects for which no declaration or definition can be found within the base API.

2.1.4 Individual command line checking options

Some of the checks available can be controlled using a command line option of the form -Xopt,opt,..., where the various opt options give a comma-separated list of commands. These commands have the form test=status, where test is the name of the check, and status is either check (apply check and give an error if it fails), warn (apply check and give a warning if it fails) or dont (do not apply check). The names of checks can be found with their descriptions in Chapters 3 - 8; for example, the check for implicit function declarations described in 3.4.1 may be switched on using -X:implicit_func=check.
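
As a sketch of the comma-separated form described above (the check names are taken from later chapters):

	> tcc -X:implicit_func=warn,convert_int_implicit=check file.c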

2.1.5 Construct a customised checking environment

The individual checks performed by the C static checker are generally controlled by #pragma directives. The reason for this is that the ISO standard places no restrictions on the syntax following a #pragma preprocessing directive, and most compilers/checkers can be configured to ignore any unknown #pragma directives they encounter.

Most of these directives begin:

	#pragma TenDRA ...
and are always checked for syntactical correctness. The individual directives, together with the checks they control are described in Chapters 3 - 8. Section 2.2 describes the method of constructing a new checking profile from these individual checks.


2.2 Scoping checking profiles

Almost all the available checks are scoped (exceptions will be mentioned in the description of the check). A new checking scope may be started by inserting the pragma:

	#pragma TenDRA begin
at the outermost level. The scope runs until the matching:

	#pragma TenDRA end
directive, or to the end of the translation unit (the ISO C standard definition of a translation unit as being a source file, together with any headers or source files included using the #include preprocessing directive, less any source lines skipped by any of the conditional inclusion preprocessing directives, is used throughout this document).

Checking scopes may be nested in the obvious way.

Each new checking scope inherits its initial set of checks from the checking scope which immediately contains it (this includes the implicit main checking scope consisting of the entire source file). Any checks switched on or off within the scope apply only to that scope and any scope it contains. The set of checks applied reverts to its previous state at the end of a scope. Thus, for example:

	#pragma TenDRA variable analysis on
	/* Variable analysis is on here */
	#pragma TenDRA begin
	#pragma TenDRA variable analysis off
	/* Variable analysis is off here */
	#pragma TenDRA end
	/* Variable analysis is on again here */
Once a check has been set, any attempt to change its status within the same scope is flagged as an error. If checks need to be switched on and off in the same source file, they must be properly scoped. The built-in compilation modes have the entire source file as their scope.

The method of applying different checking profiles to different parts of a program clearly needs to take into account those properties of C which can circumvent such scoping. Consider for example:

	#pragma TenDRA begin
	#pragma TenDRA unknown escape allow
	#define STRING "hello\!"
	#pragma TenDRA end
	char * f () {
		return ( STRING ) ;
	}
The macro STRING is defined in an area where unknown escape sequences, such as \!, are allowed, but it is expanded in an area where they are not allowed (this is the default setting). The conventional approach to macro expansion would lead to the unknown escape sequence being flagged as an error, even though the user probably intended to avoid this. The checker therefore expands all macros using the checking profile in which they were defined, rather than the current checking scope.

The directives describing the user's desired checking profile could be included directly in the program itself, ideally in some configuration file which is #include'd in all source files. It is however perhaps more appropriate to store the directives as a startup file, file say, which is passed to the checker using the -ffile command line option. It should be noted that user-defined compilation modes are defined on top of a built-in mode base (normally Xc, the default mode). It is therefore important to scope the new checking profile as described above.

Names may be associated with checking scopes by using an alternative form of the begin directive:

	#pragma TenDRA begin name environment identifier
where identifier is any valid C identifier. Thereafter a statement of the form:

	#pragma TenDRA use environment identifier
changes the current checking environment to the environment associated with identifier.
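
A sketch of defining a named environment and reusing it later in the same translation unit (the identifier strict_vars is invented):

	#pragma TenDRA begin name environment strict_vars
	#pragma TenDRA variable analysis on
	#pragma TenDRA end

	/* ... later ... */
	#pragma TenDRA use environment strict_vars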

Sometimes it may be desirable to use different checking profiles for different parts of a translation unit, e.g. applying less strict checks to any system headers which may be included. The checker can be configured to apply a named checking scope, env_name, to any files included from a directory which has been named dir_name, using:

	#pragma TenDRA directory dir_name use environment env_name
The directory name must be passed to the checker using the -Ndir_name:dir command line option. This is equivalent to the usual -Idir option for specifying include paths, except that it also attaches the name dir_name to the directory.
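
Putting these pieces together, a hedged sketch (the directory name syshdrs, the environment name lax_checks and the startup file name are all invented):

	> tcc -Nsyshdrs:/usr/include -fstartup.h file.c

where startup.h contains a directive such as:

	#pragma TenDRA directory syshdrs use environment lax_checks

and lax_checks is a named checking environment defined as in the previous sketch.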


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/tdfc/tdfc6.html100644 1750 1750 102450 6466607534 17535 0ustar brooniebroonie C Checker Reference Manual: Type Checking

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


3.1 - Introduction
3.2 - Type conversions
3.2.1 - Integer to integer conversions
3.2.2 - Pointer to integer and integer to pointer conversions
3.2.3 - Pointer to pointer conversions
3.2.4 - Example: 64-bit portability issues
3.3 - Function type checking
3.3.1 - Type checking non-prototyped functions
3.3.2 - Checking printf strings
3.3.3 - Function return checking
3.4 - Overriding type checking
3.4.1 - Implicit Function Declarations
3.4.2 - Function Parameters
3.4.3 - Incompatible promoted function arguments
3.4.4 - Incompatible type qualifiers

3 Type Checking


3.1 Introduction

Type checking is relevant to two main areas of C. It ensures that all declarations referring to the same object are consistent (clearly a pre-requisite for a well-defined program). It is also the key to determining when an undefined or unexpected value has been produced due to the type conversions which arise from certain operations in C. Conversions may be explicit (conversion is specified by a cast) or implicit. Generally explicit conversions may be regarded more leniently since the programmer was obviously aware of the conversion, whereas the implications of an implicit conversion may not have been considered.


3.2 Type conversions

The only types which may be interconverted legally are integral types, floating point types and pointer types. Even if these rules are observed, the results of some conversions can be surprising and may vary on different machines. The checker can detect three categories of conversion: integer to integer conversions, pointer to integer and integer to pointer conversions, and pointer to pointer conversions.

In the default mode, the checker allows all integer to integer conversions, explicit integer to pointer and pointer to integer conversions and the explicit pointer to pointer conversions defined by the ISO C standard (all conversions between pointers to function types and other pointers are undefined according to the ISO C standard).

Checks to detect these conversions are controlled by the pragma:

	#pragma TenDRA conversion analysis status
Unless explicitly stated to the contrary, throughout the rest of the document where status appears in a pragma statement it represents one of on (enable the check and produce errors), warning (enable the check but produce only warnings), or off (disable the check). The checks may also be controlled using the command line option -X:test=state where test is one of convert_all, convert_int, convert_int_explicit, convert_int_implicit, convert_int_ptr and convert_ptr and state is check, warn or dont.

Due to the serious nature of implicit pointer to integer, implicit pointer to pointer conversions and undefined explicit pointer to pointer conversions, such conversions are flagged as errors by default. These conversion checks are not controlled by the global conversion analysis pragma above, but must be controlled by the relevant individual pragmas given in sections 3.2.2 and 3.2.3.

3.2.1 Integer to integer conversions

All integer to integer conversions are allowed in C, however some can result in a loss of accuracy and so may be usefully detected. For example, conversions from int to long never result in a loss of accuracy, but conversions from long to int may. The detection of these shortening conversions is controlled by:

	#pragma TenDRA conversion analysis ( int-int ) status
Checks on explicit conversions and implicit conversions may be controlled independently using:

	#pragma TenDRA conversion analysis ( int-int explicit ) status
and

	#pragma TenDRA conversion analysis ( int-int implicit ) status
Objects of enumerated type are specified by the ISO C standard to be compatible with an implementation-defined integer type. However, assigning a value of a different integral type other than an appropriate enumeration constant to an object of enumeration type is not really in keeping with the spirit of enumerations. The check to detect the implicit integer to enum type conversions which arise from such assignments is controlled using:

	#pragma TenDRA conversion analysis ( int-enum implicit ) status 
Note that only implicit conversions are flagged; if the conversion is made explicit, by using a cast, no errors are raised.
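
For example, with this check enabled the assignments below are treated differently (a minimal sketch):

	enum colour { red, green, blue };
	void f ( void )
	{
		enum colour c;
		int i = 2;
		c = i;			/* implicit int to enum conversion - detected */
		c = ( enum colour ) i;	/* explicit cast - not detected */
	}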

As usual status must be replaced by on, warning or off in all the pragmas listed above.

The interaction of the integer conversion checks with the integer promotion and arithmetic rules is an extremely complex issue which is further discussed in Chapter 4.

3.2.2 Pointer to integer and integer to pointer conversions

Integer to pointer and pointer to integer conversions are generally unportable and should always be specified by means of an explicit cast. The exception is that the integer zero and null pointers are deemed to be inter-convertible. As in the integer to integer conversion case, explicit and implicit pointer to integer and integer to pointer conversions may be controlled separately using:

	#pragma TenDRA conversion analysis ( int-pointer explicit ) status
and

	#pragma TenDRA conversion analysis ( int-pointer implicit ) status
or both checks may be controlled together by:

	#pragma TenDRA conversion analysis ( int-pointer ) status
where status may be on, warning or off as before and pointer-int may be substituted for int-pointer.
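
For example, a minimal sketch of the conversions these checks cover:

	void g ( void )
	{
		char *p;
		int i = 42;
		p = i;			/* implicit integer to pointer conversion - an error by default */
		p = ( char * ) i;	/* explicit conversion - controlled separately */
		p = 0;			/* the integer zero converts to the null pointer - always allowed */
	}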

3.2.3 Pointer to pointer conversions

According to the ISO C standard, section 6.3.4, the only legal pointer to pointer conversions are explicit conversions between:

  1. a pointer to an object or incomplete type and a pointer to a different object or incomplete type. The resulting pointer may not be valid if it is improperly aligned for the type pointed to;

  2. a pointer to a function of one type and a pointer to a function of another type. If a converted pointer, used to call a function, has a type that is incompatible with the type of the called function, the behaviour is undefined.

Except for conversions to and from the generic pointer which are discussed below, all other conversions, including implicit pointer to pointer conversions, are extremely unportable.

All pointer to pointer conversions may be flagged as errors using:

	#pragma TenDRA conversion analysis ( pointer-pointer ) status
Explicit and implicit pointer to pointer conversions may be controlled separately using:

	#pragma TenDRA conversion analysis ( pointer-pointer explicit ) status
and

	#pragma TenDRA conversion analysis ( pointer-pointer implicit ) status
where, as before, status may be on, warning or off.

Conversion between a pointer to a function type and a pointer to a non-function type is undefined by the ISO C standard and should generally be avoided. The checker can however be configured to treat function pointers as object pointers for conversion using:

	#pragma TenDRA function pointer as pointer permit
Unless explicitly stated to the contrary, throughout the rest of the document where permit appears in a pragma statement it represents one of allow (allow the construct and do not produce errors), warning (allow the construct but produce warnings when it is detected), or disallow (produce errors if the construct is detected). Here, allow does not produce errors or warnings for function pointer <-> pointer conversions, warning produces a warning when such conversions are detected, and disallow produces an error for them.

The generic pointer, void *, is a special case. All conversions of pointers to object or incomplete types to or from a generic pointer are allowed. Some older dialects of C used char * as a generic pointer. This dialect feature may be allowed, allowed with a warning, or disallowed using the pragma:

	#pragma TenDRA compatible type : char * == void * permit
where permit is allow, warning or disallow as before.
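
For example, a sketch of old-style code which relies on char * being treated as a generic pointer (the function oldalloc is invented):

	extern char *oldalloc ();
	void g ( void )
	{
		int *ip = oldalloc ( 10 );	/* accepted only if char * == void * is allowed */
	}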

3.2.4 Example: 64-bit portability issues

64-bit machines form the "next frontier" of program portability. Most of the problems involved in 64-bit portability are type conversion problems. The assumptions that were safe on a 32-bit machine are not necessarily true on a 64-bit machine - int may not be the same size as long, pointers may not be the same size as int, and so on. This example illustrates the way in which the checker's conversion analysis tests can detect potential 64-bit portability problems.

Consider the following code:

	#include <stdio.h>
	void print ( string, offset, scale )
	char *string;
	unsigned int offset;
	int scale;
	{
		string += ( scale * offset );
		( void ) puts ( string );
		return;
	}
	
	int main ()
	{
		char *s = "hello there";
		print ( s + 4, 2U, -2 );
		return ( 0 );
	}
This appears to be fairly simple - the offset of 2U scaled by -2 cancels out the offset in s + 4, so the program just prints "hello there". Indeed, this is what happens on most machines. When ported to a particular 64-bit machine, however, it core dumps. The fairly subtle reason is that the composite offset, scale * offset, is actually calculated as an unsigned int by the ISO C arithmetic conversion rules. So the answer is not -4. Strictly speaking it is undefined, but on virtually all machines it will be UINT_MAX - 3. The fact that adding this offset to string is equivalent to adding -4 is only true on machines on which pointers have the same size as unsigned int. If a pointer contains 64 bits and an unsigned int contains 32 bits, the result is 2^32 bytes out.

So the error occurs because of the failure to spot that the offset being added to string is unsigned. All mixed integer type arithmetic involves some argument conversion. In the case above, scale is converted to an unsigned int and that is multiplied by offset to give an unsigned int result. If the implicit int->int conversion checks (3.2.1) are enabled, this conversion is detected and the problem may be avoided.
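
One possible correction (an assumption on our part, not taken from the manual) is to force the scaling arithmetic to be carried out in a signed type wide enough to hold the result:

	string += ( long ) scale * ( long ) offset;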


3.3 Function type checking

The importance of function type checking in C lies in the conversions which can result from type mismatches between the arguments in a function call and the parameter types assumed by its definition or between the specified type of the function return and the values returned within the function definition. Until the introduction of function prototypes into ISO standard C, there was little scope for detecting the correct typing of functions. Traditional C allows for absolutely no type checking of function arguments, so that totally bizarre functions, such as:

	int f ( n ) int n ; {
		return ( f ( "hello", "there" ) ) ;
	}
are allowed, although their effect is undefined. However, the move to fully prototyped programs has been relatively slow. This is partially due to an understandable reluctance to change existing, working programs, but the desire to maintain compatibility with existing C compilers, some of which still do not support prototypes, is also a powerful factor. Prototypes are allowed in the checker's default mode but tchk can be configured to allow, allow with a warning or disallow prototypes, using:

	#pragma TenDRA prototype permit
where permit is allow, disallow or warning.

Even if prototypes are not supported the checker has a facility, described below, for detecting incorrectly typed functions.

3.3.1 Type checking non-prototyped functions

The checker offers a method for applying prototype-like checks to traditionally defined functions, by introducing the concept of "weak" prototypes. A weak prototype contains function parameter type information, but has none of the automatic argument conversions associated with a normal prototype. Instead weak prototypes imply the usual argument promotion passing rules for non-prototyped functions. The type information required for a weak prototype can be obtained in three ways:

  1. A weak prototype may be declared using the syntax:

    	int f WEAK ( char, char * ) ;
    
    where WEAK represents any keyword which has been introduced using:

    	#pragma TenDRA keyword WEAK for weak
    
    An alternative definition of the keyword must be provided for other compilers. For example, the following definition would make system compilers interpret weak prototypes as normal (strong) prototypes:

    	#ifdef __TenDRA__
    	#pragma TenDRA keyword WEAK for weak
    	#else
    	#define WEAK
    	#endif
    
    The difference between conventional prototypes and weak prototypes can be illustrated by considering the normal prototype for f:
    	int f (char,char *);
    
    When the prototype is present, the first argument to f would be passed as a char. Using the weak prototype, however, results in the first argument being passed as the integral promotion of char, that is to say, as an int.

    There is one limitation on the declaration of weak prototypes - declarations of the form:

    	int f WEAK() ;
    
    are not allowed. If a function has no arguments, this should be stated explicitly as:

    	int f WEAK( void ) ;
    
    whereas if the argument list is not specified, weak prototypes should be avoided and a traditional declaration used instead:

    	extern int f ();
    
    The checker may be configured to allow, allow with a warning or disallow weak prototype declarations using:
    	#pragma TenDRA prototype ( weak ) permit
    
    where permit is replaced by allow, warning or disallow as appropriate. Weak prototypes are not permitted in the default mode.

  2. Information can be deduced from a function definition. For example, the function definition:

    	int f(c,s) char c; char *s;{...}
    
    is said to have weak prototype:

    	int f WEAK (char,char *);
    
    The checker automatically constructs a weak prototype for each traditional function definition it encounters and if the weak prototype analysis mode is enabled (see below) all subsequent calls of the function are checked against this weak prototype.

    For example, in the bizarre function in 3.3, the weak prototype:

    	int f WEAK ( int );
    
    is constructed for f. The subsequent call to f:

    	f ( "hello", "there" );
    
    is then rejected by comparison with this weak prototype - not only is f called with the wrong number of arguments, but the first argument has a type incompatible with (the integral promotion of) int.

  3. Information may be deduced from the calls of a function. For example, in:

    	extern void f ();
    	void g ()
    	{
    		f ( 3 );
    		f ( "hello" );
    	}
    
    we can infer from the first call of f that f takes one integral argument. We cannot deduce the type of this argument, only that it is an integral type whose promotion is int (since this is how the argument is passed). We can therefore infer a partial weak prototype for f:

    	void f WEAK ( t );
    
    for some integral type t which promotes to int. Similarly, from the second call of f we can infer the weak prototype:

    	void f WEAK ( char * );
    
    (the argument passing rules are much simpler in this case). Clearly the two inferred prototypes are incompatible, so an error is raised.

    Note that prototypes inferred from function calls alone cannot ensure that the uses of the function within a source file are correct, merely that they are consistent. The presence of an explicit function declaration or definition is required for a definitive "right" prototype.

    Null pointers cause particular problems with weak prototypes inferred from function calls. For example, in:

    	#include <stdio.h>
    	extern void f ();
    	void g () {
    		f ( "hello" );
    		f( NULL );
    	}

    the argument in the first call of f is char* whereas in the second it is int (because NULL is defined to be 0). Although NULL can be converted to char*, it is not necessarily passed to procedures in the same way (for example, pointers may have 64 bits and ints only 32 bits). It is almost always necessary to cast NULL to the appropriate pointer type in weak procedure calls.
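
    For example, the second call above might be written with an explicit cast - a minimal sketch of the fix the text recommends:

    	f ( ( char * ) NULL );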

Functions for which explicitly declared weak prototypes are provided are always type-checked by the checker. Weak prototypes deduced from function declarations or calls are used for type checking if the weak prototype analysis mode is enabled using:

	#pragma TenDRA weak prototype analysis status
where status is one of on, warning and off as usual. Weak prototype analysis is not performed in the default mode.

There is also an equivalent command line option of the form -X:weak_proto=state, where state can be check, warn or dont.

This section ends with two examples which demonstrate some of the less obvious consequences of weak prototype analysis.

Example 1: An obscure type mismatch

As stated above, the promotion and conversion rules for weak prototypes are precisely those for traditionally declared and defined functions. Consider the program:

	void f ( n )long n;{
		printf ( "%ld\n", n );
	}
	void g (){
		f ( 3 );
	}
The literal constant 3 is an int and hence is passed as such to f. f is however expecting a long, which can lead to problems on some machines. Introducing a strong prototype declaration of f for those compilers which understand them:

	#ifdef __STDC__ 
		void f ( long );
	#endif
will produce correct code - the arguments to a function declared with a prototype are converted to the appropriate types, so that the literal is actually passed as 3L. This solves the problem for compilers which understand prototypes, but does not actually detect the underlying error. Weak prototypes, because they use the traditional argument passing rules, do detect the error. The constructed weak prototype:

	void f WEAK ( long );
conveys the type information that f is expecting a long, but accepts the function arguments as passed rather than converting them. Hence, the error of passing an int argument to a function expecting a long is detected.

Many programs, seeking to have prototype checks while preserving compilability with non-prototype compilers, adopt a compromise approach of traditional definitions plus prototype declarations for those compilers which understand them, as in the example above. While this ensures correct argument passing in the prototype case, as the example shows it may obscure errors in the non-prototype case.

Example 2: Weak prototype checks in defined programs

In most cases a program which fails to compile with the weak prototype analysis enabled is undefined. ISO standard C does however contain an anomalous rule on equivalence of representation. For example, in:

	extern void f ();
	void g () {
		f ( 3 );
		f ( 4U );
	}
the TenDRA checker detects an error - in one instance f is being passed an int, whereas in the other it is being passed an unsigned int. However, the ISO C standard states that, for values which fit into both types, the representation of a number as an int is equal to that as an unsigned int, and that values with the same representation are interchangeable in procedure arguments. Thus the program is defined. The justification for raising an error or warning for this program is that the prototype analysis is based on types, not some weaker notion of "equivalence of representation". The program may be defined, but it is not type correct.

Another case in which a program is defined, but not correct, is where an unnecessary extra argument is passed to a function. For example, in:

	void f ( a ) int a; {
		printf ( "%d\n", a );
	}
	void g () {
		f ( 3, 4 );
	}
the call of f is defined, but is almost certainly a mistake.

3.3.2 Checking printf strings

Normally functions which take a variable number of arguments offer only limited scope for type checking. For example, given the prototype:

	int execl ( const char *, const char *, ... );
the first two arguments may be checked, but we have no hold on any subsequent arguments (in fact in this example they should all be const char *, but C does not allow this information to be expressed). Two classes of functions of this form, namely the printf and scanf families, are so common that they warrant special treatment. If one of these functions is called with a constant format string, then it is possible to use this string to deduce the types of the extra arguments that it is expect ing. For example, in:

	printf ( "%ld", 4 );
the format string indicates that printf is expecting a single additional argument of type long. We can therefore deduce a quasi-prototype which this particular call to printf should conform to, namely:

	int printf ( const char *,long );
In fact this is a mixture of a strong prototype and a weak prototype. The first argument comes from the actual prototype of printf, and hence is strong. All subsequent arguments correspond to the ellipsis part of the printf prototype, and are passed by the normal promotion rules. Hence the long component of the inferred prototype is weak (see 3.3.1). This means that the error in the call to printf - the integer literal is passed as an int when a long is expected - is detected.

In order for this check to take place, the function declaration needs to tell the checker that the function is like printf. This is done by introducing a special type, PSTRING say, to stand for a printf string, using:

	#pragma TenDRA type PSTRING for ... printf
For most purposes this is equivalent to:

	typedef const char *PSTRING;
except that when a function declaration:

	int f ( PSTRING, ... );
is encountered the checker knows to deduce the types of the arguments corresponding to the ... from the PSTRING argument (the precise rules it applies are those set out in the XPG4 definition of fprintf). If this mechanism is used to apply printf style checks to user defined functions, an alternative definition of PSTRING for conventional compilers must be provided. For example:

	#ifdef __TenDRA__
	#pragma TenDRA type PSTRING for ... printf
	#else
	typedef const char *PSTRING;
	#endif
There are similar rules with scanf in place of printf.
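
As an illustration, the same mechanism might be applied to a user-defined printf-like function. This is a hypothetical sketch: the function log_msg is invented for the example, and PSTRING is assumed to have been introduced as above.

	extern int log_msg ( PSTRING, ... );

	void g ( long n )
	{
		log_msg ( "%ld items\n", n );	/* n matches %ld */
		log_msg ( "%d items\n", n );	/* flagged: long passed where %d expects int */
	}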

The TenDRA descriptions of the standard APIs use this mechanism to describe those functions, namely printf, fprintf and sprintf, and scanf, fscanf and sscanf which are of these forms. This means that the checks are switched on for these functions by default. However, these descriptions are under the control of a macro, __NO_PRINTF_CHECKS, which, if defined before stdio.h is included, effectively switches the checks off. This macro is defined in the start-up files for certain checking modes, so that the checks are disabled in these modes (see chapter 2). The checks can be enabled in these cases by #undef'ing the macro before including stdio.h. There are equivalent command-line options to tchk of the form -X:printf=state, where state can be check or dont, which respectively undefine and define this macro.

3.3.3 Function return checking

Function returns normally present no difficulties. The return value is converted, as if by assignment, to the function return type, so that the problem is essentially one of type conversion (see 3.2). There is however one anomalous case. A plain return statement, without a return value, is allowed in functions returning a non-void type, the value returned being undefined. For example, in:

	int f ( int c )
	{
		if ( c ) return ( 1 );
		return;
	}
the value returned when c is zero is undefined. The test for detecting such void returns is controlled by:

	#pragma TenDRA incompatible void return permit
where permit may be allow, warning or disallow as usual.

There are also equivalent command line options to tchk of the form -X:void_ret=state, where state can be check, warn or dont. Incompatible void returns are allowed in the default mode and of course, plain return statements in functions returning void are always legal.

This check also detects functions which do not contain a return statement, but fall out of the bottom of the function as in:

	int f ( int c )
	{
		if ( c ) return ( 1 );
	}
Occasionally it may be the case that such a function is legal, because the end of the function is not reached. Unreachable code is discussed in section 5.2.


3.4 Overriding type checking

There are several commonly used features of C, some of which are even allowed by the ISO C standard, which can circumvent or hinder the type-checking of a program. The checker may be configured either to enforce the absence of these features or to support them with or without a warning, as described below.

3.4.1 Implicit Function Declarations

The ISO C standard states that any undeclared function is implicitly assumed to return int. For example, in ISO C:

	int f ( int c ) {
		return ( g( c )+1 );
	}
the undeclared function g is inferred to have a declaration:

	extern int g ();
This can potentially lead to program errors. The definition of f would be valid if g actually returned double, but incorrect code would be produced. Again, an explicit declaration might give us more information about the function argument types, allowing more checks to be applied.
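
For instance, a mismatch of this kind might arise as follows (a hypothetical two-file sketch; g is assumed to be defined in a separate translation unit):

	/* file1.c - g is undeclared, so it is assumed to return int */
	int f ( int c ) {
		return ( g ( c ) + 1 );
	}

	/* file2.c - the actual definition returns double */
	double g ( int c ) {
		return ( c * 2.5 );
	}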

Therefore the best chance of detecting bugs in a program and ensuring its portability comes from having each function declared before it is used. This means detecting implicit declarations and replacing them by explicit declarations. By default, implicit function declarations are allowed; however, the pragma:

	#pragma TenDRA implicit function declaration status
may be used to determine how tchk handles implicit function declarations. Status is replaced by on to allow implicit declarations, warning to allow implicit declarations but to produce a warning when they occur, or off to prevent implicit declarations and raise an error where they would normally be used.

(There are also equivalent command-line options to tcc of the form -X:implicit_func=state, where state can be check, warn or dont.)

This test assumes an added significance in API checking. If a programmer wishes to check that a certain program uses nothing outside the POSIX API, then implicitly declared functions are a potential danger area. A function from outside POSIX could be used without being detected because it has been implicitly declared. Therefore, the detection of implicitly declared functions is vital to rigorous API checking.

3.4.2 Function Parameters

Many systems pass function arguments of differing types in the same way and programs are sometimes written to take advantage of this feature. The checker has a number of options to resolve type mismatches which may arise in this way and would otherwise be flagged as errors:

  1. Type-type compatibility

    When comparing function prototypes for compatibility, the function parameter types must be compared. If the parameter types would otherwise be incompatible, they are treated as compatible if they have previously been introduced with a type-type parameter compatibility pragma, i.e.

    	#pragma TenDRA argument type-name as type-name
    
    where type-name is the name of any type. This pragma is transitive and the second type in the pragma is taken to be the final type of the parameter. A short sketch of these pragmas follows this list.

  2. Type-ellipsis compatibility

    Two function prototypes with different numbers of arguments are compatible if:

    • both prototypes have an ellipsis;

    • each parameter type common to both prototypes is compatible;

    • each extra parameter type in the prototype with more parameters is either specified in a type-ellipsis compatibility pragma or is type-type compatible (see above) with a type that is specified in a type-ellipsis compatibility pragma.

    Type-ellipsis compatibility is introduced using the pragma:

    	#pragma TenDRA argument type-name as ...
    
    where again type-name is the name of any type.

  3. Ellipsis compatibility

    If, when comparing two function prototypes for compatibility, one has an ellipsis and the other does not, but otherwise the two types would be compatible, then if an `extra' ellipsis is allowed, the types are treated as compatible. The pragma controlling ellipsis compatibility is:

    	#pragma TenDRA extra ... permit
    
    where permit may be allow, disallow or warning as usual.
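
As an illustration, the first two pragmas might be used as follows (a hypothetical sketch; the particular types chosen are purely illustrative):

	#pragma TenDRA argument long as int
	#pragma TenDRA argument double as ...

	int f ( int );
	int f ( long );			/* compatible: long parameters are treated as int */

	int g ( int, ... );
	int g ( int, double, ... );	/* compatible: the extra double parameter matches the ellipsis */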

3.4.3 Incompatible promoted function arguments

Mixing the use of prototypes with old-fashioned function definitions can result in incorrect code. For example, in the program below the function argument promotion rules are applied to the definition of f, making it incompatible with the earlier prototype (a is converted to the integer promotion of char, i.e. int).

	int f(char);
	int f(a)char a;{
		...
	}
An incompatible type error is raised in the default checking mode. The check for incompatible types which arise from mixtures of prototyped and non-prototyped function declarations and definitions is controlled using:

	#pragma TenDRA incompatible promoted function argument permit

Permit may be replaced by allow, warning or disallow as normal. The parameter type in the resulting function type is the promoted parameter type.

3.4.4 Incompatible type qualifiers

The declarations

	const int a;
	int a;
are not compatible according to the ISO C standard because the qualifier, const, is present in one declaration but not in the other. Similar rules hold for volatile qualified types. By default, tchk produces an error when declarations of the same object contain different type qualifiers. The check is controlled using:

	#pragma TenDRA incompatible type qualifier permit
where the options for permit are allow, disallow or warning.


Part of the TenDRA Web.
Crown Copyright © 1998.

C Checker Reference Manual: Integral Types

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


4.1 - Introduction
4.2 - Integer promotion rules
4.3 - Arithmetic operations on integer types
4.4 - Interaction with the integer conversion checks
4.5 - Target dependent integral types
4.5.1 - Integer literals
4.5.2 - Abstract API types
4.6 - Integer overflow checks
4.7 - Integer operator checks
4.8 - Support for 64 bit integer types (long long)

4 Integral Types


4.1 Introduction

The checks described in the previous chapter involved the detection of conversions which could result in undefined values. Certain conversions involving integral types, however, are defined in the ISO C standard and so might be considered safe and unlikely to cause problems. This unfortunately is not the case: some of these conversions may still result in a change in value; the actual size of each integral type is implementation-dependent; and the "old-style" integer conversion rules which predate the ISO standard are still in common use. The checker provides support for both ISO and traditional integer promotion rules. The set of rules used may be specified independently of the two integral range scenarios, 16 bit (default) and 32 bit, described in section 2.1.2.

The means of specifying alternative sets of promotion rules, their interaction with the conversion checks described in section 3.2, and the additional checks which may be performed on integers and integer operations are described in the remainder of this chapter.


4.2 Integer promotion rules

The ISO C standard rules may be summarised as follows: long integral types promote to themselves; other integral types promote to whichever of int or unsigned int they fit into. In full the promotions are:

  • char -> int

  • signed char -> int

  • unsigned char -> int

  • short -> int

  • unsigned short -> int or unsigned int

  • int -> int

  • unsigned int -> unsigned int

  • long -> long

  • unsigned long -> unsigned long

Note that even with these simple built-in types, there is a degree of uncertainty, namely concerning the promotion of unsigned short. On most machines, int is strictly larger than short so the promotion of unsigned short is int. However, it is possible for short and int to have the same size, in which case the promotion is unsigned int. When using the ISO C promotion rules, the checker usually avoids making assumptions about the implementation by treating the promotion of unsigned short as an abstract integral type. If, however, the -Y32bit option is specified, int is assumed to be strictly larger than short, and unsigned short promotes to int.

The traditional C integer promotion rules are often referred to as the signed promotion rules. Under these rules, long integral types promote to themselves, as in ISO C, but the other integral types promote to unsigned int if they are qualified by unsigned, and int otherwise. Thus the signed promotion rules may be represented as follows:

  • char -> int

  • signed char -> int

  • unsigned char -> unsigned int

  • short -> int

  • unsigned short -> unsigned int

  • int -> int

  • unsigned int -> unsigned int

  • long -> long

  • unsigned long -> unsigned long

The traditional promotion rules are applied in the Xt built-in environment only. All of the other built-in environments specify the ISO C promotion rules. Users may also specify their own rules for integer promotions and minimum integer ranges; the methods for doing this are described in Annex H.


4.3 Arithmetic operations on integer types

The ISO C standard rules for calculating the type of an arithmetic operation involving two integer types are as follows - work out the integer promotions of the types of the two operands, then:

  • If either promoted type is unsigned long, the result type is unsigned long;

  • Otherwise, if one promoted type is long and the other is unsigned int, then if a long int can represent all values of an unsigned int, the result type is long; otherwise the result type is unsigned long;

  • Otherwise, if either promoted type is long, the result type is long;

  • Otherwise, if either promoted type is unsigned int, the result type is unsigned int;

  • Otherwise the result type is int.

Both promoted values are converted to the result type, and the operation is then applied.


4.4 Interaction with the integer conversion checks

A simple-minded implementation of the integer conversion checks described in 3.2 would interact badly with these rules. Consider, for example, adding two values of type char:

	char f ( char a, char b )
	{
		char c = a + b ;
		return ( c ) ;
	}
The various stages in the calculation of c are as follows - a and b are converted to their promotion type, int, added together to give an int result, which is converted to a char and assigned to c. The conversions of a and b from char to int are always safe, and so present no difficulties to the integer conversion checks. The conversion of the result from int to char, however, is precisely the type of value destroying conversion which these checks are designed to detect.

Obviously, an integer conversion check which flagged all char arithmetic would never be used, thereby losing the potential to detect many subtle portability errors. For this reason, the integer conversion checks are more sophisticated. In all typed languages, the type is used for two purposes - for static type checking and for expressing information about the actual representation of data on the target machine. Essentially it is a confusion between these two roles which leads to the problems above. The C promotion and arithmetic rules are concerned with how data is represented and manipulated, rather than the underlying abstract types of this data. When a and b are promoted to int prior to being added together, this is only a change in representation; at the conceptual level they are still char's. Again, when they are added, the result may be represented as an int, but conceptually it is a char. Thus the assignment to c, an actual char, is just a change in representation, not a change in conceptual type.

So each expression may be regarded as having two types - a conceptual type which stands for what the expression means, and a representational type which stands for how the expression is to be represented as data on the target machine. In the vast majority of expressions these types coincide; however, the integral promotion and arithmetic conversions are changes of representational, not conceptual, types. The integer conversion checks are concerned with detecting changes of conceptual type, since it is these which are most likely to be due to actual programming errors.

It is possible to define integral types within the TenDRA extensions to C in which the split between concept and representation is made explicit. The pragma:

	#pragma TenDRA keyword TYPE for type representation
may be used to introduce a keyword TYPE for this purpose (as with all such pragmas, the precise keyword to be used is left to the user). Once this has been done, TYPE ( r, t ) may be used to represent a type which is conceptually of type t but is represented as data like type r. Both t and r must be integral types. For example:

	TYPE ( int, char ) a ;
declares a variable a which is represented as an int, but is conceptually a char.

In order to maintain compatibility with other compilers, it is necessary to give TYPE a sensible alternative definition. For all but conversion checking purposes, TYPE ( r, t ) is identical to r, so a suitable definition is:

	#ifdef __TenDRA__
	#pragma TenDRA keyword TYPE for type representation
	#else
	#define TYPE( r, t ) r
	#endif

4.5 Target dependent integral types

Since the checker uses only information about the minimum guaranteed ranges of integral types, integer values for which the actual type of the values is unknown may arise. Integer values of undetermined type generally arise in one of two ways: through the use of integer literals and from API types which are not completely specified.

4.5.1 Integer literals

The ISO C rules on the type of integer literals are set out as follows. For each class of integer literals a list of types is given. The type of an integer literal is then the first type in the appropriate list which is large enough to contain the value of the integer literal. The class of the integer literal depends on whether it is decimal, hexadecimal or octal, and whether it is qualified by U (or u) or L (or l) or both. The rules may be summarised as follows:

  • decimal -> int or long or unsigned long

  • hex or octal -> int or unsigned int or long or unsigned long

  • any + U -> unsigned int or unsigned long

  • any + L -> long or unsigned long

  • any + UL -> unsigned long

These rules are applied in all the built-in checking modes except Xt. Traditional C does not have the U and L qualifiers, so if the Xt mode is used, these qualifiers are ignored and all integer literals are treated as int, long or unsigned long, depending on the size of the number.

If a number fits into the minimal range for the first type of the appropriate list, then it is of that type; otherwise its type is undetermined and is said to be target dependent. The checker treats target dependent types as abstract integral types which may lead to integer conversion problems. For example, in:

	int f ( int n ) {
		return ( n & 0xff00 ) ;
	}
the type of 0xff00 is target dependent, since it does not fit into the minimal range for int specified by the ISO C standard (this is detected by the integer overflow analysis described in section 4.6). The arithmetic conversions resulting from the & operation are detected by the checker's conversion analysis. Note that if the -Y32bit option is specified to tchk, an int is assumed to contain at least 32 bits. In this case, 0xff00 fits into the type int, and so this is the type of the integer literal. No invalid integer conversion is then detected.

4.5.2 Abstract API types

Target dependent integral types also occur in API specifications and may be encountered when checking against one of the implementation-independent APIs provided with the checker. The commonest example of this is size_t, which is stated by the ISO C standard to be a target dependent unsigned integral type, and which arises naturally within the language as the type of a sizeof expression.

The checker has its own internal version of size_t, wchar_t and ptrdiff_t for evaluating static compile-time expressions. These internal types are compatible with the ISO C specification of size_t, wchar_t and ptrdiff_t, and thus are compatible with any conforming definitions of these types found in included files. However, when checking the following program against the system headers, a warning is produced on some machines concerning the implicit conversion of an unsigned int to type size_t:

	#include <stdlib.h>
	int main() {
		size_t size;
		size = sizeof(int);
	}
The system header on the machine in question actually defines size_t to be a signed int (this of course contravenes the ISO C standard) but the compile time function sizeof returns the checker's internal version of size_t which is an abstract unsigned integral type. By using the pragma:

	#pragma TenDRA set size_t:signed int
the checker can be instructed to use a different internal definition of size_t when evaluating the sizeof function and the error does not arise. Equivalent options are also available for the ptrdiff_t and wchar_t types.


4.6 Integer overflow checks

Given the complexity of the rules governing the types of integers and results of integer operations, as well as the variation of integral ranges with machine architecture, it is hardly surprising that unexpected results of integer operations are at the root of many programming problems. These problems can often be hard to track down and may suddenly appear in an application which was previously considered "safe", when it is moved to a new system. Since the checker supports the concept of a guaranteed minimum size of an integer it is able to detect many potential problems involving integer constants. The pragma:

	#pragma TenDRA integer overflow analysis status
where status is on, warning or off, controls a set of checks on arithmetic expressions involving integer constants. These checks cover overflow, use of constants exceeding the minimum guaranteed size for their type and division by zero. They are not enabled in the default mode.

There are two special cases of integer overflow for which checking is controlled separately (brief examples of both follow the list):

  1. Bitfield sizes. Obviously, the size of a bitfield must be smaller than or equal to the minimum size of its integral type. A bitfield which is too large is flagged as an error in the default mode. The check on bitfield sizes is controlled by:
    	#pragma TenDRA bitfield overflow permit
    
    where permit is one of allow, disallow or warning.

  2. Octal and hexadecimal escape sequences. According to the ISO C standard, the value of an octal or hexadecimal escape sequence shall be in the range of representable values for the type unsigned char for an integer character constant, or the unsigned type corresponding to wchar_t for a wide character constant. The check on escape sequence sizes is controlled by:
    	#pragma TenDRA character escape overflow permit
    
    where the options for permit are allow, warning and disallow. The check is switched on by default.
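
The following fragment sketches the kind of construct each of these checks is aimed at (illustrative only; whether the bitfield case is reported depends on the integral range scenario in use):

	struct s {
		int big : 20 ;		/* wider than the 16 bit minimum range of int */
	} ;

	char c = '\777' ;		/* octal escape outside the range of unsigned char */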


4.7 Integer operator checks

The results of some integer operations are undefined by the ISO C standard for certain argument types. Others are implementation-defined or simply likely to produce unexpected results. In the default mode such operations are processed silently; however, a set of checks on operations involving integer constants may be controlled using:

	#pragma TenDRA integer operator analysis status
where status is replaced by on, warning or off. This pragma enables checks on the following (a short illustration appears after the list):

  • shift operations where an expression is shifted by a negative number or by an amount greater than or equal to the width in bits of the expression being shifted;

  • right shift operation with a negative value of signed integral type as the first argument;

  • division operation with a negative operand;

  • test for an unsigned value being greater than or equal to 0 or less than 0 (these are always true or false respectively);

  • conversion of a negative constant value to an unsigned type;

  • application of unary - operator to an unsigned value.
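
A fragment illustrating some of these cases might look as follows (a sketch only; each commented line corresponds to one of the constant-expression problems listed above):

	void f ( unsigned int u )
	{
		int a = 1 << 40 ;	/* shift by at least the width in bits of int */
		int b = -7 >> 1 ;	/* right shift of a negative signed value */
		int c = -7 / 2 ;	/* division with a negative operand */
		unsigned int d = -1 ;	/* negative constant converted to an unsigned type */
		if ( u < 0 ) return ;	/* comparison is always false for an unsigned value */
	}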


4.8 Support for 64 bit integer types (long long)

Although the use of long long to specify a 64 bit integer type is not supported by the ISO C standard, it is becoming increasingly common in use. By default, tchk does not support the use of long long but the checker can be configured to support the long long type to different degrees using the following pragmas:

	#pragma TenDRA longlong type permit
where permit is one of allow (long long types accepted), disallow (errors produced when long long types are detected) or warning (long long types are accepted but a warning is raised).

	#pragma TenDRA set longlong type : type_name
where type_name is long or long long.

The first pragma determines the behaviour of the checker if the type long long is encountered as a type specifier. In the disallow case, an error is raised and the type specifier mapped to long, otherwise the type is stored as long long although a message alerting the user to the use of long long is raised in the warning mode. The second pragma determines the semantics of long long. If the type specified is long long, then long long is treated as a separate integer type and if code generation is enabled, long long types appear in the output. Otherwise the type is mapped to long and all objects declared long long are output as if they had been declared long (a warning is produced when this occurs). In either case, long long is treated as a distinct integer type for the purpose of integer conversion checking.
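
For example, a configuration which accepts long long with a warning and keeps it as a distinct type in the output might look as follows (a sketch instantiating the two pragmas above):

	#pragma TenDRA longlong type warning
	#pragma TenDRA set longlong type : long long

Replacing long long with long in the second line would instead map all long long objects to long, as described above.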

Extensions to the integer promotion and arithmetic conversion rules are required for the long long type. These have been implemented as follows:

  • the types of integer arithmetic operations where neither argument has long long type are unaffected;

  • long long and unsigned long long both promote to themselves;

  • the result type of arithmetic operations with one or more arguments of type unsigned long long is unsigned long long;

  • otherwise if either argument has type signed long long the overall type is long long if both arguments can be represented in this form, otherwise the type is unsigned long long.

There are now three cases where the type of an integer arithmetic operation is not completely determined from the type of its arguments, i.e.

  1. signed long long + unsigned long = signed long long or unsigned long long;

  2. signed long long + unsigned int = signed long long or unsigned long long;

  3. signed int + unsigned short = signed int or unsigned int ( as before ).

In these cases, the type of the operation is represented using an abstract integral type as described in section 4.2.


Part of the TenDRA Web.
Crown Copyright © 1998.

C Checker Reference Manual: Data Flow and Variable Analysis

C Checker Reference Manual

January 1998

next section previous section current document TenDRA home page document index


5.1 - Introduction
5.2 - Unreachable code analysis
5.3 - Case fall through
5.4 - Unusual flow in conditional statements
5.4.1 - Empty if statements
5.4.2 - Use of assignments as control expressions
5.4.3 - Constant control expressions
5.5 - Operator precedence
5.6 - Variable analysis
5.6.1 - Order of evaluation
5.6.2 - Modification between sequence points
5.6.3 - Operand of sizeof operator
5.6.4 - Unused variables
5.6.5 - Values set and not used
5.6.6 - Variable which has not been set is used
5.7 - Overriding the variable analysis
5.7.1 - Discarding variables
5.7.2 - Setting variables
5.7.3 - Exhaustive switch statements
5.7.4 - Non-returning functions
5.8 - Discard Analysis
5.8.1 - Discarded function returns
5.8.2 - Discarded computed values
5.8.3 - Unused static variables and procedures
5.9 - Overriding the discard analysis
5.9.1 - Discarding function returns and computed values
5.9.2 - Preserving unused statics

5 Data Flow and Variable Analysis


5.1 Introduction

The checker has a number of features which can be used to help track down potential programming errors relating to the use of variables within a source file and the flow of control through the program. Examples of this are detecting sections of unused code, flagging expressions that depend upon the order of evaluation where the order is not defined, checking for unused static variables, etc.


5.2 Unreachable code analysis

Consider the following function definition:

	int f ( int n )
	{
		if ( n ) {
			return ( 1 );
		} else {
			return ( 0 );
		}
		return ( 2 );
	}
The final return statement is redundant since it can never be reached. The test for unreachable code is controlled by:

	#pragma TenDRA unreachable code permit
where permit is replaced by disallow to give an error if unreached code is detected, warning to give a warning, or allow to disable the test (this is the default).

There are also equivalent command-line options to tchk of the form -X:unreached=state, where state can be check, warn or dont.

Annotations to the code in the form of user-defined keywords may be used to indicate that a certain statement is genuinely reached or unreached. These keywords are introduced using:

	#pragma TenDRA keyword REACHED for set reachable
	#pragma TenDRA keyword UNREACHED for set unreachable
The statement REACHED then indicates that this portion of the program is actually reachable, whereas UNREACHED indicates that it is unreachable. For example, one way of fixing the program above might be to say that the final return is reachable (this is a blatant lie, but never mind). This would be done as follows:

	int f ( int n ) {
		if ( n ) {
			return ( 1 );
		} else {
			return ( 0 );
		}
		REACHED
		return ( 2 );
	}
An example of the use of UNREACHED might be in the function below which falls out of the bottom without a return statement. We might know that, because it is never called with c equal to zero, the end of the function is never reached. This could be indicated as follows:

	int f ( int c ){
		if ( c ) return ( 1 );
		UNREACHED
	}
As always, if new keywords are introduced into a program then definitions need to be provided for conventional compilers. In this case, this can be done as follows:

	#ifdef __TenDRA__
	#pragma TenDRA keyword REACHED for set reachable
	#pragma TenDRA keyword UNREACHED for set unreachable
	#else
	#define REACHED
	#define UNREACHED
	#endif

5.3 Case fall through

Another flow analysis check concerns fall through in case statements. For example, in:

	void f ( int n )
	{
		switch ( n ) {
			case 1 : puts ( "one" );
			case 2 : puts ( "two" );
		}
	}
the control falls through from the first case to the second. This may be due to an error in the program (a missing break statement), or be deliberate. Even in the latter case, the code is not particularly maintainable as it stands - there is always the risk when adding a new case that it will interrupt this carefully contrived flow. Thus it is customary to comment all case fall throughs to serve as a warning.

In the default mode, the TenDRA C checker ignores all such fall throughs. A check to detect fall through in case statements is controlled by:

	#pragma TenDRA fall into case permit
where permit is allow (no errors), warning (warn about case fall through) or disallow (raise errors for case fall through).

There are also equivalent command-line options to tcc of the form -X:fall_thru=state, where state can be check, warn or dont.

Deliberate case fall throughs can be indicated by means of a keyword, which has been introduced using:

	#pragma TenDRA keyword FALL_THROUGH for fall into case
Then, if the example above were deliberate, this could be indicated by:

	void f ( int n ){
		switch ( n ) {
			case 1 : puts ( "one" );
			FALL_THROUGH
			case 2 : puts ( "two" );
		}
	}
Note that FALL_THROUGH is inserted between the two cases, rather than at the end of the list of statements following the first case.

If a keyword is introduced in this way, then an alternative definition needs to be introduced for conventional compilers. This might be done as follows:

	#ifdef __TenDRA__
	#pragma TenDRA keyword FALL_THROUGH for fall into case
	#else
	#define FALL_THROUGH
	#endif

5.4 Unusual flow in conditional statements

The following three checks are designed to detect possible errors in conditional statements.

5.4.1 Empty if statements

Consider the following C statements:

	if( var1 == 1 ) ;
	var2 = 0 ;
The conditional statement serves no purpose here and the second statement will always be executed regardless of the value of var1. This is almost certainly not what the programmer intended to write. A test for if statements with no body is controlled by:

	#pragma TenDRA extra ; after conditional permit
with the usual allow (this is the default setting), warning and disallow options for permit.

5.4.2 Use of assignments as control expressions

Using the C assignment operator, `=', when the equality operator `==' was intended is an extremely common problem. The pragma:

	#pragma TenDRA assignment as bool permit
is used to control the treatment of assignments used as the controlling expression of a conditional statement or a loop, e.g.

	if( var = 1 ) { ...
The options for permit are allow, warning and disallow. The default setting allows assignments to be used as control statements without raising an error.

5.4.3 Constant control expressions

Statements with constant control expressions are not really conditional at all since the value of the control statement can be evaluated statically. Although this feature is sometimes used in loops, relying on a break, goto or return statement to end the loop, it may be useful to detect all constant control expressions to check that they are deliberate. The check for statically constant control expressions is controlled using:

	#pragma TenDRA const conditional permit
	where permit may be replaced by disallow to give an error when constant control expressions are encountered, warning to replace the error by a warning, or allow to switch the check off (this is the default).
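
A common deliberate instance of this, which the check would flag unless it is allowed, is the idiomatic infinite loop terminated by a break. In this minimal sketch, get_next and process are hypothetical functions:

	while ( 1 ) {
		int c = get_next ();
		if ( c < 0 ) break;
		process ( c );
	}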


5.5 Operator precedence

The ISO C standard section 6.3, provides a set of rules governing the order in which operators within expressions should be applied. These rules are said to specify the operator precedence and are summarised in the table over the page. Operators on the same line have the same precedence and the rows are in order of decreasing precedence. Note that the unary +, -, * and & operators have higher precedence than the binary forms and thus appear higher in the table.

The precedence of operators is not always intuitive and often leads to unexpected results when expressions are evaluated. A particularly common example is to write:

	if ( var & TEST == 1) { ...
	}
	else { ...
assuming that the control expression will be evaluated as:

	( ( var & TEST ) == 1 )
However, the == operator has a higher precedence than the bitwise & operator and the control expression is evaluated as:

	( var & ( TEST == 1 ) )
which in general will give a different result.

The TenDRA C checker can be configured to flag expressions containing certain operators whose precedence is commonly confused, namely:

  • && versus ||

  • << and >> versus + and -

  • & versus == != < > <= >= + and -

  • ^ versus & == != < > <= >= + and -

  • | versus ^ & == != < > <= >= + and -

The check is switched off by default and is controlled using:

	#pragma TenDRA operator precedence status
where status is on, warning or off.


5.6 Variable analysis

The variable analysis checks are controlled by:

	#pragma TenDRA variable analysis status
where status is on, warning or off as usual. The checks are switched off in the default mode.

There are also equivalent command line options to tchk of the form -X:variable=state, where state can be check, warn or dont.

The variable analysis is concerned with the evaluation of expressions and the use of local variables, including function arguments. Occasionally it may not be possible to statically perform a full analysis on an expression or variable and in these cases the messages produced indicate that there may be a problem. If a full analysis is possible a definite error or warning is produced. The individual checks are listed in sections 5.6.1 to 5.6.6 and section 5.7 describes the source annotations which can be used to fine-tune the variable analysis.

5.6.1 Order of evaluation

The ISO C standard specifies certain points in the expression syntax at which all prior expressions encountered are guaranteed to have been evaluated. These positions are called sequence points and occur:

  • after the arguments and function expression of a function call have been evaluated but before the call itself;

  • after the first operand of a logical && or || operator;

  • after the first operand of the conditional operator, ?:;

  • after the first operand of the comma operator;

  • at the end of any full expression (a full expression may take one of the following forms: an initialiser; the expression in an expression statement; the controlling expression in an if, while, do or switch statement; each of the three optional expressions of a for statement; or the optional expression of a return statement).

Between two sequence points, however, the order in which the operands of an operator are evaluated, and the order in which side effects take place, are unspecified - any order which conforms to the operator precedence rules above is permitted. For example:

	var = i + arr[ i++ ] ;
may evaluate to different values on different machines, depending on which argument of the + operator is evaluated first. The checker can detect expressions which depend on the order of evaluation of sub-expressions between sequence points and these are flagged as errors or warnings when the variable analysis is enabled.

5.6.2 Modification between sequence points

The ISO C standard states that if an object is modified more than once, or is modified and accessed other than to determine the new value, between two sequence points, then the behaviour is undefined. Thus the result of:

	var = arr[i++] + i++ ;
is undefined, since the value of i is being incremented twice between sequence points. This behaviour is detected by the variable analysis.

5.6.3 Operand of sizeof operator

According to the ISO C standard, section 6.3.3.4, the operand of the sizeof operator is not itself evaluated. If the operand has any side-effects, these will not occur. When the variable analysis is enabled, the checker detects the use of expressions with side-effects in the operand of the sizeof operator.
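
For example (a minimal sketch):

	int i = 0;
	int n = ( int ) sizeof ( i++ );	/* i is not incremented; the side-effect is flagged */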

5.6.4 Unused variables

As part of the variable analysis, a simple test is applied to each local variable at the end of its scope to determine whether it has been used in that scope. For example, in:

	int f ( int n )
	{
		int r;
		return ( 0 );
	}
both the function argument n and the local variable r are unused.

5.6.5 Values set and not used

This is a more complex test since it is applied to every instance of setting the variable. For example, in:

	int f ( int n )
	{
		int r = 1;
		r = 5;
		return ( r );
	}
r is first set to 1, and this value is not used before it is overwritten by 5 (the second value is used, however). This test requires some flow analysis. For example, if the program is modified to:

	int f ( int n )
	{
		int r = 1;
		if ( n == 3 ) {
			r = 5;
		}
		return ( r );
	}
the initial value of r is used when n != 3, so no error is detected. However in:

	int f ( int n )
	{
		int r = 1;
		if ( n == 3 ) {
			r = 5;
		} else {
			r = 6;
		}
		return ( r );
	}
the initial value of r is overwritten regardless of the result of the conditional, and hence is unused.

5.6.6 Variable which has not been set is used

This test also requires some flow analysis, for example in:

	int f ( int n )
	{
		int r;
		if ( n == 3 ) {
			r = 5;
		}
		return ( r );
	}
the use of the variable r as a return value is reported because there are paths leading to this statement in which r is not set (i.e. when n != 3). However, in:

	int f ( int n )
	{
		int r;
		if ( n == 3 ) {
			r = 5;
		} else {
			r = 6;
		}
		return ( r );
	}
r is always set before it is used, so no error is detected.


5.7 Overriding the variable analysis

Although many of the problems discovered by the variable analysis are genuine mistakes, some may be the result of deliberate decisions by the program writer. In this case, more information needs to be provided to the checker to convey the programmer's intentions. Four constructs are provided for this purpose: the discard variable, the set variable, the exhaustive switch and the non-returning function.

5.7.1 Discarding variables

Actively discarding a variable counts as a use of that variable in the variable analysis, and so can be used to suppress messages concerning unused variables and values assigned to variables. There are two distinct methods to indicate that the variable x is to be discarded. The first uses a pragma:

	#pragma TenDRA discard x;
which the checker treats as if it were a C statement, ending in a semicolon. Having a statement which is noticed by one compiler but ignored by another can lead to problems. For example, in:

	if ( n == 3 )
	#pragma TenDRA discard x;
		puts ( "n is three" );
tchk believes that x is discarded if n == 3 and the message is always printed, whereas other compilers will ignore the #pragma statement and think that the message is printed if n == 3. An alternative, in many ways neater, solution is to introduce a new keyword for discarding variables. For example, to introduce the keyword DISCARD for this purpose, the pragma:

	#pragma TenDRA keyword DISCARD for discard variable
should be used. The variable x can then be discarded by means of the statement:

	DISCARD ( x );
A dummy definition for DISCARD to use with normal compilers needs to be given in order to maintain compilability with those compilers. For example, a complete definition of DISCARD might be:

	#ifdef __TenDRA__
	#pragma TenDRA keyword DISCARD for discard variable
	#else
	#define DISCARD(x) (( void ) 0 )
	#endif
Discarding a variable changes its assignment state to unset, so that any subsequent uses of the variable, without an intervening assignment to it, lead to a "variable used before being set" error. This feature can be exploited if the same variable is used for distinct purposes in different parts of its scope, by causing the variable analysis to treat the different uses separately. For example, in:

	void f ( void ) {
		int i = 0;
		while ( i++ < 10 )
			{ puts ( "hello" ); }
		while ( i++ < 10 ) 
			{ puts ( "goodbye" ); }
	}
which is intended to print both messages ten times, the two uses of i as a loop counter are independent - they could have been implemented with different variables. By discarding i after the first loop, the second loop can be analysed separately. In this way, the error of failing to reset i to 0 can be detected.
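
With the DISCARD keyword introduced above, the example might be annotated as follows (a sketch; the second loop is then reported as using i before it has been set, which exposes the missing reset to 0):

	void f ( void ) {
		int i = 0;
		while ( i++ < 10 )
			{ puts ( "hello" ); }
		DISCARD ( i );
		while ( i++ < 10 )
			{ puts ( "goodbye" ); }
	}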

5.7.2 Setting variables

In addition to discarding variables, it is also possible to set them. In deliberately setting a variable, the programmer is telling the checker to assume that some value will always have been assigned to the variable by that point, so that any "variable used without being set" errors can be suppressed. This construct is particularly useful in programs with complex flow control, to help out the variable analysis. For example, in:

	void f ( int n )
	{
		int r;
		if ( n != 0 ) r = n;
		if ( n > 2 ) {
			printf ( "%d\n", r );
		}
	}
r is only used if n > 2, in which case we also have n != 0, so that r has already been initialised. However, in its flow analysis, the TenDRA C checker treats all the conditionals it meets as if they were independent and does not look for any such complex dependencies (indeed it is possible to think of examples where such analysis would be impossible). Instead, it needs the programmer to clarify the flow of the program by asserting that r will be set if the second condition is true.

Programmers may assert that the variable, r, is set either by means of a pragma:

	#pragma TenDRA set r;
or by using, for example:

	SET ( r );
where SET is a keyword which has previously been introduced to stand for the variable setting construct using:

	#pragma TenDRA keyword SET for set
(cf. DISCARD above).
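
Following the pattern used for DISCARD, a complete definition of SET suitable for both tchk and conventional compilers might be (a sketch along the same lines as the DISCARD definition above):

	#ifdef __TenDRA__
	#pragma TenDRA keyword SET for set
	#else
	#define SET(r) (( void ) 0 )
	#endif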

5.7.3 Exhaustive switch statements

A special case of a flow control construct which may be used to set the value of a variable is a switch statement. Consider the program:

	char *f ( int n ){
		char *r;
		switch ( n ) {
			case 1:r="one";break;
			case 2:r="two";break;
			case 3:r="three";break;
		}
		return ( r );
	}
This leads to an error indicating that r is used but not set, because it is not set if n lies outside the three cases in the switch statement. However, the programmer might know that f is only ever called with these three values, and hence that r is always set before it is used. This information could be expressed by asserting that r is set at the end of the switch construct (see above), but it would be better to express the cause of this setting rather than just its effect. The reason why r is always set is that the switch statement is exhaustive - there are case statements for all the possible values of n.

Programmers may assert that a switch statement is exhaustive by means of a pragma immediately following it. For example, in the above case it would take the form:

	....
	switch ( n )
	#pragma TenDRA exhaustive
		{
			case 1:r="one";break;
			....
Again, there is an option to introduce a keyword, EXHAUSTIVE say, for exhaustive switch statements using:

	#pragma TenDRA keyword EXHAUSTIVE for exhaustive
Using this form, the example program becomes:

	switch ( n ) EXHAUSTIVE {
		case 1:r="one";break;
In order to maintain compatibility with existing compilers, a dummy definition for EXHAUSTIVE must be introduced for them to use. For example, a complete definition of EXHAUSTIVE might be:

	#ifdef __TenDRA__
	#pragma TenDRA keyword EXHAUSTIVE for exhaustive
	#else
	#define EXHAUSTIVE
	#endif

5.7.4 Non-returning functions

Consider a modified version of the program above, in which calls to f with an argument other than 1, 2 or 3 cause an error message to be printed:

	extern void error (const char*);
	char *f ( int n ) {
		char *r;
		switch ( n ) {
			case 1:r="one";break;
			case 2:r="two";break;
			case 3:r="three";break;
			default:error("Illegal value");
		}
		return ( r );
	}
This causes an error because, in the default case, r is not set before it is used. However, depending on the semantics of the function, error, the return statement may never be reached in this case. This is because the fact that a function returns void can mean one of two distinct things:

  1. That the function does not return a value. This is the usual meaning of void.

  2. That the function never returns, for example the library function, exit, uses void in this sense.

If error never returns, then the program above is correct; otherwise, an unset value of r may be returned.

Therefore, we need to be able to declare the fact that a function never returns. This is done by introducing a new type to stand for the non-returning meaning of void (some compilers use volatile void for this purpose), by means of the pragma:

	#pragma TenDRA type VOID for bottom
to introduce a type VOID (although any identifier may be used) with this meaning. The declaration of error can then be expressed as:

	extern VOID error (const char *);
In order to maintain compatibility with existing compilers a definition of VOID needs to be supplied. For example:

	#ifdef __TenDRA__
	#pragma TenDRA type VOID for bottom
	#else
	typedef void VOID;
	#endif
The largest class of non-returning functions occurs in the various standard APIs - for example, exit and abort. The TenDRA descriptions of these APIs contain this information. The information that a function does not return is taken into account in all flow analysis contexts. For example, in:

	#include <stdlib.h>
	
	int f ( int n )
	{
		exit ( EXIT_FAILURE );
		return ( n );
	}
n is unused because the return statement is not reached (a fact that can also be determined by the unreachable code analysis in section 5.2).


5.8 Discard Analysis

A couple of examples of what might be termed "discard analysis" have already been described - discarded (unused) local variables and discarded (unused) assignments to local variables (see section 5.6.4 and 5.6.5). The checker can perform three more types of discard analysis: discarded function returns, discarded computations and unused static variables and procedures. These three tests may be controlled as a group using:

	#pragma TenDRA discard analysis status
where status is on, warning or off.

In addition, each of the component tests may be switched on and off independently using pragmas of the form:

	#pragma TenDRA discard analysis (function return) status
	#pragma TenDRA discard analysis (value) status
	#pragma TenDRA discard analysis (static) status
There are also equivalent command line options to tchk of the form -X:test=state, where test can be discard_all, discard_func_ret, discard_value or unused_static, and state can be check, warn or dont. These checks are all switched off in the default mode.

Detailed descriptions of the individual checks follow in sections 5.8.1 - 5.8.3. Section 5.9 describes the facilities for fine-tuning the discard analysis.

5.8.1 Discarded function returns

Functions which return a value which is not used form the commonest instances of discarded values. For example, in:

	#include <stdio.h>
	int main ()
	{
		puts ( "hello" );
		return ( 0 );
	}
the function, puts, returns an int value, indicating whether an error has occurred, which is ignored.

5.8.2 Discarded computed values

A rarer instance of a discarded object, and one which is almost always an error, is where a value is computed but not used. For example, in:

	int f ( int n ) {
		int r = 4;
		if ( n == 3 ) {
			r == 5;
		}
		return ( r );
	}
the value r == 5 is computed but not used. This is actually because it is a misprint for r = 5.

5.8.3 Unused static variables and procedures

The final example of discarded values, which perhaps more properly belongs with the variable analysis tests mentioned above, is for static objects which are unused in the source module in which they are defined. Of course this means that they are unused in the entire program. Such objects can usually be removed.


5.9 Overriding the discard analysis

As with the variable analysis, certain constructs may be used to provide the checker with extra information about a program, to convey the programmer's intentions more clearly.

5.9.1 Discarding function returns and computed values

Unwanted function returns and, more rarely, discarded computed values, may be actively ignored to indicate to the discard analysis that the value is being discarded deliberately. This can be done using the traditional method of casting the value to void:

	( void ) puts ( "hello" );
or by introducing a keyword, IGNORE say, for discarding a value. This is done using a pragma of the form:

	#pragma TenDRA keyword IGNORE for discard value
The example discarded value then becomes:

	IGNORE puts ( "hello" );
Of course it is necessary to introduce a definition of IGNORE for conventional compilers in order to maintain compilability. A suitable definition might be:

	#ifdef __TenDRA__
	#pragma TenDRA keyword IGNORE for discard value
	#else
	#define IGNORE ( void )
	#endif

5.9.2 Preserving unused statics

Occasionally unused static values are introduced deliberately into programs. The fact that the static variables or procedures x, y and z are deliberately unused may be indicated by introducing the pragma:

	#pragma TenDRA suspend static x y z
at the outer level after the definition of all three objects.
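
A minimal sketch (the object names are invented):

	static int x = 1;
	static int y = 2;
	static int z = 3;
	#pragma TenDRA suspend static x y z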




C Checker Reference Manual

January 1998



6.1 - Introduction
6.2 - Preprocessor directives
6.3 - Indented Preprocessing Directives
6.4 - Multiple macro definitions
6.5 - Macro arguments
6.6 - Unmatched quotes
6.7 - Include depth
6.8 - Text after #endif
6.9 - Text after #
6.10 - New line at end of file

6 Preprocessing checks


6.1 Introduction

This chapter describes tchk's capabilities for checking the preprocessing constructs that arise in C.


6.2 Preprocessor directives

By default, the TenDRA C checker understands those preprocessor directives specified by the ISO C standard, section 6.8, i.e. #if, #ifdef, #ifndef, #elif, #else, #endif, #error, #line and #pragma. As has been mentioned, #pragma statements play a significant role in the checker. While any recognised #pragma statements are processed, all unknown pragma statements are ignored by default. The check to detect unknown pragma statements is controlled by:

	#pragma TenDRA unknown pragma permit
The options for permit are disallow (raise an error if an unknown pragma is encountered), warning (allow unknown pragmas with a warning), or allow (allow unknown pragmas without comment).

In addition, the common non-ISO preprocessor directives, #file, #ident, #assert, #unassert and #weak may be permitted using:

	#pragma TenDRA directive dir allow
where dir is the appropriate one of file, ident, assert, unassert or weak. If allow is replaced by warning then the directive is allowed, but a warning is issued. In either case, the modifier (ignore) may be added to indicate that, although the directive is allowed, its effect is ignored. Thus for example:

	#pragma TenDRA directive ident (ignore) allow
causes the checker to ignore any #ident directives without raising any errors.

Finally, the directive dir can be disallowed using:

	#pragma TenDRA directive dir disallow
Any other unknown preprocessing directives cause the checker to raise an error in the default mode. The pragma:

	#pragma TenDRA unknown directive allow
may be used to force the checker to ignore such directives without raising any errors. Disallow and warning variants are also available.


6.3 Indented Preprocessing Directives

The ISO C standard allows white space to occur before the # in a preprocessing directive, and between the # and the directive name. Many older preprocessors have problems with such directives. The checker's treatment of such directives can be set using:

	#pragma TenDRA indented # directive permit
which detects white space before the # and:

	#pragma TenDRA indented directive after # permit
which detects white space between the # and the directive name. The options for permit are allow, warning or disallow as usual. The default checking profile allows both forms of indented directives.
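
For example, both of the following would be reported if the corresponding check is set to disallow or warning (the macro names are invented):

	  #define ONE 1
	#  define TWO 2
The first has white space before the #; the second has white space between the # and the directive name.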


6.4 Multiple macro definitions

The ISO C standard states that, for two definitions of a function-like macro to be equal, both the spelling of the parameters and the macro definition must be equal. Thus, for example, in:

	#define f( x ) ( x )
	#define f( y ) ( y )
the two definitions of f are not equal, despite the fact that they are clearly equivalent. Tchk has an alternative definition of macro equality which allows for consistent substitution of parameter names. The type of macro equality used is controlled by:

	#pragma TenDRA weak macro equality permit
where permit is allow (use the alternative definition of macro equality), warning (as for allow but raise a warning), or disallow (use the ISO C definition of macro equality - this is the default setting).

More generally, the pragma:

	#pragma TenDRA extra macro definition allow
allows macros to be redefined, both consistently and inconsistently. If the definitions are incompatible, the first definition is overwritten. This pragma has a disallow variant, which resets the check to its default mode.
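
For example, with this pragma in force (the macro name is invented):

	#define BUFSIZE 256
	#define BUFSIZE 512
the two incompatible definitions are accepted, the first is overwritten, and BUFSIZE expands to 512 thereafter.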


6.5 Macro arguments

According to the ISO C standard, section 6.8.3, if a macro argument contains a sequence of preprocessing tokens that would otherwise act as a preprocessing directive, the behaviour is undefined. Tchk allows preprocessing directives in macro arguments by default. The check to detect such macro arguments is controlled by:

	#pragma TenDRA directive as macro argument permit
where permit is allow, warning or disallow.
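
A minimal sketch of the construct in question (the macro and identifier names are invented):

	#define f( x ) ( x )
	int i = f (
	#ifdef DEBUG
		1
	#else
		0
	#endif
	);
Here the argument to f contains preprocessing directives, which is the undefined behaviour that this check detects.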

The ISO C standard, section 6.8.3.2, also states that each # preprocessing token in the replacement list for a function-like macro shall be followed by a parameter as the next preprocessing token in the replacement list. By default, if tchk encounters a # in a function-like macro replacement list which is not followed by a parameter of the macro, an error is raised. The checker's behaviour in this situation is controlled by:

	#pragma TenDRA no ident after # permit
where the options for permit are allow (do not raise errors), disallow (default mode) and warning (raise warnings instead of errors).
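
For example (invented macro names):

	#define mkstr( a )	# a	/* # is followed by the parameter a */
	#define bad( a )	# "x"	/* # is not followed by a parameter */
The first definition is accepted; the second is flagged in the default mode.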


6.6 Unmatched quotes

The ISO C standard, section 6.1, states that if a ` or " character matches the category of preprocessing tokens described as "single non-whitespace-characters that do not lexically match the other preprocessing token categories", then the behaviour is undefined. For example:

	#define a `b
would result in undefined behaviour. By default the ` character is ignored by tchk. A check to detect such statements may be controlled by:

	#pragma TenDRA unmatched quote permit
The usual allow, warning and disallow options are available.


6.7 Include depth

Most preprocessors set a maximum depth for #include directives (which may be limited by the maximum number of files which can be open on the host system). By default, the checker supports a depth equal to this maximum number. However, a smaller maximum depth can be set using:

	#pragma TenDRA includes depth n
where n can be any positive integral constant.


6.8 Text after #endif

The ISO C standard, section 6.8, specifies that #endif and #else preprocessor directives do not take any arguments, but should be followed by a newline. In the default checking mode, tchk raises an error when #endif or #else statements are not directly followed by a new line. This behaviour may be modified using:

	#pragma TenDRA text after directive permit
where permit is allow (no errors are raised and any text on the same line as the #endif or #else statement is ignored), warning or disallow.
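
For example (the macro name is invented):

	#ifdef DEBUG
	int debug_level = 3;
	#endif DEBUG
The trailing DEBUG after the #endif causes an error in the default mode; with the allow option it is silently ignored.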


6.9 Text after #

The ISO C standard specifies that a # occurring outside of a macro replacement list must be followed by a new line or by a preprocessing directive, and this is enforced by the checker in default mode. The check is controlled by:

	#pragma TenDRA no directive/nline after ident permit
where permit may be allow, disallow or warning.
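
For example:

	#
	# not a directive
The first line (a # followed directly by a new line) is always accepted; the second, in which the # is followed by tokens that do not form a directive, is flagged in the default mode.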


6.10 New line at end of file

The ISO C standard, section 5.1.1.2, states that source files must end with new lines. Files which do not end in new lines are flagged as errors by the checker in default mode. The behaviour can be modified using:

	#pragma TenDRA no nline after file end permit
where permit has the usual allow, disallow and warning options.




The TDF Linker

January 1998



1 - Introduction
2 - Basic TDF structures
3 - Structure of a TDF Capsule
4 - Linker information unit groups
5 - Structure of a TDF library
6 - Purpose
7 - TDF Linking
7.1 - Basic linking
7.2 - Renaming
7.3 - Library capsules
7.4 - Hiding
7.5 - Writing out the capsule
8 - Constructing TDF Libraries

1. Introduction

This document describes the formats of the files used by the TDF linker and the linker's required behaviour. There are two file formats: the capsule format and the library format. It also describes the format of the linker information units within capsules. The capsule format is described in more detail in the TDF specification.


2. Basic TDF structures

The structure of a TDF capsule is defined properly in the TDF specification. This section describes the basic components of the TDF format that the linker uses, and the remaining sections describe the format of a TDF capsule, a TDF library and a linker information unit in terms of these components. The basic components are:

ALIGN
This is a byte alignment. It forces the next object to begin on an eight bit boundary.

TDFINT
This is an unsigned number of unbounded size. Its representation is described properly in the TDF specification. It is a series of nibbles (four bits), with the high bit used as a terminator and the low three bits used as an octal digit. The terminator bit is set on the final octal digit. As an example, the number ten would be represented (in binary) as: 0001 1010.
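
As an illustration of this encoding, here is a minimal sketch in C (not part of the linker) that packs a number into TDFINT nibbles, most significant octal digit first, and prints the example above:

	#include <stdio.h>
	
	/* Encode n as TDFINT nibbles: one octal digit per nibble, most
	   significant digit first, terminator bit set on the final digit. */
	static unsigned put_tdfint ( unsigned long n, unsigned char *nibbles )
	{
		unsigned char digits [ 24 ];
		unsigned ndigits = 0, i;
		do {
			digits [ ndigits++ ] = ( unsigned char ) ( n & 7 );
			n >>= 3;
		} while ( n );
		for ( i = 0 ; i < ndigits ; i++ ) {
			unsigned char d = digits [ ndigits - 1 - i ];
			if ( i == ndigits - 1 ) d |= 0x8;
			nibbles [ i ] = d;
		}
		return ( ndigits );
	}
	
	int main ( void )
	{
		unsigned char buf [ 24 ];
		unsigned n = put_tdfint ( 10, buf ), i, b;
		for ( i = 0 ; i < n ; i++ ) {
			for ( b = 0 ; b < 4 ; b++ ) {
				putchar ( '0' + ( ( buf [ i ] >> ( 3 - b ) ) & 1 ) );
			}
			putchar ( ' ' );
		}
		putchar ( '\n' );	/* prints: 0001 1010 */
		return ( 0 );
	}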

BYTE
This is an eight bit quantity. BYTEs are always aligned on an eight bit boundary.

TDFIDENT
A TDFIDENT is a sequence of characters. It is possible to change the size of the characters, although the current implementation will produce an error for TDFIDENTs with character sizes other than eight bits. A TDFIDENT is represented by two TDFINTs (the size of the characters in bits and the number of characters in the TDFIDENT), and a sequence of BYTEs.

UNIQUE
A UNIQUE is a list of TDFIDENTs. It is represented as a TDFINT specifying the number of TDFIDENTs in the UNIQUE, followed by that many TDFIDENTs.

EXTERNAL
An EXTERNAL is a mechanism for identifying external identifiers. It is represented as a discriminating tag, followed by a byte alignment, followed by either a TDFIDENT or a UNIQUE. The tag is a two bit number, where one represents a TDFIDENT, two represents a UNIQUE, and zero and three are currently illegal. UNIQUEs are only used as part of an EXTERNAL; TDFIDENTs are used as entities in their own right, as well as in EXTERNALs.

In the following descriptions, the syntax name: type is used to specify an object in the structure. The name is used to describe the purpose of the object; the type is used to describe what the object is. A type is one of the following:

basic_type
This represents one of the basic types listed above.

type * integer
This represents a sequence of objects of the specified type. The integer may be either an integer literal, or a name that has been previously mentioned and is a TDFINT object.

{ name1: type1 ... nameN: typeN }
This represents a structure composed of the elements name1: type1 to nameN: typeN. It is used for sequences of objects where the objects are not of basic types.

type = ( value1 | ... | valueN )
This represents a type with a constraint imposed upon it. The value of the object must be one of value1 to valueN.


3. Structure of a TDF Capsule

A TDF capsule has the following structure:

    magic:				BYTE * 4 = "TDFC"
    major_version:			TDFINT
    minor_version:			TDFINT
					ALIGN
    num_prop_names:			TDFINT
    prop_names:				TDFIDENT * num_prop_names
    num_linkable_entities:		TDFINT
    linkable_entities: {
	name:				TDFIDENT
	num_capsule_scope_identifiers:	TDFINT
    } * num_linkable_entities
    num_external_linkages:		TDFINT = num_linkable_entities
    external_linkages: {
	num_entries:			TDFINT
	entries: {
	    capsule_scope_id:		TDFINT
	    external_name:		EXTERNAL
	} * num_entries
    } * num_external_linkages
    num_unit_groups:			TDFINT = num_prop_names
    unit_groups: {
	num_units:			TDFINT
	units: {
	    num_counts:			TDFINT = (num_linkable_entities | 0)
	    counts:			TDFINT * num_counts
	    num_link_sets:		TDFINT = num_counts
	    link_sets: {
		num_links:		TDFINT
		links: {
		    internal:		TDFINT
		    external:		TDFINT
		} * num_links
	    } * num_link_sets
	    num_bytes_tdf:		TDFINT
	    tdf:			BYTE * num_bytes_tdf
	} * num_units
    } * num_unit_groups
The rest of this section describes the format of a capsule.

The capsule begins with a header that contains a four byte magic number (magic: "TDFC"), followed by the major (major_version) and minor (minor_version) version numbers of the TDF in the capsule. This is then followed by a byte alignment and then the capsule body.

The first part of a capsule tells the linker how many types of unit groups there are in the capsule (num_prop_names), and what the names of these unit group types are (prop_names). There can be many unit group types, but the linker must know what they are called, and the order in which they should occur. At present the linker knows about tld, tld2, versions, tokdec, tokdef, aldef, diagtype, tagdec, diagdef, tagdef and linkinfo (this can be changed from the command line). There is nothing special about any unit group type except for the tld unit group type, which contains information that the linker uses (and the tld2 unit group type, which is obsolete, but is treated as a special case of the tld unit group type). The format of the tld unit group type is described in a later section.

The second part of the capsule tells the linker how many linkable entities it should be linking on (num_linkable_entities), the name of each linkable entity (name), and the number of identifiers of each linkable entity at capsule scope (num_capsule_scope_identifiers). Identifiers at capsule scope should be numbers from zero to one less than num_capsule_scope_identifiers. The identifier allocation may be sparse, but the linker is optimized for continuous identifier allocation.

The third part of the capsule tells the linker which external names the capsule contains for each linkable entity. For each linkable entity listed in the second part, the number of external names of that linkable entity are listed in this part (num_entries), along with each of the external names (external_name) and the corresponding capsule scope identifiers (capsule_scope_id). The ordering of the linkable entities in part three must be identical to the ordering of linkable entities in part two.

The fourth and final part of the capsule contains the unit groups themselves. The unit groups occur in the same order as the unit group types were listed in part one. For each unit group, there is a TDFINT specifying the number of units in that unit group (num_units), followed by that many units.

Each unit contains a list of counts (counts) and the number of counts in that list (num_counts), which must be either zero or the same as the number of linkable entities in the capsule (num_linkable_entities). Each count contains the number of unit scope identifiers of the given linkable entity in the unit. If the number of counts is non-zero, then the counts must be in the same order as the linkable entity names.

After the counts come the unit scope identifier to capsule scope identifier mapping tables. The number of these tables is specified by num_link_sets and must be the same as the number of counts (num_counts), which is either zero or the same as the number of linkable entities in the capsule. There is one table for each linkable entity (if num_link_sets is non-zero), and each table contains num_links pairs of TDFINTs. The internal TDFINT is the unit scope identifier; the external TDFINT is the capsule scope identifier.

After the mapping tables there is a length (num_bytes_tdf), and that many bytes of TDF data (tdf).


4. Linker information unit groups

The tld unit group (if it exists in the capsule) should contain one unit only. This unit should begin with two zeroes (i.e. no counts, and no identifier mapping tables), a length (which must be correct), and a sequence of bytes.

The bytes encode information useful to the linker. The first thing in the byte sequence of a tld unit is a TDFINT that is the type of the unit. What follows depends upon the type. There are currently two types that are supported: zero and one. Type zero units contain the same information as the old tld2 units (if a tld2 unit is read, it is treated as if it were a tld unit that began with a type of zero; it is illegal for a capsule to contain both a tld unit group and a tld2 unit group). Type one units contain more information (described below), and are what the linker writes out in the generated capsule.

A version one unit contains a sequence of TDFINTs. There is one TDFINT for each external name in part three of the capsule. These TDFINTs should be in the same order as the external names were. The TDFINTs are treated as a sequence of bits, with the following meanings:

0 = The name is used within this capsule.

1 = The name is declared within this capsule.

2 = The name is uniquely defined within this capsule. If this bit is set for a tag, then the declared bit must also be set (i.e. a declaration must exist).

3 = The name is defined in this capsule, but may have other definitions provided by other capsules. This bit may not be set for tokens. If a tag has this bit set, then the declared bit must also be set (i.e. a declaration must exist).

All of the other bits in the TDFINT are reserved. The linker uses the information provided by this unit to check that names do not have multiple unique definitions, and to decide whether libraries should be consulted to provide a definition for an external name. If a capsule contains no linker information unit group, then the external names in that capsule will have no information, and hence these checks will not be made. A similar situation arises when the information for a name has no bits set.
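
Expressed as C bit masks (the macro names are invented for illustration; the specification only numbers the bits):

	#define TLD_NAME_USED		( 1 << 0 )	/* used within this capsule */
	#define TLD_NAME_DECLARED	( 1 << 1 )	/* declared within this capsule */
	#define TLD_NAME_UNIQUE_DEF	( 1 << 2 )	/* uniquely defined within this capsule */
	#define TLD_NAME_MULTI_DEF	( 1 << 3 )	/* defined here; other definitions permitted */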

A version zero unit contains a sequence of TDFINTs. There is one TDFINT for each external token name, and one TDFINT for each external tag name. These TDFINTs should be in the same order as the external names were (but the tokens always come before the tags). The TDFINTs are treated as a sequence of bits, with the same meanings as above.


5. Structure of a TDF library

A TDF library begins with a header, followed by a TDFINT giving the type of the library. At present only type zero libraries are supported. The format of a type zero library is as follows:

    magic:				BYTE * 4 = "TDFL"
    major_version:			TDFINT
    minor_version:			TDFINT
					ALIGN
    type:				TDFINT = 0
    num_capsules:			TDFINT
    capsules: {
	capsule_name:			TDFIDENT
	capsule_length:			TDFINT
	capsule_body:			BYTE * capsule_length
    } * num_capsules
    num_linkable_entities:		TDFINT
    linkable_entities: {
	linkable_entity_name:		TDFIDENT
	num_this_linkable_entity:	TDFINT
	this_linkable_entity_names: {
	    name:			EXTERNAL
	    info:			TDFINT
	    capsule:			TDFINT
        } * num_this_linkable_entity
    } * num_linkable_entities
The library begins with a four byte magic number (magic: "TDFL"), followed by the major (major_version) and minor (minor_version) versions of the TDF in the library (the major version must be the same for each capsule in the library; the minor version is the highest of the minor version numbers of all of the capsules contained in the library). This is followed by a byte alignment, the type of the library (type: 0), and the number of capsules in the library (num_capsules), followed by that many capsules.

Each of the capsules has a name (capsule_name), and the capsule content, which consists of the length of the capsule (capsule_length) and that many bytes (capsule_body). The capsule name is the name of the file from which the capsule was obtained when the library was built. The names of capsules within a library must all be different.

The library is terminated by the index. This contains information about where to find definitions for external names. The index begins with the number of linkable entities whose external names the library will provide definitions for (num_linkable_entities).

For each of these linkable entities, the linkable entity index begins with the name of the linkable entity (linkable_entity_name), followed by the number of external names of the linkable entity that have entries in the index (num_this_linkable_entity). This is followed by the index information for each of the names.

For each name, the index contains the name (name); a TDFINT that provides information about the name (info) with the same meaning as the TDFINTs in the linker information units; and the index of the capsule that contains the definition for the name (capsule). The index of the first capsule is zero.


6. Purpose

The TDF linker has four uses:

  • To bind together a number of TDF capsules (including those in libraries) to produce a single capsule.

  • To produce a TDF library from a set of capsules.

  • To list the contents of a TDF library.

  • To extract capsules from a TDF library.
The most complex part of the linker is the normal linking process, which is described in the next section. Constructing libraries is described in the section after the one on linking. Listing the contents of a library, and extracting capsules from a library are not very complicated so they aren't described in this document.


7. TDF Linking

This section describes the requirements of linking capsules together. The first sub-section describes the basic linking requirements. Subsequent sub-sections detail the requirements of some more advanced features.

Before the linking process is described in detail, here is an outline of the stages of the link process:

The linker is invoked with the following inputs: a set of input capsules, a set of libraries, a set of hiding rules, a set of keeping rules, a set of renaming rules, and a set of link suppression rules.

The first thing that the linker does is to enter the renaming rules into the name hash table. The name entry lookup routines will automatically pick up the new name when a renamed name is looked up in a name table.

The next stage is to load the input capsules, and to bind them together. As part of this binding process, the capsule scope identifiers for all input capsules are mapped into a single capsule scope (to be output to the final capsule). The rules for this mapping are described below.

After binding the input capsules together, the linker tries to provide definitions for undefined names using the specified libraries. When a definition is found in a library (and it hasn't been suppressed by the link suppression rules), the capsule that contains it is loaded and bound to the input capsules as if it had been one itself.

After the library capsules have been bound in, external names are hidden according to the hiding rules, and kept according to the keeping rules. Hiding means removing an entry from the external name list of the output capsule. Keeping means forcing an entry into the list, if it would otherwise be hidden. It is illegal to hide names that have no definition.

Finally the output capsule is written to a file.

7.1. Basic linking

This sub-section describes the process of binding capsules together in greater detail.

The first thing that the linker does when reading a capsule is to read in the magic number, and the major and minor version numbers. Capsules with an incorrect magic number are rejected. The major version number of each capsule read in must be at least four. In addition, the major version numbers of all capsules that are read in must be the same.

After this, the linker reads the unit group type names (also called `prop names'), and checks that they are known and that they are in the correct order. There is a default list of names built into the linker (the list specified in the TDF specification) and that should be sufficient for most uses of the linker, but it is possible to provide a new list by specifying a file containing the new list on the command line.

The next thing the linker does is to read in the linkable entity names and the capsule scope identifier limit for each linkable entity. It checks that there are no duplicate names, and creates a mapping vector for each linkable entity, to contain the mapping from the capsule scope identifiers in the capsule being read in to the capsule scope identifiers in the capsule that will be written out.

After reading the linkable entity names, the linker reads the external names for each linkable entity. For each name, it checks that its capsule scope identifier is in range, and maps that to the next available capsule scope identifier (for the same linkable entity) in the output capsule, unless a name with the same linkable entity and the same name (subject to the renaming rules) has already been read (in which case it is mapped to the same capsule scope identifier as the identical name). The linker also checks to ensure that each capsule scope identifier has no more than one external name bound to it.

The final phase of reading a capsule is to read the unit groups. For normal (i.e. not tld or tld2) unit groups, the following occurs for each unit in the unit group:

The unit scope identifier limits for each linkable entity are read and saved in the unit data structure (which is appended to the list of units in that unit group for all input capsules). When the unit is written out in the output capsule, it may be necessary to add extra unit scope identifier limits (and extra mapping tables) as other capsules may have different linkable entities, which will also need entries (they will just be zero length).

The capsule scope to unit scope identifier mapping tables are read, and the old capsule scope identifier (which is first checked to see if it is in range) is replaced by a new capsule scope identifier. This information is also saved in the unit data structure. The new capsule scope identifiers are either the ones that were allocated when reading in the external names (if the old identifier is bound to an external name), or the next free capsule scope identifier of the required linkable entity.

Finally, the unit body is read, and saved with the unit.

For tld and tld2 unit groups, the unit group count is checked to ensure that it is one, and the number of unit scope identifier limits and identifier mapping tables are checked to ensure that they are both zero. The size of the body is read (and it must be correct as this is checked after reading the unit), and then the body is read. If the unit is a tld unit, then the type is read, and the body is read depending upon the type; if the unit is a tld2 unit, then the body is read as if it were a type zero tld unit.

A type zero tld unit consists of a number of integers: one for each external token name, followed by one for each external tag name (in the same order as the external name tables, but tokens always precede tags). A type one tld unit consists of a number of integers: one for each external name (of all linkable entities), in the same order as the external name tables.

Each of the integers is interpreted as a series of bits, with the bits having the following meanings:

0 = The name is used within this capsule.

1 = The name is declared within this capsule.

2 = The name is uniquely defined within this capsule. If this bit is set for a tag, then the declared bit must also be set. It is illegal to have a name uniquely defined in more than one capsule. If this occurs, the linker will report an error.

3 = The name is defined in this capsule, but may have other definitions provided by other capsules. This bit may not be set for tokens. If a tag has this bit set, the declared bit must also be set. It is possible for a name to provide a unique definition, but still allow other non-unique definitions to be linked in from other capsules.

All of the other bits in the integer are reserved. The linker uses the information provided by this unit to check that names do not have multiple unique definitions, and to decide whether libraries should be consulted to provide a definition for a name. If a capsule contains no linker information unit group, then the names in that capsule will have no information, and hence will not receive the extra checking or library definition.

7.2. Renaming

Renaming just requires that any occurrence of the name being renamed is treated as though it were an occurrence of the name it is being renamed to. This includes names read from library indices.

7.3. Library capsules

After reading in all of the specified capsules, the linker loads the libraries. The library indices are read, and checked to ensure that there is no more than one definition for any external name. If there is no unique definition, but there is a single multiple definition, then this is considered to be a definition (although this can be turned off with a command line option).

Once the libraries have been loaded, each of the external names in the bound capsules that are used, but not defined are looked up in the library index. If a definition is found (and it hasn't been suppressed) then the defining capsule is loaded from the library. This process is repeated until definitions have been found for all external names that need them (and that definitions exist for), including those that are undefined in the capsules read in from the libraries.

7.4. Hiding

Hiding and keeping just require that all names which should be hidden, and are not required to be kept, are not written to the external name table of the output capsule.

7.5. Writing out the capsule

The magic number is written out, followed by the major and minor version numbers. The major version number is the same as that in each of the loaded capsules. The minor version number is the largest of the minor version numbers in the loaded capsules.

The unit group type names are written out first. Only those unit groups that are non-empty are actually written out. They are written out in the order specified by the current unit group name list.

The linkable entity names are written out next, followed by their capsule scope identifier limit. Again, only those linkable entity names that are non empty (i.e. have some unit scope or capsule scope identifiers) are written out.

After the linkable entity names have been written out, the external names are written out. These are written out in an arbitrary order within their linkable entities, with the linkable entities being written out in the same order as in the linkable entity name section (which is itself arbitrary, but must be repeatable). The names are written out with their new capsule scope identifiers. External names that have been hidden must not be written out.

Finally the unit groups are written out in the same order as the unit group type names in the first section. For normal units, the old counts (plus some zero counts that may be necessary if there are new linkable entities that weren't in the unit's original capsule) are written out in the same order as the linkable entity names in section two. The counts are followed by the new capsule scope to unit scope identifier mapping tables, in the same order as the counts. Finally the old unit content is written out.

For tld unit groups, a single version one tld unit is written out containing the use information for each of the external names, in the same order that the external names were written out in.


8. Constructing TDF Libraries

This section describes the requirements of building TDF libraries. Here is an outline of the stages of the construction process:

The linker is invoked with the following inputs: a set of input capsules, a set of libraries, and a set of link suppression rules.

The first thing that the linker does is to load the input capsules (including all capsules that are included in any of the specified libraries), and to extract their external name information into a central index. The index is checked to ensure that there is only one definition for any given name. The capsules are read in in the same way as for linking them (this checks them for validity).

The suppression rules are applied, to ensure that no suppressed external name will end up in the index in the output library.

The library is written out. The library consists of a magic number, the major and minor version numbers of the TDF in the library (calculated in the same way as for capsules), the library type (zero), followed by the number of capsules. This is followed by that many capsule name and capsule content pairs. Finally, the index is appended to the library.

The index only contains entries for linkable entities that have external names defined by the library. Only external names for which there is a definition are written into the index, although this is not a requirement (when linking, the linker will ignore index entries that don't provide a definition). Either a unique definition or a single multiple definition is considered to be a definition (although the latter can be disabled using a command line option).




TDF Notation Compiler

January 1998



1 - Introduction
2 - Input classes
2.1 - Delimiters
2.2 - White space
2.3 - Comments
2.4 - Identifiers
2.5 - Numbers
2.6 - Strings
2.7 - Blanks
2.8 - Bars
3 - Input syntax
3.1 - Basic syntax
3.2 - Sorts
3.3 - Numbers and strings
3.4 - Tokens, tags, alignment tags and labels
3.5 - Outer level syntax
3.6 - Included files
3.7 - Internal and external names
3.8 - Token declarations
3.9 - Token definitions
3.10 - Alignment tag declarations
3.11 - Alignment tag definitions
3.12 - Tag declarations
3.13 - Tag definitions
4 - Shape checking
5 - Remarks
6 - Limitations
7 - Manual Page for tnc

1. Introduction

The TDF notation compiler, tnc, is a tool for translating TDF capsules to and from text. This paper gives a brief introduction to how to use this utility and the syntax of the textual form of TDF. The version here described is that supporting version 3.1 of the TDF specification.

tnc has four modes, two input modes and two output modes. These are as follows:

  • decode - translate an input TDF capsule into the tnc internal representation,
  • read - translate an input text file into the internal representation,
  • encode - translate the internal representation into an output TDF capsule,
  • write - translate the internal representation into an output text file.

Due to the modular nature of the program it is possible to form versions of tnc in which not all the modes are available. Passing the -version flag to tnc causes it to report which modes it has implemented.

Any application of tnc consists of the composite of an input mode and an output mode. The default action is read-encode, i.e. translate an input text file into an output TDF capsule. Other modes may be specified by passing the following command line options to tnc:

  • -decode or -d,
  • -read or -r,
  • -encode or -e,
  • -write or -w.

The only other really useful action is decode-write, i.e. translate an input TDF capsule into an output text file. This may also be specified by the -print or -p option. The actions decode-encode and read-write are not precise identities, they do however give equivalent input and output files.

In addition, the decode mode may be modified to accept a TDF library as input rather than a TDF capsule by passing the additional flag:

  • -lib or -l,

to tnc.

The overall syntax for tnc is as follows:

	tnc [ options ... ] input_file [ output_file ]
If the output file is not specified, the standard output is used.


2. Input classes

The rest of this paper is concerned with the form required of the input text file. The input can be divided into eight classes.

2.1. Delimiters

The characters ( and ) are used as delimiters to impose a syntactic structure on the input.

2.2. White space

White space comprises sequences of space, tab and newline characters, together with comments (see below). It is not significant to the output (TDF notation is completely free-form), and serves only to separate syntactic units. Every identifier, number etc. must be terminated by a white space or a delimiter.

2.3. Comments

Comments may be inserted in the input at any point. They begin with a # character and run to the end of the line.

2.4. Identifiers

An identifier consists of any sequence of characters drawn from the following set: upper case letters, lower case letters, decimal digits, underscore (_), dot (.), and tilde (~), which does not begin with a decimal digit. tnc generates names beginning with double tilde (~~) for unnamed objects when in decode mode, so the use of such identifiers is not recommended.

2.5. Numbers

Numbers can be given in octal (prefixed by 0), decimal, or hexadecimal (prefixed by 0x or 0X). Both upper and lower case letters can be used for hex digits. A number can be preceded by any number of + or - signs.

2.6. Strings

A string consists of a sequence of characters enclosed in double quotes ("). The following escape sequences are recognised:

  • \n represents a newline character,
  • \t represents a tab character,
  • \xxx, where xxx consists of three octal digits, represents the character with ASCII code xxx.

Newlines are not allowed in strings unless they are escaped. For all other escaped characters, \x represents x.

2.7. Blanks

A single minus character (-) has a special meaning. It may be used to indicate the absence of an optional argument or optional group of arguments.

2.8. Bars

A single vertical bar (|) has a special meaning. It may be used to indicate the end of a sequence of repeated arguments.


3. Input syntax

3.1. Basic syntax

The basic input syntax is very simple. A construct consists of an identifier followed by a list of arguments, all enclosed in brackets in a Lisp-like fashion. Each argument can be an identifier, a number, a string, a blank, a bar, or another construct. There are further restrictions on this basic syntax, described below.

	construct	: ( identifier arglist )

	argument	: construct
			| identifier
			| number
			| string
			| blank
			| bar

	arglist		: (empty)
			| argument arglist

The construct ( identifier ), with an empty argument list, is equivalent to the identifier argument identifier. The two may be used interchangeably.

3.2. Sorts

Except at the outermost level, which forms a special case discussed below, every construct and argument has an associated sort. This is one of the basic TDF sorts: access, al_tag, alignment, bitfield_variety, bool, callees, error_code, error_treatment, exp, floating_variety, label, nat, ntest, procprops, rounding_mode, shape, signed_nat, string, tag, transfer_mode, variety, tdfint or tdfstring.

Ignoring for the moment the shorthands discussed below, the ways of creating constructs of sort exp, say, correspond to the TDF constructs delivering an exp. For example, contents takes a shape and an exp and delivers an exp. Thus:

	( contents arg1 arg2 )
where arg1 is an argument of sort shape and arg2 is an argument of sort exp, is a sort-correct construct. Only constructs which are sort correct in this sense are allowed.

As another example, because of the rule concerning constructs with no arguments, both

	( true )
and
	false
are valid constructs of sort bool.

TDF constructs which take lists of arguments are easily dealt with. For example:

	( make_nof arg1 ... argn )
where arg1, ..., argn are all arguments of sort exp, is valid. A vertical bar may be used to indicate the end of a sequence of repeated arguments.

Optional arguments should be entered normally if they are present. Their absence may be indicated by means of a blank (minus sign), or by simply omitting the argument.

The vertical bar and blank should be used whenever the input is potentially ambiguous. Particular care should be taken with apply_proc (which is genuinely ambiguous) and labelled.

The TDF specification should be consulted for a full list of valid TDF constructs and their argument sorts. Alternatively the tnc help facility may be used. The command:

	tnc -help cmd1 ... cmdn
prints sort information on the constructs or sorts cmd1, ..., cmdn. Alternatively:
	tnc -help
prints this information for all constructs. (To obtain help on the sort alignment as opposed to the construct alignment use alignment_sort. This confusion cannot occur elsewhere.)

3.3. Numbers and strings

Numbers can occur in two contexts, as the argument to the TDF constructs make_nat and make_signed_nat. In the former case the number must be positive. The following shorthands are understood by tnc:

	number for ( make_nat number )
	number for ( make_signed_nat number )
depending on whether a construct of sort nat or signed_nat is expected.

Strings are nominally of sort tdfstring. They are taken to be simple strings (8 bits per character). Multibyte strings (those with other than 8 bits per character) may be represented by means of the multi_string construct. This takes the form:

	( multi_string b c1 ... cn )
where b is the number of bits per character and c1, ..., cn are the codes of the characters comprising the string. These multibyte strings cannot be used as external names.
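
For example (an invented value, for illustration only):

	( multi_string 9 72 105 )
represents a two character string, with nine bits per character, whose character codes are 72 and 105.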

In addition, a simple (8 bit) string can be used as a shorthand for a TDF construct of sort string, as follows:

	string for ( make_string string )

3.4. Tokens, tags, alignment tags and labels

In TDF simple tokens, tags, alignment tags and labels are represented by numbers which may, or may not, be associated with external names. In tnc however they are represented by identifiers. This brings the problem of scoping which does not occur in TDF. The rules are that all tokens, tags, alignment tags and labels must be declared before they are used. Externally defined objects have global scope, and the scope of a formal argument in a token definition is the definition body. For those constructs which introduce a local tag or label - for example, identify, make_proc, make_general_proc and variable for tags and conditional, labelled and repeat for labels - the scope of the object is as set out in the TDF specification.

The following shorthands are understood by tnc, according to the argument sort expected:

	tag_id for ( make_tag tag_id )
	al_tag_id for ( make_al_tag al_tag_id )
	label_id for ( make_label label_id )

The syntax for token applications is as follows:

	( apply_construct ( token_id arg1 ... argn ) )
where apply_construct is the appropriate TDF token application construct, for example, exp_apply_token for tokens declared to deliver exp's. The token arguments arg1, ..., argn must be of the sorts indicated in the token declaration or definition. For tokens without any arguments the alternative form:
	( apply_construct token_id )
is allowed.

The token application above may be abbreviated to:

	( token_id arg1 ... argn )
the result sort being known from the token declaration. This in turn may be abbreviated to:
	token_id
when there are no token arguments.

Care needs to be taken with these shorthands, as they can lead to confusion, particularly when, due to optional arguments or lists of arguments, tnc is not sure what sort is coming next. The five categories of objects represented by identifiers - TDF constructs, tokens, tags, alignment tags and labels - occupy separate name spaces, but it is a good idea to try to avoid duplication of names.

By default all these shorthands are used by tnc in write mode. If this causes problems, the -V flag should be passed to tnc.


3.5. Outer level syntax

At the outer level tnc is expecting a sequence of constructs of the following forms:

  • an included file,
  • a token declaration,
  • a token definition,
  • an alignment tag declaration,
  • an alignment tag definition,
  • a tag declaration,
  • a tag definition.

3.6. Included files

Included files may be of three types - text, TDF capsule or TDF library. For TDF capsules and libraries there are two include modes. The first just decodes the given capsule or set of capsules. The second scans through them to extract token declaration information. These declarations appear in the output file only if they are used elsewhere.

The syntax for an included text file is:

	( include string )
where string is a string giving the pathname of the file to be included. tnc applies read to this sub-file before continuing with the present file.

Similarly, the syntaxes for included TDF capsules and libraries are:

	( code string )
	( lib string )
respectively. tnc applies decode to this capsule or set of capsules (provided this mode is available) before continuing with the present file.

The syntaxes for extracting the token declaration information from a TDF capsule or library are:

	( use_code string )
	( use_lib string )
Again, these rely on the decode mode being available.

3.7. Internal and external names

All tokens, tags and alignment tags have an internal name, namely the associated identifier, but this name does not necessarily appear in the corresponding TDF capsule. There must firstly be an associated declaration or definition at the outer level - tags internal to a piece of TDF do not have external names. Even then we may not wish this name to appear at the outer level, because it is local to this file and is not required for linking purposes. Alternatively we may wish a different external name to be associated with it in the TDF capsule.

As an example of how tnc allows for this, consider token declarations (although similar remarks apply to token definitions, alignment tag definitions etc.). The basic form of the token declaration is:

	( make_tokdec token_id ... )
This creates a token with both internal and external names equal to token_id. Alternatively:
	( local make_tokdec token_id ... )
creates a token with internal name token_id but no external name. This allows the creation of tokens local to the current file. Again:
	( make_tokdec ( string_extern string ) token_id ... )
creates a token with internal name token_id and external name given by the string string. For example, to create a token whose external name is not a valid identifier, it would be necessary to use this construct. Finally:
	( make_tokdec ( unique_extern string1 ... stringn ) token_id ... )
creates a token with internal name token_id and external name given by the unique name consisting of the strings string1, ..., stringn.

The local quantifier should be used consistently on all declarations and definitions of the token, tag or alignment tag. The alternative external name should only be given on the first occasion however. Thereafter the object is identified by its internal name.

3.8. Token declarations

The basic form of a token declaration is:

	( make_tokdec token_id ( arg1 ... argn ) res )
where the token token_id is declared to take argument sorts arg1, ..., argn and deliver the result sort res. These sorts are given by their sort names, al_tag, alignment, bitfield_variety etc. For a token with no arguments the declaration may be given in the form:
	( make_tokdec token_id res )
A token may be declared any number of times, provided the declarations are consistent.

This basic declaration may be modified in the ways outlined above to specify the external token name.

3.9. Token definitions

The basic form of a token definition is:

	( make_tokdef token_id ( arg1 id1 ... argn idn ) res def )
where the token token_id is defined to take formal arguments id1, ..., idn of sorts arg1, ..., argn respectively and have the value def, which is a construct of sort res. The scope of the tokens id1, ..., idn is def.

For a token with no arguments the definition may be given in the form:

	( make_tokdef token_id res def )
A token may be defined more than once. All definitions must be consistent with any previous declarations and definitions (the renaming of formal arguments is allowed however).

This basic definition may be modified in the ways outlined above to specify the external token name.

3.10. Alignment tag declarations

The basic form of an alignment tag declaration is:

	( make_al_tagdec al_tag_id )
where the alignment tag al_tag_id is declared to exist.

This basic declaration may be modified in the ways outlined above to specify the external alignment tag name.

3.11. Alignment tag definitions

The basic form of an alignment tag definition is:

	( make_al_tagdef al_tag_id def )
where the alignment tag al_tag_id is defined to be def, which is a construct of sort alignment. An alignment tag may be declared or defined more than once, provided the definitions are consistent.

This basic definition may be modified in the ways outlined above to specify the external alignment tag name.

3.12. Tag declarations

The basic forms of a tag declaration are:

	( make_id_tagdec tag_id info dec )
	( make_var_tagdec tag_id info dec )
	( common_tagdec tag_id info dec )
where the tag tag_id is declared to be an identity, variable or common tag with access information info, which is an optional construct of sort access, and shape dec, which is a construct of sort shape. A tag may be declared more than once, provided all declarations and definitions are consistent (including agreement of whether the tag is an identity, a variable or common).

These basic declarations may be modified in the ways outlined above to specify the external tag name.

3.13. Tag definitions

The basic forms of a tag definition are:

	( make_id_tagdef tag_id def )
	( make_var_tagdef tag_id info def )
	( common_tagdef tag_id info def )
where the tag tag_id is defined to be an identity, variable or common tag with value def, which is a construct of sort exp. Non-identity tag definitions also have an optional access construct, info. A tag must have been declared before it is defined, but may be defined any number of times. All declarations and definitions must be consistent (except that common tags may be defined inconsistently) and agree on whether the tag is an identity, a variable, or common.

These basic definitions may be modified in the ways outlined above to specify the external tag name.


4. Shape checking

The input in read (and to a lesser extent decode) mode is checked for shape correctness if the -check or -c flag is passed to tnc. This is not guaranteed to pick up all shape errors, but is better than nothing.

When in write mode the results of the shape checking may be viewed by passing the -cv flag to tnc. Each expression is associated with its shape by means of the:

	( exp_with_shape exp shape ) -> exp
pseudo-construct. Unknown shapes are indicated by ....


5. Remarks

The target independent TDF capsules produced by the C -> TDF compiler, tcc, do not contain declarations or definitions for all the tokens they use. Thus tnc cannot fully decode them as they stand. However the necessary token declaration information may be made available to tnc by using the use_lib construct. The commands:

	( use_lib library )
	( code capsule )
will decode the TDF capsule capsule which uses tokens defined in the TDF library library.


6. Limitations

The main limitations in the current version of tnc are as follows:

  • There is no error recovery,
  • There is no support for foreign sorts,
  • The support for tokenised tokens is limited and undocumented.

In addition, far more of the checks (scopes, shape checking, checking of consistency of declarations and definitions etc.) are implemented in read mode than in decode mode. To shape check a TDF capsule, it will almost certainly be more effective to translate it into text and check that.

Another limitation is that the scoping rules for local tags do not allow such tags to be accessed outside their scopes using env_offset.


7. Manual Page for tnc

Here is the manual page for tnc.

NAME: tnc - TDF notation compiler

SYNOPSIS: tnc [ options ] input-file [ output-file ]

DESCRIPTION: tnc translates TDF capsules to and from text. It has two input modes, read and decode. In the first, which is the default, input-file is a file containing TDF text. In the second, input-file is a TDF capsule. There are also two output modes, encode and write. In the first, which is the default, a TDF capsule is written to output-file (or the standard output if this argument is absent). In the second, TDF text is written to output-file.

Combinations of these modes give four actions: text to TDF (which is the default), TDF to text, text to text and TDF to TDF. The last two actions are not precise identities, but they do give equivalent files.

The form of the TDF text format and more information about tnc can be found in the document The TDF Notation Compiler.

OPTIONS:

-c or -cv or -check Specifies that tnc should apply extra checks to input-file. For example, simple shape checking is applied. These checks are more efficient in read mode than in decode mode. If the -cv option is used in write mode, all the information gleaned from the shape checking appears in output-file.

-d or -decode Specifies that tnc should be in decode mode. That is, that input-file is a TDF capsule.

-e or -encode Specifies that tnc should be in encode mode. That is, that output-file is a TDF capsule.

-help subject ... Makes tnc print its help message on the given subject(s). If no subject is given, all the help messages are printed.

-Idir Adds the directory dir to the search path used by tnc to find included files in read mode.

-l or -lib In decode mode, specifies that input-file is not a TDF capsule, but a TDF library. All the capsules comprising the library are decoded.

-o output-file Gives an alternative method of specifying the output file.

-p or -print Specifies that tnc should be in decode and write modes. That is, that input-file is a TDF capsule and output-file should consist of TDF text. This option makes tnc into a TDF pretty-printer.

-q Specifies that tnc should not check duplicate tag declarations etc for consistency, but should use the first declaration given.

-r or -read Specifies that tnc should be in read mode. That is, that input-file should consist of TDF text.

-V In write mode, specifies that the output should be in the "verbose" form, with no shorthand forms.

-version Makes tnc print its version number.

-w or -write Specifies that tnc should be in write mode. That is, that output-file should consist of TDF text.

SEE ALSO: tdf(1tdf).




tspec - An API Specification Tool

January 1998



1 - Introduction
2 - Overview of tspec
2.1 - Specification Levels
2.2 - Input Layout
2.3 - Output Layout
2.4 - Copyright Messages
2.5 - Command-line Options
3 - Specifying API Structure
3.1 - +SUBSET
3.2 - +IMPLEMENT and +USE
4 - Specifying Objects
4.1 - Object Names
4.2 - +FUNC
4.3 - +EXP and +CONST
4.4 - +MACRO
4.5 - +STATEMENT
4.6 - +DEFINE
4.7 - +TYPE
4.8 - +TYPEDEF
4.9 - +FIELD
4.10 - +NAT
4.11 - +ENUM
4.12 - +TOKEN
5 - Other tspec Constructs
5.1 - +IF, +ELSE and +ENDIF
5.2 - Quoted Text
5.3 - C Comments
5.4 - File Properties
6 - Miscellaneous Topics
6.1 - Fine Control of Included Files
6.2 - Protection Macros
6.3 - Index Printing
6.4 - TDF Library Building
7 - Changes in tspec 2.0
8 - References

1. Introduction

As explained in reference 1, TDF may be regarded as an abstract target machine which can be used to facilitate the separation of target independent and target dependent code which characterises portable programs. An important aspect of this separation is the Application Programming Interface, or API, of the program. Just as, for a conventional machine, the API needs to be implemented on that machine before the program can be ported to it, so for that program to be ported to the abstract TDF machine, an "abstract implementation" of the API needs to be provided.

But of course, an "abstract implementation" is precisely what is provided by the API specification - it is an abstraction of all the possible API implementations. Therefore the TDF representation of an API must reflect the API specification. As a consequence, compiling a program to the abstract TDF machine checks it against the API specification rather than, as when compiling to a conventional machine, against at best a particular implementation of that API.

In this document we address the problem of how to translate a standard API specification into its TDF representation, by describing a tool, tspec, which has been developed for this purpose.

The low-level form which is used to represent APIs to the C to TDF producer is the #pragma token syntax described in reference 3. However, this is not a suitable form in which to describe API specifications: the #pragma token syntax is necessarily complex, and can only be checked through extensive testing using the producer. Instead an alternative form, close to C, has been developed for this purpose. API specifications in this form are transformed by tspec into the corresponding #pragma token statements, with tspec applying various internal checks to the API description as it does so.

Another reason for introducing tspec is that the #pragma token syntax is currently limited in some areas. For example, at present it has very limited support for expressing constancy of expressions. By allowing the tspec syntax to express this information, the API description will contain all the information which may be needed in future upgrades to the #pragma token syntax. Thus describing an API using tspec is hopefully a one-off process, whereas describing it directly in the #pragma token syntax could require periodic reworkings. Improvements in the #pragma token syntax will be reflected in the translations produced by future versions of tspec.

The tspec syntax is not designed to be a formal specification language. Instead it is a pragmatic attempt to capture the common specification idioms of standard API specifications. A glance at these specifications shows that they are predominantly C-based, but with an added layer of abstraction - instead of saying that t is a specific C type, they say that there exists a type t, and so on. The tspec syntax is designed to reflect this.


2. Overview of tspec

2.1. Specification Levels

Let us begin by examining the various levels of specification with which tspec is concerned. At the lowest level it is concerned with objects - the types, expressions, constants etc. which comprise the API - and indeed most of this document is concerned with how tspec describes these objects. At the highest level, tspec is concerned with APIs. We could just describe an API as a set of objects; however, this would ignore the internal structure of APIs.

At the most obvious level the objects in an API are spread over a number of different system headers. For example, in ANSI, the objects concerned with file input and output are grouped in stdio.h, whereas those concerned with string manipulation are in string.h. But a further level of refinement is also required. For example, ANSI specifies that the type size_t is defined in both stdio.h and string.h. Therefore tspec needs to be able to represent subsets of headers in order to express this intersection relation.

To conclude, tspec distinguishes four levels of specification - APIs (which are sets of headers), headers (which are sets of objects), subsets of headers, and objects. Each API is identified by a name chosen by the person performing the API description. The (purely arbitrary) convention is for short, lower case names, for example:

  • ansi refers to ANSI C (X3.159),
  • posix refers to POSIX 1003.1,
  • xpg3 refers to X/Open Portability Guide 3.

In this document, headers are identified by the API they belong to and the header name. Thus ansi:stdio.h refers to the stdio.h header of the ANSI API. Finally subsets of headers are identified by the header and the subset name. If, for example, the stdio.h header of ANSI has a subset named file, then this is referred to as ansi:stdio.h:file.

2.2. Input Layout

The tspec representation of an API is arranged as a directory with the same name as the API, containing a number of files, one for each API header. For example, the ANSI API is represented by a directory ansi containing files ansi/stdio.h, ansi/string.h etc. In addition each API directory contains a master file (for ANSI it would be called ansi/MASTER) which lists all the headers comprising that API.

When tspec needs to find an API directory it does so by searching along its input directory path. This is a colon-separated list of directories to be searched, and it may be specified in a number of ways. A default search list is built into tspec, but this may be overridden by the system variable TSPEC_INPUT. Directories may be added to the start of the path using the -Idir command-line option (see section 2.5 for a complete list of options). The current working directory is always added to the start of the path.
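As a minimal sketch (assuming a Bourne-style shell and invented directory names, since none are prescribed by tspec), the search path might be set up and used as follows:

	TSPEC_INPUT=/usr/local/lib/tspec:/home/me/apis
	export TSPEC_INPUT
	tspec -I../extra-apis ansi

Here tspec would look for the ansi API directory in the current directory and in ../extra-apis before trying the two directories named by TSPEC_INPUT.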

2.3. Output Layout

tspec produces two sets of output files: the include output files, containing the #pragma token directives corresponding to the input API, and the source output files, which provide a rig for TDF library building (see section 6.4). These output files and directories are built up under two standard output directories - the include output directory, incl_dir say, and the source output directory, src_dir say. tspec has default values for these directories built in, but these may be overridden in a number of ways. Firstly, if the system variable TSPEC_OUTPUT is defined to be dir, say, then incl_dir is dir/include and src_dir is dir/src. Secondly, incl_dir and src_dir can be set independently using the system variables TSPEC_INCL_OUTPUT and TSPEC_SRC_OUTPUT respectively. Finally, they may also be set using the -Odir and -Sdir command-line options respectively.
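For example (the directory names are again invented), the output directories could be fixed either through the single system variable or explicitly on the command line:

	TSPEC_OUTPUT=/home/me/tspec-output
	export TSPEC_OUTPUT
	tspec ansi

	tspec -O/home/me/include -S/home/me/src ansi

The first form puts the include output under /home/me/tspec-output/include and the source output under /home/me/tspec-output/src; the second sets the two directories independently.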

As an example of the mapping from input files to output files, the header ansi:stdio.h is mapped to the include output file incl_dir/ansi.api/stdio.h and the source output file src_dir/ansi.api/stdio.c. The header subset ansi:stdio.h:file is mapped to its own pair of output files, incl_dir/shared/ansi.api/file.h and src_dir/ansi.api/file.c.

The default output file names can be overridden by means of the INCLNAME and SOURCENAME file properties described in section 5.4.

By default, tspec only creates an output file if the date stamps on all the input files it depends on indicate that it needs updating. In effect, tspec creates an internal makefile from the dependencies it deduces. This behaviour can be overridden by means of the -f command-line option, which forces all output files to be created.

In addition, tspec only creates the source output file if it is needed for TDF library building. If the corresponding include output file does not contain any token specifications then the source output file is suppressed (see section 6.4).

2.4. Copyright Messages

tspec will optionally add a copyright message to the start of each include output file. This message is copied from a file which may be specified either using the TSPEC_COPYRIGHT system variable, or by the -Cfile command-line option.

2.5. Command-line Options

There are three main forms for invoking tspec on the command-line, depending on whether it is desired to process an entire API, a single header from that API, or only a subset of that header. These are given respectively as:

	tspec [options] api
	tspec [options] api header
	tspec [options] api header subset

The valid options include:

  • The option -Cfile specifies the copyright message file (see section 2.4).
  • The option -Idir adds a directory to the input directory search path (see section 2.2).
  • The option -Odir specifies the include output directory (see section 2.3).
  • The option -Sdir specifies the source output directory (see section 2.3).
  • The -c option causes tspec to only check the input files and not to generate any output files.
  • The -e option causes tspec only to run its preprocessor phase, writing the result to the standard output.
  • The -f option forces tspec to create all output files regardless of date stamps.
  • The -i option causes tspec to print an index of all the objects in the input files (see section 6.3).
  • The -p option indicates to tspec that its input has already been preprocessed (i.e. it is the output of a previous -e option).
  • The -r option causes tspec to only produce output for implemented objects, and not used objects (see section 3.2).
  • The -s option causes tspec to check all the headers in an API separately rather than, as with the -c option, all at once.
  • The -u option causes tspec to generate unique token names for the specified objects (see section 4.1.1).
  • The -v option causes tspec to enter verbose mode, in which it reports on the output files it creates. If two -v options are given then tspec enters very verbose mode, in which it gives more information on its activities.
  • The -V option causes tspec to print its current version number (this document refers to version 2.0).

In addition tspec has a local input mode for translating single headers which are not part of an API into the corresponding #pragma token statements. The form:

	tspec [options] -l file
processes the input file file, writing the include output file to the standard output.
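For example, the forms above might be used as follows (the subset name file is the one used as an illustration in section 2.3, and local.h is an invented header name):

	tspec ansi
	tspec -v ansi stdio.h
	tspec ansi stdio.h file
	tspec -l local.h

These process, respectively, the whole ansi API, the single header ansi:stdio.h (reporting on the files created), the subset ansi:stdio.h:file, and the single local header local.h.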


3. Specifying API Structure

The basic form of the tspec description of an API has already been explained in section 2.2 - it is a directory containing a set of files corresponding to the headers in that API. Each file basically consists of a list of the objects declared in that header. Each object specification is part of a tspec construct. These constructs are identified by keywords. These keywords always begin with + to avoid conflict with C identifiers. Comments may be inserted at any point. These are prefixed by # and run to the end of the line.

In addition to the basic object specification constructs, tspec also has constructs for imposing structure on the API description. It is these constructs that we consider first.

3.1. +SUBSET

A list of tspec constructs within a header can be grouped into a named subset by enclosing them within:

	+SUBSET "name" := {
	    ....
	} ;
where name is the subset name. These named subsets can be nested, but are still regarded as subsets of the parent header.

Subsets are intended to give a layer of resolution beyond that of the entire header (see section 2.1). Each subset is mapped onto a separate pair of output files, so unwary use of subsets is discouraged.

3.2. +IMPLEMENT and +USE

tspec has two import constructs which allow one API, or header, or subset of a header to be included in another. The first construct is used to indicate that the given set of objects is also declared in the including header, and takes one of the forms:

	+IMPLEMENT "api" ;
	+IMPLEMENT "api", "header" ;
	+IMPLEMENT "api", "header", "subset" ;
The second construct is used to indicate that the objects are only used in the including header, and takes one of the forms:
	+USE "api" ;
	+USE "api", "header" ;
	+USE "api", "header", "subset" ;
For example, posix:stdio.h is an extension of ansi:stdio.h, so, rather than duplicate all the object specifications from the latter in the former, it is easier and clearer to use the construct:
	+IMPLEMENT "ansi", "stdio.h" ;
and just add the extra objects specified by POSIX. Note that this makes the relationship between the APIs ansi and posix absolutely explicit. tspec is as much concerned with the relationships between APIs as their actual contents.

Objects which are specified as being declared in more than one header of an API should also be treated using +IMPLEMENT. For example, the type size_t is declared in a number of ansi headers, namely stddef.h, stdio.h, string.h and time.h. This can be handled by declaring size_t as part of a named subset of, say, ansi:stddef.h:

	+SUBSET "size_t" := {
	    +TYPE (unsigned) size_t ;
	} ;
and including this in each of the other headers:
	+IMPLEMENT "ansi", "stddef.h", "size_t" ;

Another use of +IMPLEMENT is in the MASTER file used to list the headers in an API (see section 2.2). This basically consists of a list of +IMPLEMENT commands, one per header. For example, with ansi it consists of:

	+IMPLEMENT "ansi", "assert.h" ;
	+IMPLEMENT "ansi", "ctype.h" ;
	....
	+IMPLEMENT "ansi", "time.h" ;

To illustrate +USE, posix:sys/stat.h uses some types from posix:sys/types.h but does not define them. To avoid the user having to include both headers it makes sense for the description to include the latter in the former (provided there are no namespace restrictions imposed by the API). This would be done using the construct:

	+USE "posix", "sys/types.h" ;

On the command-line tspec is given one set of objects, be it an API, a header, or a subset of a header. This causes it to read that set, which may contain +IMPLEMENT or +USE commands. It then reads the sets indicated by these commands, which again may contain +IMPLEMENT or +USE commands, and so on. It is possible for this process to lead to infinite cycles, but in this case tspec raises an error and aborts. In the legal case, the collection of sets read by tspec is the closure of the set given on the command-line under +IMPLEMENT and +USE. Some of these sets will be implemented - that is to say, connected to the top level by a chain of +IMPLEMENT commands - others will merely be used. By default tspec produces output for all these sets, but specifying the -r command-line option restricts it to the implemented sets.

For further information on the +IMPLEMENT and +USE commands see section 6.1.


4. Specifying Objects

The main body of any tspec description of an API consists of a list of object specifications. Most of this section is concerned with the various tspec constructs for specifying objects of various kinds; however, we start with a few remarks on object names.

4.1. Object Names

4.1.1. Internal and External Names

Every object specified using tspec actually has two names. The first is the internal name by which it is identified within the program; the second is the external name by which the TDF construct (actually a token) representing the object is referred to for the purposes of TDF linking. The internal names are normal C identifiers and obey the normal C namespace rules (indeed one of the roles of tspec is to keep track of these namespaces). The external token name is constructed by tspec from the internal name.

tspec has two strategies for making up these token names. The first, which is the default, is to use the internal name as the external name (there is an exception to this simple rule, namely field selectors - see section 4.9). The second, which is preferred for standard APIs, is to construct a "unique name" from the API name, the header and the internal name. For example, under the first strategy, the external name of the type FILE specified in ansi:stdio.h would be FILE, whereas under the second it would be ansi.stdio.FILE. The unique name strategy may be specified by passing the -u command-line option to tspec (see section 2.5) or by setting the UNIQUE property to 1 (see section 5.4).

Both strategies involve flattening the several C namespaces into the single TDF token namespace, which can lead to clashes. For example, in posix:sys/stat.h both a structure, struct stat, and a procedure, stat, are specified. In C the two uses of stat are in different namespaces and so present no difficulty; however, they are mapped onto the same name in the TDF token namespace. To work round such difficulties, tspec allows an alternative external form to be specified. When the object is specified the form:

	iname | ename
may be used to specify the internal name iname and the external name ename.

For example, in the stat case above we could distinguish between the two uses as follows:

	+TYPE struct stat | struct_stat ;
	+FUNC int stat ( const char *, struct stat * ) ;
With simple token names the token corresponding to the structure would be called struct_stat, whereas that corresponding to the procedure would still be stat. With unique token names the names would be posix.stat.struct_stat and posix.stat.stat respectively.

Very occasionally it may be necessary to precisely specify an external token name. This can be done using the form:

	iname | "ename"
which makes the object iname have external name ename regardless of the naming strategy used.

4.1.2. More on Object Names

Basically the legal identifiers in tspec (for both internal and external names) are the same as those in C - strings of upper and lower case letters, decimal digits or underscores, which do not begin with a decimal digit. However there is a second class of local identifiers - those consisting of a tilde followed by any number of letters, digits or underscores - which are intended to indicate objects which are local to the API description and should not be visible to any application using the API. For example, to express the specification that t is a pointer type, we could say that there is a locally named type to which t is a pointer:

	+TYPE ~t ;
	+TYPEDEF ~t *t ;

Finally it is possible to cheat the tspec namespaces. It may actually be legal to have two objects of the same name in an API - they may lie in different branches of a conditional compilation, or not be allowed to coexist. To allow for this, tspec allows version numbers, consisting of a decimal point plus a number of digits, to be appended to an identifier name when it is first introduced. These version numbers are purely to tell tspec that this version of the object is different from a previous version with a different version number (or indeed without any version number). If more than one version of an object is specified then which version is retrieved by tspec in any look-up operation is undefined.
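As a purely illustrative sketch (the object name and the condition are invented), two incompatible versions of the same constant could be introduced in different branches of a conditional compilation, using the constructs of section 5.1, as:

	+IF %% defined ( _OLD_API ) %%
	+CONST int MAX_NAME.1 ;
	+ELSE
	+CONST long MAX_NAME.2 ;
	+ENDIF

The version numbers .1 and .2 simply tell tspec that the two specifications of MAX_NAME are distinct objects.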

4.2. +FUNC

The simplest form of object to specify is a procedure. This is done by means of:

	+FUNC prototype ;
where prototype is the full C prototype of the procedure being declared. For example, ansi:string.h contains:
	+FUNC char *strcpy ( char *, const char * ) ;
	+FUNC int strcmp ( const char *, const char * ) ;
	+FUNC size_t strlen ( const char * ) ;

Strictly speaking, +FUNC means that the procedure may be implemented by a macro, but that there is an underlying library function with the same effect. The exception is for procedures which take a variable number of arguments, such as:

	+FUNC int fprintf ( FILE *, const char *, ... ) ;
which cannot be implemented by macros. Occasionally it may be necessary to specify that a procedure is only a library function, and cannot be implemented by a macro. In this case the form:
	+FUNC (extern) prototype ;
should be used. Thus:
	+FUNC (extern) char *strcpy ( char *, const char * ) ;
would mean that strcpy was only a library function and not a macro.

Increasingly standard APIs are using prototypes to express their procedures. However it still may be necessary on occasion to specify procedures declared using old style declarations. In most cases these can be easily transcribed into prototype declarations, however things are not always that simple. For example, xpg3:stdlib.h declares malloc by the old style declaration:

	void *malloc ( sz )
	size_t sz ;
which is in general different from the prototype:
	void *malloc ( size_t ) ;
In the first case the argument is passed as the integral promotion of size_t, whereas in the second it is passed as a size_t. In general we only know that size_t is an unsigned integral type, so we cannot assert that it is its own integral promotion. One possible solution would be to use the C to TDF producer's weak prototypes (see reference 3). The form:
	+FUNC (weak) void *malloc ( size_t ) ;
means that malloc is a library function returning void * which is declared using an old style declaration with a single argument of type size_t. (For an alternative approach see section 4.8.)

4.3. +EXP and +CONST

Expressions correspond to constants, identities and variables. They are specified by:

	+EXP type exp1, ..., expn ;
where type is the base type of the expressions expi as in a normal C declaration list. For example, in ansi:stdio.h:
	+EXP FILE *stdin, *stdout, *stderr ;
specifies three expressions of type FILE *.

By default all expressions are rvalues, that is, values which cannot be assigned to. If an lvalue (assignable) expression is required its type should be qualified using the keyword lvalue. This is an extension to the C type syntax which is used in a similar fashion to const. For example, ansi:errno.h says that errno is an assignable lvalue of type int. This is expressed as follows:

	+EXP lvalue int errno ;
On the other hand, posix:errno.h states that errno is an external value of type int. As with procedures the (extern) qualifier may be used to express this as:
	+EXP (extern) int errno ;
Note that this automatically means that errno is an lvalue, so the lvalue qualifier is optional in this case.

If all the expressions are guaranteed to be literal constants then one of the equivalent forms:

	+EXP (const) type exp1, ..., expn ;
	+CONST type exp1, ..., expn ;
should be used. For example, in ansi:errno.h we have:
	+CONST int EDOM, ERANGE ;

4.4. +MACRO

The +MACRO construct is similar in form to the +FUNC construct, except that it means that only a macro exists, and no underlying library function. For example, in xpg3:ctype.h we have:

	+MACRO int _toupper ( int ) ;
	+MACRO int _tolower ( int ) ;
since these are explicitly stated to be macros and not functions. Of course the (extern) qualifier cannot be used with +MACRO.

One thing which macros can do which functions cannot is to return assignable values or to assign to their arguments. Thus it is legitimate for +MACRO constructs to have their return type or argument types qualified by lvalue, whereas this is not allowed for +FUNC constructs. For example, in svid3:curses.h, a macro getyx is specified which takes a pointer to a window and two integer variables and assigns the cursor position of the window to those variables. This may be expressed by:

	+MACRO void getyx ( WINDOW *win, lvalue int y, lvalue int x ) ;

4.5. +STATEMENT

The +STATEMENT construct is very similar to the +MACRO construct except that, instead of being a C expression, it is a C statement (i.e. something ending in a semicolon). As such it does not have a return type and so takes one of the forms:

	+STATEMENT stmt ;
	+STATEMENT stmt ( arg1, ..., argn ) ;
depending on whether or not it takes any arguments. (A +MACRO without any arguments is an +EXP, so the no-argument form does not exist for +MACRO.) As with +MACRO, the argument types argi can be qualified using lvalue.
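As a purely illustrative sketch (these are not objects from any standard API), statement-like macros without and with arguments might be specified as:

	+STATEMENT clear_screen ;
	+STATEMENT move_cursor ( int, int ) ;

The first expands to a complete C statement taking no arguments; the second takes two integer arguments.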

4.6. +DEFINE

It is possible to insert macro definitions directly into tspec using the +DEFINE construct. This has two forms depending on whether the macro has arguments:

	+DEFINE name %% text %% ;
	+DEFINE name ( arg1, ..., argn ) %% text %% ;
These translate directly into:
	#define name text
	#define name( arg1, ..., argn ) text

The macro definition, text, consists of any string of characters delimited by double percents. If text is a simple number or a single identifier then the double percents may be omitted. Thus in ansi:stddef.h we have:

	+DEFINE NULL 0 ;

4.7. +TYPE

New types may be specified using the +TYPE construct. This has the form:

	+TYPE type1, ..., typen ;
where each typei has one of the forms:
  • name for a general type (about which we know nothing more),
  • (struct) name for a structure type,
  • (union) name for a union type,
  • struct name for a structure tag,
  • union name for a union tag,
  • (int) name for an integral type,
  • (signed) name for a signed integral type,
  • (unsigned) name for an unsigned integral type,
  • (float) name for a floating type,
  • (arith) name for an arithmetic (integral or floating) type,
  • (scalar) name for a scalar (arithmetic or pointer) type.

To make clear the distinction between structure types and structure tags, if we have in C:

	typedef struct tag { int x, y ; } type ;
then type is a structure type and tag is a structure tag.

For example, in ansi we have:

	+TYPE FILE ;
	+TYPE struct lconv ;
	+TYPE (struct) div_t ;
	+TYPE (signed) ptrdiff_t ;
	+TYPE (unsigned) size_t ;
	+TYPE (arith) time_t ;
	+TYPE (int) wchar_t ;

4.8. +TYPEDEF

It is also possible to define new types in terms of existing types. This is done using the +TYPEDEF construct, which is identical in form to the C typedef construct. This construct can be used to define pointer, procedure and array types, but not compound structure and union types. For these see section 4.9 below.

For example, in xpg3:search.h we have:

	+TYPE struct entry ;
	+TYPEDEF struct entry ENTRY ;

There are a couple of special forms. To understand the first, note that C uses void function returns for two purposes. Firstly to indicate that the function does not return a value, and secondly to indicate that the function does not return at all (exit is an example of this second usage). In TDF terms, in the first case the function returns TOP, in the second it returns BOTTOM. tspec allows types to be introduced which have the second meaning. For example, we could have:

	+TYPEDEF ~special ( "bottom" ) ~bottom ;
	+FUNC ~bottom exit ( int ) ;
meaning that the local type ~bottom is the BOTTOM form of void. The procedure exit, which never returns, can then be declared to return ~bottom rather than void. Other such special types may be added in future.

The second special form:

	+TYPEDEF ~promote ( x ) y ;
means that y is an integral type which is the integral promotion of x. x must have previously been declared as an integral type. This gives an alternative approach to the old style procedure declaration problem described in section 4.2. Recall that:
	void *malloc ( sz )
	size_t sz ;
means that malloc has one argument which is passed as the integral promotion of size_t. This could be expressed as follows:
	+TYPEDEF ~promote ( size_t ) ~size_t ;
	+FUNC void *malloc ( ~size_t ) ;
introducing a local type to stand for the integral promotion of size_t.

4.9. +FIELD

Having specified a structure or union type, or a structure or union tag, we may wish to specify certain fields of this structure or union. This is done using the +FIELD construct. This takes the form:

	+FIELD type {
	    ftype field1, ..., fieldn ;
	    ....
	} ;
where type is the structure or union type and field1, ..., fieldn are field selectors derived from the base type ftype as in a normal C structure definition. type may have one of the forms:
  • (struct) name for a structure type,
  • (union) name for a union type,
  • struct name for a structure tag,
  • union name for a union tag,
  • name for a previously declared structure or union type.

Except in the final case (where it is not clear if type is a structure or a union), it is not necessary to have previously introduced type using a +TYPE construct - this declaration is implicit in the +FIELD construct.

For example, in ansi:time.h we have:

	+FIELD struct tm {
	    int tm_sec ;
	    int tm_min ;
	    int tm_hour ;
	    int tm_mday ;
	    int tm_mon ;
	    int tm_year ;
	    int tm_wday ;
	    int tm_yday ;
	    int tm_isdst ;
	} ;
meaning that there exists a structure with tag tm with various fields of type int. Any implementation must have these corresponding fields, but they need not be in the given order, nor do they have to comprise the whole structure.

As was mentioned above (in 4.1.1), field selectors form a special case when tspec is making up external token names. For example, in the case above, the token name for the tm_sec field is either tm.tm_sec or ansi.time.tm.tm_sec, depending on whether or not unique token names are used.

It is possible to have several +FIELD constructs referring to the same structure or union. For example, posix:dirent.h declares a structure with tag dirent and one field, d_name, of this structure. xpg3:dirent.h extends this by adding another field, d_ino.

There is a second form of the +FIELD construct which has more in common with the +TYPEDEF construct. The form:

	+FIELD type := {
	    ftype field1, ..., fieldn ;
	    ....
	} ;
means that the type type is defined to be exactly the given structure or union type, with precisely the given fields in the given order.

4.10. +NAT

In the example given in section 4.9, posix:dirent.h specifies that the d_name field of struct dirent is a fixed sized array of characters, but that the size of this array is implementation dependent. We therefore have to introduce a value to stand for the size of this array using the +NAT construct. This has the form:

	+NAT nat1, ..., natn ;
where nat1, ..., natn are the array sizes to be declared. The example thus becomes:
	+NAT ~dirent_d_name_size ;
	+FIELD struct dirent {
	    char d_name [ ~dirent_d_name_size ] ;
	} ;
Note the use of a local variable to stand for a value, namely the array size, which is invisible to the user (see section 4.1.2).

As another example, in ansi:setjmp.h we know that jmp_buf is an array type. We therefore introduce objects to stand for the type which it is an array of and for the size of the array, and define jmp_buf by a +TYPEDEF command:

	+NAT ~jmp_buf_size ;
	+TYPE ~jmp_buf_elt ;
	+TYPEDEF ~jmp_buf_elt jmp_buf [ ~jmp_buf_size ] ;
Again, local variables have been used for the introduced objects.

4.11. +ENUM

Currently tspec only has limited support for enumeration types. A +ENUM construct is translated directly into a C definition of an enumeration type. The +ENUM construct has the form:

	+ENUM etype := {
	    entry,
	    ....
	} ;
where etype is the enumeration type being defined - either a type name or enum etag for some enumeration tag etag - and each entry has one of the forms:
	name
	name = number
as in a C enumeration type. For example, in xpg3:search.h we have:
	+ENUM ACTION := { FIND, ENTER } ;

4.12. +TOKEN

As was mentioned in section 1, the #pragma token syntax is highly complex, and the token descriptions output by tspec form only a small subset of those possible. It is possible to directly access the full #pragma token syntax from tspec using the construct:

	+TOKEN name %% text %% ;
where the token name is defined by the sequence of characters text, which is delimited by double percents. This is turned into the token description:
	#pragma token text name #

No checks are applied to text. A more sophisticated mechanism for defining complex tokens may be introduced in a later version of tspec.

For example, in ansi:stdarg.h a token va_arg is defined which takes a variable of type va_list and a type t and returns a value of type t. This is given by:

	+TOKEN va_arg %% PROC ( EXP lvalue : va_list : e, TYPE t ) EXP rvalue : t : %% ;
See reference 3 for more details on the token syntax.


5. Other tspec Constructs

Although most tspec constructs are concerned either with specifying new objects or imposing structure upon various sets of objects, there are a few which do not fall into these categories.

5.1. +IF, +ELSE and +ENDIF

It is possible to introduce conditional compilation into the API description by means of the constructs:

	+IF %% text %%
	+IFDEF %% text %%
	+IFNDEF %% text %%
	+ELSE
	+ENDIF
which are translated into:
	#if text
	#ifdef text
	#ifndef text
	#else /* text */
	#endif /* text */
respectively. If text is just a simple number or a single identifier the double percent delimiters may be excluded.

A couple of special +IFDEF (and also +IFNDEF) forms are available which are useful on occasion. These are:

	+IFDEF ~building_libs
	+IFDEF ~protect ( "api", "header" )
The macros in these constructs expand respectively to __BUILDING_LIBS which, by convention, is defined if and only if TDF library building is taking place (see section 6.4), and to the protection macro tspec makes up to protect the file api:header against multiple inclusion (see section 6.2).
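As a minimal sketch (the macro name and value are invented), a definition which should only appear during normal compilation, and not during TDF library building, could be guarded as:

	+IFNDEF ~building_libs
	+DEFINE NDEBUG 1 ;
	+ENDIF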

5.2. Quoted Text

It is sometimes desirable to include text in the specification file which will be copied directly into one of the output files - for example, sections of C. This can be done by enclosing the text for copying into the include output file in double percents:

	%% text %%
and text for copying into the source output file in triple percents:
	%%% text %%%

In fact more percents may be used. An even number always indicates text for the include output file, and an odd number the source output file. Note that any # characters in text are copied as normal, and not treated as comments. This also applies to the other cases where percent delimiters are used.

5.3. C Comments

A special case of quoted text are C style comments:

	/* text */
which are copied directly into the include output file.

5.4. File Properties

Various properties of individual sets of objects or global properties can be set using file properties. These take the form:

	$property = number ;
for numeric (or boolean) properties, and:
	$property = "string" ;
for string properties.

The valid property names are as follows:

  • APINAME is a string property which may be used to override the API name of the current set of objects.
  • FILE is a string property which is used by the tspec preprocessor to indicate the current input file name.
  • FILENAME is a string property which may be used to override the header name of the current set of objects.
  • INCLNAME is a string property which may be used to set the name of the include output file in place of the default name given in section 2.3. Setting the property to the empty string suppresses the output of this file.
  • INTERFACE is a numeric property which may be set to force the creation of the source output file and cleared to suppress it.
  • LINE is a numeric property which is used by the tspec preprocessor to indicate the current input file line number.
  • METHOD is a string property which may be used to specify alternative construction methods for TDF library building (see section 6.4).
  • PREFIX is a string property which may be used as a prefix to unique token names in place of the API and header names (see section 4.1.1).
  • PROTECT is a string property which may be used to set the macro used by tspec to protect the include output file against multiple inclusions (see section 6.2). Setting the property to the empty string suppresses this macro.
  • SOURCENAME is a string property which may be used to set the name of the source output file in place of the default name given in section 2.3. Setting the property to the empty string suppresses the output of this file.
  • SUBSETNAME is a string property which may be used to override the subset name of the current set of objects.
  • UNIQUE is a numeric property which may be used to switch the unique token name flag on and off (see section 4.1.1). For standard APIs it is recommended that this property is set to 1 in the API MASTER file.
  • VERBOSE is a numeric property which may be used to set the level of the verbose option (see section 2.5).
  • VERSION is a string property which may be used to assign a version number or other identification to a tspec description. This information is reproduced in the corresponding include output file.
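As a sketch of how these properties might appear (the version string is invented), the start of a standard API's MASTER file could switch on unique token names and record an identification string as follows:

	$UNIQUE = 1 ;
	$VERSION = "ansi API description, issue 1.0" ;
	+IMPLEMENT "ansi", "assert.h" ;
	....

This follows the recommendation above that UNIQUE be set to 1 in the MASTER file of a standard API.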


6. Miscellaneous Topics

In this section we round up a few miscellaneous topics.

6.1. Fine Control of Included Files

The +IMPLEMENT and +USE commands described in section 3.2 are capable of further refinement. Normally each such command is translated into a corresponding inclusion command in both the include and source output files. Occasionally this is not desirable - in particular the inclusion in the source output file can cause problems during TDF library building. For this reason the tspec syntax has been extended to allow for fine control of the output corresponding to +IMPLEMENT and +USE commands. This takes the forms:

	+IMPLEMENT "api" (key) ;
	+IMPLEMENT "api", "header" (key) ;
	+IMPLEMENT "api", "header", "subset" (key) ;
with corresponding forms for +USE. key specifies which output files the inclusion commands should appear in. It can be:
  • ??, indicating neither output file,
  • !?, indicating the include output file only,
  • ?!, indicating the source output file only,
  • !!, indicating both output files (this is the same as the normal form).

The second refinement comes from the fact that APIs fall into two categories - the base APIs, such as ansi, posix and xpg3, and the extension APIs, such as x11, the X Windows API. The latter can be used to extend the former, so that we can form ansi plus x11, posix plus x11, and so on. Base APIs may be distinguished in tspec by including the command:

	+BASE_API ;
in their MASTER file. Occasionally, in an extension API, we may wish to include a version of a header from the base API, but, because this base API is not fixed, not be able to use a simple +USE command. Instead the special form:
	+USE ( "api" ), "header" ;
is provided for this purpose (this is the only permitted form). It indicates that tspec should use the api version of header for checking purposes, but allow the inclusion of the version from the base API in normal use.

6.2. Protection Macros

Each include output file is surrounded by a construct of the form:

	#ifndef MACRO
	#define MACRO
	....
	#endif /* MACRO */
to protect it against multiple inclusions. Normally tspec will generate the macro name, MACRO, but it can be set using the PROTECT file property (see section 5.4). Setting PROTECT to the empty string suppresses the protection construct altogether. (Also see section 5.1.)
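For example (the macro name is purely illustrative), a header description might set or suppress the protection macro as follows:

	$PROTECT = "ANSI_STDIO_H_INCLUDED" ;

	$PROTECT = "" ;

The first form makes tspec protect the include output file with ANSI_STDIO_H_INCLUDED rather than a generated name; the second suppresses the protection construct altogether.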

6.3. Index Printing

If it is invoked with the -i command-line option, instead of creating its output file, tspec prints an index of all the objects it has read to the standard output. This information includes the external token name associated with the object, whether the object is implemented or used, and where in the API description it is defined. It also includes a brief description of the object. It is intended that these indexes should be usable as quick reference guides to the underlying APIs.

6.4. TDF Library Building

As was explained in reference 1, the #pragma token headers output by tspec are used for two purposes - checking applications against the API during normal compilation and checking implementations against the API during TDF library building. This dual use does necessitate some extra work for tspec. It is not always possible to use exactly the same code in the two cases (usually because the C rules on, for example, structure definitions get in the way during library building). tspec uses a standard macro, __BUILDING_LIBS, to distinguish between the two cases. It is assumed to be defined if and only if library building is taking place. tspec descriptions can access this macro directly using ~building_libs (see section 5.1).

The actual library building process consists of compiling the #pragma token descriptions of the objects comprising the API along with the implementation of that API from the system headers (or wherever). This creates the local token definitions for this API, which may be stored in a token library. To facilitate this process tspec creates the source output files for each implemented header api:header containing something like:

	#pragma implement interface <../api/header>
	#include <header>
together with a makefile to compile all these programs to token definitions and to combine these token definitions into a token library. In fact two makefiles are created in the source output directory (see section 2.3). The first is called M_api and is designed for stand-alone library construction. The second is called Makefile and is designed for use with the library building script MAKE_LIBS provided with tspec.

There are other methods whereby the source output file may be changed into a set of token definitions. For example, in c:sys.h the METHOD file property (see section 5.4) is set to TDP, causing the tdp program to be invoked to produce the definitions for the basic C tokens for the system. As another example consider:

	$METHOD = "TNC" ;
	+MACRO double fl_abs ( double ) ;
	%%%
	    ( make_tokdef fl_abs ( exp x ) exp
		( floating_abs impossible x ) )
	%%%

The include output file will specify a token fl_abs which takes a double and returns a double. The TNC method tells MAKE_LIBS that the source output file, which will just contain the quoted text:

	( make_tokdef fl_abs ( exp x ) exp
	    ( floating_abs impossible x ) )
is an input file for the TDF notation compiler, tnc (see reference 2). Thus we have defined a token which directly accesses the TDF floating_abs construct.


7. Changes in tspec 2.0

This document describes tspec version 2.0. tspec 2.0 contains significant changes from previous releases. For convenience the main changes which are visible to the tspec user are listed here:

  • The added specification level of named subsets of headers has been introduced (see section 2.1). This has been done by introducing the +SUBSET construct and extending the +IMPLEMENT and +USE constructs, as well as the command-line options. The previous method of dealing with such subsets - namely shared headers - is now obsolete and its use is discouraged.
  • A number of new command-line options have been added, and some of the existing options have been modified slightly (see section 2.5).
  • The suffix .api has been added to the output directories (see section 2.3) to avoid possible confusion with other include file directories.
  • The use of identifiers beginning with ~ as local variables is new (see section 4.1.2).
  • The +STATEMENT and +DEFINE constructs (see section 4.5 and section 4.6) are new.
  • The (extern), (weak) and (const) qualifiers for +FUNC and +EXP (see section 4.2 and section 4.3) are new.
  • The (signed) and (unsigned) qualifiers for +TYPE (see section 4.7) are new.
  • The ~special type constructor (see section 4.8) is new.
  • The ~abstract type constructor has been abandoned.
  • The +BASE_API command described in section 6.1 is new.
  • The indexing routines (see section 6.3) have been greatly improved.


8. References

"TDF and Portability", DRA, 1993.

"The TDF Notation Compiler", DRA, 1993.

"The C to TDF Producer", DRA, 1993.




calculus Users' Guide

January 1998



1 - Introduction
1.1 - Using calculus
1.2 - Example program
2 - Input syntax
2.1 - Primitives
2.2 - Identities
2.3 - Enumerations
2.4 - Structures
2.5 - Unions
2.6 - Type constructors
2.7 - Relations between algebras
3 - Output type system specification
3.1 - Version information
3.2 - Basic types
3.3 - Operations on sizes
3.4 - Operations on pointers
3.5 - Operations on lists
3.6 - Operations on stacks
3.7 - Operations on vectors
3.8 - Operations on vector pointers
3.9 - Operations on primitives
3.10 - Operations on identities
3.11 - Operations on enumerations
3.12 - Operations on structures
3.13 - Operations on unions
4 - Implementation details
4.1 - Implementation of types
4.2 - Support routines
4.3 - Run-time checking
5 - Disk reading and writing
5.1 - Disk writing routines
5.2 - Disk reading routines
5.3 - Object printing routines
5.4 - Aliasing
5.5 - Application to calculus
6 - Template files

1. Introduction

This document describes a tool, calculus, which allows complex type systems to be described in a simple algebraic format, and transforms this into a system of C types which implements this algebra.

calculus was initially written as a design tool for use with the TenDRA C and C++ producers. The producers' internal datatypes reflect the highly complex interdependencies between the C and C++ language concepts (types, expressions and so on) which they were designed to represent. A tool for managing this complexity and allowing changes to the internal datatypes to be made with confidence was therefore seen to be an absolute necessity. The tool can also be applied in similar situations where a complex type system needs to be described.

The tool also provides for a separation between the specification of the type system and its implementation in terms of actual C types. This separation has a number of advantages. The advantages of maintaining a level of abstraction in the specification are obvious. It is possible to apply extremely rigorous type checking to any program written using the tool, thereby allowing many potential errors to be detected at compile time. It is also possible to restrict access to the types to certain well-specified constructs, thereby increasing the maintainability of code written using the tool.

This separation also has beneficial effects on the implementation of the type system. By leaving all the type checking aspects at the specification level, it is possible to impose an extremely homogeneous underlying implementation. This means, for example, that a single memory management system can be applied to all the types within the system. It also opens the possibility of writing operations which apply to all types within the system in a straightforward manner. The disk reading and writing routines described below are an example of this.

1.1. Using calculus

The general form for invoking calculus is as follows:

	calculus [ options ] input .... [ output ]
where input is a file containing the input type system description and output is the directory where the files implementing this system are to be output. This directory must already exist. If no output argument is given then the current working directory is assumed. Note that several input files can be given. Unless otherwise specified it is the last file which is used to generate the output.

The form in which the input type systems are expressed is described below. The form of the output depends on the options given. By default, the C implementation of the type system is output. If the -t option is passed to calculus then #pragma token statements describing the type system specification are output. This specification is given below, with some of the more important implementation details being described in the following section.
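As a minimal sketch (the file and directory names are invented), the two forms of output might be generated as:

	calculus calc.alg obj
	calculus -t calc.alg obj

The first writes the C implementation of the type system described in calc.alg into the existing directory obj; the second writes the #pragma token specification of that type system instead.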

Note that it is necessary to check any program written using calculus against the #pragma token version of the specification in order to get the full benefits of the rigorous type checking provided by the tool. Some sections of the program (if only the basic functions) may be dependent upon the implementation, and so not be suitable for this form of checking.

1.2. Example program

The program calculus itself was written using a type system specified with the calculus tool. It is designed to provide an example of its own application, with some features not strictly necessary for the functionality of the program being added for illustrative purposes.


2. Input syntax

The overall input file format is as follows:

	algebra :
		ALGEBRA identifier versionopt : item-listopt


	version :
		( integer . integer )

	item-list :
		item
		item-list item

	item :
		primitive
		identity
		enumeration
		structure
		union
The initial identifier gives the overall name of the algebra. A version number may also be associated with the algebra (if this is omitted the version is assumed to be 1.0). The main body of the algebra definition consists of a list of items describing the primitives, the identities, the enumerations, the structures and the unions comprising the algebra.

Here identifier has the same meaning as in C. The only other significant lexical units are integer, which consists of a sequence of decimal digits, and string, which consists of any number of characters enclosed in double quotes. There are no escape sequences in strings. C style comments may be used anywhere in the input. White space is not significant.

2.1. Primitives

Primitives form the basic components from which the other types in the algebra are built up. They are described as follows:

	primitive :
		object-identifier = quoted-type ;
where the primitive identifier is given by:
	object-identifier :
		#opt :opt identifier
		#opt :opt identifier ( identifier )
and the primitive definition is a string which gives the C type corresponding to this primitive:
	quoted-type :
		string
Note that each primitive (and also each identity, each enumeration, each structure and each union) has two names associated with it. The second name is optional; if it is not given then it is assumed to be the same as the first name. The first name is that which will be given to the corresponding type in the output file. The second is a short form of this name which will be used in forming constructor names etc. in the output.

The optional hash and colon which may be used to qualify an object identifier are provided for backwards compatibility only and are not used in the output routines.
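For example, a couple of primitives might be introduced as follows (the names and underlying C types are invented for illustration):

	string ( str ) = "char *" ;
	number ( n ) = "unsigned long" ;

Here string and number are the full type names, while str and n are the short forms used in constructing operation names in the output.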

2.2. Identities

Identities are used to associate a name with a particular type in the algebra. In this they correspond to typedefs in C. They are described as follows:

	identity :
		object-identifier = type ;
where the definition type, type, is as described below.
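For example (again with invented names, and using the type constructors of section 2.6), an identity naming a list of the string primitive sketched above might be written:

	string_list ( strs ) = LIST string ;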

2.3. Enumerations

Enumerations are used to define types which can only take values from some finite set. They are described as follows:

	enumeration :
		enum !opt object-identifier = { enumerator-list } ;
		enum !opt object-identifier = base-enumeration + { enumerator-list } ;
where:
	base-enumeration :
		identifier
is the name of a previously defined enumeration type. The latter form is used to express extension enumeration types. An enumeration type may be qualified by an exclamation mark to indicate that no lists of this type will be constructed.

The enumeration constants themselves are defined as follows:

	enumerator :
		identifier
		identifier = enumerator-value

	enumerator-list :
		enumerator
		enumerator-list , enumerator
Each enumerator is assigned a value in an ascending sequence, starting at zero. The next value to be assigned can be set using an enumerator-value. This is an expression formed from integers, identifiers representing previous enumerators from the same enumeration, and the question mark character which stands for the previous enumeration value. The normal C arithmetic operations can be applied to build up more complex enumerator-values. All enumerator evaluation is done in the unsigned long type of the host machine. Values containing more than 32 bits are not portable.

Enumerations thus correspond to enumeration types in C, except that they are genuinely distinct types.
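As an invented illustration, an enumeration using both implicitly and explicitly assigned values might look like:

	enum colour ( col ) = {
	    red,
	    green = 5,
	    blue
	} ;

Here red is assigned the value 0, green the value 5, and blue the value 6.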

2.4. Structures

Structures are used to build up composite types from other types in the algebra. They correspond to structures in C. They are described as follows:

	structure :
		struct object-identifier = component-group ;
		struct object-identifier = base-structure + component-group ;
where:
	base-structure :
		identifier
is the name of a previously defined structure type. The latter form is used to express (single) inheritance of structures. All components of the base structure also become components of the derived structure.

The structure components themselves are defined as follows:

	component-group :
		{ component-listopt }

	component-list :
		component-declarations ;
		component-list component-declarations ;

	component-declarations :
		type component-declarators

	component-declarators :
		component-declarator
		component-declarators , component-declarator

	component-declarator :
		identifier component-initialiseropt

	component-initialiser :
		= string
The optional component initialiser strings are explained below.

Structures are the only algebra construct which prevents the input from being a general graph. Unions may be defined in terms of themselves, but (as in C) pointers must be used to define structures in terms of themselves.
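As an invented illustration, reusing the primitives sketched in section 2.1, a structure and a derived structure might be written:

	struct position ( posn ) = {
	    number line ;
	    number column ;
	} ;

	struct located_name ( lname ) = position + {
	    string name ;
	} ;

All the components of position also become components of located_name.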

2.5. Unions

Unions are used to build up types which can hold a variety of information. They differ from C unions in that they are discriminated. They are described as follows:

	union :
		union object-identifier = component-group + field-group map-groupopt ;
		union object-identifier = base-union + field-group map-groupopt ;
where:
	base-union :
		identifier
is the name of a previously defined union type. The latter form is used to express (single) inheritance of unions. All components, fields and maps of the base union also become components of the derived union. Note that only new fields and maps can be added in the derived union.

The component-group gives a set of components which are common to all the different union cases. The cases themselves are described as follows:

	field-group :
		{ field-list }

	field :
		#opt #opt field-identifier-list -> component-group

		#opt #opt field-identifier-list -> base-field + component-group

	base-field :
		identifier

	field-list :
		field
		field-list , field

	field-identifier :
		identifier

	field-identifier-list :
		field-identifier
		field-identifier-list , field-identifier
The optional one or two hashes which may be used to qualify a list of field identifiers are used to indicate aliasing in the disk reading and writing routines. The base-field case is a notational convenience which allows one field in a union to inherit all the components of another field.

Note that a number of field identifiers may be associated with the same set of field components. Any such list containing more than one identifier forms a field identifier set, named after the first field identifier.

In addition a number of maps may be associated with a union. These maps correspond to functions which take the union, plus a number of other map parameter types, and return the map return type. They are described as follows:

	map-group :
		: [ map-listopt ]

	map :
		extended-type #opt identifier ( parameter-listopt )

	map-list :
		map
		map-list map
where:
	parameter-list :
		parameter-declarations
		parameter-list ; parameter-declarations

	parameter-declarations :
		extended-type parameter-declarators

	parameter-declarators :
		identifier
		parameter-declarators , identifier
Note that the map parameter and return types are given by:
	extended-type :
		type
		quoted-type
In addition to the types derived from the algebra it is possible to use quoted C types in this context.

A map may be qualified by means of a hash. This means that the associated function also takes a destructor function as a parameter.
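As an invented illustration, building on the types sketched above, a discriminated union with one common component, two fields and a single map might be written:

	union value ( val ) = {
	    number line ;
	} + {
	    int_value -> {
		number i ;
	    },
	    string_value -> {
		string s ;
	    }
	} : [
	    number eval ( number depth )
	] ;

Here line is a component of every case of the union, int_value and string_value are the two cases, and eval is a map taking the union plus a number and returning a number.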

2.6. Type constructors

The types derived from the algebra may be described as follows:

	type :
		identifier
		PTR type
		LIST type
		STACK type
		VEC type
		VEC_PTR type
The simple types correspond to primitive, identity, enumeration, structure or union names. It is possible for a type to be used before it is defined, but it must be defined at some point.

The derived type constructors correspond to pointers, lists, stacks, vectors and pointers into vectors. They may be used to build up further types from the basic algebra types.

2.7. Relations between algebras

As mentioned above, more than one input algebra may be specified to calculus. Each is processed separately, and output is generated for only one. By default this is the last algebra processed, however a specific algebra can be specified using the command-line option -Aname, where name is the name of the algebra to be used for output.

Types may be imported from one algebra to another by means of commands of the form:

	import :
		IMPORT identifier ;
		IMPORT identifier :: identifier ;
which fit into the main syntax as an item. The first form imports all the types from the algebra given by identifier into the current algebra. The second imports a single type, given by the second identifier from the algebra given by the first identifier.

Note that importing a type in this way also imports all the types used in its construction. This includes such things as structure components and union fields and maps. Thus an algebra consisting just of import commands can be used to express subalgebras in a simple fashion.
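For example (with invented algebra and type names), a subalgebra containing just one type from a larger algebra, together with everything used in its construction, could be expressed as:

	ALGEBRA small_calc (1.0) :
	IMPORT calc :: exp ;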


3. Output type system specification

In this section we document the basic output of calculus. Two forms of the output can be generated - a description of the output specification in terms of the TenDRA #pragma token constructs, and the actual C code which implements these tokens.

In this section the description given will be at the level of the output specification. The more important details of the C implementation are given in the following section.

The output is split among several header files in the specified output directory. The main output is printed into name.h, where name is the overall algebra name. Unless otherwise stated, all the objects specified below are to be found in name.h. However, for each union union in the algebra, certain information associated with that union is printed into union_ops.h. If the union has any maps associated with it then further output is printed to union_map.h and union_hdr.h.

3.1. Version information

Certain basic information about the input algebra is included in name.h. name_NAME is a string literal giving the overall algebra name. name_VERSION is a string literal giving the algebra version number. name_SPECIFICATION and name_IMPLEMENTATION are flags which take the values 0 or 1 depending on whether the specification of the type system in terms of #pragma token statements or the C implementation is included.
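
As a purely illustrative sketch, the corresponding definitions for a hypothetical algebra called calc, version 1.0, might take the following form (the exact manner in which these objects are defined is an implementation detail):

	#define calc_NAME		"calc"
	#define calc_VERSION		"1.0"
	#define calc_SPECIFICATION	0
	#define calc_IMPLEMENTATION	1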

3.2. Basic types

Six abstract type operators, each taking a type as argument and returning a type, are specified as follows:

	TYPE PTR ( TYPE t ) ;
	TYPE LIST ( TYPE t ) ;
	TYPE STACK ( TYPE t ) ;
	TYPE VEC ( TYPE t ) ;
	TYPE VEC_PTR ( TYPE t ) ;
	TYPE SIZE ( TYPE t ) ;
These represent a pointer to an object of type t, a list of objects of type t, a stack of objects of type t, a vector of objects of type t, a pointer into a vector of objects of type t, and an allocation block size for type t respectively. (It is possible to suppress all constructs involving VEC or VEC_PTR by passing the -x command-line option to calculus. Similarly STACK constructs may be suppressed using -z.)

These constructors can be applied to any type, although in practice they are only applied to the types specified in the algebra and those derived from them. For example, we may form the type:

	LIST ( PTR ( int ) )
representing a list of pointers to int.

An integral type:

	INT_TYPE name_dim ;
is specified, where name is the overall algebra name. This type is used to represent the sizes of vectors.

A function pointer type:

	typedef void ( *DESTROYER ) () ;
is specified, which represents a destructor function. Two destructor functions are specified:
	void destroy_name () ;
	void dummy_destroy_name () ;
where name is as above. destroy_name is the default destructor, whereas dummy_destroy_name is a dummy destructor which has no effect. The details of the arguments passed to the destructors and so on are implementation dependent.

3.3. Operations on sizes

The SIZE type constructor is used to represent a multiple of the size of a type. It is used, for example, in the pointer stepping construct STEP_ptr to specify the number of units by which the pointer is to be increased. Having a separate abstract type for the size of each type greatly increases the scope for type checking of memory allocation and other functions.

For each basic type in the algebra (a primitive, a structure or a union), there is a constant expression:

	SIZE ( t ) SIZE_type ;
where t denotes the type itself, and type is the associated type name.

For the five other type constructors described above there are constant expressions:

	SIZE ( PTR ( t ) ) SIZE_ptr ( TYPE t ) ;
	SIZE ( LIST ( t ) ) SIZE_list ( TYPE t ) ;
	SIZE ( STACK ( t ) ) SIZE_stack ( TYPE t ) ;
	SIZE ( VEC ( t ) ) SIZE_vec ( TYPE t ) ;
	SIZE ( VEC_PTR ( t ) ) SIZE_vec_ptr ( TYPE t ) ;
for any type t.

These constructs allow the size of any type derived from the algebra to be built up. There is also a construct which corresponds to multiplying the size of a type by a constant. This takes the form:

	SIZE ( t ) SCALE ( SIZE ( t ), INT_TYPE itype ) ;
for any type t and any integral type itype.
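
For example, written at the level of the output specification and assuming a hypothetical primitive type int_t with short name int, the size of a list of pointers to int_t, and five times the size of int_t itself, could be expressed as:

	SIZE ( LIST ( PTR ( int_t ) ) ) s1 = SIZE_list ( PTR ( int_t ) ) ;
	SIZE ( int_t ) s2 = SCALE ( SIZE_int, 5 ) ;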

3.4. Operations on pointers

The PTR type constructor is used to represent a pointer to an object of type t. It should be emphasised that this construct is not in general implemented by means of the C pointer construct.

Simple pointer constructs

There are several simple operations on pointers, given as follows:

	PTR ( t ) NULL_ptr ( TYPE t ) ;
	int IS_NULL_ptr ( PTR ( t ) ) ;
	int EQ_ptr ( PTR ( t ), PTR ( t ) ) ;
	PTR ( t ) STEP_ptr ( PTR ( t ), SIZE ( t ) ) ;
The construct NULL_ptr is used to form the null pointer to t for a type t. This is a constant expression. IS_NULL_ptr can be used to test whether a given pointer expression is a null pointer. Similarly EQ_ptr checks whether two pointers are equal (note that we can only compare pointers to the same type). Finally STEP_ptr can be used to add a scalar value to a pointer.

Unique pointers

There are also constructs for generating and destroying unique pointers:

	PTR ( t ) UNIQ_ptr ( TYPE t ) ;
	void DESTROY_UNIQ_ptr ( PTR ( t ) ) ;
A unique pointer is guaranteed to be different from all other undestroyed pointers. Dereferencing a unique pointer is undefined.

Pointer construction and destruction

The constructs:

	PTR ( t ) MAKE_ptr ( SIZE ( t ) ) ;
	void DESTROY_ptr ( PTR ( t ), SIZE ( t ) ) ;
are used to respectively create and destroy a pointer to a given type.

Assignment and dereference constructs

The constructs for assigning and dereferencing pointers have one of two forms depending on the complexity of the type pointed to. For simple types, including primitive, enumeration and union types, they have the form:

	void COPY_type ( PTR ( t ), t ) ;
	t DEREF_type ( PTR ( t ) ) ;
where t is the type involved and type is the associated short name.

For more complex types, including structures, the assignment or dereference cannot be done in a single expression, so the constructs are specified to be statements as follows:

	STATEMENT COPY_type ( PTR ( t ), t ) ;
	STATEMENT DEREF_type ( PTR ( t ), lvalue t ) ;
Here the lvalue type qualifier is used to indicate that the statement argument is an assignable lvalue. In this case it should give the location of an object of type t into which the pointer will be dereferenced.
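
As an illustrative sketch at the specification level, again assuming a hypothetical primitive type int_t with short name int, a pointer might be created, assigned through, dereferenced and destroyed as follows:

	PTR ( int_t ) p = MAKE_ptr ( SIZE_int ) ;
	COPY_int ( p, 42 ) ;
	int_t val = DEREF_int ( p ) ;
	DESTROY_ptr ( p, SIZE_int ) ;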

The appropriate assignment and dereference constructs are generated for each of the basic algebra types (primitives, enumerations, structures and unions). In addition there are generic assignment and dereference constructs for pointer types, list types, stack types, vector types and vector pointer types. The first three are of the first form above, whereas the second two have the second form, as follows:

	void COPY_ptr ( PTR ( PTR ( t ) ), PTR ( t ) ) ;
	PTR ( t ) DEREF_ptr ( PTR ( PTR ( t ) ) ) ;
	void COPY_list ( PTR ( LIST ( t ) ), LIST ( t ) ) ;
	LIST ( t ) DEREF_list ( PTR ( LIST ( t ) ) ) ;
	void COPY_stack ( PTR ( STACK ( t ) ), STACK ( t ) ) ;
	STACK ( t ) DEREF_stack ( PTR ( STACK ( t ) ) ) ;
	STATEMENT COPY_vec ( PTR ( VEC ( t ) ), VEC ( t ) ) ;
	STATEMENT DEREF_vec ( PTR ( VEC ( t ) ), lvalue VEC ( t ) ) ;
	STATEMENT COPY_vec_ptr ( PTR ( VEC_PTR ( t ) ), VEC_PTR ( t ) ) ;
	STATEMENT DEREF_vec_ptr ( PTR ( VEC_PTR ( t ) ), lvalue VEC_PTR ( t ) ) ;

3.5. Operations on lists

The LIST type constructor is used to represent a list of objects of type t.

Simple list constructs

There are several simple list constructs:

	LIST ( t ) NULL_list ( TYPE t ) ;
	int IS_NULL_list ( LIST ( t ) ) ;
	int EQ_list ( LIST ( t ), LIST ( t ) ) ;
	unsigned LENGTH_list ( LIST ( t ) ) ;
	PTR ( t ) HEAD_list ( LIST ( t ) ) ;
	LIST ( t ) TAIL_list ( LIST ( t ) ) ;
	PTR ( LIST ( t ) ) PTR_TAIL_list ( LIST ( t ) ) ;
	LIST ( t ) END_list ( LIST ( t ) ) ;
	LIST ( t ) REVERSE_list ( LIST ( t ) ) ;
	LIST ( t ) APPEND_list ( LIST ( t ), LIST ( t ) ) ;
Empty lists may be constructed or tested for. NULL_list is a constant expression. Two lists may be checked for equality (note that this is equality of lists, rather than element-wise equality). The number of elements in a list can be found. The head or tail (or car and cdr) of a list may be formed. The end element of a list may be selected. The order of the elements in a list can be reversed. One list can be appended to another.

Unique lists

There are also constructs for generating and destroying unique lists:

	LIST ( t ) UNIQ_list ( TYPE t ) ;
	void DESTROY_UNIQ_list ( LIST ( t ) ) ;
A unique list is guaranteed to be different from all other undestroyed lists. Taking the head or tail of a unique list is undefined.

List construction and destruction

For each type t there are operations for constructing and deconstructing lists. For the basic types comprising the algebra these take the form:

	STATEMENT CONS_type ( t, LIST ( t ), lvalue LIST ( t ) ) ;
	STATEMENT UN_CONS_type ( lvalue t, lvalue LIST ( t ), LIST ( t ) ) ;
	STATEMENT DESTROY_CONS_type ( DESTROYER, lvalue t, lvalue LIST ( t ), LIST ( t ) ) ;
where type is the short type name.

The operation CONS_type is used to build a list whose head is the first argument and whose tail is the second argument. This is assigned to the third argument. UN_CONS_type reverses this process, decomposing a list into its head and its tail. DESTROY_CONS_type is similar, but it also applies the given destructor function to the decomposed list component.
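
For example, assuming a hypothetical primitive type int_t with short name int, the two element list ( 1, 2 ) might be built up back to front as follows (illustrative only):

	LIST ( int_t ) l = NULL_list ( int_t ) ;
	CONS_int ( 2, l, l ) ;	/* l is now ( 2 ) */
	CONS_int ( 1, l, l ) ;	/* l is now ( 1, 2 ) */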

There are also generic list construction and deconstruction operations which apply to lists of pointers (with a ptr suffix), lists of lists (with a list suffix), lists of stacks (with a stack suffix), lists of vectors (with a vec suffix) and lists of pointers to vectors (with a vec_ptr suffix). For example, for lists of pointers these have the form:

	STATEMENT CONS_ptr ( PTR ( t ), LIST ( PTR ( t ) ), lvalue LIST ( PTR ( t ) ) ) ;
	STATEMENT UN_CONS_ptr ( lvalue PTR ( t ), lvalue LIST ( PTR ( t ) ), LIST ( PTR ( t ) ) ) ;
	STATEMENT DESTROY_CONS_ptr ( DESTROYER, lvalue PTR ( t ), lvalue LIST ( PTR ( t ) ), LIST ( PTR ( t ) ) ) ;
There is also a generic list destruction construct:
	STATEMENT DESTROY_list ( LIST ( t ), SIZE ( t ) ) ;
which may be used to destroy all the elements in a list.

3.6. Operations on stacks

The STACK type constructor is used to represent stacks of objects of type t. Empty stacks can be created and tested for:

	STACK ( t ) NULL_stack ( TYPE t ) ;
	int IS_NULL_stack ( STACK ( t ) ) ;
For each type t there are operations for pushing objects onto a stack and for popping elements off. For the basic types comprising the algebra these take the form:
	STATEMENT PUSH_type ( t, lvalue STACK ( t ) ) ;
	STATEMENT POP_type ( lvalue t, lvalue STACK ( t ) ) ;
where type is the short type name. There are also generic constructs, such as PUSH_ptr for pushing any pointer type, similar to the generic list constructors above.

Stacks are in fact just modified lists, with pushing corresponding to adding an element to the start of a list, and popping to removing the first element. There are constructs:

	LIST ( t ) LIST_stack ( STACK ( t ) ) ;
	STACK ( t ) STACK_list ( LIST ( t ) ) ;
for explicitly converting between lists and stacks.
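
For example, assuming a hypothetical primitive type int_t with short name int, elements might be pushed onto and popped off a stack as follows (illustrative only):

	STACK ( int_t ) s = NULL_stack ( int_t ) ;
	int_t top ;
	PUSH_int ( 1, s ) ;
	PUSH_int ( 2, s ) ;
	POP_int ( top, s ) ;	/* top is now 2 */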

3.7. Operations on vectors

The VEC type constructor is used to represent vectors of objects of type t. There are a number of simple operations on vectors:

	VEC ( t ) NULL_vec ( TYPE t ) ;
	name_dim DIM_vec ( VEC ( t ) ) ;
	name_dim DIM_ptr_vec ( PTR ( VEC ( t ) ) ) ;
	PTR ( t ) PTR_ptr_vec ( PTR ( VEC ( t ) ) ) ;
An empty vector can be constructed (note that, unlike null pointers and null lists, this is not a constant expression). The number of elements in a vector (or in a vector given by a pointer) can be determined (note how the type name_dim is used to represent vector sizes). A pointer to a vector can be transformed into a pointer to the elements of the vector.

In general a vector may be created or destroyed using the operators:

	STATEMENT MAKE_vec ( SIZE ( t ), name_dim, lvalue VEC ( t ) ) ;
	STATEMENT DESTROY_vec ( VEC ( t ), SIZE ( t ) ) ;
Finally a vector can be trimmed using:
	STATEMENT TRIM_vec ( VEC ( t ), SIZE ( t ), int, int, lvalue VEC ( t ) ) ;
the two integral arguments giving the lower and upper bounds for the trimming operation.

3.8. Operations on vector pointers

The VEC_PTR type constructor is used to represent a pointer to an element of a vector of objects of type t. Apart from the basic constructors already mentioned, there are only two operations on vector pointers:

	VEC_PTR ( t ) VEC_PTR_vec ( VEC ( t ) ) ;
	PTR ( t ) PTR_vec_ptr ( VEC_PTR ( t ) ) ;
The first transforms a vector into a vector pointer (pointing to the first element of the vector). The second transforms a vector pointer into a normal pointer.

3.9. Operations on primitives

Each primitive specified within the algebra maps onto a typedef defining the type in terms of its given definition. The only operations on primitives are those listed above - the size constructs, the pointer assignment and dereference operations, the list construction and deconstruction operations and the stack push and pop operations.

3.10. Operations on identities

Each identity specified within the algebra maps onto a typedef in the output file. There are no operations on identities.

3.11. Operations on enumerations

Each enumeration specified within the algebra maps onto an unsigned integral type in the output file. The basic operations listed above are always generated, unless it has been indicated that no lists of this type will be formed, in which case CONS_type and the other list and stack operators are suppressed. In addition each enumerator which is a member of this enumeration maps onto a constant expression:

	t enum_member ;
where t is the enumeration type, enum is the short type name, and member is the enumerator name. It is guaranteed that the first enumerator will have value 0, the second 1, and so on, unless the value of the enumerator is explicitly given (as in C). There is also a constant expression:
	unsigned long ORDER_enum ;
giving one more than the maximum enumerator in the enumeration.
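
For example, a hypothetical enumeration colour with short name col and enumerators red, green and blue (none with explicit values) would give rise to constant expressions of the form:

	colour col_red ;	/* value 0 */
	colour col_green ;	/* value 1 */
	colour col_blue ;	/* value 2 */

	unsigned long ORDER_col ;	/* value 3 */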

3.12. Operations on structures

Each structure specified within the algebra maps onto a typedef defining the type to be the C structure with the given components. In addition to the basic operations listed above there are also field selectors defined.

Suppose that t is a structure type with short name struct, and that comp is a component of t of type ctype. Then there is an operation:

	PTR ( ctype ) struct_comp ( PTR ( t ) ) ;
which transforms a pointer to the structure into a pointer to the component. There is also an operation:
	STATEMENT MAKE_struct ( ctype, ...., PTR ( t ) ) ;
where ctype ranges over all the component types which do not have a component initialiser string in the structure definition. This is used to assign values to all the components of a structure. If a component has an initialiser string then this is used as an expression giving the initial value, otherwise the given operation argument is used. The initialiser strings are evaluated in the context of the MAKE_struct statement. The parameters to the operation may be referred to by the corresponding component name followed by _. The partially initialised object may be referred to by the special character sequence %0. Any remainder operations should be written as %% rather than %.
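
As an illustrative sketch, suppose the algebra contains a hypothetical structure pair, with short name pair, whose two components first and second have the primitive type int_t and have no initialiser strings. A pair might then be created and one of its components accessed as follows:

	PTR ( pair ) p = MAKE_ptr ( SIZE_pair ) ;
	MAKE_pair ( 1, 2, p ) ;
	PTR ( int_t ) q = pair_first ( p ) ;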

Inheritance in structures is handled as follows: if t is a derived structure with base structure btype then there is an operation:

	PTR ( btype ) CONVERT_struct_base ( PTR ( t ) ) ;
where struct and base are the short names of t and btype respectively.

3.13. Operations on unions

Each union specified within the algebra maps onto an opaque type in the output file. In addition to the basic operations listed above there are a number of other constructs output into name.h, namely:

	unsigned int ORDER_union ;
	t NULL_union ;
	int IS_NULL_union ( t ) ;
	int EQ_union ( t, t ) ;
where t denotes the union type, and union is the short type name. ORDER_union is a constant value giving the number of union fields. NULL_union is a distinguished constant value of type t. Values of type t may be compared against this distinguished value, or against each other.

Union construction operations

Most of the output for the union type t is output into the file union_ops.h. This contains a construct:

	unsigned int TAG_union ( t ) ;
for extracting the discriminant tag from a union.

For each shared component, comp, of t of type ctype, say, there is an operator:

	PTR ( ctype ) union_comp ( t ) ;
which extracts this component from the union.

For each field, field, of the union there are constructs:

	unsigned int union_field_tag ;
	int IS_union_field ( t ) ;
giving the (constant) discriminant tag associated with this field, and a test for whether an object of type t has this discriminant.

In addition, for each unshared component, comp, of field of type ctype, say, there is an operator:

	PTR ( ctype ) union_field_comp ( t ) ;
which extracts this component from the union.

There are also operations for constructing and deconstructing objects of type t for field field given as follows:

	STATEMENT MAKE_union_field ( ctype, ...., lvalue t ) ;
	STATEMENT DECONS_union_field ( lvalue ctype, ...., t ) ;
	STATEMENT DESTROY_union_field ( DESTROYER, lvalue ctype, ...., t ) ;
The operation MAKE_union_field constructs a value of type t from the various components and assigns it into the last argument. The ctype arguments range over all the components of field, both shared and unshared, which do not have a component initialiser string in the definition. The union components are initialised either using the initialiser string or the given operation argument.

DECONS_union_field performs the opposite operation, deconstructing an object of type t into its components for this field. DESTROY_union_field is similar, except that it also applies the given destructor function to the original value. For these two operations ctype ranges over all the components, including those with initialiser strings.
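
As an illustrative sketch, suppose the algebra contains a hypothetical union exp, with short name exp, having a field lit whose single unshared component value has the primitive type int_t. An object might then be constructed, tested and deconstructed as follows:

	exp e ;
	int_t v ;

	MAKE_exp_lit ( 42, e ) ;
	if ( IS_exp_lit ( e ) ) {
		DECONS_exp_lit ( v, e ) ;
	}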

Union field sets

Recall that a number of union field identifiers may be associated with the same set of components. In this case these fields form a union field set named after the first element of the set. There are operations:

	int IS_union_field_etc ( t ) ;
	PTR ( ctype ) union_field_etc_comp ( t ) ;
	STATEMENT MAKE_union_field_etc ( unsigned int, ctype, ...., lvalue t ) ;
	STATEMENT MODIFY_union_field_etc ( unsigned int, t ) ;
	STATEMENT DECONS_union_field_etc ( lvalue ctype, ...., t ) ;
	STATEMENT DESTROY_union_field_etc ( DESTROYER, lvalue ctype, ...., t ) ;
defined on these sets. These are exactly analogous to the single field operations except that they work for any field in the set. Note that MAKE_union_field_etc takes an extra argument, giving the tag associated with the particular element of the set being constructed. Also the construct MODIFY_union_field_etc is introduced to allow the tag of an existing object to be modified to another value within the same set.

The valid range of union tags for this set can be given by the two constants:

	unsigned int union_field_tag ;
	unsigned int union_field_etc_tag ;
A union is a member of this set if its tag is greater than or equal to union_field_tag and strictly less than union_field_etc_tag.

Inheritance in unions

Inheritance in unions is handled as follows: if t is a derived union with base union btype then there is an operation:

	btype CONVERT_union_base ( t ) ;
where union and base are the short names of t and btype respectively.

Union maps

For each map, map, on the union t, we have declared in union_ops.h an operator:

	map_ret map_union ( t, map_par, .... ) ;
where map_ret is the map return type and map_par ranges over the map parameter types. This is except for maps with destructors (i.e. those qualified by a hash symbol) which have the form:
	map_ret map_union ( t, DESTROYER, map_par, .... ) ;
These maps are implemented by having one function per field for each map. The output file union_map.h contains tables of these functions. These have the form:
	map_ret ( *map_union_table [ ORDER_union ] ) ( t, map_par, .... ) = {
		....
		map_union_field,
		....
	} ;
where there is one entry per union field.

In order to aid the construction of these functions a set of function headers is provided in union_hdr.h. These take the form:

	#define HDR_map_union_field\
		map_ret map_union_field ( name_union, destroyer, par, .... )\
		t name_union ;\
		DESTROYER destroyer ; /* if required */\
		map_par par ;\
		....\
		{\
			ctype comp ;\
			....\
			DECONS_union_field ( comp, ...., name_union ) ;
There is also an alternative function header, HDR_map_d_union_field, which calls DESTROY_union_field rather than DECONS_union_field. The destructor function used is destroyer, if this parameter is present, or the default destructor, destroy_name, otherwise.


4. Implementation details

4.1. Implementation of types

The C implementation of the type system specified above is based on a single type, name, with the same name as the input algebra. This is defined as follows:

	typedef union name_tag {
		unsigned int ag_tag ;
		union name_tag *ag_ptr ;
		unsigned ag_enum ;
		unsigned long ag_long_enum ;
		name_dim ag_dim ;
		t ag_prim_type ;
		....
	} name ;
where t runs over all the primitive types. All of the types in the algebra can be packed into a block of cells of type name. The following paragraphs describe the implementation of each type, together with how they are packed as blocks of cells. This is illustrated by the following diagram:
[ Diagram: packing of types ]

Primitive types are implemented by a typedef defining the type in terms of its given definition. A primitive type can be packed into a single cell using the appropriate ag_prim_type field of the union.

Identity types are implemented by a typedef defining the type as being equal to another type from the algebra.

An enumeration type is implemented as unsigned int if all its values fit into 16 bits, or as unsigned long otherwise. An enumeration type can be packed into a single cell using the ag_enum or ag_long_enum field of the union.

Structure types are implemented by a typedef defining the type to be the C structure with the given components. A structure type may be packed into a block of cells by packing each of the components in turn.

Union types are all implemented by a pointer to name. This pointer references a block of cells. The null pointer represents NULL_union, otherwise the first cell contains the union discriminant tag (in the ag_tag field), with the subsequent cells containing the packed field components (shared components first, then unshared components). If the union has only one field then the discriminant can be omitted. The union itself can be packed into a single cell using the ag_ptr field of the union.

Pointer types are all implemented by a pointer to name. Null pointers represent NULL_ptr, otherwise the pointer references a single cell. This cell contains a pointer to the packed version of the object being pointed to in its ag_ptr field. A pointer type itself can be packed into a single cell using the ag_ptr field of the union.

List types are all implemented by a pointer to name. Null pointers represent NULL_list, otherwise the pointer references a block of two cells. The first cell gives the tail of the list in its ag_ptr field; the second cell contains a pointer to the packed version of the head of the list in its ag_ptr field. A list type itself can be packed into a single cell using the ag_ptr field of the union.

Stack types are identical to list types, with the head of the list corresponding to the top of the stack.

Vector pointer and vector types are all implemented by structures defined as follows:

	typedef unsigned int name_dim ;

	typedef struct {
		name *vec ;
		name *ptr ;
	} name_VEC_PTR ;

	typedef struct {
		name_dim dim ;
		name_VEC_PTR elems ;
	} name_VEC ;
The vec field of a vector pointer contains a pointer to a block of cells containing the packed versions of the elements of the vector. The ptr field is a pointer to the current element of this block. A vector type also has a field giving the vector size. A vector pointer type can be packed into a block of two cells, using the ag_ptr field of each to hold its two components. A vector type can similarly be packed into a block of three cells, the first holding the vector size in its ag_dim field.

All size types are implemented as int.

4.2. Support routines

The implementation requires the user to provide certain support functions. Most fundamental are the routines for allocating and deallocating the blocks of cells which are used to store the types. These are specified as follows:

	name *gen_name ( unsigned int ) ;
	void destroy_name ( name *, unsigned int ) ;
	void dummy_destroy_name ( name *, unsigned int ) ;
where gen_name allocates a block of cells of the given size, destroy_name deallocates the given block, and dummy_destroy_name has no effect. The precise details of how the memory management is to be handled are left to the user.
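
The following is a minimal sketch of one possible implementation of these routines, using malloc and free directly (the versions in the calculus program itself differ; this sketch merely illustrates the intended semantics):

	#include <stdlib.h>

	/* The typedef for name comes from the generated main output file. */

	name *gen_name ( unsigned int sz )
	{
		return ( ( name * ) malloc ( sz * sizeof ( name ) ) ) ;
	}

	void destroy_name ( name *p, unsigned int sz )
	{
		if ( p ) free ( ( void * ) p ) ;
	}

	void dummy_destroy_name ( name *p, unsigned int sz )
	{
		/* deliberately has no effect */
		return ;
	}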

Certain generic list functions must also be provided, namely:

	void destroy_name_list ( name *, unsigned int ) ;
	name *reverse_name_list ( name * ) ;
	name *append_name_list ( name *, name * ) ;
	name *end_name_list ( name * ) ;
which implement the constructs DESTROY_list, REVERSE_list, APPEND_list and END_list respectively.

Finally a dummy empty vector:

	name_VEC empty_name_vec ;
needs to be defined.

Examples of these support routines can be found in the calculus program itself.

4.3. Run-time checking

The type checking facilities supported by calculus allow for compile-time detection of many potential programming errors; however, there are many problems, such as dereferencing null pointers, deconstructing empty lists and union tag errors, which can only be detected at run-time. For this reason calculus can be made to add extra assertions into its output to check for such errors. This is done by specifying the -a command-line option.

These assertions are implemented by means of macros. If the macro ASSERTS is defined then these macros expand to give the run-time checks. If not, the output is exactly equivalent to that which would have been produced if the -a option had not been given.

The assertions require certain support functions which are output into a separate file, assert_def.h. These functions need to be compiled into the program when ASSERTS is defined. Note that the functions are implementation specific.


5. Disk reading and writing

One of the facilities which the homogeneous implementation of the type system described above allows for is the addition of persistence. Persistence in this context means allowing objects to be written to, and read from, disk. Also discussed in this section is the related topic of the object printing routines, which allow a human readable representation of objects of the type system to be output for debugging or other purposes.

The disk reading and writing routines are output into the files read_def.h and write_def.h respectively if the -d command-line option is passed to calculus. The object printing routines are output if the -p option is given, with additional code designed for use with run-time debuggers being added if the -a option is also given.

All of these routines use extra constructs output in the main output files (name.h and union_ops.h), but not normally accessible. The macro name_IO_ROUTINES should be defined in order to make these available for the disk reading and writing routines to use.

5.1. Disk writing routines

The disk writing routines output in write_def.h consist of a function:

	static void WRITE_type ( t ) ;
for each type t mentioned within the algebra description, which writes an object of that type to disk.

Note that such routines are output not only for the basic types, such as unions and structures, but also for any composite types, such as pointers and lists, which are used in their definition. The type component of the name WRITE_type is derived from basic types t by using the short type name. For composite types it is defined recursively as follows:

	LIST ( t ) -> list_type
	PTR ( t ) -> ptr_type
	STACK ( t ) -> stack_type
	VEC ( t ) -> vec_type
	VEC_PTR ( t ) -> vptr_type
Such functions are defined for identity types and composite types involving identity types by means of macros which define them in terms of the identity definitions. WRITE_type functions for the primitive types should be provided by the user to form a foundation on which all the other functions may be built.

The user may wish to generate WRITE_type (or other disk reading and writing) functions for types other than those mentioned in the algebra definition. This can be done by means of a command-line option of the form -Einput where input is a file containing a list of the extra types required. In the notation used above the syntax for input is given by:

	extra :
		type-listopt

	type-list :
		type ;
		type-list type ;
The WRITE_type functions are defined recursively in an obvious fashion. The user needs to provide the writing routines for the primitives already mentioned, plus support routines (or macros):
	void WRITE_BITS ( int, unsigned int ) ;
	void WRITE_DIM ( name_dim ) ;
	void WRITE_ALIAS ( unsigned int ) ;
for writing a number of bits to disk, writing a vector dimension and writing an object alias.

Any of the WRITE_type functions may be overridden by the user by defining a macro WRITE_type with the desired effect. Note that the WRITE_type function for an identity can be overridden independently of the function for the identity definition. This provides a method for introducing types which are representationally the same, but which are treated differently by the disk reading and writing routines.
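
As an illustrative sketch, a user-supplied writing routine for a hypothetical primitive type int_t with short name int, on the assumption that its values always fit into 32 bits, might simply be the macro:

	#define WRITE_int( x )	WRITE_BITS ( 32, ( unsigned int ) ( x ) )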

5.2. Disk reading routines

The disk reading routines output in read_def.h are exactly analogous to the disk writing routines. For each type t (except primitives) there is a function or macro:

	static t READ_type ( void ) ;
which reads an object of that type from disk. The user must provide the READ_type functions for the primitive types, plus support routines:
	unsigned int READ_BITS ( int ) ;
	name_dim READ_DIM ( void ) ;
	unsigned int READ_ALIAS ( void ) ;
for reading a number of bits from disk, reading a vector dimension and reading an object alias. The READ_type functions may be overridden by means of macros as before.

5.3. Object printing routines

The object printing routines output in print_def.h consist of a function or macro:

	static void PRINT_type ( FILE *, t, char *, int ) ;
for each type t, which prints an object of type t to the given file, using the given object name and indentation value. The user needs to provide basic output routines:
	void OUTPUT_type ( FILE *, t ) ;
for each primitive type. The PRINT_type functions may be overridden by means of macros as before.

The printing routines are under the control of three variables defined as follows:

	static int print_indent_step = 4 ;
	static int print_ptr_depth = 1 ;
	static int print_list_expand = 0 ;
These determine the indentation to be used in the output, to what depth pointers are to be dereferenced when printing, and whether lists and stacks are to be fully expanded into a sequence of elements or just split into a head and a tail.

One application of these object printing routines is to aid debugging programs written using the calculus tool. The form of the type system implementation means that it is not easy to extract information using run-time debuggers without a detailed knowledge of the structure of this implementation. As a more convenient alternative, if both the -p and -a command-line options are given then calculus will generate functions:

	void DEBUG_type ( t ) ;
defined in terms of PRINT_type, for printing an object of the given type to the standard output. Many debuggers have problems passing structure arguments, so for structure, vector and vector pointer types DEBUG_type takes the form:
	void DEBUG_type ( t * ) ;
These debugging routines are only defined conditionally, if the macro DEBUG is defined.

5.4. Aliasing

An important feature of the disk reading and writing routines, namely aliasing, has been mentioned but not yet described. The problem to be faced is that many of the objects built up using type systems defined using calculus will be cyclic - they will include references to themselves in their own definitions. Aliasing is a mechanism for breaking such cycles by ensuring that only one copy of an object is ever written to disk, or that only one copy is created when reading from disk. This is done by associating a unique number as an alias for the object.

For example, when writing to disk, the first time the object is written the alias definition is set up. Subsequently the alias number is written instead of the object it represents. Similarly when reading from disk, an alias may be associated with an object when it is read. When this alias is encountered subsequently it will always refer to this same object.

The objects on which aliasing can be specified are the union fields. A union field may be qualified by one or two hash symbols to signify that objects of that type should be aliased.

The two hash case is used to indicate that the user wishes to gain access to the objects during the aliasing mechanism. In the disk writing case, the object to be written, x say, is split into its components using the appropriate DECONS_union_field construct. Then the user-defined routine, or macro:

	ALIAS_union_field ( comp, ...., x ) ;
(where comp ranges over all the union components) is called prior to writing the object components to disk.

Similarly in the disk reading case, the object being read, x, is initialised by calling the user-defined routine:

	UNALIAS_union_field ( x ) ;
prior to reading the object components from disk. Each object component is then read into a local variable, comp. Finally the user-defined routine:
	UNIFY_union_field ( comp, ...., x ) ;
(where comp ranges over all the union components) is called to assign these values to x before returning.

In the single hash case the object is not processed in this way. It is just written straight to disk, or has its components immediately assigned to it when reading from disk.

Note that aliasing is used, not just in the disk reading and writing routines, but also in the object printing functions. After calling any such function the user should call the routine:

	void clear_name_alias ( void ) ;
to clear all aliases.

Aliases are implemented by adding an extra field to the objects to be aliased, which contains the alias number, if this has been assigned, or zero, otherwise. A list of all these extra fields is maintained. In addition to the routine clear_name_alias mentioned above, the user should provide support functions and variables:

	unsigned int crt_name_alias ;
	void set_name_alias ( name *, unsigned int ) ;
	name *find_name_alias ( unsigned int ) ;
giving the next alias number to be assigned, and routines for adding an alias to the list of all aliases, and looking up an alias in this list. Example implementations of these routines are given in the calculus program itself.

5.5. Application to calculus

As mentioned above, the calculus program itself is an example of its own application. It therefore contains routines for reading and writing a representation of an algebra to and from disk, and for pretty-printing the contents of an algebra. These may be accessed using the command-line options mentioned above.

If the -w command-line option is specified to calculus then it reads its input file, input, as normal, but writes a disk representation of the input algebra to output, which in this instance is an output file, rather than an output directory. An output file produced in this way can then be specified as an input file to calculus if the -r option is given. Finally the input algebra may be pretty-printed to an output file (or the standard output if no output argument is given) by specifying the -o option.


6. Template files

It is possible to use calculus to generate an output file from a template input file, template, using the syntax:

	calculus [ options ] input .... -Ttemplate output
The template file consists of a list of either template directives or source lines containing escape sequences which are expanded by calculus. Template directive lines are distinguished by having @ as their first character. Escape sequences consist of % followed by one or more characters.

There are two template directives; loops take the form:

	@loop control
	....
	@end
and conditionals take the form:
	@if condition
	....
	@else
	....
	@endif
or:
	@if condition
	....
	@endif
where .... stands for any sequence of template directives or source lines.

The control statements in a loop can be primitive, identity, enum, struct or union to loop over all the primitive, identity, enumeration, structure or union types within the input algebra. Within an enum loop it is possible to use enum.const to loop over all the enumeration constants of the current enumeration type. Within a struct loop it is possible to use struct.comp to loop over all the components of the current structure. Within a union loop it is possible to use union.comp to loop over all the shared components of the current union, union.field to loop over all the fields of the current union, and union.map to loop over all the maps of the current union. Within a union.field loop it is possible to use union.field.comp to loop over all the components of the current union field. Within a union.map loop it is possible to use union.map.arg to loop over all the arguments of the current union map.

The valid condition statements in a conditional are true and false, plus comp.complex, which is true if the current structure or union field component has a complex type (i.e. those for which COPY_type and DEREF_type require two arguments), and comp.default, which is true if the current structure or union field component has a default initialiser value.

A number of escape sequences can be used anywhere. %ZX and %ZV give the name and version number of the version of calculus used. %X and %V give the name and version number of the input algebra. %% and %@ give % and @ respectively, and % as the last character in a line suppresses the following newline character.

Within a primitive loop, %PN gives the primitive name, %PM gives the short primitive name and %PD gives the primitive definition.

Within an identity loop, %IN gives the identity name, %IM gives the short identity name and %IT gives the identity definition.

Within an enum loop, %EN gives the enumeration name, %EM gives the short enumeration name and %EO gives the enumeration order, ORDER_enum. Within an enum.const loop, %ES gives the enumeration constant name and %EV gives its value.

Within a struct loop, %SN gives the structure name and %SM gives the short structure name.

Within a union loop, %UN gives the union name, %UM gives the short union name and %UO gives the union order, ORDER_union. Within a union.field loop, %FN gives the field name. Within a struct.comp, union.comp or union.field.comp loop, %CN gives the component name, %CT gives the component type, %CU gives the short form of the component type and %CV gives the default component initialiser value (if comp.default is true). Within a union.map loop, %MN gives the map name and %MR gives the map return type. Within a union.map.arg loop, %AN gives the argument name and %AT gives the argument type.

As an example, the following template file gives a simple algebra pretty printer:

	ALGEBRA %X (%V):

	/* PRIMITIVE TYPES */
	@loop primitive
	%PN (%PM) = "%PD" ;
	@end

	/* IDENTITY TYPES */
	@loop identity
	%IN (%IM) = %IT ;
	@end

	/* ENUMERATION TYPES */
	@loop enum

	enum %EN (%EM) = {
	@loop enum.const
		%ES = %EV,
	@end
	} ;
	@end

	/* STRUCTURE TYPES */
	@loop struct

	struct %SN (%SM) = {
	@loop struct.comp
		%CT %CN ;
	@end
	} ;
	@end

	/* UNION TYPES */
	@loop union

	union %UN (%UM) = {
	@loop union.comp
		%CT %CN ;
	@end
	} + {
	@loop union.field
		%FN -> {
	@loop union.field.comp
			%CT %CN ;
	@end
		} ;
	@end
	} ;
	@end


Part of the TenDRA Web.
Crown Copyright © 1998.

tendra-doc-4.1.2.orig/doc/utilities/sid.html100644 1750 1750 216622 6466607527 20412 0ustar brooniebroonie sid users' guide

sid Users' Guide

January 1998

next section previous section current document TenDRA home page document index


1 - Introduction
2 - Grammars
2.1 - Parsing
2.2 - Context free grammars
2.3 - sid grammars
3 - Overview
3.1 - Left recursion elimination
3.2 - Factoring
3.3 - Optimisations
4 - Invocation
5 - The sid grammar file
5.1 - Lexical conventions
5.2 - The type declaration section
5.3 - The terminal declaration section
5.4 - The rule definition section
5.5 - The grammar entry points section
6 - The C information file
6.1 - Lexical conventions
6.2 - The prefixes section
6.3 - The maps section
6.4 - The header section
6.5 - The assignments section
6.6 - The parameter assignments section
6.7 - The result assignments section
6.8 - The terminal result extraction section
6.9 - The action definition section
6.10 - The trailer section
7 - Predicates
8 - Error handling
9 - Call by reference
10 - Calling entry points
11 - Glossary
12 - Understanding error messages
12.1 - Left recursion elimination errors
12.2 - First set computation errors
12.3 - Factoring errors
12.4 - Checking errors

1. Introduction

This document describes how to use the sid parser generator. It was written for sid version 1.9. The main features of each version of sid are listed below:

  • sid 1.0 - this was the first version of sid to support attribute grammars. The output language was C.
  • sid 1.1 - this was a bug fix version of sid 1.0.
  • sid 1.2 - this version of sid added predicates, exception handling, improved inlining options, and a grammar test pseudo-language.
  • sid 1.3 - this version of sid added anonymous rules, and a better syntax for the C specific information.
  • sid 1.4 - this was a bug fix version of sid 1.3.
  • sid 1.5 - this was a bug fix version of sid 1.4. The command line option syntax changed in this release as well.
  • sid 1.6 - this version of sid changed the input syntax, added scoped rules, and removed basics (replacing them with terminal symbols). It also added a stricter ISO C language.
  • sid 1.7 - The syntax of the actions file changed slightly in this version. The error messages and recovery from syntax errors were also improved in this version. This version also added explicit call by reference support (rather than the inconsistent function call semantics of earlier versions). The command line options changed in this version, to support language specific options, and the strict-ansi-c language was dropped. Non-local variables were also added.
  • sid 1.8 - Initialisers were added for non-local variables. Assignment was added.
  • sid 1.9 - this was a bug fix version of sid 1.8.

sid turns specifications of languages into programs that recognise those languages. One of the aims of sid was to separate the specification of the language to be recognised from the language that the recogniser program is written in. For this reason, input to sid is split into two components: output language independent information, and output language dependent information.

At present, sid will only output programs in C (either ISO or pre-ISO), but it is designed so that adding new output languages should be fairly simple. There is one other pseudo-language: the test language. This is used for testing grammars and the transforms, but will not output a parser.


2. Grammars

2.1. Parsing

There are two phases in traditional language recognition that are relevant to sid: the first is lexical analysis (breaking the input up into terminal symbols); the second is syntax analysis or parsing (ensuring that the terminal symbols in the input are in the correct order).

sid currently does very little to help with lexical analysis. It is possible to use sid to produce the lexical analyser, but sid provides no real support for it at present. For now, the programmer is expected to write the lexical analyser, or use another tool to produce it.

The lexical analyser should break the input up into a series of terminals. Each of these terminals is allocated a number. In sid, these numbers range from zero to the maximum terminal number (specified in the grammar description). The terminals may also have data associated with them (e.g. the value of a number), known as the attributes of the terminal, or the results of the terminal.

sid generates the parser. The parser is a program that reads the sequence of terminals from the lexical analyser, and ensures that they form a valid input in the language being recognised. If the input is not valid, then the parser will fail (sid provides mechanisms to allow the parser to recover from errors).

2.2. Context free grammars

This section provides a brief introduction to grammars. It is not intended to be an in-depth guide to grammars, more an introduction which defines some terminology.

sid works with a subset of what are known as context free grammars. A context free grammar consists of a set of input symbols (known as terminals), a set of rules (descriptions of what are legal forms in the language, also known as non-terminals), and an entry point (the rule from which the grammar starts).

A rule is defined as a series of alternatives (throughout this document the definition of a rule is called a production - this may conflict with some other uses of the term). Each alternative consists of a sequence of items. An item can be a terminal or a rule. As an example (using the sid notation, which no longer resembles the conventional syntax for grammars), a comma separated list of integers could be specified as:

	list-of-integers = {
		integer ;
	    ||
		integer ;
		comma ;
		list-of-integers ;
	} ;
This production contains two alternatives: the first matches the terminal integer; the second matches the sequence of terminals integer followed by comma, followed by another list of integers.

There is much documentation available on context free grammars (and other types of formal grammar). The reader is advised to find an alternative source for more information.

2.3. sid grammars

sid grammars are based upon a subset of context free grammars, known as LL(1) grammars. The main property of such grammars is that the parser can always tell which alternative it is going to parse next by looking at the current terminal symbol. sid does a number of transforms to turn grammars that are not in this form into it (although it cannot turn all possible grammars into this form). It also provides facilities that allow the user to alter the control flow of the parser.

sid makes the following changes to the context free grammars described above:

  1. There may be more than one entry point to the grammar.
  2. As well as being a rule or a terminal, an item may be an action, a predicate or an identity. An action is just a user supplied function. A predicate is a cross between a basic and an action (it is a user defined function but it may alter the control flow). An identity is like an assignment in a conventional programming language.
  3. Rules may take parameters and return results (as may actions; terminals may only return results). These may be passed between items using names.
  4. Each rule can have an exception handler associated with it. Exception handlers are used for error recovery when the input being parsed does not match the grammar.
  5. Rules may be anonymous.
  6. Rules may have non-local names associated with them. These names are in scope for that rule and any rules that are defined inside it. The value of each non-local name is saved on entry to the rule, and restored when the rule is exited.


3. Overview

sid takes the input grammar and applies several transformations to it, before it produces the parser. These transformations allow the language descriptions to be written in a slightly more natural form than would otherwise be necessary.

3.1. Left recursion elimination

The first transformation is to eliminate any left recursive productions, replacing them with right recursive ones. This transforms a set of rules of the form:

	Ai = Aj Bji, Ci
into:
	Ai = Cj Xji
	Xji = Bjk Xki, Yji
where Yji is the identity function if i equals j, and an error otherwise. In order for this transformation to work, the following conditions must hold:
  1. The parameter type for all Ai must be the same.
  2. The result type for all Ai must be the same.
  3. The exception handlers for all Ai must be the same.
  4. The parameters to each left recursive call to some Aj must be exactly the formal parameters to the calling rule Ai.
  5. Any non-local name referenced by any rule must be in scope for all rules.
  6. A rule may not define non-local variables unless it is the only entry point into the cycle (i.e. there is only one named rule in the cycle).
sid will report an error if these conditions are not met.

3.2. Factoring

The second major transformation is to factor the grammar. It is possible that this phase could go on for ever, so there is a limit to the number of rules that can be generated. This limit may be changed (see the invocation section). In practice it is rare for this to happen.

The factoring phase tries to increase the number of language specifications that sid can produce a parser for, by factoring out common initial items in alternatives, e.g.:

	X = {
		a ; b ;
	    ||	a ; c ;
	} ;
would be transformed into something like:
	X = {
		a ; X1 ;
	} ;

	X1 = {
		b ;
	    ||	c ;
	} ;
It will also expand calls to rules at the beginning of alternatives if this is necessary to allow the parser to be produced (this is the phase that the limit is needed for). The expansions are done in the following cases:
  1. If the rule is see-through (i.e. there is an expansion of the rule that does not contain any terminals or predicates) and its first set contains terminals in the first set of the rest of the alternative.
  2. If the rule has a predicate in its first set.
  3. If the rule has terminals in its first set that are also in the first set of another alternative that does not begin with the same rule.
If the rule being expanded contains an exception handler, and it is not identical to the exception handler in the enclosing rule, then an error will occur. Similarly if the rule being expanded defines any non-local variables then an error will occur.

3.3. Optimisations

After these two transformations, sid performs some optimisations on the grammar, such as reducing multiple copies of identical rules, removing tail recursion, inlining all basic rules, inlining single alternative rules, inlining rules used only once, and inlining everything that can be inlined. All of the inlining is optional.

After these optimisations, sid checks the language description to ensure that it is possible to produce a parser for it, and if so it outputs the parser.


4. Invocation

sid should be invoked in the following manner:

	sid options input-files output-files
The options are described later. The input files should be a number of input file names dependent upon the output language. The first input file is the sid grammar file. In the case of either of the C dialects there should be one other input file that provides C specific information to sid. The number of output files is also language specific. At present, two output files should be specified for the C languages. The first should be a .c file into which the parser is written; the second should be a .h file into which the terminal definitions and external function declarations are written.

The options list should consist of zero or more of the following options. There are short forms for each of these options as well; see the sid manual page for more information on invocation.

--dump-file FILE
This option causes intermediate dumps of the grammar to be written to the named file. The format of the dump files is similar to the format of the grammar specification, with the following exceptions:
  1. Predicates are written with the predicate result replaced by the predicate identifier (this will always be zero), and the result is followed by a ? to indicate that it was a predicate. As an example, the predicate:
    	( b, ? ) = <pred> ( a )
    
    would be printed out as:
    	( b : Type1T, 0 : Type2T ) ? = <pred> ( a : Type3T )
    
  2. Items that are considered to be inlinable are prefixed by a +. Items that are tail calls which will be eliminated are prefixed by a *.
  3. Nested rules are written at the outer level, with names of the form outer-rule::....::inner-rule.
  4. Types are provided on call parameter and result tuples.
  5. Inline rules are given a generated name, and are written out as a call to the generated rule (and a definition elsewhere).

--factor-limit LIMIT
This option limits the number of rules that can be created during the factorisation process. It is probably best not to change this.

--help
This option writes a summary of the command line options to the standard error stream.

--inline INLINES
This option controls what inlining will be done in the output parser. The inlines argument should be a comma separated list of the following words:

SINGLES
This causes single alternative rules to be inlined. This inlining is no longer performed as a modification to the grammar (it was in version 1.0).
BASICS
This causes rules that contain only basics (and no exception handlers or empty alternatives) to be inlined. The restriction on exception handlers and empty alternatives is rather arbitrary, and may be changed later.
TAIL
This causes tail recursive calls to be inlined. Without this, tail recursion elimination will not be performed.
OTHER
This causes other calls to be inlined wherever possible. Unless the MULTI inlining is also specified, this will be done only for productions that are called once.
MULTI
This causes calls to be inlined, even if the rule being called is called more than once. Turning this inlining on implies OTHER. Similarly turning off OTHER inlining will turn off MULTI inlining. For grammars of any size, this is probably best avoided; if used the generated parser may be huge (e.g. a C grammar has produced a file that was several hundred MB in size).
ALL
This turns on all inlining.

In addition, prefixing a word with NO turns off that inlining phase. The words may be given in any case. They are evaluated in the order given, so:

	--inline noall,singles
would turn on single alternative rule inlining only, whilst:
	--inline singles,noall
would turn off all inlining. The default is as if sid were invoked with the option:
	--inline noall,basics,tail

--language LANGUAGE
This option specifies the output language. Currently this should be one of ansi-c, pre-ansi-c and test. The default is ansi-c.

The ansi-c and pre-ansi-c languages are basically the same. The only difference is that ansi-c initially uses function prototypes, and pre-ansi-c doesn't. The C language specific options are:

prototypes, proto, no-prototypes, no-proto
These enable or disable the use of function prototypes. By default this is enabled for ansi-c and disabled for pre-ansi-c.

numeric-ids, numeric, no-numeric-ids, no-numeric
These enable or disable the use of numeric identifiers. Numeric identifiers replace the identifier name with a number, which is mainly of use in stopping identifier names getting too long. The disadvantage is that the code becomes less readable, and more difficult to debug. Numeric identifiers are not used by default.

casts, cast, no-casts, no-cast
These enable or disable casting of action and assignment operator immutable parameters. If enabled, a parameter is cast to its own type when it is substituted into the action. This will cause some compilers to complain about attempts to modify the parameter (which can help pick out attempts at mutating parameters that should not be mutated). The disadvantage is that not all compilers will reject attempts at mutation, and that ISO C doesn't allow casting to structure and union types, which means that some code may be illegal. Parameter casting is disabled by default.

unreachable-macros, unreachable-macro, unreachable-comments, unreachable-comment
These choose whether unreachable code is marked by a macro or a comment. The default is to mark unreachable code with the comment /*UNREACHED*/; however, the macro UNREACHED ; may be used instead, if desired.

The test language only takes one input file, and produces no output file. It may be used to check that a grammar is valid. In conjunction with the dump file, it may be used to check the transformations that would be applied to the grammar. There are no language specific options for the test language.

--show-errors
This option writes a copy of the current error messages to the standard output. See the manual entry for more details about changing the error message content.

--switch OPTION
This passes through OPTION as a language specific option. The valid options are described above.

--tab-width NUMBER
This option specifies the number of spaces that a tab occupies. It defaults to 8. It is only used when indenting output.

--version
This option causes the version number and supported languages to be written to the standard error stream.

5. The sid grammar file

The sid grammar file should always be the first input file specified on the command line. It provides an output language independent description of the language to be recognised. The file is split up into a number of different sections, each of which begins with a section header. All sections must be included, although it is possible to leave most of them empty. The following sections exist at present: type declaration, terminal declaration, rule definition, and grammar entry points. They must appear in that order. The sections are detailed below, after the lexical conventions.

5.1. Lexical conventions

Space, tab, newline and form-feed, along with almost all other non-printing characters, are considered to be white space, and are ignored except where they serve to separate other tokens. In addition, two comment forms are recognised: the C comment syntax, where comments begin with /* and end with */ (although, unlike C, these comments nest properly); and the C++ line comment syntax, where comments begin with // and are terminated at the end of the line. Comments are treated in the same way as white space characters.

Section headers are enclosed in percent characters, and are case insensitive. The following section headers are currently recognised:

	%types%
	%terminals%
	%productions%
	%entry%

Identifiers must begin with a letter, an underscore or a hyphen. This may be followed by zero or more letters, digits, underscores and hyphens. Identifiers are case sensitive. The following are all legal identifiers:

	expression	mult-expr	plus_expr	expr-1
Identifiers are split into two namespaces: local names occupy one namespace; types, actions, rules, non-local names and terminals share the other, so it is not possible, for example, to have an identifier that is both a type and a rule.

The following symbols are also used:

	&	;	=	[	:	]	!	,
	||	$	?	{	}	(	)	<
	>	->	::	##
All other characters are illegal.

5.2. The type declaration section

The first section is the type declaration section. This begins with the types section header, followed by the names of all of the types used by the grammar. Each type name should be terminated by a semicolon. An example of the type declaration section follows:

	%types%
	NumberT ;
	StringT ;
This declares two types: NumberT and StringT. There is no requirement for the type names to resemble names in the target language (in fact this should be avoided, as it is possible to use many different target languages). All types used in the grammar must be declared here. Similarly, all types declared here must be used in the grammar.

5.3. The terminal declaration section

After the type declaration section comes the terminal declaration section. This section declares the terminals that will be used by the grammar. A terminal is a recogniser for a symbol in the input alphabet of the language to be recognised. It is possible to declare terminals that are not used by the grammar.

The section begins with its section header, followed by the declarations of the terminals. Each terminal declaration begins with the name of the terminal being defined, followed by its type, and terminated by a semicolon. If the terminal is not used in the grammar, the declaration should be preceded by a ! symbol.

A type (for terminals, rules and actions) is written as a colon, followed by the parameter tuple, followed by the -> symbol, followed by the result tuple. If the type is zero-tuple to zero-tuple, then the type may be omitted. A tuple consists of a comma separated sequence of name-type pairs (with the name and type being separated by a colon), surrounded by parentheses. For parameter tuples, the type may be suffixed by a & symbol, indicating that call by reference should be used (the default is call by copy). For declarations, the names should be omitted. For terminals, the parameter type must be the zero-tuple.

The simplest type of terminal declaration is as follows:

	terminal1 ;
This means the same as:
	terminal1 : () -> () ;
An example of a more complex terminal declaration is:
	terminal2 : () -> ( :StringT ) ;
If these terminals were not to be used in the grammar, they should be declared as:
	!terminal1 ;
	!terminal2 : () -> ( :StringT ) ;

5.4. The rule definition section

The rule definition section follows the terminal declaration section. It begins with the section header, followed by the definitions and declarations of all of the rules used in the grammar, and the declarations of all of the actions used in the grammar.

Rule declarations look identical to terminal declarations, e.g.:

	rule1 : ( :NumberT ) -> ( :NumberListT ) ;
	rule2 ;
Action declarations are similar, although the names are surrounded by angle brackets, e.g.:
	<action1> : ( :StringT & ) -> () ;
	<action2> ;
A declaration (or a definition) may be prefixed with the :: symbol. This symbol forces the definition into the outermost scope. Scopes are described later on.

A rule definition (called a production) looks something like the following:

	add-expr : () -> ( v : NumberT ) = {
		v1 = mul-expr ;
		plus ;
		v2 = expr ;
		v  = <add> ( v1, v2 ) ;
	    ||
		v1 = mul-expr ;
		minus ;
		v2 = expr ;
		v  = <subtract> ( v1, v2 ) ;
	    ##
		v = <ex> ;
	} ;
The production begins with the rule name, followed by the parameter and result names and types (in this case, the rule name is add-expr, there are no parameters, and there is one result name v of type NumberT). This may optionally be followed by local declarations (there are none here - they are described later).

The left hand side of the rule is followed by the = symbol, a list of alternatives surrounded by curly braces, and is terminated by a semicolon. The alternatives are separated by the || symbol, and the last alternative may be separated from its predecessor (there must be one) using the ## symbol; if this is the case, then this alternative is the exception handler for the production (otherwise it has no exception handler).

An alternative may match the empty string (unless it is an exception handler alternative, in which case it must do something), using the symbol $ followed by the terminator symbol ;, i.e.:

	$ ;
Otherwise, an alternative is a sequence of items. The empty string alternative is only valid if the production has no results; if you want to match the empty string in a production that has a result, it is necessary to use an action (or an identity) to provide that result.
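
For example, a rule matching an optional semicolon (the terminal name semi is illustrative) could be written as:

	opt-semi : () -> () = {
		semi ;
	    ||
		$ ;
	} ;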

An item is an identity, or a call to a (possibly anonymous) rule, a terminal, an action or a predicate. An identity looks like an assignment in conventional programming languages:

	( a, b, c ) = ( d, e, f ) ;
Each tuple must contain the same number of names. In the case of a one-tuple, the parentheses may be dropped, e.g.:
	a = d ;
Note that this is a binding operation, not an assignment. Each name on the left hand side must be a new name. It is illegal to redeclare a name that is already in scope. It is possible to assign to a name which is already in scope by prefixing that name with the & symbol, e.g.:
	( a, &b, c ) = ( d, e, f ) ;
would assign to the name b, which must have been previously defined (it may be a parameter; if it is a call by reference parameter, then the change will propagate outside to the calling rule).

It is also possible to use the ! symbol in the result tuple to ignore results, e.g.:

	( a, !, b, ! ) = ( c, d, e, f ) ;
This is not particularly useful in an identity, but may be more useful in a call to a rule, terminal or action. A call to a terminal or rule looks like a call to a function in a conventional programming language, e.g.:
	( a, b ) = rule1 ( c, d ) ;
	( e, f ) = terminal1 () ;
Calls to actions have the same form, except that action names are surrounded by angle brackets, e.g.:
	( g, h, i ) = <action1> ( a, e ) ;
In addition, one of the names in the result tuple of the call to the action may be the predicate result symbol ?, in which case the action is used as a predicate (more details on predicates are given later).

When calling a rule, terminal or action, it is necessary to have declared it (or in the case of a rule declared or defined it) before the call.

If a rule or action is being invoked, and it takes one or more call by reference parameters, then the corresponding arguments should be prefixed by the & symbol, e.g.:

	length = <string-length> ( &string ) ;
If the rule, terminal or action has the zero-tuple as a result, then only the right hand side of the definition is required, e.g.:
	rule2 ( a, b ) ;
	terminal2 () ;
	<action2> ( c, d ) ;
If the rule, terminal or action has the zero-tuple as a parameter, then the parameter tuple may be omitted, e.g.:
	( a, b ) = rule3 ;
	terminal3 ;
	c = <action3> ;
In older versions of sid, it used to be possible to have ambiguous items, e.g.:
	a = b ;
where b was both a rule and a name. As local names may not shadow non-local and global names, this is no longer a problem.

In each case, the data flow through the rule is indicated using names. In the previous example of a production, both alternatives have the same data flow: the call to mul-expr returns one value, which is stored in the name v1, and the call to expr returns one value, which is stored in the name v2. Both of these names are passed to the action (add in the first alternative, subtract in the second), which returns a value into the name v (presumably the sum or difference of the previous two values). The name v is the name of the result, so this will be returned as the result of the rule. The exception handler (which is invoked if something fails) calls the action ex to produce the result v.

It is necessary that the types of the data flow through the production are correct. It is also necessary to define all of the result names for the production in each of the alternatives in that production.

An anonymous rule is written in the same way as the body of a normal rule, e.g.:

	list : () -> ( l : ListT ) = {
		n = number ;
		/* START ANONYMOUS RULE */ {
			? = <at-eof> ;
			l = <make-list> ( n ) ;
		    ||
			comma ;
			l1 = list ;
			l  = <cons> ( n, l1 ) ;
		    ##
			l = <com-or-eof> ( n ) ;
		} ; /* END ANONYMOUS RULE */
	} ;
An anonymous rule is always inlined.

The rule name may be followed by a sequence of definitions and declarations surrounded by the [ and ] symbols (which are followed by the rest of the rule). In this case, the definitions are local to the rule, e.g.:

	x-list [
		x = {
			terminal1 ;
			terminal2 ;
		    ||
			terminal3 ;
		} ;
	] = {
		x ;
	    ||
		x ;
		x-list ;
	} ;
In this case, the rule x would be local to the rule x-list and no other rule would be able to use it. In error messages, the name of the rule x would be written as x-list::x. All declarations and definitions that occur inside of the [ and ] symbols have the scope of the enclosing rule, unless they are preceded by the :: symbol, in which case they have global scope. This is particularly necessary for actions, as actions can only be defined with global scope.

It is also possible to define non-local names. These are declared as an identifier (the name), followed by the : symbol, followed by another identifier (its type), in a similar manner to an entry in a type tuple. Non-local names are not allowed at the outermost level (so they may not be prefixed with the :: symbol either). When a non-local name is defined, it is in scope within its defining rule and within all of the rules in the same scope that are defined after it.

Non-local names have their values saved on entry to their defining rule, and the value will be restored when the rule is exited. This includes exiting the rule tail recursively or because of an exception (if the rule has an exception handler, the non-local name will not be restored until the exception handler has exited). In almost all other respects non-local names are the same as local names. An example follows:

	rule1 [
		name1 : Type1T ;
		rule1_1 = {
			<action1> ( name1 ) ;
			rule2 ( name1 ) ;
		} ;
	] = {
		<action2> ( &name1 ) ;
		rule1_1 ;
	} ;
It is possible to associate an initialiser with a non-local name, by following the type name with a = symbol and the action name in angle brackets, e.g.:
	rule1 [
		name1 : Type1T = <action1> ;
	] = {
		// ....
	} ;
In this case the action is called at the start of the rule to initialise the non-local name. The action should return an object of the same type as the non-local name. Normally, the action takes no parameters, however it may take one parameter of the same type as the non-local name (or a reference to that type), in which case it will be given the saved value of the non-local name as an argument (this may be used to build a stack automatically for example).
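
As a minimal sketch of such an initialiser (the type ScopeT and the function scope_create are illustrative, not part of sid), the action in the C information file could receive the saved value and link the new value to it, building a stack of scopes:

	<new-scope> : ( enclosing ) -> ( current ) = @{
		@current = scope_create ( @enclosing ) ;
	@} ;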

5.5. The grammar entry points section

The final section lists the entry points to the grammar. It begins with the section header, followed by a comma separated list of rule names, terminated by a semicolon, e.g.:

	%entry% expr ;
If you are going to use a rule as an entry point into the grammar (i.e. you wish to call the associated function), you must list it in the entry points list. If not, the function may not exist.


6. The C information file

The grammar specification itself is not sufficient to produce a parser. There also needs to be output language specific information to allow the parser to interface with the program it is to be part of. In the case of the C output routines, sid needs to know the following information:

  1. What code should precede and succeed the automatically generated code.
  2. How to map the sid identifiers into C identifiers.
  3. How to do assignments for each type.
  4. How to get the current terminal number.
  5. How to get the result of the current terminal.
  6. How to advance the lexical analyser, to get the next terminal.
  7. What the actions are defined as, and how to pass parameters to them.
  8. How to save and restore the current terminal when an error occurs.
Eventually almost all of this should be user suppliable. At the moment, some of the information is supplied by the user in the C information file, some through macros, and some is built in. sid currently gets the information as follows:

1. The C information file has a header and a trailer section, which define code that precedes and succeeds the code that sid generates.

2. The C information file has a section that allows the user to specify mappings from sid identifiers into C identifiers. These are only valid for the following types of identifiers: types, functions (implementations of rules) and terminals. For other identifier types (or when no mapping is supplied), sid uses some default rules:

Firstly, sid applies a transform to the sid identifier name, to make it a legal C identifier. At present this maps _ to __, - to _H and : (this occurs in scoped names) to _C. All other characters are left unmodified. This transform cannot be changed.

sid also puts a prefix before all identifiers, to try to prevent clashes (and also to make automatically generated - i.e. numeric - identifiers legal). These prefixes can be redefined for each class of identifier, in the C information file. They should be chosen so as not to clash with any other identifiers (i.e. no other identifiers should begin with that prefix).

By default, the following prefixes are used:

ZT
This prefix is used before type identifiers, for the type name itself.
ZR
This prefix is used before rule identifiers, for the rule's implementation function.
ZL
This prefix is used before rule identifiers, for the rule's label when tail recursion is being eliminated. In this case, a number is inserted between the prefix and the identifier name, to prevent clashes when a rule is inlined twice in the same function. It is also used before other labels that are automatically generated and are just numbered.
ZI
This prefix is used before name identifiers used as parameters to functions, or in normal usage. It is also used by non-local names (which doesn't cause a problem as they always occur scoped, and local names never do).
ZO
This prefix is used before name identifiers used as results of functions. Results are passed back through reference parameters, and this prefix is used for those parameters. Another identifier with the ZI prefix is also used within the function, and the type's result assignment operator is used at the end of the function to assign the results to the reference parameters.
ZB
This prefix is used before the terminal symbol names in the generated header file.
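
As an illustration (not actual sid output), combining the identifier transform described above with these default prefixes, and assuming no mapping has been supplied:

	mult-expr	(rule)		becomes		ZRmult_Hexpr
	v1		(local name)	becomes		ZIv1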

3. Normally, sid will do assignments using the C assignment operator. Sometimes, this will not do the right thing, so the user can define a set of assignment operations for any type in the C information file.

4. sid expects the CURRENT_TERMINAL macro to be defined, and its definition should return an integer that is the current terminal. The macro should be an expression, not a statement.

5. It is necessary to define how to extract the results of all terminals in the C information file (if a terminal doesn't return anything, then it is not necessary to define how to get the result).

6. sid expects the ADVANCE_LEXER macro to be defined, and its definition should cause the lexical analyser to read a new token. The new terminal number should be accessible through the CURRENT_TERMINAL macro. On entry into the parser CURRENT_TERMINAL should give the first terminal number.
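
The CURRENT_TERMINAL and ADVANCE_LEXER macros are typically thin wrappers over a hand-written lexical analyser. A minimal sketch of such an interface (the type, field and function names are illustrative, chosen to match the header and terminal result extraction examples later in this document):

	typedef struct {
		int	t ;		/* current terminal number */
		union {
			unsigned	number ;
			char		*identifier ;
			char		*string ;
		} u ;			/* result of the current terminal */
	} LexerT ;

	extern LexerT	token ;
	extern void	next_token ( void ) ;	/* reads the next token into token */
The header example in section 6.4 defines CURRENT_TERMINAL and ADVANCE_LEXER in terms of such an interface.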

7. All actions, and their parameter and result names are defined in the C information file.

8. sid expects the SAVE_LEXER and RESTORE_LEXER macros to be defined. The first is called with an argument which is the error terminal value. The macro should save the current terminal's value, and set the current terminal to be the error terminal value. The second macro is called without arguments, and should restore the saved value of the current terminal. SAVE_LEXER will never be called more than once without a call to RESTORE_LEXER, so the save stack only needs one element.
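
A minimal sketch of these two macros, assuming the token variable and terminal field sketched above; since SAVE_LEXER is never called twice without an intervening RESTORE_LEXER, a single saved value suffices:

	static int saved_terminal ;

	#define SAVE_LEXER( e )		( saved_terminal = token.t, token.t = ( e ) )
	#define RESTORE_LEXER		( token.t = saved_terminal )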

The remainder of this section describes the layout of the C information file. The lexical conventions are described first, followed by a description of the sections in the order in which they should appear. Unlike the sid grammar file, not all sections are mandatory.

6.1. Lexical conventions

The lexical conventions of the C information file are very similar to those of the sid grammar file. There is a second class of identifier: the C identifier, which is a subset of the valid sid identifiers; there is also the C code block.

A C code block begins with @{ and is terminated by @}. The code block consists of all of the characters between the start and end of the code block, subject to substitutions. All substitutions begin with the @ character. The following substitutions are recognised:

@@
This substitutes the @ character itself.

@:label
This form marks a label, which will be substituted for in the output code. This is necessary, because an action may be inlined into the same function more than once. If this happens, then without doing label substitution there would be two identical labels in the same scope. With label substitution, this problem is avoided. In general, all references to a label within an action should be prefixed with @:. This substitution may not be used in header and trailer code blocks.

@identifier
This form marks a parameter or result identifier substitution. If parameter and result identifiers are not prefixed with an @ character, then they will not be substituted. It is an error if the identifier is not a parameter or a result. Header and trailer code blocks have no parameters or results, so it is always an error to use identifier substitution in them. It is an error if any of the result identifiers are not substituted at least once.

Result identifiers may be assigned to using this form of identifier substitution, but parameter identifiers may not be (nor may their address be taken - they are immutable). To try to prevent this, parameters that are substituted may be cast to their own type, which makes them unmodifiable in ISO C (see the notes on the casts language specific option).

@&identifier
This form marks a parameter identifier whose address is to be substituted, but whose contents will not be modified. The effects of modifying the identifier are undefined. It is an error to use this in parameter assignment operator definitions.

@=identifier
This form marks a parameter identifier that will be modified. For this to be useful, the parameter should be a call by reference parameter, so that the effect of the modification will be propagated. This substitution is only valid in actions (assignment operators are not allowed to modify their parameters).

@!
This form marks an exception raise. In the generated code, a jump to the current exception handler will be substituted. This substitution is only valid in actions.

@.
This form marks an attempt to access the current terminal. This substitution is only valid in actions.

@>
This form marks an attempt to advance the lexical analyser. This substitution is only valid in actions.

All other forms are illegal. Note that in the case of labels and identifiers, no white space is allowed between the @:, @, @& or @= and the identifier name. An example of a code block is:

	@{
		/* A code block */
		{
			int i ;
			if ( @param ) {
				@! ;
			}
			@result = 0 ;
			for ( i = 0 ; i < 100 ; i++ ) {
				printf ( "{%d}\n", i ) ;
				@result += i ;
			}
			@=param += @result ;
			if ( @. == TOKEN_SEMI ) {
				@> ;
			}
		}
	@}

6.2. The prefixes section

The first section in the C information file is the prefix definition section. This section is optional. It begins with the section header, followed by a list of prefix definitions. A prefix definition begins with the prefix name, followed by a = symbol, followed by a C identifier that is the new prefix, and terminated by a semicolon. The following example shows all of the prefix names, and their default values:

	%prefixes%
	type		= ZT ;
	function	= ZR ;
	label		= ZL ;
	input		= ZI ;
	output		= ZO ;
	terminal	= ZB ;

6.3. The maps section

The section that follows the prefixes section is the maps section. This section is also optional. It begins with its section header, followed by a list of identifier mappings. An identifier mapping begins with a sid identifier (either a type, a rule or a terminal), followed by the -> symbol, followed by the C identifier it is to be mapped to, and terminated by a semicolon. An example follows:

	%maps%
	NumberT		-> unsigned ;
	calculator	-> calculator ;
Note that it is not possible to map type identifiers to be arbitrary C types. It will be necessary to typedef or macro define the type name in the C file.
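
For example (the names ListT and struct list are illustrative), a pointer type cannot be written directly in the maps section; instead the sid type name can be kept and a definition supplied in the header code block:

	typedef struct list *ListT ;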

It is recommended that all types, terminals and entry point rules have their names mapped in this section, although this is not necessary. If the names are not mapped, they will appear under their mangled (prefixed) forms in the rest of the program.

6.4. The header section

After the maps section comes the header section. This begins with the section header, followed by a code block, followed by a comma, followed by a second code block, and terminated with a semicolon. The first code block will be inserted at the beginning of the generated parser file; the second code block will be inserted at the start of the generated header file. An example is:

	%header% @{
	#include "lexer.h"

	LexerT token ;

	#define CURRENT_TERMINAL	token.t
	#define ADVANCE_LEXER		next_token ()

	extern void terminal_error () ;
	extern void syntax_error () ;
	@}, @{
	@} ;

6.5. The assignments section

The assignments section follows the header section. This section is optional. Normally, assignment between two identifiers will be done using the C assignment operator. In some cases this will not do the correct thing, and it is necessary to do the assignment differently. All types for which this applies should have an entry in the assignments section. The section begins with its header, followed by definitions for each type that needs its own assignment operator. Each definition should have one parameter, and one result. The action's name should be the name of the type. An example follows:

	%assignments%

	ListT : ( l1 ) -> ( l2 ) = @{
		if ( @l2.head = @l1.head ) {
			@l2.tail = @l1.tail ;
		} else {
			@l2.tail = &( @l2.head ) ;
		}
	@} ;
If a type has an assignment operator defined, it must also have a parameter assignment operator and a result assignment operator defined (more precisely, it must have either no assignment operations defined, or all three defined).

6.6. The parameter assignments section

The parameter assignments section is very similar to the assignments section (which it follows), and is also optional. If a type has an assignment section entry, it must have a parameter assignment entry as well.

The parameter assignment operator is used in function calls to ensure that the object is copied correctly: if no parameter assignment operator is provided for a type, the standard C call by copy mechanism is used; if a parameter assignment operator is provided for a type, then the address of the object is passed by the calling function, and the called function declares a local of the same type, and uses the parameter assignment operator to copy the object (this should be remembered when passing parameters to entry points that have arguments of a type that has a parameter assignment operator defined).

The difference between the parameter assignment operator and the assignment operator is that the parameter identifier to the parameter assignment operator is a pointer to the object being manipulated, rather than the object itself. An example parameter assignment section is:

	%parameter-assignments%

	ListT : ( l1 ) -> ( l2 ) = @{
		if ( @l2.head = @l1->head ) {
			@l2.tail = @l1->tail ;
		} else {
			@l2.tail = &( @l2.head ) ;
		}
	@} ;

6.7. The result assignments section

The result assignments section is very similar to the assignments section and the parameter assignments section (which it follows), and is also optional. If a type has an assignment section entry, it must also have a result assignment entry. The only difference between the two is that the result identifier of the result assignment operation is a pointer to the object being manipulated, rather than the object itself. Result assignments are only used when the results of a rule are assigned back through the reference parameters passed into the function. An example result assignment section is:

	%result-assignments%

	ListT : ( l1 ) -> ( l2 ) = @{
		if ( @l2->head = @l1.head ) {
			@l2->tail = @l1.tail ;
		} else {
			@l2->tail = &( @l2->head ) ;
		}
	@} ;

6.8. The terminal result extraction section

The terminal result extraction section follows the result assignment section. It defines how to extract the results from terminals. The section begins with its section header, followed by the terminal extraction definitions.

There must be a definition for every terminal in the grammar that returns a result. It is an error to include a definition for a terminal that doesn't return a result. The result of the definition should be the same as the result of the terminal. An example of the terminal result extraction section follows:

	%terminals%

	number : () -> ( n ) = @{
		@n = token.u.number ;
	@} ;

	identifier : () -> ( i ) = @{
		@i = token.u.identifier ;
	@} ;

	string : () -> ( s ) = @{
		@s = token.u.string ;
	@} ;

6.9. The action definition section

The action definition section follows the terminal result extractor definition section. The format is similar to the previous sections: the section header followed by definitions for all of the actions. An action definition has the following form:

	<action-name> : ( parameters ) -> ( results ) = code-block ;
This is similar to the form of all of the previous definitions, except that the name is surrounded by angle brackets. What follows also applies to the other definitions (unless stated otherwise).

The action-name is a sid identifier that is the name of the action being defined; parameters is a comma separated list of C identifiers that will be the names of the parameters passed to the action, and results is a comma separated list of C identifiers that will be the names of the result parameters passed to the action. The code-block is the C code that defines the action. It is expected that this will assign a valid result to each of the result identifier names.

The parameter and result tuples have the same form as in the language independent file, except that the types are optional. Like the language independent file, if the type of an action is zero-tuple to zero-tuple, then the type can be omitted, e.g.:

	<action> = @{ /* .... */ @} ;
An example action definition section is:
	%actions%

	<add> : ( v1, v2 ) -> ( v3 ) = @{
		@v3 = @v1 + @v2 ;
	@} ;

	<subtract> : ( v1 : NumberT, v2 : NumberT ) -> ( v3 : NumberT ) = @{
		@v3 = @v1 - @v2 ;
	@} ;

	<multiply> : ( v1 : NumberT, v2 ) -> ( v3 ) = @{
		@v3 = @v1 * @v2 ;
	@} ;

	<divide> : ( v1, v2 ) -> ( v3 : NumberT ) = @{
		@v3 = @v1 / @v2 ;
	@} ;

	<print> : ( v ) -> () = @{
		printf ( "%u\n", @v ) ;
	@} ;

	<error> = @{
		fprintf ( stderr, "ERROR\n" ) ;
		exit ( EXIT_FAILURE ) ;
	@} ;
Do not define static variables in action definitions; if you do, you will get unexpected results. If you wish to use static variables in action definitions, then define them in the header block.

6.10. The trailer section

After the action definition section comes the trailer section. This has the same form as the header section. An example is:

	%trailer% @{
	int main ()
	{
		next_token () ;
		calculator ( NULL ) ;
		return 0 ;
	}
	@}, @{
	@} ;
The code blocks will be appended to the generated parser, and the generated header file respectively.


7. Predicates

Predicates provide the user with a mechanism for altering the control flow in a manner that terminals alone cannot do.

During the factorisation process, rules that begin with predicates are expanded if necessary to ensure that predicates that may be used to select which alternative to go down always begin the alternative, e.g.:

	rule1 = {
		rule2 ;
		/* .... */
	    ||
		/* .... */
	} ;

	rule2 = {
		? = <predicate> ;
		/* .... */
	    ||
		/* .... */
	} ;
would be expanded into:
	rule1 = {
		? = <predicate> ;
		/* .... */
		/* .... */
	    ||
		/* .... */
		/* .... */
	    ||
		/* .... */
	} ;
Also, if a predicate is used to select which alternative to use, it must be the first thing in the alternative, so the following would not be allowed:
	rule = {
		<action> ;
		? = <predicate> ;
		/* .... */
	    ||
		/* .... */
	} ;
When predicates begin a rule, they are executed (in some arbitrary order) until one of them returns true. The alternative that this predicate begins is then selected. If no predicates return true, then one of the remaining alternatives is selected based upon the current terminal (or an error occurs).

It is important that predicates do not contain dependencies upon the order of evaluation. In practice, predicates are likely to be simple, so this shouldn't be a problem.

When predicates are used within an alternative, they behave like terminals. If they evaluate to true, then parsing continues. If they evaluate to false, then an exception is raised.
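
For example (the rule, terminal and action names are illustrative, and the terminal and actions are assumed to be declared elsewhere), a predicate used within an alternative simply guards the rest of that alternative, with failure passing control to the exception handler:

	checked-number : () -> ( n : NumberT ) = {
		n = number ;
		? = <number-in-range> ;
	    ##
		n = <default-number> ;
	} ;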


8. Error handling

If the input given to the parser is valid, then the parser will not need to produce any errors. Unfortunately this is not always the case, so sid provides a mechanism for handling errors.

When an error occurs, an exception is raised. This passes control to the nearest enclosing exception handler. If there is no exception handler at all, the entry point function will return with the current terminal set to the error value.

An exception handler is just an alternative that is executed when a terminal or predicate fails. This should obviate the need to rely upon language specific mechanisms (such as setjmp and longjmp) for error recovery.
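
For example (the rule, terminal and action names are illustrative), a statement rule might recover locally by handling the exception itself; the <recover> action would be defined in the C information file, perhaps skipping tokens using the @. and @> substitutions until it reaches a safe point:

	statement : () -> () = {
		expression ;
		semi ;
	    ##
		<recover> ;
	} ;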


9. Call by reference

The default behaviour of sid is to do argument passing using call by copy semantics, and to not allow mutation of parameters of rules and actions (however inlined rules, and rules created during factoring have call by reference parameters). However it is possible to give rule and action parameters call by reference semantics, using the & symbol in the type specification (as described earlier). It is also possible to mutate parameters of actions, using the @= substitution in the action body (also described earlier). It is important to do the correct substitutions in action definitions, as sid uses this information to decide where it can optimise the output code.

If a call by copy parameter is mutated, then sid will introduce a new temporary variable and copy the parameter into it - this temporary will then be mutated. Similar code will be output for rules that have call by copy parameters that are mutated (e.g. as a call by reference argument to an action that mutates its parameters).
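
As a minimal sketch (all names are illustrative), three pieces must agree. In the grammar file, the action is declared with a call by reference parameter and the argument is prefixed with the & symbol at the call site:

	<increment> : ( :NumberT & ) -> () ;

	count-things : ( count : NumberT & ) -> () = {
		thing ;
		<increment> ( &count ) ;
	} ;
In the C information file, the action definition uses the @= substitution so that the mutation is propagated to the caller:
	<increment> : ( n ) -> () = @{
		@=n += 1 ;
	@} ;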


10. Calling entry points

When calling a function that implements an entry point rule, it should be called with the rule's parameters as the first arguments, followed by the addresses of the rule's results as the remaining arguments. The parameters should have their addresses passed if they are of a type that has a parameter assignment operator defined, or if the parameter is a call by reference parameter.

For example, given the following rule:

	rule1 : ( :Type1T, :Type2T, :Type3T & ) -> ( :Type4T ) ;
where Type2T has a parameter assignment operator defined, and rule1 is mapped to rule1 (and the type names are mapped to themselves), the call would be something like:
	Type1T a = make_type1 () ;
	Type2T b = make_type2 () ;
	Type3T c = make_type3 () ;
	Type4T d ;

	rule1 ( a, b, &c, &d ) ;


11. Glossary

This section describes some of the terms used in the sid documentation.

Alternative
An alternative is a sequence of items.

Exception handler
An exception handler is a special type of alternative. Each rule may have at most one exception handler. An exception handler is invoked if the current terminal does not match any of the expected terminals, if a predicate fails, or if an action raises an exception, within the scope of the exception handler.

Expansion
This is the process of physically inlining a rule into another rule. It is done during the factoring process to turn the grammar into a form that a parser can be produced for. See the entry for factoring.

Factoring
This is one of the transforms that sid performs on the grammar. See the overview section for a description of the factoring process.

First set
The first set of a rule (or alternative) is the set of terminals and predicates that can start that rule (or alternative).

Follow set
The follow set of a rule is the set of terminals and predicates that can follow the rule in any of its invocations.

Inlining
This is the process of outputting the code for parsing one rule within the function that parses another rule. This is normally done as part of the output process. Expansion is a form of inlining performed during the factoring process, but the inlining is done by modifying the grammar, rather than as part of the output phase.

Item
An item is the equivalent of a statement in a conventional programming language. It can be an invocation of a rule, terminal, action or predicate, or an identity operation (assignment).

Name
A name is an identifier that is used to pass information between rules and actions. Local names are defined within a rule, and only exist within the rule itself. Non-local names are defined in a rule's scoped definitions section and exist in all of the rules in that scope. Non-local names are also saved and restored across calls to the rule that defines them.

Recursion
Recursion is where a rule invokes itself. Direct recursion is where the rule invokes itself from one of its own alternatives; indirect recursion is where a rule invokes another rule (which invokes another rule etc.) which eventually invokes the original rule.

Left recursion is a form of recursion where all of the recursive calls occur as the first item in an alternative. It is not possible to produce a parser for a grammar that contains left recursions, so sid turns left recursions into right recursions. This process is known as left recursion elimination.

Right recursion is a form of recursion where all of the recursive calls occur as the last item in an alternative.

Production
See rule.

Rule
A rule is a sequence of alternatives. A rule may contain a special alternative that is used as an exception handler. A rule is also referred to as a production; this term is normally used when talking about the definition of a rule.

See-through
A rule is said to be see-through if there is an expansion of the rule that does not contain any terminals or predicates.


12. Understanding error messages

This section tries to explain what some of the error messages that are reported by the sid transforms mean. It does not contain descriptions of messages like "type 'some type' is unknown", as these should be self-explanatory.

12.1. Left recursion elimination errors

The parameter or result types of the left recursive calls in the following productions do not match: PRODUCTIONS: This means that there is a set of rules which call each other left recursively (i.e. the first item in some of the alternatives in each rule is a call to another rule in the set), and they do not all have the same parameter and result types, e.g.:

	rule1 : ( a : Type1T, b : Type1T, c : Type2T, d : Type2T ) -> () = {
		rule2 ( a, b ) ;
	    ||
		terminal1 ;
	} ;

	rule2 : ( a : Type1T, b : Type2T ) -> () = {
		rule1 ( a, a, b, b ) ;
	    ||
		terminal2 ;
	} ;

The exception handlers in the left recursion involving the following productions do not match: PRODUCTIONS: This means that there is a set of productions which call each other left recursively (i.e. the first item in an alternative is a call to another production in the set), and they do not all have the same exception handler, e.g.:

	rule1 = {
		rule2 ;
	    ||
		terminal1 ;
	    ##
		<action1> ;
	} ;

	rule2 = {
		rule1 ;
	    ||
		terminal2 ;
	    ##
		<action2> ;
	} ;
When using exception handlers, it is quite likely that the left recursion elimination will need to be done manually, to ensure that the exception handlers occur at the correct place.

The argument names of the left recursive calls in the following productions do not match: PRODUCTIONS: This means that there is a set of productions which call each other left recursively (i.e. the first item in an alternative is a call to another production in the set), and the arguments to one of the left recursive calls are not the same as the parameters of the calling rule, e.g.:

	rule1 : ( a : Type1T, b : Type1T ) -> () = {
		rule1 ( b, a ) ;
	    ||
		terminal1 ;
	} ;

A non-local name in the rule 'RULE' is not in scope in the rule 'RULE' in the left recursive cycle involving the following productions: PRODUCTIONS: This means that there is a set of productions which call each other left recursively (i.e. the first item in an alternative is a call to another production in the set), and the first named rule uses a non-local name that is not in scope in the second named rule, e.g.:

	rule1 [
		name1 : Type1T ;
		rule1_1 [
			name1_1 : Type1T ;
		] = {
			rule1 ;
			<action1_1> ( name1_1 ) ;
		  ||
			terminal1 ;
		} ;
	] = {
		terminal2 ;
	  ||
		rule1_1 ;
		<action1> ( name1 ) ;
	} ;

The rule 'RULE' declares non-local names in the left recursive cycle with more than one entry point involving the following productions: PRODUCTIONS: This means that there is a set of productions which call each other left recursively (i.e. the first item in an alternative is a call to another production in the set), and the named rule defines non-local variables even though it is not the only entry point to the cycle, e.g.:

	rule1 [
		name1 : Type1T ;
		rule1_1 = {
			<action1_1> ( name1 ) ;
		} ;
	] = {
		terminal1 ;
		rule1_1 ;
	    ||
		rule2 ;
		<action1> ( name1 ) ;
	} ;

	rule2 = {
		rule1 ;
		<action2> ;
	    ||
		terminal2 ;
	} ;

No cycle termination for the left recursive set involving the following rules: RULES: This means that there is a set of productions which call each other left recursively (i.e. the first item in an alternative is a call to another production in the set), and they do not contain an alternative that begins with a non-left recursive call, e.g.:

	rule1 = {
		rule2 ;
	    ||
		rule3 ;
	} ;

	rule2 = {
		rule1 ;
	    ||
		rule3 ;
	} ;

	rule3 = {
		rule1 ;
	    ||
		rule2 ;
	} ;

12.2. First set computation errors

Cannot compute first set for PRODUCTION: This means that sid cannot compute the set of terminals and predicates that may start the production. This is normally because there is a recursive call (or cycle) that contains no terminals, e.g.:

	rule1 = {
		<action1> ;
		rule1 ;
	    ||
		terminal1 ;
	} ;
This is not removed by the left recursion elimination phase, as the call is not the leftmost item in the alternative.

Can see through to predicate 'PREDICATE' in production PRODUCTION: This means that there is a predicate that isn't the first item in its alternative, but is preceded only by see-through items, e.g.:

	rule1 = {
		<action1> ;
		? = <predicate> ;
	    ||
		terminal1 ;
	} ;

Can see through to predicates in rule 'RULE' in production PRODUCTION: This means that the first rule has at least one predicate in its first set, and the second rule calls it in a position where it is not the first item in the alternative and is preceded only by see-through items, e.g.:

	rule1 = {
		? = <predicate> ;
	    ||
		terminal1 ;
	} ;

	rule2 = {
		<action> ;
		rule1 ;
	    ||
		terminal2 ;
	} ;

The rule 'RULE' has all terminals in its first set and has a redundant see-through alternative: This means that the rule's first set (the set of all terminals that can start the rule) includes all possible input terminals, and the rule also has a see-through alternative. The see-through alternative will never be used, as one of the other alternatives will always be chosen.

12.3. Factoring errors

Too many productions (NUMBER) created during factorisation: This normally means that sid cannot factor the grammar. You will need to rewrite the offending part. Unfortunately there is no easy way to do this. Start by looking at the dump file for a set of rules that seem to have been expanded a lot.

The rule 'RULE' cannot be expanded into 'RULE' as the exception handlers don't match: When sid performs factoring, it needs to expand calls to certain rules into the rules that call them (this is described in the overview section). If the called rule has an exception handler and it is not the same as the exception handler of the calling rule, then the expansion will fail.

The rule 'RULE' cannot be expanded into 'RULE' as it contains non-local name definitions: When sid performs factoring, it needs to expand calls to certain rules into the rules that call them (this is described in the overview section). If the called rule defines any non-local names, then the expansion will fail.

12.4. Checking errors

Collision of terminal(s) TERMINALS in rule 'RULE': This error means that more than one alternative in the named rule begins with the named terminals, e.g.:

	rule1 = {
		<action1> ;
		terminal1 ;
	    ||
		terminal1 ;
	} ;
Normally, the factoring process will remove the problem, but when something like the above prevents the factoring from occurring, this error will be produced.

Collision of predicate 'PREDICATE' in rule 'RULE': This error occurs when more than one alternative in the named rule begins with the named predicate, e.g.:

	rule1 = {
		( a, ? ) = <predicate> ;
		<action1> ( a ) ;
	    ||
		( ?, b ) = <predicate> ;
		<action2> ( b ) ;
	} ;
Again, it is normally the case that the factoring process will remove this problem, but if the same predicate uses different predicate results in different alternatives, this error will be produced.

The terminal(s) TERMINALS can start rule 'RULE' which is see-through, and the same terminal(s) may appear in the following situations: ALTERNATIVES: This means that there are one or more terminals that can start the named rule (which is see-through), and may also follow it, e.g.:

	rule1 = {
		terminal1 ;
	    ||
		$ ;
	} ;

	rule2 = {
		rule1 ;
		terminal1 ;
	    ||
		terminal2 ;
	} ;
The alternatives listed are the alternatives which call the rule, and contain (some of) the named terminals after the call. The call is highlighted.

The predicate(s) PREDICATES can start rule 'RULE' which is see-through and the same predicate(s) may appear in the following situations: ALTERNATIVES: This means that there are one or more predicates that can start the named rule (which is see-through), and may also follow it, e.g.:

	rule1 = {
		? = <predicate> ;
	    ||
		$ ;
	} ;

	rule2 = {
		terminal1 ;
		rule1 ;
		? = <predicate> ;
	    ||
		terminal2 ;
	} ;
The alternatives listed are the alternatives which call the rule, and contain (some of) the named predicates after the call. The call is highlighted.

The rule 'RULE' contains more than one see-through alternative: This error occurs if the rule has more than one alternative that doesn't need to read a terminal or a predicate, e.g.:

	rule1 = {
		<action1> ;
	    ||
		<action2> ;
	} ;


Part of the TenDRA Web.
Crown Copyright © 1998.