Objectives of lecture:
1 Basic computer graphics transformations
2 Clipping
3 Line crossing
4 Line generalization (REDUCE/REDUCT)
Data space => Display space (preferably with some intermediary
device-independent space)
Views are not comprehensive, and the sheet-edge model is not adequate...
Hence, "windowing and clipping" are required
Data window, "viewport", screen window (all three may
be manipulated as with scroll bars moving over some virtual "screen")
Code fragment from ODYSSEY based on classical computer graphics
work
Windows parallel to the axes are simpler.
Clipping polygons is trickier, as you have to figure out which
portion to shade.
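For axis-parallel windows, the classic trick (Cohen-Sutherland style; a minimal sketch, not the ODYSSEY fragment itself) encodes each endpoint in four bits recording which side(s) of the window it falls on; segments whose endpoint codes share a set bit lie wholly outside and can be rejected with no intersection arithmetic:

```python
# Sketch of Cohen-Sutherland outcodes for an axis-parallel window.
# Constant names and the function name are illustrative assumptions.
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Four-bit code telling which side(s) of the window (x, y) is on."""
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code  # 0 means the point is inside the window
```

If `outcode(p1) & outcode(p2)` is nonzero, both endpoints are off the same side and the segment is trivially rejected; if both codes are zero, it is trivially accepted.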
Saalfeld sets up six alternative representations of a line
and proposes the point vector form. (lecture from Bowyer and Woodwark)
Clarke (11.3.2, pages 198-203) makes each mistake described by
Douglas, as does Wise (Chapter 3).
Lines parameterized as y = a + bx will produce lots of special cases
and trouble.
Wise just "stop"s the program if both lines are vertical (line 5 page 48)! Is that a solution?
His test for "vertical" is EXACT, which implies perfect
hardware and no issues with resolution. Other code uses some constant
for SMALL... (e.g. Clarke)
Also, the use of a fixed SMALL does not account for different
scales of coordinates.
The method is extremely sensitive to nearly collinear lines (it divides
by the difference in slopes; Wise page 49, line 6)
Wise also "stop"s when lines are exactly parallel,
but there is still a need to decide if they intersect! [If you
have shapefiles, there are a LOT of these lines; they are the
shared segments on common borders...]
Also, a potential problem: a large coordinate range loses significant
digits when no offset is subtracted
Douglas (along with the writers of Programmer's Geometry) shows reasons
for using the
ax + by + c = 0 notation for lines (it must be normalized, too...).
It works by computing the distance from a point to the line (the sign
distinguishes above from below)
Douglas's paper presents "6 ways two points can be located
relative to another line",
thus creating a 6x6 matrix of results (one segment's case against the
other segment's case);
These are a start, but adding a tolerance (as in ODYSSEY and in
Arc, etc.) adds even more cases.
Dougenik expanded the cases by recognizing that each point could
be left, right, or "too close to call" [within tolerance]
of the test line. This makes 9 cases (the three for each end are
independent of the other end), thus a 9x9 matrix with 13 distinct outcomes.
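The per-endpoint classification can be sketched as follows (a Dougenik-style three-way test; the names and the simple fixed tolerance are assumptions for illustration). Classifying both endpoints of a segment against the other line gives the 3 x 3 = 9 cases per segment:

```python
LEFT, RIGHT, ON = 1, -1, 0   # ON means "too close to call", within tolerance

def classify(signed_dist, tol):
    """Three-way classification of one endpoint against a line."""
    if signed_dist > tol:
        return LEFT
    if signed_dist < -tol:
        return RIGHT
    return ON

def segment_case(d1, d2, tol):
    """One of the 9 cases for a segment whose endpoints lie at signed
    distances d1, d2 from the other line (each end is independent)."""
    return classify(d1, tol), classify(d2, tol)
```

Pairing one segment's case against the other's then fills out the 9x9 matrix the notes describe.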
Line treatment anomalies still happen (underflow results detected...)
Big picture: it is more important to try to avoid calling
CROSS (intersect) altogether
Wise suggests using the bounding box (he calls it MER for minimum enveloping rectangle), but this does nothing to reduce complexity...
[more data structures, not more algorithms]
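A minimal sketch of that bounding-box (MER) pre-filter, under the usual convention (an assumption here) that touching boxes count as overlapping: if the boxes are disjoint, the segments cannot intersect, so CROSS need never be called. It helps in practice, but the worst case is unchanged, which is the complexity point above.

```python
def bbox_overlap(seg1, seg2):
    """True if the axis-parallel bounding boxes of two segments overlap.
    A cheap rejection test: False means the segments cannot intersect."""
    (x1, y1), (x2, y2) = seg1
    (x3, y3), (x4, y4) = seg2
    return (min(x1, x2) <= max(x3, x4) and min(x3, x4) <= max(x1, x2)
            and min(y1, y2) <= max(y3, y4) and min(y3, y4) <= max(y1, y2))
```

Testing every pair of boxes is still O(n^2); the real gains come from indexing the boxes (a data structure), not from the test itself.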
Next topic: Line generalization (REDUCE)
Version of 13 January 2003