MODE 32 Colour Formats

I don't know the proper vocabulary for some of this so I'll just make it up as I go:

In mode 32 the formal colour designation (the "design format") is in the form of a 16 bit word, thus:

                   MSB      LSB
                -------- --------
1.              rrrrrggg.gggbbbbb

in other words, 5 bits red colour information, 6 bits green and 5 bits blue. (See Tony Tebby's GD2-V2.doc rev. 28.05.2001.) So this is how you present the colours to the driver in NATIVE mode 32. According to that:

COLOUR_NATIVE: CLS#2
BLOCK#2; 60, 10, 10,  0, %1111100000000000
BLOCK#2; 60, 10, 10, 10, %0000011111100000
BLOCK#2; 60, 10, 10, 20, %0000000000011111

will produce red, green and blue stripes in console window#2.
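To make the bit layout concrete, here is a minimal Python sketch of how such a design-format word is assembled from its 5/6/5 components. (The function name pack_mode32 is my own invention for illustration; it is not part of any QL toolkit.)

```python
def pack_mode32(r, g, b):
    """Pack 5-bit red, 6-bit green and 5-bit blue into the 16-bit
    mode 32 "design format" word: rrrrrggg gggbbbbb."""
    assert 0 <= r <= 31 and 0 <= g <= 63 and 0 <= b <= 31
    return (r << 11) | (g << 5) | b

# The three stripe colours from the BLOCK example above:
print(format(pack_mode32(31, 0, 0), '016b'))  # full red:   1111100000000000
print(format(pack_mode32(0, 63, 0), '016b'))  # full green: 0000011111100000
print(format(pack_mode32(0, 0, 31), '016b'))  # full blue:  0000000000011111
```

Note that full green is the odd one out, since it has six bits rather than five.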

Note that in high colour mode all graphic operations are performed in "native" mode. The commands COLOUR_QL, COLOUR_NATIVE, COLOUR_PAL and COLOUR_24 only alter which traps are used to set the colour, and therefore how the colour code you give to INK, PAPER etc is to be interpreted. That colour code is then converted to the internal format.

For technical reasons, presumably, machines with Intel-infected CPUs operate internally on Little-endian numbers. (If you really, really want to know more about this, look up Endianness on Wikipedia!) So, to maximise manipulation and throughput of graphics data on SMSQ/E systems that run on such hardware (emulators all), SMSQ/E stores this data switched around compared to the original Motorola model. So you get the anomaly of having, in effect, two mode 32 definitions: the "design format" as in (1) above, and the "implementation format" (2), as seen below:

                   MSB      LSB
                -------- --------
2.              gggbbbbb.rrrrrggg

(See the SMSQ/E documentation dev8_extras_doc_display_txt.)

The colour traps all take their colour parameters in the design format (1) and then, internally, switch the bytes around into the implementation format (2). This latter format is how screen data is stored in memory, and also how it is stored in PICtures and SPRites.
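Since the two formats differ only in byte order, converting between them is a plain 16-bit byte swap, and the same operation works in both directions. A hedged Python sketch (swap16 is my own name, not a toolkit command):

```python
def swap16(w):
    """Swap the two bytes of a 16-bit word. This converts a mode 32
    colour between design format (1) and implementation format (2);
    applying it twice gives the original word back."""
    return ((w & 0x00FF) << 8) | ((w & 0xFF00) >> 8)

# Full red, design format rrrrrggg gggbbbbb -> implementation gggbbbbb rrrrrggg:
print(format(0xF800, '016b'))          # 1111100000000000 (design)
print(format(swap16(0xF800), '016b'))  # 0000000011111000 (implementation)
```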

Therefore, if you extract a pixel off the screen, you can't just give that value back to PAPER or BLOCK without first switching it back. And vice versa: you can't just poke a design-format colour code into screen memory or a sprite; you first need to switch it into the implementation-format colour code. You can, however, peek a pixel off the screen and poke it directly into a sprite to get the same colour, because both are in the internal implementation format. And you can, of course, also SBYTES a portion of the screen to get a valid mode 32 screen dump or picture.

Commands that fetch a value from within the system, e.g. RPIX% (off the screen) or GETCOL% (from the Channel Definition Block), get the Little-endian (2) version of the colour code and switch it around to Big-endian (1), so the result can be used directly by INK, PAPER, BLOCK, etc. This allows the commands to work the same way whichever GD2 mode your program happens to be running in.
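The effect of such a fetch can be sketched in Python: take an implementation-format word as stored in screen memory, byte-swap it to the design format, and read off the components. (swap16 and unpack_design are illustrative names of my own, not toolkit functions.)

```python
def swap16(w):
    # byte swap: implementation format (2) <-> design format (1)
    return ((w & 0x00FF) << 8) | ((w & 0xFF00) >> 8)

def unpack_design(w):
    # split a design-format word into its 5-bit red, 6-bit green, 5-bit blue
    return (w >> 11) & 0x1F, (w >> 5) & 0x3F, w & 0x1F

impl = 0x00F8                  # a red pixel as stored in screen memory
design = swap16(impl)          # what RPIX% effectively returns: 0xF800
print(unpack_design(design))   # (31, 0, 0): full red, no green, no blue
print(swap16(design) == impl)  # True: swap back again before poking a sprite
```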

However, to use those colour codes to produce sprites or pictures, they have to be switched back again! Or, alternatively, they can simply be left in their internal, Little-endian state in the first place.

Since there has been some confusion about this topic recently I've gone through the relevant Knoware toolkit commands to list how each command works in relation to this. Commands that expect parameters or return results in the mode 32 "design format" are labelled (1), and those in raw/Little-endian/internal or "implementation format" are labelled (2).

Not directly related to graphics input or output, but still relevant here: there are a few commands on Knoware to swap bytes around where needed to convert between the two formats:

I have never been lost, but I will admit to being confused for several weeks.
- pjw with Daniel Boone