Rolling D20s on the Z80
and other random thoughts
about randomness
My current project uses simulated dice rolls with varying face counts for combat results. I originally wasn't planning to target 8-bit systems, but obviously I changed my mind. I went through a couple of phases finding the best implementation, and I thought it interesting enough to write down!
To start: Generating a random 8-bit number through a linear feedback shift register on Z80 has been well documented.
I use the following method, which is found all over the internet:
;James Montelongo
;optimized by Spencer Putt
;out:
; a = 8 bit random number
; clobbers ALL
z80rand:
ld hl,_LFSRSeed+4
ld e,(hl)
inc hl
ld d,(hl)
inc hl
ld c,(hl)
inc hl
ld a,(hl)
ld b,a
rl e \ rl d
rl c \ rla
rl e \ rl d
rl c \ rla
rl e \ rl d
rl c \ rla
ld h,a
rl e \ rl d
rl c \ rla
xor b
rl e \ rl d
xor h
xor c
xor d
ld hl,_LFSRSeed+6
ld de,_LFSRSeed+7
ld bc,7
lddr
ld (de),a
ret
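The routine shifts an 8-byte buffer at _LFSRSeed (it touches _LFSRSeed+0 through _LFSRSeed+7), so that storage has to exist somewhere and should be seeded with something non-zero at startup. A minimal sketch, assuming a spasm/TASM-style assembler (the exact directive syntax will vary):

_LFSRSeed:
.db 0x01,0x23,0x45,0x67,0x89,0xab,0xcd,0xef ; 8 seed bytes; ideally re-seed from something variable, like a timer or keypress timing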
0-255? Cake!
But what if we are trying to generate a percentage from 1-100? Or a d8? A d20? What about a complicated dice roll for our 8-bit OGL 2.0 games? Well, that's where things get complicated.
If you are only given an input of 0-255, and you have to make that number fit WITH EQUAL PROBABILITY into, say, 20 values (for a 1d20), then suddenly you have some complications.
If the number generated by the LFSR is between 1 and 20, it's no problem. But that's only 20 out of 256 cases. If the number is, say, 149, what's the fairest way to squash it down into the range we want? What if it's 0?
A mathematician will tell you to set up a proportion between the die sides (20) and the input range (256) and cross-multiply, e.g.
n/20 = 149/256
n = (149 x 20) / 256 = 2980 / 256
n = 11 (rounded down)
149/256 is approximately 11/20, great. Wouldn't this suck to do in assembly, though? Before resigning myself to a mul8 and a div16, which felt like a lot of wasted cycles, I thought about various shortcuts I could use.
My first attempt was to simply srl
the value until it fit within the range. When I discussed this with my sibling, they correctly pointed out that, in effect, this keeps the value in the top half of the range; especially with higher die-face counts this skews results far too high. It's super quick, but extremely dirty. Not good for a game.
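In code, the idea was roughly this (a reconstruction of mine, not the version I kept, using the same rangeMax constant as the masking routine below):

;in:
; a = random value 0-255
;out:
; a = value in 0 to rangeMax-1, badly skewed toward the top of the range
ShrinkToFit:
cp rangeMax
ret c ; small enough, exit
srl a ; otherwise halve it and try again
jr ShrinkToFit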
My second attempt was a similar method of bitmasking:
MAXU8:
;in:
; a = random value 0-255
;out:
; a = value in 0 to rangeMax-1
; clobbers b, c
ld c,rangeMax
ld b,0xff
_m:
cp c
ret c ; within range, exit
srl b ; shrink the mask
and b ; knock off another high bit
jr _m
This ANDs the input with B on each iteration, and B is shifted right one bit each time through the loop. This gives, surprisingly, pretty good results. With inputs of 0...n, and a max of 20, this is what I was getting:

00 01 02 03 04 05 06 07
08 09 0a 0b 0c 0d 0e 0f
10 11 12 13 14 05 06 07
07 08 0a 0b 0c 0d 0e 0f
00 01 02 03...
See the pattern? A similar pattern shows up for every rangeMax value. The masking ends up knocking out the bits that would produce the top and bottom edges of the result range, so the middle values come up twice as often. For 1-20, 5-15 occur twice as often; for 1-100, it's 25-75, and so on.
This isn't the end of the world, but it makes for odd probabilities, and potentially a bad game feel. Rolling average isn't bad, but when it happens twice as often as really good or really bad rolls, the game is going to feel really "normalized".
At first, I thought I would lean into this. Astute observers will note that in the hex values above I'm generating numbers from 0-20, which is actually 21 values. The period of these inputs is 32, so the actual probabilities of the values are:

1/32 : 0-4
2/32 : 5-15
1/32 : 16-20

Thinking to favor the player, I figured that instead of removing 0, I would just make a result of 0 count as the highest die roll instead -- with a 2/32 chance to roll the max value, perhaps it would offset the slight un-randomness of the other rolls.
And you know what? It would be fine. It actually might be preferable! Average rolls are easier to get, and criticals are just a teensy bit more common (1 in every 16 rolls instead of 1 in 20).
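As a sketch of what that would have looked like (my reconstruction, assuming the MAXU8 routine above is assembled with rangeMax = 21 so it returns 0-20):

rollD20Masked:
call z80rand ; a = 0-255
call MAXU8 ; a = 0-20
or a ; did we roll the extra 0?
ret nz
ld a,20 ; fold 0 onto the top face: a 2/32 chance of a natural 20
ret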
I thought I had it, and I was done. But then I started writing this post. I realized that a multiplication and division really wouldn't be TOO bad, and sketched it out:
DivHLbyBC:
; HL = HL / BC
; clobbers HL, AF (BC and DE are preserved)
push de
ld de,0 ; de = quotient
and 0 ; clear carry for the first sbc
_dsl:
sbc hl,bc ; HL -= BC
jr c,_dds ; went below zero? done
inc de
jr _dsl ; count it and loop
_dds:
ld h,d
ld l,e
pop de
ret
MulHbyL:
; HL = L * (H+1)
; (the loop starts from HL = L and then adds L another H times;
; rollDie below relies on this and passes DIESIDES-1 in H)
; returns in HL; assumes H >= 1; BC and DE are preserved
push bc
push de
; smaller number should be in H
ld d,h ; d = loop counter
ld h,0 ; hl = L
ld b,0
ld c,l ; bc = L
_mhl:
add hl,bc ; hl += L
dec d ; h times
jr nz,_mhl
pop de
pop bc
ret
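For reference, running the earlier 149-on-a-d20 example through these two routines looks roughly like this (a test sketch of mine, not from the project):

ld h,19 ; die sides minus 1: MulHbyL adds L one extra time, so this gives L*20
ld l,149 ; the raw 0-255 roll
call MulHbyL ; HL = 149*20 = 2980 = 0x0ba4
ld bc,256
call DivHLbyBC ; HL = 11, matching the math from before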
It might be sloppy or slow (I am very tired), but it seemed simple enough, so I gave it a try. But guess what I noticed?
If you are dividing by 256 (our denominator from the proportion above)... isn't that just the same as the high byte of the result? For instance, 0x0312 divided by 256 (or 0x100) is 3. 0x41aa divided by 256 is 0x41 (and yes, the low byte is just the remainder).
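Sure enough, in the sanity check above the answer was already sitting in H before the divide ever ran:

; HL = 0x0ba4 after MulHbyL
; 0x0ba4 / 256 = 0x0b = 11, and the remainder 0xa4 is just L
ld a,h ; this is the whole "division"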
Wow. I'm glad I tried to write this blog post! I commented out the division, moved H into A, and ran a full series of 0-255 to see the spread.
Inputs 0 - 80:
00 00 00 00 00 00 00 00
00 00 00 00 00 01 01 01
01 01 01 01 01 01 01 01
01 02 02 02 02 02 02 02
02 02 02 02 02 03 03 03
03 03 03 03 03 03 03 03
03 04 04 04 04 04 04 04
04 04 04 04 04 04 05 05
05 05 05 05 05 05 05 05
05 05 06 06 06 06 06 06 ...
... 200+:
10 10 10 10 10 10 10 10
11 11 11 11 11 11 11 11
11 11 11 11 12 12 12 12
12 12 12 12 12 12 12 12
13 13 13 13 13 13 13 13
13 13 13 13 14 14 14 14
14 14 14 14 14 14 14 14
About as even as you can ask for! How about that. So this is a little key, then, to convert any base-256 number into any other base! Or in our language, a way to evenly smooth a number from 0-255 into any smaller range.
rollDie:
call z80rand
ld l,a ; random result into l
ld h,DIESIDES-1 ; sides of die minus 1 (MulHbyL adds L one extra time, so HL becomes a*DIESIDES)
call MulHbyL
; ld bc,256
; call DivHLbyBC
ld a,h ; skip the division! the high byte is HL/256
inc a ; shift 0..DIESIDES-1 up to 1..DIESIDES
ret
That's it! It's really fast, and has a nice, even spread - even for edge values. All that's left is calling it several times for multiple dice and adding or subtracting an arbitrary modifier.
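For example, a 3d6+2 roll could look something like this (a sketch of mine, assuming DIESIDES is assembled as 6; the stack juggling is just one way to keep the running total safe, since z80rand clobbers everything):

roll3d6plus2:
call rollDie ; first d6, 1-6 in a
push af
call rollDie ; second d6
pop bc ; b = first roll
add a,b
push af
call rollDie ; third d6
pop bc ; b = running total so far
add a,b
add a,2 ; the flat +2 modifier
ret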
Hope this was interesting, or helps you in some way!