Ok, yes, I do think that nockieboy's bridge interface naming convention could be amended:
We know one side is specifically a Z80 interface with Z80 signal names and IO symbols, yet those signals use his 'h_xxx' prefix, which can be confused with his other 'h_xxx' signals inside the FPGA. What belongs where, and how is each one used?
The 'h_' connections for the GPU ram should have just been a GPU_RAM... port, and his Z80 Verilog bridge should have used Z80_xxx names for each Z80 signal. Maybe the 245 buffer controls could have been labeled 'Z80_245buffer_xx'. OMG, this would end the difficult time I have reading his code and understanding what's being wired where, and how it works, without needing the block diagram illustrations.
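As a sketch only, here is what that convention might look like in a port list. All names and widths below are invented to illustrate the prefixing idea; they are not the actual project signals:

```verilog
// Hypothetical port list illustrating the proposed naming convention.
// Every name here is for illustration only, not from the real source.
module Z80_bridge (
    input  wire        GPU_CLK,
    // Z80-side signals, prefixed Z80_:
    input  wire [21:0] Z80_ADDR,
    inout  wire [7:0]  Z80_DATA,
    input  wire        Z80_MREQn,
    input  wire        Z80_RDn,
    input  wire        Z80_WRn,
    // 245 bus-buffer controls, prefixed Z80_245buffer_:
    output wire        Z80_245buffer_OE,
    output wire        Z80_245buffer_DIR,
    // GPU-side RAM port, prefixed GPU_RAM_:
    output wire [19:0] GPU_RAM_ADDR,
    output wire [7:0]  GPU_RAM_WDATA,
    input  wire [7:0]  GPU_RAM_RDATA,
    output wire        GPU_RAM_WE
);
    // ... bridge logic ...
endmodule
```

With prefixes like these, you can tell which clock domain and which physical side a wire belongs to just from its name, without the block diagram.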
Yes, using the Avalon bus is nice for many designs.
This project will eventually need simultaneous read and write access to the GPU ram with 0 wait states, as this project will have a hardware draw/copy function which shares the Z80's GPU memory port. And we want to fill/draw/copy as close to 125 MHz, in millions of pixels per second, as possible, minus the 2.7 million possible Z80 transactions a second. So, for this project, yes on adopting the Avalon naming scheme if you want, but we will need to hard-wire each specific unidirectional address/data read-path and write-path channel into a priority encoder/selector, and take the return data into the right registers. The 'acknowledge' will be nothing more than an address plus a read req, or a write req, on each dedicated address line to the ram. Coming out will be the read rdy, which is just the read_req piped through with the right clock delay of the FPGA memory core. That's 2 or 3 wires in and 2 or 3 wires out (counting the data and address as 1 wire each, plus the req signal). There won't be time for any bus arbitration except on this tiny Z80 interface bridge.
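A minimal sketch of that fixed-priority selector and the piped-through read-ready, assuming two read channels and a 2-clock memory core latency (the widths, names, and latency are my assumptions, not the actual design):

```verilog
// Hypothetical fixed-priority read-port selector for the GPU ram.
// The read-ready is nothing but the granted read_req delayed through
// a shift register matched to the memory core's clock latency.
module ram_port_priority #(
    parameter ADDR_BITS    = 20,
    parameter CORE_LATENCY = 2    // assumed FPGA block-RAM read latency
)(
    input  wire                 clk,
    // Channel 0: Z80 bridge (highest priority, rare accesses)
    input  wire                 c0_rd_req,
    input  wire [ADDR_BITS-1:0] c0_addr,
    // Channel 1: draw/copy engine (fills every remaining cycle)
    input  wire                 c1_rd_req,
    input  wire [ADDR_BITS-1:0] c1_addr,
    // Selected request driven into the RAM core
    output wire [ADDR_BITS-1:0] ram_addr,
    output wire                 ram_rd_req,
    // Read-ready flags, one per channel, aligned to the core latency
    output wire                 c0_rd_rdy,
    output wire                 c1_rd_rdy
);
    // Fixed priority: channel 0 wins whenever it requests.
    assign ram_addr   = c0_rd_req ? c0_addr : c1_addr;
    assign ram_rd_req = c0_rd_req | c1_rd_req;

    // Pipe each channel's *granted* request through CORE_LATENCY clocks;
    // the delayed request IS the read-ready, no handshake needed.
    reg [CORE_LATENCY-1:0] c0_pipe = 0, c1_pipe = 0;
    always @(posedge clk) begin
        c0_pipe <= {c0_pipe[CORE_LATENCY-2:0],  c0_rd_req};
        c1_pipe <= {c1_pipe[CORE_LATENCY-2:0], ~c0_rd_req & c1_rd_req};
    end
    assign c0_rd_rdy = c0_pipe[CORE_LATENCY-1];
    assign c1_rd_rdy = c1_pipe[CORE_LATENCY-1];
endmodule
```

Because the losing channel's request simply isn't piped that cycle, its read-ready never asserts for a stalled request; a real draw/copy engine would hold its request until its rdy comes back.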
Now, here is the big + for Avalon. Instead of this Z80-to-GPU-ram module which is currently being designed, if a GPU-to-Avalon module were designed instead, and a Z80-to-Avalon bridge exists in the public domain, as well as 6502-to-Avalon, 68000-to-Avalon, etc., this may be the perfect spot to use Avalon. However, once again, on the other side of this Avalon bridge, where the GPU ram port resides, we just want those 3-wire interface ports, since we want 3 to 5 new high-speed parallel channels to access the GPU ram directly as static ram. The other big + is if someone else wants to port this GPU into their project; the Avalon bridge link would help if they are familiar with it. However, since the 'host' GPU ram interface is only supposed to be an address, data in, and data out with a write enable, just using the 4 matching Avalon labels would be enough to function.
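For that 'host' side, the four Avalon-MM signal names in question are address, writedata, readdata, and write. A sketch of the port (widths assumed, not from the actual source):

```verilog
// Hypothetical 'host' GPU ram port using only the four matching
// Avalon-MM signal names, functioning as plain static-ram access.
module GPU_ram_host_port (
    input  wire        clk,
    input  wire [19:0] address,    // word address into GPU ram (width assumed)
    input  wire [7:0]  writedata,  // data in
    output wire [7:0]  readdata,   // data out
    input  wire        write       // write enable
);
    // ... direct static-ram style access, no waitrequest/arbitration ...
endmodule
```

Anyone bringing their own Avalon master could wire straight to these names; everyone else just sees address, data in, data out, and write enable.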
Note that inside the current project, the 5-port parallel access to the GPU ram already exists, with only 1 address input and 1 data output per port, plus a currently unused auxiliary request flag input and output, as well as passing the address through.