Having both a PCIE slot and an edge connector would require a high-speed switch for the PCIE lanes plus a switch for the reference clock line (because PCIE add-on cards are supposed to use a clock provided by the host), and I don't think that's a great idea: the PCIE add-on form factor imposes limitations on board shape as well as connector placement, which is why I would prefer to design a separate PCB with an edge connector. Incidentally, the PCIE connector has optional JTAG lines which, if connected properly, allow programming both the host and the add-on at the same time over a single JTAG connection (this is called a JTAG chain). The AC701 devboard has such a connection on its FMC connector, with a switch that trips automatically when something is plugged in, so we could use the same idea for the PCIE port. That would require an add-on card designed specifically to support this scenario, since the JTAG from the FPGA would need to be wired out to the edge connector, but I like the idea nonetheless.
Yes, I've noticed the JTAG lines and wondered about the possibilities there. So they'd need to be connected to the FPGA following the rules for daisy-chained JTAG devices (TDI/TDO forming the chained link, with TCK and TMS as a parallel bus).
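To sanity-check my understanding of the chain topology, here's a toy model (just a sketch, not tied to any real TAP implementation): with every device sitting in BYPASS, each one is a single flip-flop between its TDI and TDO, so a bit fed into the chain should pop out after one TCK per device, while TCK/TMS are simply broadcast.

```python
# Toy model of a JTAG daisy chain: TCK/TMS are shared (broadcast),
# while TDO of each device feeds TDI of the next. With every TAP in
# BYPASS, each device contributes one flip-flop of delay.

class BypassTap:
    """One TAP controller reduced to its 1-bit BYPASS register."""
    def __init__(self):
        self.bypass = 0

    def clock(self, tdi):
        tdo = self.bypass       # TDO presents the previously stored bit
        self.bypass = tdi       # TDI is captured on the clock edge
        return tdo

def shift_chain(taps, tdi_bits):
    """Clock every device together; route data TDI -> TDO -> next TDI."""
    tdo_bits = []
    for bit in tdi_bits:
        for tap in taps:
            bit = tap.clock(bit)  # each device adds one bit of delay
        tdo_bits.append(bit)
    return tdo_bits

chain = [BypassTap(), BypassTap()]            # host FPGA + add-on FPGA
print(shift_chain(chain, [1, 0, 1, 1, 0, 0]))  # [0, 0, 1, 0, 1, 1]
```

The input pattern comes back delayed by exactly two clocks, which is how a JTAG host counts the devices on an unknown chain in the first place.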
I've been looking at the AC701 design and intend to use a 74LV541A buffer, presumably to prevent the stubbing issue you mentioned previously when there are more than two endpoints on the JTAG bus.
I like the automatic switch idea for connecting/disconnecting the PCIE JTAG to the board's FPGA programming bus - is there a purpose-built IC I can use for that task, or is it a case of OR-ing the PRSNT#2s together into a transistor that shorts TDI/TDO across the PCIE connector?
Speaking of which, I'm not 100% sure how the PRSNT lines are used by the PCIE host. Does the host have a weak pullup on PRSNT#1, with the remaining PRSNT#2s connected to IOs so the FPGA can detect which one changes level and determine whether a card is connected and, if so, whether it's a x1 or x4 card? Or does something else go on there?
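To make my guess concrete, here's the decode I'm picturing. I'm assuming the sense is active-low - host grounds PRSNT#1 and pulls each PRSNT#2 up, and the card straps the PRSNT#2 at its last lane down to PRSNT#1 - but that polarity is exactly the detail I'd want to verify against the CEM spec; the pin assignments below are placeholders for an x4 slot.

```python
# Sketch of a presence/width decode (assumption: active-low sense,
# i.e. the card shorts the PRSNT#2 pin at its last lane to ground,
# so a low reading marks both presence and link width).

# PRSNT#2 pin positions on a hypothetical x4 slot, widest pin first.
PRSNT2_WIDTHS = [4, 1]   # pin at lane 4 -> x4 card, pin at lane 1 -> x1 card

def decode_width(prsnt2_levels):
    """prsnt2_levels: sampled logic levels [x4_pin, x1_pin], 1 = pulled up."""
    for width, level in zip(PRSNT2_WIDTHS, prsnt2_levels):
        if level == 0:            # this pin was strapped low by the card
            return width
    return 0                      # all pins still high: slot empty

print(decode_width([1, 1]))   # 0 -> no card
print(decode_width([1, 0]))   # 1 -> x1 card
print(decode_width([0, 1]))   # 4 -> x4 card
```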
I was under the impression that you would need a 5V rail anyway to power your existing Z80 sandwich, but if it's not required, then we can eliminate that rail.
Originally yes, but now that I've decided to ditch the old uCOM stack and go all-in on making this board a full soft-core CPU system, I don't see the need for a 5V rail any more. 100k LEs should be enough to emulate most systems people would want to run, up to and including Linux-capable 32-bit systems. It shouldn't take much effort for me to get a Z80 core running and emulate the ROM.
In fact, ROM is something I need to think about. There needs to be some form of permanent storage on the board for ROM software/data: my uCOM boots from ROM (as do most computers, I suspect), so I'd need space on an EEPROM or other form of storage (I'm open to suggestions) to hold this data and let the soft-core CPU boot without relying on the FPGA's internal memory. Should it be simple enough to connect a FRAM or serial EEPROM to the FPGA and map its contents to wherever the soft-core CPU needs them? Would it be possible to use spare room on the FPGA's config flash chip for this?
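The mapping I have in mind would look something like this sketch: reserve the tail of the flash for ROM data, and have a little bus bridge turn CPU reads in the ROM window into standard SPI NOR READ (0x03) transactions. All the addresses and sizes here are made-up placeholders, and whether the config flash specifically can be shared this way is the open question.

```python
# Sketch of mapping serial-flash contents into the soft core's address
# space. Hypothetical layout: ROM data lives past the bitstream region
# of the flash, and reads in the CPU's ROM window are translated into
# SPI "READ" (0x03 + 24-bit address) transactions.

ROM_BASE = 0x0000            # where the Z80 core expects its ROM
ROM_SIZE = 0x2000            # 8 KiB of ROM space (placeholder)
FLASH_ROM_OFFSET = 0x400000  # placeholder: past the bitstream region

def spi_read_command(flash_addr):
    """Wire bytes for a standard SPI NOR read: opcode 0x03 + 24-bit address."""
    return bytes([0x03,
                  (flash_addr >> 16) & 0xFF,
                  (flash_addr >> 8) & 0xFF,
                  flash_addr & 0xFF])

def cpu_read(addr, spi_xfer):
    """Bus bridge: route ROM-window reads to the flash, else fall through."""
    if ROM_BASE <= addr < ROM_BASE + ROM_SIZE:
        cmd = spi_read_command(FLASH_ROM_OFFSET + (addr - ROM_BASE))
        return spi_xfer(cmd)   # returns the byte clocked back from flash
    raise NotImplementedError("RAM/IO decode goes here")

# Fake SPI target for illustration: a "flash" that always answers
# 0xC9 (a Z80 RET opcode).
print(hex(cpu_read(0x0000, lambda cmd: 0xC9)))  # 0xc9
```

In hardware this would be shift registers and a state machine rather than function calls, of course, but the address arithmetic is the same; the one-SPI-transaction-per-byte pattern is slow, so a copy-to-RAM-at-boot scheme might be the more practical variant.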
That's what I'm thinking too. I would also connect the 12V and 3.3V pins of the PCIE connector through fat (something like 0805) zero-ohm resistors, to allow disconnecting those rails if a connected card requires it. I'm thinking of the scenario of a PCIE jumper cable that would tie the 3.3V PCIE regulator on one board to the same regulator on another, which would cause problems, or the case where you want to power the second board from a separate supply, for example because the one you're using isn't powerful enough to power both.
Good point. Instead of 0805 links, would DIP switches be okay/suitable? That would make it all much more easily configurable.