OK, so here is "everything you always wanted to know about memory but were afraid to ask":
1. DIMM modules contain a small EEPROM non-volatile memory called the SPD (for "Serial Presence Detect"; the name is a bit of a historical holdover, since there used to be a PPD - Parallel Presence Detect), which contains information about the module - supported modes, timings, voltage, etc. It is accessed over SMBus, a variant of the I2C bus. This is why a DIMM requires 3.3 V power in addition to the regular DDRx supply like 1.5 V - that EEPROM device is powered from 3.3 V. (The first code sketch after this list shows what reading it looks like from software.)
2. PCs and high-end SoCs read the SPD during early startup (while the BIOS/UEFI is executing) and configure the memory controller to match. Starting with DDR3, this includes a "memory training" phase, which adjusts signal timings to ensure the best signal quality. However, since that training can take a while (in the case of DDR5 it can be minutes!), PCs typically save the connected modules' "fingerprints" along with the trained delay adjustments in battery-backed memory (CMOS); after a power cycle they can then check whether the connected DIMMs are still the same, and if so, simply load and apply the saved parameters, which is orders of magnitude faster than re-running the training (the second sketch after this list illustrates the idea).
3. In contrast to PCs, FPGA systems typically don't expect the DIMM to change, so they don't read the SPD at all - it would be a waste of FPGA resources. Instead, all the necessary timings and delays are built into the controller HDL when it is configured for a specific DIMM module (see the last sketch after this list).
4. As a result of (3), whenever you swap a module for another one, you need to reconfigure the controller and generate a new bitstream containing the parameters of the new module.
5. The problem with modules you can buy in computer parts stores is that it's typically impossible to get your hands on datasheets for the modules, or for the memory devices used on them, which makes it hard to figure out what timing parameters to use with an FPGA memory controller. The exception is Crucial: it is wholly owned by Micron, so all of their modules use Micron memory devices, whose datasheets are publicly available on Micron's website. Micron itself also produces memory modules, but those tend to be on the pricey side, though they do have a good reputation on the market.
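
To make (1) a bit more concrete, here is a minimal sketch of dumping the first SPD bytes from Linux userspace. The bus number and the 0x50 address are assumptions (SPD EEPROMs normally answer at 0x50-0x57, one address per slot), and this simple offset-then-read access works for DDR3/DDR4-style SPD EEPROMs; the DDR5 SPD hub is a bit different:

    /* Minimal sketch: dump the first SPD bytes of a DIMM over I2C.
     * Bus number and slave address are illustrative; adjust to your system. */
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
        int fd = open("/dev/i2c-0", O_RDWR);        /* hypothetical bus number */
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, I2C_SLAVE, 0x50) < 0) {       /* first DIMM slot is usually 0x50 */
            perror("ioctl");
            return 1;
        }

        uint8_t offset = 0;                          /* start reading at byte 0 */
        if (write(fd, &offset, 1) != 1) { perror("write"); return 1; }

        uint8_t spd[16];
        if (read(fd, spd, sizeof spd) != sizeof spd) { perror("read"); return 1; }

        /* Byte 2 is the DRAM device type (e.g. 0x0B = DDR3, 0x0C = DDR4). */
        for (size_t i = 0; i < sizeof spd; i++)
            printf("%2zu: 0x%02x\n", i, spd[i]);

        close(fd);
        return 0;
    }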
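And here is a conceptual sketch of the training-cache trick from (2). It is not any particular BIOS's code: the "fingerprint" is just a CRC over the raw SPD contents, and the nvram_* and ddr_* helpers are hypothetical stubs standing in for whatever the real firmware provides:

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define SPD_SIZE 256

    struct trained_params {
        uint32_t spd_crc;        /* fingerprint of the module the delays belong to */
        uint8_t  delay_taps[64]; /* per-byte-lane delays found by training */
        bool     valid;
    };

    /* Stand-ins for real firmware services (battery-backed storage, DDR PHY). */
    static struct trained_params nvram;                 /* pretend this is CMOS */
    static bool nvram_load(struct trained_params *p) { *p = nvram; return p->valid; }
    static void nvram_save(const struct trained_params *p) { nvram = *p; nvram.valid = true; }
    static void ddr_apply_delays(const uint8_t *taps) { (void)taps; puts("delays applied"); }
    static void ddr_run_training(uint8_t *taps) { memset(taps, 0x20, 64); puts("slow training run"); }

    /* Tiny bitwise CRC-32, good enough for a fingerprint. */
    static uint32_t crc32(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
        }
        return ~crc;
    }

    static void ddr_init(const uint8_t spd[SPD_SIZE])
    {
        struct trained_params p;
        uint32_t crc = crc32(spd, SPD_SIZE);

        if (nvram_load(&p) && p.spd_crc == crc) {
            /* Same DIMM as last boot: reuse the stored delays, skip training. */
            ddr_apply_delays(p.delay_taps);
            return;
        }

        /* Unknown or swapped module: run the slow training, then cache the result. */
        p.spd_crc = crc;
        ddr_run_training(p.delay_taps);
        ddr_apply_delays(p.delay_taps);
        nvram_save(&p);
    }

    int main(void)
    {
        uint8_t spd[SPD_SIZE] = { 0x23, 0x11, 0x0C };  /* fake SPD contents */
        ddr_init(spd);   /* first "boot": trains and caches */
        ddr_init(spd);   /* second "boot": reuses cached delays */
        return 0;
    }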
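Finally, to illustrate (3)-(5): in an FPGA design the DIMM timings end up as build-time constants. The defines below are C-style purely for illustration (in practice these are generics/parameters of the controller HDL or of the MIG configuration), and the numbers are typical JEDEC DDR3-1600 11-11-11 values, not taken from any specific module:

    #define DDR_TCK_PS    1250    /* clock period: 1.25 ns for DDR3-1600      */
    #define DDR_CL        11      /* CAS latency, in clocks                   */
    #define DDR_TRCD_PS   13750   /* ACTIVATE-to-READ/WRITE delay             */
    #define DDR_TRP_PS    13750   /* PRECHARGE period                         */
    #define DDR_TRFC_PS   260000  /* refresh cycle time (4 Gb DDR3 devices)   */

Swapping the module means changing these constants and rebuilding the bitstream, which is exactly point (4); and getting the right numbers requires the device datasheet, which is why (5) matters.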
Now, on to the specifics of our project:
1. Because of the MIG pinout limitations for the part we have chosen, we can only use banks 16, 15, and 14 to implement a 64-bit memory controller.
2. However, byte group 0 of bank 14 also contains pins used by the FPGA during configuration - specifically pins D0-D3 and the chip select (CS) of the QSPI flash memory where the bitstream is stored. Since flash memory that can be powered from 1.5 V does not exist, we will have to use a 1.8 V QSPI flash device and voltage translators to convert between 1.5 V and 1.8 V; a 3.3 V QSPI flash is not an option in this case.
3. As the memory interface will consume pretty much the entirety of banks 14, 15, and 16, we will only have about 130 pins available for everything else, from banks 34, 35, and the partially bonded-out bank 13. That is not a lot of pins.
4. Due to the large size of the resulting board (a SODIMM is rather long and needs to be placed far enough from the FPGA to leave sufficient clearance for a heatsink and fan) and the small number of remaining I/O pins, I'm not really convinced it's worth making this a module, as opposed to putting all peripherals and their interface connectors on the board itself and only exposing a low-ish-speed connection for other peripherals via regular 0.1" headers. That is something we need to weigh against the cost of making a large baseboard/carrier that would accommodate such a large module plus some high-speed interfaces.
5. I have never personally implemented such a scheme with a SODIMM and voltage translators, so there is an increased design risk that something may go wrong. I'm not saying it will, but I can't be 100% sure, given my lack of hands-on experience with it.