> We are using Artix-7 devices but the curriculum (inherited by me) starts them off with schematic entry. The thinking (confirmed in my experience) is that they need to see the HDL as generating gates first and foremost, lest they start trying to "code" in HDL. So they design some simple circuits, up to and including a basic state machine, pulling gates, flip flops, etc. from the Xilinx library.
I'd suggest this may not be the approach that's kindest to your students, nor the one that gives them the most useful grounding in either VHDL or the wider way in which FPGAs are used.
I totally understand the desire to show how an FPGA can be programmed to behave as though it were some combination of 74 series logic devices all connected together. I get that there's a certain intuitive clarity to this, but there's also a major problem.
It used to be the case that, if an engineer wanted a device that behaved a certain way, they'd have to work out for themselves what logic would produce that behaviour. The starting point has always been that desired behaviour, and the exercise of figuring out what combination of gates and memories might actually achieve it used to be something we all had to be able to do.
The thing is, we simply don't any more. That step, of figuring out manually how to map complex behaviour onto 74 series chips, has been both automated and superseded. The building blocks aren't even AND, OR and NOT gates any longer; they're much more complex n-input look-up tables with clocked D-types in between.
The existence of the synthesis tool brings with it a whole different design flow. The engineer can now describe, in a clear and readable hardware description language, what behaviour is required. Compared to a list of logic gates and a wiring diagram, that description is clearer, more maintainable, and much better able to capture a complex system.
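As a tiny illustration of that flow (my own sketch, not anything from your curriculum; the entity and signal names are made up): a 4-bit adder described in one line of behavioural VHDL, with the synthesis tool left to work out the LUTs and carry logic.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adder4 is
  port (
    a, b : in  unsigned(3 downto 0);
    sum  : out unsigned(4 downto 0)  -- one extra bit for the carry out
  );
end entity;

architecture rtl of adder4 is
begin
  -- one line of behaviour; the tool chooses the gates, LUTs and carry chain
  sum <= resize(a, 5) + resize(b, 5);
end architecture;
```

Try drawing the equivalent schematic of XOR/AND gates and a ripple-carry chain, and the maintainability argument makes itself.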
We all 'get' the idea that an FPGA is a device that can be programmed to behave like a bunch of logic gates. The concept of a programmable device is a trivial, familiar one. We don't need a lab practical to prove the point.
Instead, why not develop a practical that illustrates some of the most relevant issues that crop up when developing for an FPGA; for example:
- start with a simple clocked process, which could even just be a logic gate, but with registered inputs, or outputs, or both. Or neither. Show how the clock may add latency but eliminates glitches (first sketch after this list)
- show how VHDL can be used to describe complex behaviour without the need to describe every logical case independently. My favourite example for this is to calculate the write request for a FIFO, which must be: 0 on every clock edge by default; set to 1 when specific conditions mean data is available; but, despite all of the above, 0 whenever a separate reset signal is asserted. This behaviour is trivial to describe in VHDL and doesn't require a complex chain of if/then/else structures. Just put the assignments in order from top to bottom and explain how later assignments take precedence (second sketch after this list). This makes clear that the statements are NOT being executed one after the other like software, and that the resulting signal does NOT have glitches on it
- move on to consider what happens when inputs change at just the same time as the clock does. Introduce static timing analysis and how it's used to ensure that setup and hold times are always met.
- then consider how data is moved from a device in one clock domain to another. What happens if the timing of one clock relative to the other is unknown or cannot be controlled? It's never too early to mention double sampling or metastability (third sketch after this list); they're so fundamental to robust FPGA design that they really are day-one concepts, IMHO.
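For the first bullet, a minimal sketch of a clocked process wrapped round a single AND gate (names are mine). With both inputs and the output registered there are two clocks of latency, but the output only ever changes on a clock edge, so downstream logic never sees the combinational glitches an unregistered gate could produce:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity reg_and is
  port (
    clk  : in  std_logic;
    a, b : in  std_logic;
    y    : out std_logic
  );
end entity;

architecture rtl of reg_and is
  signal a_r, b_r : std_logic;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      a_r <= a;            -- registered inputs: one clock of latency
      b_r <= b;
      y   <= a_r and b_r;  -- registered output: changes only on the clock edge
    end if;
  end process;
end architecture;
```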
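For the FIFO write request in the second bullet, something like the following (the signal names data_valid, fifo_full and reset are illustrative). Note there's no single if/then/else chain enumerating every case; the assignments are simply written in priority order, lowest first, and the last one that applies wins, because signal assignments in a process don't take effect until the process suspends:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity fifo_wr_ctrl is
  port (
    clk        : in  std_logic;
    reset      : in  std_logic;
    data_valid : in  std_logic;
    fifo_full  : in  std_logic;
    wr_req     : out std_logic
  );
end entity;

architecture rtl of fifo_wr_ctrl is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- default: no write this cycle
      wr_req <= '0';

      -- data available and room in the FIFO: request a write
      if data_valid = '1' and fifo_full = '0' then
        wr_req <= '1';
      end if;

      -- reset overrides everything above: within a process, the
      -- last assignment to a signal is the one that takes effect
      if reset = '1' then
        wr_req <= '0';
      end if;
    end if;
  end process;
end architecture;
```

Because the whole thing sits inside one clocked process, wr_req comes straight out of a flip-flop: no glitches, whatever order the statements appear in.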
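And for the last bullet, the classic two-flop synchroniser that implements double sampling (again a sketch with made-up names; the ASYNC_REG attribute is Vivado-specific, which seems fair given the Artix-7 target):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity sync2 is
  port (
    clk_dst  : in  std_logic;  -- destination-domain clock
    async_in : in  std_logic;  -- arrives from an unrelated clock domain
    sync_out : out std_logic
  );
end entity;

architecture rtl of sync2 is
  signal meta, stable : std_logic;

  -- tell the Xilinx tools these two flops form a synchroniser chain
  attribute ASYNC_REG : string;
  attribute ASYNC_REG of meta   : signal is "TRUE";
  attribute ASYNC_REG of stable : signal is "TRUE";
begin
  process (clk_dst)
  begin
    if rising_edge(clk_dst) then
      meta   <= async_in;  -- first flop may violate setup/hold and go metastable
      stable <= meta;      -- second flop gives it a full cycle to resolve
    end if;
  end process;

  sync_out <= stable;
end architecture;
```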