Let me ask a serious question: Is there a Qualified Hardware List for Linux? Is there a single site I can visit that either provides a list of hardware known to work or, at least, a list of hardware known to be problematic?
No, there is no such list. Several distributions have tried to provide one, but manufacturers have shown zero interest, and since they often change the hardware without changing the version number or product name, it is basically impossible for volunteers to maintain such lists.
So, what list do I take to the store when I am shopping for parts?
You first collect a list of candidates to choose from. Then, you check the LKML and bugzillas (using a web search) for support and issues on each part, and reject the ones that have an issue reported more than once. Start with the motherboard.
Current motherboards with integrated Intel graphics should be well supported in Linux, but the very newest models often need tweaking -- for example, the temperature sensors may not be supported yet. So, the best bet is to look for established models, and look for problem/success reports. In general, only "gaming" motherboards tend to have issues, mostly involving graphics and overclocking-related features. Dual graphics chipset motherboards are particularly quirky, since the motherboard manufacturer decides how they are wired, and does not always bother to tell Linux devs how it was done. For AMD chipsets, check whether the support is already upstream, or whether you need to download stuff from AMD's website (which is not tenable in the long run; you want upstream support for Things To Just Work). I've built a few machines using Gigabyte, Asrock, and MSI motherboards, but I keep away from Asus for various personal reasons.
All motherboard manufacturers publish qualified vendor lists (QVL) of recommended memory. These are per module, and usually include details such as timings. Just remember that "almost the same" is not "the same". Linux uses otherwise unused memory as an I/O cache, so more memory means a larger part of your working set stays in RAM; with SSDs, that matters less. In fall 2019, I'd populate all memory channels and go for at least 16 GB of RAM.
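To see that caching effect on a running machine, `/proc/meminfo` shows how much RAM the kernel is currently using as cache, and `free -h` summarizes the same figures:

```shell
# Total RAM and how much of it the kernel currently uses as page
# cache; cached memory is reclaimed automatically when programs
# need it, so it is effectively free:
grep -E '^(MemTotal|MemFree|Cached):' /proc/meminfo

# The same, summarized in human-readable units (the buff/cache column):
free -h
```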
If you intend to just stripe or mirror drives, use softraid (Linux md). It is faster and portable across hardware. Never use motherboard-integrated RAID features: they suck and leave you in a bind if you migrate the drives to another machine. If you just need additional ports, look for JBOD (Just a Bunch Of Disks) support on the card. If you are building a server with proper storage, there is darn good support for server iron (because almost all clusters run on Linux). The drives themselves are all compatible, but their reliability differs A LOT. For example, I will not trust any Seagate spinning-disk drive with my data, and for a good reason. For SSDs, I like Samsung, obviously, but I have much more experience with the spinning-disk variety. (Funnily enough, Samsung manufactured some really good but cheap 500 GB and 1 TB HDDs before they sold the unit to Seagate.)
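As a sketch, setting up a two-drive mirror with mdadm looks roughly like this. The device names are placeholders, and the mdadm.conf path varies by distro:

```shell
# Create a two-drive mirror (RAID1). /dev/sdX and /dev/sdY are
# placeholders -- triple-check the names, this destroys their contents.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY

# Watch the initial sync progress:
cat /proc/mdstat

# Record the array so it assembles at boot (path is distro-dependent,
# e.g. /etc/mdadm/mdadm.conf on Debian derivatives):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

Because the array metadata lives on the drives themselves, you can later plug both drives into a completely different machine and assemble the same array there -- exactly what the motherboard "fakeraid" does not let you do.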
USB devices are the nastiest to deal with. Many physically different devices can be sold under the same product name, and unless you check the actual USB vendor and product ID (VVVV:PPPP in hexadecimal), you won't know which one you have. The ID is never printed on the package, though; you need to connect the device to a computer and run lsusb to find out. And when you do that, it's less work to just test the device as well. (But do a web trawl first to see whether the device is a known dud.)
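For reference, this is what reading the ID out of lsusb looks like; the Realtek line is just an example of the output format:

```shell
# List connected USB devices; the VVVV:PPPP hex pair after "ID" is
# the vendor:product ID that actually identifies the hardware:
lsusb
#   Bus 001 Device 004: ID 0bda:8153 Realtek Semiconductor Corp.
# (example line; 0bda:8153 is the part to search for)

# Just the IDs, handy for pasting into a web search:
lsusb | grep -oE 'ID [0-9a-f]{4}:[0-9a-f]{4}' | sort -u
```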
You can find out a graphics card's ostensible support status with a web search, but only real-world testing will actually tell. I don't like 'em, except for GPGPU use. Yeah, I simulate a lot of stuff, and only play old-style HTML5 platform puzzle games.
For other extension cards you might need, for example extra network cards, do a web search as with the motherboards, or find out the exact chipset. Basically all of them will be supported, but the quality varies. This is particularly true of wireless networking. USB is easier for wireless networking, since you can use a USB cable up to 5 m long to put the adapter where reception is good, but USB 2.0 limits you to 480 Mbit/s (about 45 Mbytes/s in practice). For PCIe cards, you may need extra antenna cables to move the antennae somewhere sensible.
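Finding the exact chipset of a PCIe card is the same game as with USB: look at the numeric ID, not the marketing name. With lspci (from pciutils) on an existing machine, that looks like this; the Intel Ethernet line is just an example of the format:

```shell
# Show PCI devices with their numeric [vendor:device] IDs and which
# kernel driver, if any, is bound to each. The numeric ID identifies
# the actual chipset regardless of what the box says:
lspci -nnk
#   02:00.0 Ethernet controller [0200]: Intel Corporation I210 [8086:1533]
# (example line; 8086:1533 is the part to search for)

# Just the bare IDs, for pasting into a web search:
lspci -nn | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | sort -u
```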
After you have the above sorted out, it is time to pick the chassis and the power supply. I personally go for silencing, and spend quite a bit of time adding vibration and noise dampening, and extra fans to control the airflow. I like having a 4-way fan and temperature controller in the front panel, just in case, with at least one of the temperature sensors measuring enclosure air temperature via a string-mounted heatsink. I've thought about building a separate double box for my optical drive -- used for old backups and such -- to silence the darn thing. I like making custom cases. Heavy cases are easier to silence than lightweight ones.
So, overall, you do need to do a lot of extra work to build machines fully supported in upstream Linux without proprietary drivers and hassle. You need to trawl through the web and mailing lists to find possible problems with each component beforehand. Most stuff is supported; you just don't want to get stuck with an important component that isn't. This is why I recommend testing first instead, booting the machine from a USB stick or external hard drive with one's preferred distro.
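Writing the distro image to a stick for such a test boot is a one-liner; `distro.iso` and /dev/sdX are placeholders here, and everything on the stick is overwritten:

```shell
# Find the stick's device name first (the whole device, e.g. /dev/sdX,
# NOT a partition like /dev/sdX1):
lsblk

# Write the image; this wipes the stick. conv=fsync makes dd flush
# everything to the device before reporting success.
dd if=distro.iso of=/dev/sdX bs=4M conv=fsync status=progress
```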
After a couple of years of actual use, I can recommend HP EliteBook 830 G3 and 840 G4 laptops for Linux use. But that's just because I happen to be writing this on one, with the other nearby. I don't have current desktop or server hardware at hand to recommend.