Author Topic: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?  (Read 35163 times)


Offline olkipukki

  • Frequent Contributor
  • **
  • Posts: 790
  • Country: 00
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #150 on: July 09, 2019, 01:19:41 pm »
I reckon the OP's bottleneck will be on the GPU side; DaVinci Resolve in 4K mode is really demanding, and it's worth considering how to expand the system with a 2nd or 3rd card in the future.

On the other hand, Fusion 360 is a general-purpose package and not so picky about hardware, regardless of whether the workflow is modelling or CAM.
 

Offline mnementh

  • Super Contributor
  • ***
  • Posts: 17541
  • Country: us
  • *Hiding in the Dwagon-Cave*
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #151 on: July 09, 2019, 02:19:53 pm »
The only reason there's an 8 GB video card added is currently DaVinci, for which it's specifically stated that it really appreciates a video card with 8 GB of VRAM if you want to do 4K color grading. Puget benchmarks also indicate that the very best card for DaVinci is the NVIDIA RTX 2080 Ti, better than any Quadro card. I doubt that's within beanflying's budget though, and it should be noted that this doesn't appear to take the recent releases into account. The video posted seems to indicate that the new AMD cards could mean a significant improvement. Apparently Fusion 360 doesn't really care whether it's running on a consumer or professional card. From a technical point of view it's vastly different from classic CAD applications like AutoCAD, and even those are being modernized. beanflying has also indicated that he'd like to be able to play a game every now and then. It does show that building what you call a "gaming rig" is appropriate for the situation. Which card exactly depends on the budget and how important DaVinci is compared to other tasks.

There's zero evidence for PCIe 4.0 being an upgrade with real-world benefits. Without any evidence presented, that topic is dismissed. It'd be appreciated if you could dial back the attitude towards other people in this thread. People are spending time and effort helping beanflying make a solid choice, and they may actually know what they're talking about. Let's have some fun rather than endlessly bickering.

I've ALREADY stated this; again and again. What bean has asked for is the equivalent of a top-tier gaming rig, WITH massive bandwidth and multi-thread processing ON TOP of that. There's a HUGE difference between THAT and building last year's "Budget gaming rig", which is obviously what you're ALL doing. You can see it in where you cut corners; pretty much EVERYTHING that boosts bandwidth and multi-thread is what YOU seem to think is unimportant. B-series MBs? DDR3200? SERIOUSLY?

No evidence? There's evidence right in the video y'all linked that pcie4.0 is a big thing. The difference between supporting it and not supporting it is effing $40-80. Even SINGLE nvme SSD performance is markedly improved, and it brings the capability to run MULTIPLE nvme SSDs at full bandwidth AT ONCE.

Now you attempt to be "the reasonable one"? You were, and still are, being deliberately obtuse. I called you on it. Not sorry if that hurt your feelings. Also not sorry for telling inane feces-flingers like wraper to stop.

Cheers,

mnem
 :palm:
« Last Edit: July 19, 2019, 11:24:03 pm by mnementh »
alt-codes work here:  alt-0128 = €  alt-156 = £  alt-0216 = Ø  alt-225 = ß  alt-230 = µ  alt-234 = Ω  alt-236 = ∞  alt-248 = °
 

Offline mnementh

  • Super Contributor
  • ***
  • Posts: 17541
  • Country: us
  • *Hiding in the Dwagon-Cave*
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #152 on: July 09, 2019, 02:42:34 pm »
As I am fairly badly colourblind and lack a $1k+ monitor, there is little to be gained by looking too hard at grading, but a better monitor is planned after the box. The more I look at it, the RX 580 is the low point, and bought new it fits well within the budget. A lot of what I have been looking at is filtering the 100-200+ FPS BS on cards with game X down to some real numbers and productivity.

Much as I have set a budget, I only have to justify changing it to myself, and my cal gear is testament to not being bound by $ for a result. Does beanflying need a 2060 Super or an RX 5700?  >:D

You want to look into the newest crop of VA panel displays. While the pixels themselves aren't as fast as the top-tier IPS panels (2-3 ms vs 1-2 ms) costing 5x more, and nowhere near a really fast TN display, the panel architecture permits some REALLY fast signal processing; as in 1-3 ms versus signal-processing latency in the teens and 20s of ms for previous generations' top gaming screens. The great part is a decent color gamut as well.



I picked one of these up because it was on sale for $299 and I had it. LG is now offering the same panel under their own name; MicroCenter here has it for the same price. I think it's an amazing balance between the two types. Still not decided whether I like it for gaming; it has a bit of the haloing effect of IPS.

Cheers,

mnem
 :popcorn:
alt-codes work here:  alt-0128 = €  alt-156 = £  alt-0216 = Ø  alt-225 = ß  alt-230 = µ  alt-234 = Ω  alt-236 = ∞  alt-248 = °
 

Offline Black Phoenix

  • Super Contributor
  • ***
  • Posts: 1129
  • Country: hk
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #153 on: July 09, 2019, 02:53:36 pm »
Ok Sir:

Let's analyse some things since you clearly say one thing and then change to another as you see fit:

Picking up an old reply from you:

Here's my quick thumbnail cost analysis, exclusive of the usual "sundries", which I'm pretty sure bean has plenty of:

$150 - +/- Decent Case
$180 - X570 MB
$140 - DDR4 (Corsair Vengeance 32GB DDR4-3200; now sold out) Still average price for decent
$120 - Decent PSU
$125 - NVMe SSD ~0.5TB

That leaves ~$285 for video and CPU, plus approx. $30-50 if you go with 16GB of name-brand DDR4. The 3600X is available right now for US$249.00 shipped from Amazon. The 3700X is listed right now at $329 pre-order from B&H Photo and the 3900X at $499, just as suggested in the press release.

mnem
 :popcorn:

My configuration:


Regarding the PC config, I recommend something like this (~$1200 without monitors and keyboard/mouse):
Quote
PCPartPicker Part List: https://pcpartpicker.com/list/JPWMRJ

CPU: AMD - Ryzen 7 2700X 3.7 GHz 8-Core Processor  ($254.99 @ Newegg)
CPU Cooler: be quiet! - Shadow Rock Slim 67.8 CFM Rifle Bearing CPU Cooler  ($49.80 @ OutletPC)
Motherboard: Gigabyte - X570 AORUS ELITE ATX AM4 Motherboard  ($199.99 @ Amazon)
Memory: Corsair - Vengeance LPX 16 GB (2 x 8 GB) DDR4-3200 Memory  ($69.99 @ Newegg)
Memory: Corsair - Vengeance LPX 16 GB (2 x 8 GB) DDR4-3200 Memory  ($69.99 @ Newegg)
Storage: Corsair - MP510 480 GB M.2-2280 Solid State Drive  ($64.99 @ Newegg Business)
Storage: Seagate - BarraCuda 4 TB 3.5" 5400RPM Internal Hard Drive  ($79.99 @ Newegg)
Video Card: Asus - Radeon RX 580 4 GB Dual Video Card  ($159.99 @ Newegg)
Case: Fractal Design - Define R5 (Black) ATX Mid Tower Case  ($129.99 @ Newegg Business)
Power Supply: Corsair - RMx (2018) 850 W 80+ Gold Certified Fully Modular ATX Power Supply  ($129.99 @ Newegg Business)
Total: $1209.71

I added a GPU because in the main configuration the OP posted, the CPU in question doesn't have an iGPU, so a discrete GPU is needed. I put a normal gaming GPU in as a reference; it should work with CAD as long as OpenGL is available. But if the OP can, look for good deals on used FirePro/Quadro models.

Plus, this config is Linux compatible, so if the OP wants to use Linux instead of Windows, it will work without any problems.


See, something equal to what you added. I can make my config fit your price, with allowance for the latest-gen CPU plus a badass graphics card, by dropping the 4TB hard drive for storage and updating to a 3700X. The 2700X was chosen for price and for having headroom for other things.

Now another thing:

Quote
There's a HUGE difference between THAT and building last year's "Budget gaming rig", which is obviously what you're ALL doing. You can see it in where you cut corners; pretty much EVERYTHING that boosts bandwidth and multi-thread is what YOU seem to think is unimportant. B-series MBs? DDR3200? SERIOUSLY?

I don't see a B-series motherboard in my config. DDR4-3200 is more than able for what he's going to use; as stated, the performance gain versus the cost above that is basically negligible because of the way Infinity Fabric scales.

Quote
Even SINGLE nvme SSD performance is markedly improved, and it brings the capability to run MULTIPLE nvme SSDs at full bandwidth AT ONCE.

Full bandwidth at once, you say? The 3000 series has only 24 PCIe lanes: 16 are wired directly to the first PCIe slot and the rest are for the other peripherals. Block diagram for your reference:

[Attachment 780690-0: Ryzen 3000 / X570 block diagram]

Plus, the promise of full PCIe 4.0 bandwidth on multiple SSDs doesn't hold up due to chipset limitations: at most one drive runs at full speed; add more and throughput drops because of the lack of available lanes (see the block diagram).

Per the diagram, the CPU has x16 (or 2×x8) to the dedicated GPU, plus x2 for NVMe and x2 for SATA, or x4 NVMe if SATA is not used.

x16 plus x4 is x20. The remaining x4 goes to the chipset, and there's a lot of stuff hanging off that x4, so you won't get full speed from anything behind it.

If the OP uses SATA for any reason (let's be realistic: a 1TB NVMe drive is $150 for last gen and $280 for Gen 4, prices from Gigabyte, while a 4TB HDD costs a quarter of the shiny new SSD and packs 4x the capacity), bye-bye full x4 PCIe Gen 4.
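
To put rough numbers on that lane budget, here's a minimal sketch (Python, assuming the commonly quoted Ryzen 3000 "Matisse" lane split and ~2 GB/s per PCIe 4.0 lane; ballpark figures, not datasheet values):

Code: [Select]
# Rough PCIe lane budget for a Ryzen 3000 ("Matisse") desktop CPU, as described
# above. Lane counts and ~2 GB/s per Gen 4 lane are approximate, commonly quoted
# figures (assumption), not datasheet numbers.

GEN4_GBPS_PER_LANE = 2.0  # approx. usable bandwidth per PCIe 4.0 lane

cpu_lanes = {
    "x16 GPU slot": 16,
    "general purpose (x4 NVMe, or x2 NVMe + 2x SATA)": 4,
    "chipset uplink (shared by everything behind the chipset)": 4,
}
print(f"Total usable CPU lanes: {sum(cpu_lanes.values())}")            # 24

# One CPU-attached NVMe drive with the CPU SATA ports unused: full x4.
print(f"x4 NVMe from CPU:               ~{4 * GEN4_GBPS_PER_LANE:.0f} GB/s")

# Use the CPU SATA ports and that same drive drops to x2.
print(f"x2 NVMe from CPU (SATA in use): ~{2 * GEN4_GBPS_PER_LANE:.0f} GB/s")

# Any additional NVMe drives sit behind the chipset and share its x4 uplink
# with USB, SATA, LAN, etc., so they cannot all run at full speed at once.
print(f"Chipset uplink ceiling:         ~{4 * GEN4_GBPS_PER_LANE:.0f} GB/s shared")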

Lastly, I may be new here, but as I said, there are other ways of proving your point than resorting to aggression or bad wording.
« Last Edit: July 09, 2019, 03:01:50 pm by Black Phoenix »
 

Online wraper

  • Supporter
  • ****
  • Posts: 17314
  • Country: lv
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #154 on: July 09, 2019, 02:56:40 pm »
No evidence? There's evidence right in the video y'all linked that pcie4.0 is a big thing. The difference between supporting it and not supporting it is effing $40-80. Even SINGLE nvme SSD performance is markedly improved, and it brings the capability to run MULTIPLE nvme SSDs at full bandwidth AT ONCE.

Now you attempt to be "the reasonable one"? You were, and still are, being deliberately obtuse. Clearly, you DON'T know as much as you think you do. I called you on it. Not sorry if that hurt your feelings. Also not sorry for telling inane feces-flingers like wraper to stop.
Yep, no evidence. Some "expert" on a 10-year-old Phenom who has seen some marketing wank now boasts that PCIe 4.0 is a big thing. It might be in some cases, but not in an 8-core PC doing the tasks stated. Please elaborate on how it will improve the actual workload in any way, especially compared with the same money spent on a better CPU (3800X instead of 3700X) or GPU.
Quote
capability to run MULTIPLE nvme SSDs at full bandwidth AT ONCE.
Which you can do with PCIe 3.0 too. Say, put one into the M.2 slot on the motherboard and a second into a PCIe-to-M.2 adapter.
 

Offline mnementh

  • Super Contributor
  • ***
  • Posts: 17541
  • Country: us
  • *Hiding in the Dwagon-Cave*
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #155 on: July 09, 2019, 03:04:51 pm »
Sir,

Can you use Xeons? Yes, you can. Can you use ECC memory? Yes, you can. Can you buy old workstations that were decommissioned by a company? Yes, you can, but if they were decommissioned it's because the company is using newer, better hardware. Is the value/performance better? It probably depends on how much you paid.

I deployed exactly the same kind of configuration at my last job, back when I was in Portugal, but with a Ryzen 1700X, an equivalent motherboard with proper VRMs, and an AMD FirePro V7100 for CAD work.

The advantages of extra memory speed are negligible past a certain threshold. Plus, higher memory speed without proper timings is worse than lower speed with tighter timings.
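
(For illustration, the usual first-word-latency arithmetic makes that point; the kits below are representative examples only, not recommendations:)

Code: [Select]
# First-word latency (ns) = CL cycles / memory clock = CL * 2000 / data rate (MT/s).
# The kits listed are illustrative examples only.

def first_word_latency_ns(data_rate_mts: int, cas_latency: int) -> float:
    return cas_latency * 2000 / data_rate_mts

for name, rate, cl in [("DDR4-3200 CL16", 3200, 16),
                       ("DDR4-3600 CL18", 3600, 18),
                       ("DDR4-2933 CL14", 2933, 14)]:
    print(f"{name}: {first_word_latency_ns(rate, cl):.2f} ns")
# ~10.00 ns, 10.00 ns and ~9.55 ns respectively: a "slower" kit with tighter
# timings can have lower absolute latency than a faster kit with loose timings.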

PCIe Gen 4, yes, is a great deal, no denying it, and PCIe Gen 5, due for release next year, even more so. However, the only hardware released so far that taps the advantage of PCIe 4.0 is a handful of NVMe SSDs shown at Computex this year.

Current graphics cards don't even tap the full bandwidth of a PCIe Gen 3 x16 link.

For a production environment, water cooling is basically a risk you choose to take. Have you seen any company running water-cooled PCs, whether custom-loop builds or even AIOs? If it fails, that's production time lost and hardware lost, especially if the AIO has defects (Enermax AIOs with corrosion, Corsair pumps failing, etc.; if you want, I can link the recall docs backing this up). A custom loop should be drained and cleaned each year; an AIO is good for around 4 years and is made to be thrown away when the pump fails. I've never had a fan in an air cooler fail in the last 10 years, but I've seen AIOs fail because the liquid evaporates through the rubber tubes:

Quote
Finally, tubes are generally made of either FEP or EPDM rubber. The more rigid tubes tend to be FEP, which has excellent reduction of permeation, but less flexibility during installs. Kinking an FEP tube will result in cracking the inner PTFE coating, which results in permeation and poor cooling ability. EPDM tubes have the opposite set of pros and cons: They won’t really get damaged if bent and are more flexible, but it requires an expensive R&D process to get the compound to a point of resisting permeation. Ultimately, all tubes will exhibit the effects of age and will slowly lose fluid to natural processes. It’s just a matter of how long they last. Most CLCs are rated for use in the 4-6 year range, though it’s around years 4-5 that noise begins to get more noticeable. This is because enough of the fluid has permeated the tubes to allow for more air in the line, which gets sucked through the pump and causes gurgling. Users can mitigate this by mounting the tubes down in a vertical CLC install.
https://www.gamersnexus.net/guides/2926-how-liquid-coolers-work-deep-dive

An air cooler with good fans just needs a blast of compressed air and it's as good as new. If you want to use water in your own computer for your own production, go right ahead. But in a deployment of 40 machines? No thank you, I will not do it.
Yes. All over the place. The advent of AIOs has transformed the marketplace. I'm seeing them in high-end workstations, ready-made gaming rigs and even training simulators. Anywhere you have high-demand workload and want to keep it cool quietly.

I'm the guy they call to air-drop in and clean up the mess when nobody else is willing to. My days are spent going from one business or datacenter to another, replacing network gear, CPUs, RAM, VRMs and PSUs in places locked down so tightly you need an escort and you put your phone in a locker before you enter.

Even in THOSE places I've been seeing liquid-cooled servers for years now. They use AIOs specifically configured for those servers, but the general config is the same: the pump is still on the CPU, the chassis is designed so you can lift the entire cooler out as an assembly without disturbing the rest of the MB, and the cooler is treated as a consumable supply. You are literally thinking in terms of 10-year-old technology, not today's.  :palm:

Cheers,

mnem
 :-/O

alt-codes work here:  alt-0128 = €  alt-156 = £  alt-0216 = Ø  alt-225 = ß  alt-230 = µ  alt-234 = Ω  alt-236 = ∞  alt-248 = °
 

Offline gnif

  • Administrator
  • *****
  • Posts: 1691
  • Country: au
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #156 on: July 09, 2019, 03:08:20 pm »
I have not read this thread in its entirety, but figured I'd add my personal experience with the PCIe 3.0 bus in some extreme use cases.

I am the author of Looking Glass, a program that allows use of a Windows VM with a passthrough GPU inside of Linux by transferring the captured frame between GPUs via system RAM. We are talking about transferring 4K 100+FPS video across the PCIe bus while competing for GPU and CPU time and resources running pro CAD applications and AAA game titles.

3840 × 2160 × 4 = 33,177,600 bytes per frame × 100 = 3,317,760,000 bytes per second ≈ 3.3 GB/s

We can do this on a PCIe 3.0 bus. While PCIe 4.0 will help in some extremely rare corner cases, it's simply not that huge a deal at this point in time with current workloads. Getting a CPU with more lanes is, IMO, far more useful than a PCIe 4.0 system. If you want to ensure you have enough lanes, go for a CPU with a ton of them, like a Threadripper (note: I am aware that this alone is beyond the OP's budget).
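
(As a sanity check, a quick sketch of that arithmetic against the nominal x16 link rates; the per-lane figures are the usual published approximations after 128b/130b encoding overhead:)

Code: [Select]
# Back-of-envelope check of the Looking Glass frame traffic vs. PCIe link rates.
# Per-lane throughput figures are approximate published values (assumption).

width, height, bytes_per_pixel, fps = 3840, 2160, 4, 100
stream_gbps = width * height * bytes_per_pixel * fps / 1e9    # ~3.3 GB/s

PCIE3_GBPS_PER_LANE = 0.985   # PCIe 3.0: 8 GT/s, 128b/130b encoding
PCIE4_GBPS_PER_LANE = 1.969   # PCIe 4.0: 16 GT/s, 128b/130b encoding

for gen, per_lane in (("3.0", PCIE3_GBPS_PER_LANE), ("4.0", PCIE4_GBPS_PER_LANE)):
    x16 = 16 * per_lane
    print(f"PCIe {gen} x16: ~{x16:.1f} GB/s; the uncompressed 4K@{fps} stream "
          f"({stream_gbps:.1f} GB/s) uses ~{100 * stream_gbps / x16:.0f}% of it")
# The stream fits easily inside a PCIe 3.0 x16 link (~15.8 GB/s), which is the
# point: PCIe 4.0 doubles headroom that is not the bottleneck for this workload.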

 
The following users thanked this post: beanflying

Online wraper

  • Supporter
  • ****
  • Posts: 17314
  • Country: lv
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #157 on: July 09, 2019, 03:08:33 pm »
mnementh, as you like watching Linus, here you go. Nothing more than eye candy.

 

Offline Simon

  • Global Moderator
  • *****
  • Posts: 17882
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #158 on: July 09, 2019, 03:12:20 pm »
mnementh, would you please calm down. Everyone is entitled to an opinion.
 
The following users thanked this post: gnif, sokoloff

Offline Black Phoenix

  • Super Contributor
  • ***
  • Posts: 1129
  • Country: hk
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #159 on: July 09, 2019, 03:13:44 pm »
Yes, I'd really love to see a server near the top of a 48U rack start leaking water onto the others below it... Yes, a full rack burned...

So I will break my answer into several parts:

    Physical properties of water versus air and mineral oil
    Risks of water use and historical bad experiences
    Total cost of cooling a datacenter
    Weaknesses of classic liquid cooling systems

Physical properties of water compared to others

First a few simple rules:

    Liquids can transport more heat than gases
    Evaporating a liquid extracts even more heat (this is what refrigerators use)
    Water has the best cooling properties of all common liquids
    A moving fluid extracts heat far better than a stationary one
    Turbulent flow requires more pumping energy but extracts heat far better than laminar flow

If you compare water and mineral oil versus air (for the same volume):

    mineral oil is around 1500 times better than air

    water is around 3500 times better than air (see the sketch after this list)

    oil is a bad conductor of electricity in all conditions and is used to cool high-power transformers
    oil, depending on its exact type, is a solvent and is able to dissolve plastic
    water is a good conductor of electricity if it is not pure (contains minerals...), otherwise not
    water is a good electrolyte, so metals put in contact with water can dissolve under certain conditions
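
A minimal sketch of where those ratios come from, using rough textbook values for volumetric heat capacity (they vary with temperature and the exact oil, so treat the numbers as approximations):

Code: [Select]
# Rough volumetric heat capacity comparison behind the "X times better than air"
# figures above. Property values are approximate textbook numbers (assumption).

fluids = {
    # name: (density in kg/m^3, specific heat in J/(kg*K))
    "air":         (1.2,  1005),
    "mineral oil": (870,  1900),
    "water":       (1000, 4186),
}

air_volumetric = fluids["air"][0] * fluids["air"][1]      # ~1206 J/(m^3*K)

for name, (rho, cp) in fluids.items():
    volumetric = rho * cp                                 # J per m^3 per kelvin
    print(f"{name:12s}: {volumetric:>9.0f} J/(m^3*K)  (~{volumetric / air_volumetric:.0f}x air)")
# Water comes out around 3500x air and mineral oil around 1400x, in line with
# the ballpark figures quoted above.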

Now some comments about what I said above. The comparisons are made at atmospheric pressure; under these conditions water boils at 100°C, which is above the maximum temperature for processors, so when cooling with water, the water stays liquid. Cooling with organic compounds like mineral oil or freon (what is used in refrigerators) is a classical method for some applications (power plants, military vehicles...), but long-term use of oil in direct contact with plastic has never been done in the IT sector, so its influence on the reliability of server parts is unknown (Green Evolution doesn't say a word about it). Making your liquid move is important: relying on natural convection inside a stationary liquid to remove heat is inefficient, and directing a liquid correctly without pipes is difficult. For these reasons, immersion cooling is far from being the perfect solution to cooling issues.
Technical issues

Making air move is easy, and leaks are not a threat to safety (only to efficiency). It requires a lot of space and consumes energy (around 15% of your desktop's consumption goes to its fans).

Making a liquid move is troublesome. You need pipes, cooling blocks (cold plates) attached to every component you want to cool, a tank, a pump and maybe a filter. Moreover, servicing such a system is difficult since you need to remove the liquid. But it requires less space and less energy.

Another important point is that a lot of research and standardization has been done on how to design motherboards, desktops and servers around air-based cooling with fans, and the resulting designs are not well suited to liquid-based systems. More info at formfactors.org
Risks

    Water cooling systems can leak if your design is poorly done. Heat pipes are a good example of a liquid-based system that does not leak (look around here for more info)
    Common water cooling systems cool only the hottest components and thus still require airflow for the others, so you have two cooling systems instead of one and you degrade the performance of your air cooling system
    With standard designs, a water leak carries a huge risk of causing a lot of damage if it comes into contact with metal parts

Remarks

    Pure water is a bad conductor of electricity
    Nearly every part of an electronic assembly is covered with a non-conductive coating; only the solder pads are not, so a few drops of water can be harmless
    Water risks can be mitigated by existing technical solutions

Cooling air reduces its capacity to hold water (humidity), so there is a risk of condensation (bad for electronics). So when you cool air, you need to remove water, and this requires energy. A normal humidity level for a human is around 70%, so it is possible that after cooling you need to put water back into the air for the people.
Total cost of a datacenter

When you consider cooling in a datacenter you have to take into account every part of it:

    Conditioning the air (filtering, removing excess humidity, moving it around...)
    Cool and hot air should never mix, otherwise you lower your efficiency and risk hot spots (points that are not cooled enough)
    You need a system to extract the excess heat, or you have to limit the heat density (fewer servers per rack)
    You may already have pipes to remove the heat from the room (to transport it up to the roof)

The cost of a datacenter is driven by its density (number of servers per square meter) and its power consumption (other factors also come into play, but not for this discussion). Total datacenter floor space is divided between the servers themselves, the cooling system, the utilities (electricity...) and service rooms. If you have more servers per rack, you need more cooling and therefore more space for cooling, which limits the actual density of your datacenter.
Habits

A datacenter is something highly complex that requires a lot of reliability. Statistics on downtime causes say that around 80% of datacenter downtime is caused by human error.

To achieve the best level of reliability, you need a lot of procedures and safety measures. So, historically, datacenter procedures are written for air cooling systems, and water is restricted to its safest uses if not banned from datacenters entirely. Basically, water is never allowed to come into contact with servers.

Yes, there are companies like HP that have water cooling solutions, and I have seen them too, but they are very rare and used in specific cases.

    Technically water is better
    Server and datacenter designs are not adapted to water cooling
    Current maintenance and safety procedures forbid the use of water cooling inside servers
    No commercial product is good enough to be used in datacenters

By commercial product I mean an off-the-shelf one, not a proprietary product made by a manufacturer that only fits that same manufacturer's hardware in specific cases.

Yes. All over the place. The advent of AIOs has transformed the marketplace. I'm seeing them in high-end workstations, ready-made gaming rigs and even training simulators. Anywhere you have high-demand workload and want to keep it cool quietly.

I'm the guy they call to air-drop in and clean up the mess when nobody else is willing to. My days are spent going from one business or datacenter to another, replacing network gear, CPUs, RAM, VRMs and PSUs in places locked down so tightly you need an escort and you put your phone in a locker before you enter.

Even in THOSE places I've been seeing liquid-cooled servers for years now. They use AIOs specifically configured for those servers, but the general config is the same: the pump is still on the CPU, the chassis is designed so you can lift the entire cooler out as an assembly without disturbing the rest of the MB, and the cooler is treated as a consumable supply. You are literally thinking in terms of 10-year-old technology, not today's.  :palm:

Cheers,

mnem
 :-/O



You don't know what I do, and you don't know where I have worked, so don't assume you are the only one with access to places so secure you need an escort... But well... I really will stop; nothing can get through a thick head.

I hope the OP gets the best config he can with the help of the ones who really know what they are talking about... or at least can say it the right way, without conflict.
« Last Edit: July 09, 2019, 03:18:32 pm by Black Phoenix »
 

Offline mnementh

  • Super Contributor
  • ***
  • Posts: 17541
  • Country: us
  • *Hiding in the Dwagon-Cave*
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #160 on: July 09, 2019, 03:16:29 pm »
gnif, most of this argument is over whether or not it's WORTH the $40-80 difference for the X570 boards that support PCIe 4.0, which we HAVE SEEN can support markedly faster throughput on even a single SSD with CURRENT hardware.

I think that's idiotic.

mnem
 |O
alt-codes work here:  alt-0128 = €  alt-156 = £  alt-0216 = Ø  alt-225 = ß  alt-230 = µ  alt-234 = Ω  alt-236 = ∞  alt-248 = °
 

Offline Mr. Scram

  • Super Contributor
  • ***
  • Posts: 9810
  • Country: 00
  • Display aficionado
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #161 on: July 09, 2019, 03:18:50 pm »
I have not read this thread in its entirety, but figured I'd add my personal experience with the PCIe 3.0 bus in some extreme use cases.

I am the author of Looking Glass, a program that allows use of a Windows VM with a passthrough GPU inside of Linux by transferring the captured frame between GPUs via system RAM. We are talking about transferring 4K 100+FPS video across the PCIe bus while competing for GPU and CPU time and resources running pro CAD applications and AAA game titles.

3840 × 2160 × 4 = 33,177,600 bytes per frame × 100 = 3,317,760,000 bytes per second ≈ 3.3 GB/s

We can do this on a PCIe 3.0 bus. While PCIe 4.0 will help in some extremely rare corner cases, it's simply not that huge a deal at this point in time with current workloads. Getting a CPU with more lanes is, IMO, far more useful than a PCIe 4.0 system. If you want to ensure you have enough lanes, go for a CPU with a ton of them, like a Threadripper (note: I am aware that this alone is beyond the OP's budget).
It's obvious that the benchmarks will show a notable difference, but the end user sitting in front of his computer and actually noticing a difference was already unlikely with the move from PCIe 2.0 to PCIe 3.0. Even your example would already be rather extreme and far from something any normal or power user encounters.
 

Offline gnif

  • Administrator
  • *****
  • Posts: 1691
  • Country: au
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #162 on: July 09, 2019, 03:23:00 pm »
@mnementh, I think that you need to take a step back and cool down, I didn't argue for either side but simply stated my personal experiences.

Just for completeness, fluid (not water) cooling is getting more and more popular in data centres with the advent of fluids like 3M's immersion cooling products.

https://www.3m.com/3M/en_US/novec-us/applications/immersion-cooling/

It's obvious that the benchmarks will show a notable difference, but the end user sitting in front of his computer and actually noticing a difference was already unlikely with the move from PCIe 2.0 to PCIe 3.0. Even your example would already be rather extreme and far from something any normal or power user encounters.

This is not a benchmark; Looking Glass is being used by hundreds of people over on the L1Tech forums and has been featured both in L1Tech videos and on Linus Tech Tips. It is niche, but it sees a ton of good real-world usage across many different hardware platforms, from PCIe 1.0 x4 through to PCIe 4.0 x16. While I appreciate that you are pointing out that it is niche and not common, it is a good example of how well the older buses hold up to a modern workload with this additional overhead thrown on top.
« Last Edit: July 09, 2019, 03:27:06 pm by gnif »
 

Offline Black Phoenix

  • Super Contributor
  • ***
  • Posts: 1129
  • Country: hk
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #163 on: July 09, 2019, 03:25:27 pm »

Just for completeness, fluid (not water) cooling is getting more and more popular in data centres with the advent of fluids like 3M's immersion cooling products.

https://www.3m.com/3M/en_US/novec-us/applications/immersion-cooling/


I was writing another big post about that... You saved me a full wall of text.
 
The following users thanked this post: gnif

Online wraper

  • Supporter
  • ****
  • Posts: 17314
  • Country: lv
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #164 on: July 09, 2019, 03:26:14 pm »
which we HAVE SEEN can support markedly faster throughput on even a single SSD with CURRENT hardware.
And it will make zero difference in anything other than one trick pony benchmarks. Once it comes to combined load, zero difference as it won't be the limiting factor.
 

Offline Black Phoenix

  • Super Contributor
  • ***
  • Posts: 1129
  • Country: hk
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #165 on: July 09, 2019, 03:35:23 pm »
mnementh, as you like watching Linus, here you go. Nothing more than eye candy.



That review is flawed because the AIO's cold plate doesn't cover the whole of the Threadripper die area:



Again, Linus not being totally accurate with everything.


 

Offline mnementh

  • Super Contributor
  • ***
  • Posts: 17541
  • Country: us
  • *Hiding in the Dwagon-Cave*
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #166 on: July 09, 2019, 03:36:53 pm »
Yes, I'd really love to see a server near the top of a 48U rack start leaking water onto the others below it... Yes, a full rack burned...

So I will break my answer in serveral parts:

(SNIP WOT)

You don't know what I do, and you don't know where I have worked, so don't assume you are the only one with access to places so secure you need an escort... But well... I really will stop; nothing can get through a thick head.

I hope the OP gets the best config he can with the help of the ones who really know what they are talking about... or at least can say it the right way, without conflict.

One of my regular clients is CIARA. That is what they make: liquid-cooled high-speed servers. They have several clients with datacenters here in Houston FULL OF THEM. CIARA is NOT the only one; Lenovo is also making them, and I've even seen Dell servers with liquid cooling at some of these locations.

Just because YOU don't believe in it doesn't make it not so. The arrogance of such a statement is simply staggering.

I have not read this thread in its entirety, but figured I'd add my personal experience with the PCIe 3.0 bus in some extreme use cases.

I am the author of Looking Glass, a program that allows use of a Windows VM with a passthrough GPU inside of Linux by transferring the captured frame between GPUs via system RAM. We are talking about transferring 4K 100+FPS video across the PCIe bus while competing for GPU and CPU time and resources running pro CAD applications and AAA game titles.

3840 × 2160 × 4 = 33,177,600 bytes per frame × 100 = 3,317,760,000 bytes per second ≈ 3.3 GB/s

We can do this on a PCIe 3.0 bus. While PCIe 4.0 will help in some extremely rare corner cases, it's simply not that huge a deal at this point in time with current workloads. Getting a CPU with more lanes is, IMO, far more useful than a PCIe 4.0 system. If you want to ensure you have enough lanes, go for a CPU with a ton of them, like a Threadripper (note: I am aware that this alone is beyond the OP's budget).
It's obvious that the benchmarks will show a notable difference, but the end user sitting in front of his computer and actually noticing a difference was already unlikely with the move from PCIe 2.0 to PCIe 3.0. Even your example would already be rather extreme and far from something any normal or power user encounters.

So you're saying it's not worth the $40-80 difference to lay the foundation with next-gen architecture THAT WE CAN ALREADY SEE IS MARKEDLY FASTER rather than continually looking backwards? REALLY?

Cheers,

mnem
I forgot to put something pithy down here.
« Last Edit: July 09, 2019, 03:38:36 pm by mnementh »
alt-codes work here:  alt-0128 = €  alt-156 = £  alt-0216 = Ø  alt-225 = ß  alt-230 = µ  alt-234 = Ω  alt-236 = ∞  alt-248 = °
 

Offline Mr. Scram

  • Super Contributor
  • ***
  • Posts: 9810
  • Country: 00
  • Display aficionado
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #167 on: July 09, 2019, 03:38:23 pm »
@mnementh, I think that you need to take a step back and cool down, I didn't argue for either side but simply stated my personal experiences.

Just for completeness, fluid (not water) cooling is getting more and more popular in data centres with the advent of fluids like 3M's immersion cooling products.

https://www.3m.com/3M/en_US/novec-us/applications/immersion-cooling/

This is not a benchmark; Looking Glass is being used by hundreds of people over on the L1Tech forums and has been featured both in L1Tech videos and on Linus Tech Tips. It is niche, but it sees a ton of good real-world usage across many different hardware platforms, from PCIe 1.0 x4 through to PCIe 4.0 x16. While I appreciate that you are pointing out that it is niche and not common, it is a good example of how well the older buses hold up to a modern workload with this additional overhead thrown on top.
I fully agree, it's a rather convincing and telling example. I just meant to indicate that numbers in benchmarks don't always translate to performance in the real world. In the case of PCIe bandwidth, they regularly don't translate at all. Even when you compare SATA 3 and NVMe on PCIe 3.0, the difference is in many cases literally nil. No measurable difference in actual tasks performed, including a couple of decimal places. Obviously the difference between PCIe 3.0 and 4.0 is going to be even smaller in the real world, even if throughput is another story on paper and in benchmarks. There don't seem to be many real world examples of a workload where it would make a noticeable difference to the actual user, if any at all. Years in the future, perhaps.
 

Offline mnementh

  • Super Contributor
  • ***
  • Posts: 17541
  • Country: us
  • *Hiding in the Dwagon-Cave*
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #168 on: July 09, 2019, 03:44:06 pm »
Again... all this argument over a $40-80 investment in a next-gen MB.

Idiotic.

mnem
 :palm:
alt-codes work here:  alt-0128 = €  alt-156 = £  alt-0216 = Ø  alt-225 = ß  alt-230 = µ  alt-234 = Ω  alt-236 = ∞  alt-248 = °
 

Offline Black Phoenix

  • Super Contributor
  • ***
  • Posts: 1129
  • Country: hk
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #169 on: July 09, 2019, 03:49:17 pm »
One of my regular clients is CIARA. That is what they make: liquid-cooled high-speed servers. They have several clients with datacenters here in Houston FULL OF THEM. CIARA is NOT the only one; Lenovo is also making them, and I've even seen Dell servers with liquid cooling at some of these locations.
Just because YOU don't believe in it doesn't make it not so. The arrogance of such a statement is simply staggering.

Yes, CiaraTech makes water-cooled servers. The AIO is based on an Asetek design.

Quote
A 2U system with a closed loop water cooling set up made by asetek (called the ORION HF). We have fans that cool the RAD and the PCI cards. So far, our system with the 6950x runs at 4.3GHZ/4.4Ghz (we have 2 profiles loaded on the system, you choose which is more stable for your application). It uses the ASUS X99WS IPMI motherboard, with a special BIOS build made for CIARA by ASUS. Its built more for High Frequency Trading, but you add in a graphics card or GPU and you're set.
from LTT forums.

So does HP. So do Lenovo, Supermicro and Dell too.

I didn't say that water cooling in servers doesn't exist; I stated the reasons why water is not the best option for cooling servers. And I was starting to write something about the 3M liquid, but someone already posted that; it was going to be the conclusion of the big wall of text.
 

Offline gnif

  • Administrator
  • *****
  • Posts: 1691
  • Country: au
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #170 on: July 09, 2019, 03:50:44 pm »
So you're saying it's not worth the $40-80 difference to lay the foundation with next-gen architecture THAT WE CAN ALREADY SEE IS MARKEDLY FASTER rather than continually looking backwards? REALLY?

Not at all, but since the OP is clearly on what I would consider a tight budget, that extra $40-80 of his limited money could mean getting something else that is far more useful to him.

Also, just because it's the latest and greatest doesn't mean you should adopt it the first chance you get.

Abit brought out the first ATA-66 motherboards, which had a fatal flaw that randomly corrupted your HDDs, making the bus unusable.
Fujitsu brought out the first budget home 6-10 GB HDDs, and every single one failed due to the new method of encapsulating the controller IC.
Intel mass-produced and sold Atom CPUs to the enterprise sector for mission-critical infrastructure using flip-chip BGA construction, which are now all failing due to unforeseen issues with the then-new technology.
Samsung brought out the first 1 TB home SATA SSDs, which suffered a fatal performance flaw due to issues with the wear levelling implemented in silicon; it was rectified in later models.
AMD brought out the Ryzen 7 series of CPUs with a critical bug that shows up under Linux when doing multi-threaded workloads, causing a full system halt; it was fixed in later revisions.

These are just a few examples of critical bugs/flaws in new, unproven technology that have bitten early adopters.

« Last Edit: July 09, 2019, 03:55:29 pm by gnif »
 

Offline Mr. Scram

  • Super Contributor
  • ***
  • Posts: 9810
  • Country: 00
  • Display aficionado
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #171 on: July 09, 2019, 03:53:58 pm »
One of my regular clients is CIARA. That is what they make: liquid-cooled high-speed servers. They have several clients with datacenters here in Houston FULL OF THEM. CIARA is NOT the only one; Lenovo is also making them, and I've even seen Dell servers with liquid cooling at some of these locations.

Just because YOU don't believe in it doesn't make it not so. The arrogance of such a statement is simply staggering.


So you're saying it's not worth the $40-80 difference to lay the foundation with next-gen architecture rather than continually looking backwards? REALLY?

Cheers,

mnem
I forgot to put something pithy down here.
Taking my comments in my previous post into account, I don't see it making a real difference anytime soon and definitely not a significant difference. In benchmarks it's faster, in the real world it doesn't appear to be. If I had any reason to think beanflying would actually benefit from the upgrade in the next 5 years I'd probably recommend going with the upgrade. Instead, it's money which can make an actual quantifiable impact elsewhere. Don't get me wrong, I understand the urge to buy that thing with the highest number and more being better regardless. But that's a game of pride and not one which takes the actual demands and limitations into account. I fully understand people going "sod it" and just flat out buying the latest and greatest for the sake of it, but that has little to do with use cases. I also don't think that was the intention of this thread, or the budget.
« Last Edit: July 09, 2019, 04:08:44 pm by Mr. Scram »
 

Offline olkipukki

  • Frequent Contributor
  • **
  • Posts: 790
  • Country: 00
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #172 on: July 09, 2019, 03:59:25 pm »
I don't know how many have actually tried water cooling or what has changed in the tech over the last 5-6 years,
but I was trying to "eliminate" the noise of the non-overclocked X-series and Xeon-class CPUs that sat in boxes next to me.

This was one of my worst purchases in a decade: obsolete sh$t with a very creepy noise!

In the end, the money was much better spent on a bigger case and traditional "beefy" coolers.
« Last Edit: July 09, 2019, 04:01:37 pm by olkipukki »
 

Offline Mr. Scram

  • Super Contributor
  • ***
  • Posts: 9810
  • Country: 00
  • Display aficionado
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #173 on: July 09, 2019, 03:59:58 pm »
Not at all, but since the OP is clearly on what I would consider a tight budget, for his limited amount of money that extra $40-80 could mean getting something else that is far more useful to them.

Also, just because it's the latest and greatest doesn't mean you should adopt it the first chance you get.

Abit brought out the first ATA-66 motherboards, which had a fatal flaw that randomly corrupted your HDDs, making the bus unusable.
Fujitsu brought out the first budget home 6-10 GB HDDs, and every single one failed due to the new method of encapsulating the controller IC.
Intel mass-produced and sold Atom CPUs to the enterprise sector for mission-critical infrastructure using flip-chip BGA construction, which are now all failing due to unforeseen issues with the then-new technology.
Samsung brought out the first 1 TB home SATA SSDs, which suffered a fatal performance flaw due to issues with the wear levelling implemented in silicon; it was rectified in later models.
AMD brought out the Ryzen 7 series of CPUs with a critical bug that shows up under Linux when doing multi-threaded workloads, causing a full system halt; it was fixed in later revisions.

These are just a few examples of critical bugs/flaws in new, unproven technology that have bitten early adopters.
Remember those first motherboards with USB 3.0, which used a separate controller chip which was often actually slower and less reliable than the native USB 2.X connections on the same boards? Or the same story when SATA3 was introduced? Lots of fun, lots of confused people. I'm not saying that'll be the case here, but they're definitely examples of early adopter woes.
 
The following users thanked this post: gnif

Offline Mr. Scram

  • Super Contributor
  • ***
  • Posts: 9810
  • Country: 00
  • Display aficionado
Re: $1000 USD CAD and Rendering Workhorse. Getting the Balance right?
« Reply #174 on: July 09, 2019, 04:03:24 pm »
I don't know how many have actually tried AIO water cooling or what has changed in the tech over the last 5-6 years,
but I was trying to "eliminate" the noise of the non-overclocked X-series and Xeon-class CPUs that sat in boxes next to me.

This was one of my worst purchases in a decade: obsolete sh$t with a very creepy noise!

In the end, the money was much better spent on a bigger case and traditional "beefy" coolers.
It seems to be accepted that air is quieter. I would personally go with air, but I can see a special use case in sustained rendering in a hot shack. AIOs do seem to cool a bit more effectively, although the extra headroom is generally intended for overclocking. beanflying, are you intending to overclock?
 

