Author Topic: Paralleling DC "wall worts".  (Read 989 times)


Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Paralleling DC "wall worts".
« on: August 25, 2024, 03:13:28 pm »
I have 3 identical SBCs: single-board x86 computers, "Zimaboards".

They come with a 12V 3A DC power supply.  The boards themselves, naked, consume only 6W of the available 36W.

However, I want to attach some large HDDs to them, either by spreading the load (one HDD onto each machine) or by common-railing the PSUs into a single 12V 9A DC rail.

HDDs only pull between 5W and 7W when running.  The problem happens when power saving spins them down and then an access spins them back up.  It has been a while since I read an HDD datasheet, but I do recall figures of 3A peak being reported when I have looked.

The manufacturers of the Zimaboard claim a single HDD is fine.  I need at least an HDD and an SSD on each.

If I just commoned all 3 PSUs together and then split them back out again to the 3 PCs, the PSUs could share the spike loads when disks spin up, and as I can make it unlikely for them all to spin up at once, I can limit the total spike current to maybe 4A.
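A rough back-of-the-envelope check of that budget, using only the figures quoted in this thread (which are estimates, not measurements), might look like:

```python
# Rough power-budget sanity check for three paralleled 12V 3A supplies
# feeding three SBCs plus HDDs. All figures are the estimates from the
# thread, not measured values.

BOARD_W = 6.0        # idle draw of one Zimaboard, watts
HDD_RUN_W = 7.0      # worst-case running draw of one HDD, watts
SPINUP_A = 3.0       # peak 12V spin-up current of one HDD, amps
RAIL_V = 12.0

supply_a = 3 * 3.0   # three 3A supplies common-railed

# Steady state: three boards plus three HDDs all running
steady_a = (3 * BOARD_W + 3 * HDD_RUN_W) / RAIL_V
print(f"steady-state draw: {steady_a:.2f} A of {supply_a:.0f} A")

# Worst case: one disk spinning up while the other two keep running
spike_a = (3 * BOARD_W + 2 * HDD_RUN_W) / RAIL_V + SPINUP_A
print(f"one-disk spin-up spike: {spike_a:.2f} A of {supply_a:.0f} A")
```

On these assumptions the common rail stays inside 9A even with one disk spinning up, though the spike works out closer to 5-6A than 4A once the other drives' running draw is counted.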

My understanding of these AC/DC "black boxes" is that they are likely SMPSUs and the DC side is likely floating.  If they are indeed "ground referenced" for PC compatibility, it still shouldn't matter, as they are all on the same mains socket and on the same mains circuit as the devices they will connect to.  No adverse DC current flow should occur between them.

?

There are other options open to me.  This is the easiest, cheapest and nastiest.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Online magic

  • Super Contributor
  • ***
  • Posts: 7166
  • Country: pl
Re: Paralleling DC "wall worts".
« Reply #1 on: August 25, 2024, 03:26:33 pm »
Nope, not gonna be easy.

They have different open-circuit output voltages, and the one with the highest voltage will supply all the loads while the others stand by.
If it has piss-poor load regulation, its output may drop under heavy load, causing the next one in order to wake up and start sharing some load.
But if its load regulation is too good, it will simply shut down at some point and come back a moment later.

You may have more luck joining their outputs by means of series resistors (dunno, 0.1Ω~1Ω or so), particularly if their open-circuit voltages are close together.
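For what it's worth, the ballast-resistor idea can be sanity-checked with a bit of Ohm's law. This sketch assumes ideally stiff supplies behind equal resistors, which real wall warts are not, so treat the numbers as purely illustrative:

```python
# Sketch of how equal series ballast resistors split current between two
# supplies with slightly different open-circuit voltages (OCVs).
# Assumes ideal stiff sources behind each resistor and v1 >= v2.

def share(v1, v2, r, i_load):
    """Current drawn from each supply through equal ballast resistors r."""
    # Common node voltage when both supplies source current:
    v_node = (v1 + v2) / 2 - i_load * r / 2
    i1 = (v1 - v_node) / r
    i2 = (v2 - v_node) / r
    if i2 < 0:   # supply 2's OCV is below the node: it sources nothing
        i1, i2, v_node = i_load, 0.0, v1 - i_load * r
    return i1, i2, v_node

# 100 mV OCV mismatch, 0.22 ohm ballasts, 4 A total load:
i1, i2, v = share(12.10, 12.00, 0.22, 4.0)
print(f"supply1 {i1:.2f} A, supply2 {i2:.2f} A, rail {v:.2f} V")
```

With r = 0 the higher-OCV supply takes the entire load; the resistors trade a little rail droop for a forced split.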

edit
Besides, I think if the board plus one disk is OK, then adding an SSD is likely still OK.
If in doubt, try it and watch the 12V rail on a scope.
« Last Edit: August 25, 2024, 03:31:28 pm by magic »
 
The following users thanked this post: MK14

Offline mariush

  • Super Contributor
  • ***
  • Posts: 5135
  • Country: ro
Re: Paralleling DC "wall worts".
« Reply #2 on: August 25, 2024, 03:55:36 pm »
SSDs work from 5V or 3.3V (NVMe/M.2); they consume a pretty low amount, in the 1-2 watt range when reading files.

Hard drives consume 5-7 watts, around half from 5V and half from 12V.  Yes, the startup current will be 1-2A peaks.

The 12V 3A power supply should be able to handle a hard drive and an SSD just fine.

No, I wouldn't parallel power supplies.

Another option: you could get yourself a small Atom board and make a NAS with the hard drives, and access the drives from those servers through the network if needed. Do you really need more than an SSD on each of those home servers?

E.g. $40 for this: https://www.ebay.com/itm/186302715202

2 SATA 3Gbps ports, IDE, and a mini-PCI slot into which you could plug a second SATA controller, e.g. https://www.ebay.com/itm/315637117270

Add a couple of DDR2 RAM sticks and a laptop adapter (common barrel jack, 2.5mm ID / 5.5mm OD) and you have your NAS.

Power supplies are cheap; you can get a genuine 65W+ one for $15: https://www.digikey.com/short/3d374wd0
 
The following users thanked this post: MK14

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: Paralleling DC "wall worts".
« Reply #3 on: August 25, 2024, 05:44:55 pm »
The reason I want to move them to the Zimaboards (even moving them from a 2.5Gbps host to a 1Gbps Zimaboard) is to remove the need for the larger server (100W) to be on to use them.

The idea is to move the old spinning-HDD storage to the Zimas only.  This includes drives which get used frequently, as they hold my music and whatnot, but hold no "system storage", so they can be part-time.  Turning on the main server just to access these is somewhat wasteful, and I also tend to then leave it running.

I want to avoid running large 100W servers for basic skeleton loads.  I want to cram and shoehorn enough into as few mini-PCs as possible and only turn the bigger servers on when I need a dev environment and dev VMs.

These 6W little gizmos should, I hope, be able to serve some MP3s and maybe some 1080p MP4s and save me having to switch the big server on and off.

(The rest of the network services etc. run on an old HP EliteDesk with 32GB RAM.)  All servers, Zima included, run the Proxmox virtualiser.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline coromonadalix

  • Super Contributor
  • ***
  • Posts: 6564
  • Country: ca
Re: Paralleling DC "wall worts".
« Reply #4 on: August 25, 2024, 05:58:11 pm »
I'd vouch for a small mini-ITX NAS-oriented board; some have 8 SATA ports or more, and I've seen some with 10 or 12 ... small enough for a home server.

Yes, you want to cut consumption; even some new Raspberry Pi boards are powerful enough and cost less ... there are HATs/shields for NAS purposes available ... up to 4 SSD drives.
I would not be surprised if the Zimaboards have one of these hidden inside.

A 2.5Gbps link is normally more than enough to serve files, unless you have 4K movies in Blu-ray format, 50GB-sized files??
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4172
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Paralleling DC "wall worts".
« Reply #5 on: August 25, 2024, 06:00:40 pm »
This is why Synology has spin-up sequenced, with each disk a few seconds apart.

Power Up In Standby (PUIS)

But Synology probably used the pin-11 method, since they also do the hardware.
 

Offline madires

  • Super Contributor
  • ***
  • Posts: 8131
  • Country: de
  • A qualified hobbyist ;)
Re: Paralleling DC "wall worts".
« Reply #6 on: August 25, 2024, 06:01:21 pm »
I'd recommend getting a suitable power brick, e.g. from Meanwell, to power all three SBCs. The efficiency might be a bit better than running three smaller wall warts.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: Paralleling DC "wall worts".
« Reply #7 on: August 25, 2024, 07:43:10 pm »
Quote from: coromonadalix on August 25, 2024, 05:58:11 pm
I'd vouch for a small mini-ITX NAS-oriented board; some have 8 SATA ports or more, and I've seen some with 10 or 12 ... small enough for a home server.

Yes, you want to cut consumption; even some new Raspberry Pi boards are powerful enough and cost less ... there are HATs/shields for NAS purposes available ... up to 4 SSD drives.
I would not be surprised if the Zimaboards have one of these hidden inside.

A 2.5Gbps link is normally more than enough to serve files, unless you have 4K movies in Blu-ray format, 50GB-sized files??

Yes.  A modern mini-ITX with a modern CPU, RAM, ITX PSU etc. is the "mutt's nuts".  Especially if you find yourself a nice NAS-focused case with drive-bay room.

However, I have two 2022-class "gaming PC" servers with good specs, and the only real difference with mini-ITX is size; it would just be making things smaller, not lower powered.

It's not even that I am concerned about how much power the HDDs consume.  I am more concerned with running a big fat server when I'm in bed and nobody is likely using it beyond background tasks, home automation and some email.

So it's less about focusing things into "one" box and more about splitting things up onto individual machines, which can then be powered up or down depending on what "part time" services I need.

I'm going to just try one HDD on one Zimaboard for a while to see how it performs under an aggressive power-saving spin-down regime.
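As an aside, if that spin-down regime ends up being managed with hdparm, its -S timeout value uses an odd encoding (per the hdparm man page: 1-240 means N×5 seconds, 241-251 means (N-240)×30 minutes). A small helper to map minutes to the nearest code; the device name in the comment is just an example:

```python
# Helper for hdparm's -S spindown-timeout encoding (from the hdparm man
# page): 0 disables the timer, values 1-240 mean N*5 seconds, and
# 241-251 mean (N-240)*30 minutes.

def hdparm_s_code(minutes):
    """Return the -S value closest to the requested idle timeout."""
    seconds = minutes * 60
    if seconds <= 0:
        return 0                       # 0 disables the spindown timer
    if seconds <= 240 * 5:             # up to 20 minutes: 5-second units
        return max(1, round(seconds / 5))
    half_hours = min(11, max(1, round(minutes / 30)))
    return 240 + half_hours            # 30-minute units, capped at 5.5 h

print(hdparm_s_code(10))   # 120 -> ten minutes in 5-second units
print(hdparm_s_code(30))   # 241 -> one 30-minute unit
# Then e.g.: hdparm -S 241 /dev/sdX   (example device name)
```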

The Zimaboards are 1Gbps.  They do have 2x 1Gbps NICs, but trying to utilise both NICs for 2Gbps is not that easy.  Aggregated links do not tend to work the way you expect, and need heavy, assorted traffic to function right.

That shouldn't be a problem.  It wasn't before I got 2.5Gbps, and while I will notice the difference, I don't think it will be a problem.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline J-R

  • Super Contributor
  • ***
  • Posts: 1203
  • Country: us
Re: Paralleling DC "wall worts".
« Reply #8 on: August 25, 2024, 09:43:44 pm »
Some thoughts:
Proxmox is a bit evil with regard to hard-drive spin-down, so you may have some challenges there.  If the drive is presented directly to Proxmox, it will be waking it up regularly.  If you use pass-through to a VM, it will be better.
Having a single hard drive on each host seems a bit annoying.  Will you be using replication or similar to provide redundancy/backups?  Will the data be spread out?
You could consider moving the HDDs to USB enclosures where they'll have their own power supplies.

How big and how old are the hard drives?  What model of drive is it?  The power draw of a hard drive can be lower if it's a model within a lineup that has fewer platters.  Say an 8TB drive out of a 20TB lineup, so only 2 platters instead of 5.

How much data is involved?  Personally I've moved to all SSDs over the last decade.  Used enterprise SSDs are my go to for VM workloads, primarily the older Dell-branded Intel DC series.  But also some other random ones for any bulk storage that needs to be online all the time.

Honestly for bulk storage, you have lots of options and pretty much any decent SSD can be OK since the data is mostly just sitting there and being read.


Another permutation:
When Synology released their DS409slim back in 2009, I jumped on that bandwagon for a while and that was nice.  Then to the DS416slim and now to the DS620slim, all running laptop hard drives.  The DS416slim (5-bay) had 2TB hard drives, and I moved those to the DS620slim (6-bay) but I eventually changed my mind and upgraded to 1.92TB SSDs instead.  Cost wasn't a huge deal, a little over $100 per SSD.  Just under 7TB of RAID6 storage available 24x7x365 and under 20W.  And it can run a couple VMs just fine with the upgraded 8GB RAM.

One big problem is that if you live and die by ROI, then that route isn't going to calculate out.  HDDs are still king for capacity.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: Paralleling DC "wall worts".
« Reply #9 on: August 26, 2024, 08:27:17 am »
Almost everything is SSD already.  I have 2x 6TB HDDs.  One is my personal media, i.e. my photos, videos, GoPro footage, for the past 20 years or so; it's huge.  Most of it could probably be deleted, but I am a hoarder.  The other is my non-personal media: movies, TV series, etc. etc.  Neither is full, but combined they are more than 6TB.

I have been looking into SSDs.  I already have one 4TB SSD.  The trouble is that SSDs in the 6-8TB range are still stupidly expensive.

The plan is to rearrange the storage to put nearly everything I could want on a regular basis onto SSD, and then put the HDDs into external enclosures, turned on only when I need to access "glacial storage", I think they call it: the stuff you have to get out of the cupboard.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8750
  • Country: fi
Re: Paralleling DC "wall worts".
« Reply #10 on: August 26, 2024, 03:19:56 pm »
Yeah, as others have said, paralleling only works by extremely good luck.  Simply get a PSU with a high enough power rating.  With a single +12V supply you have nearly endless product availability to choose from.  Meanwell is usually a decent choice, both quality- and price-wise.
 

Online radiolistener

  • Super Contributor
  • ***
  • Posts: 3981
  • Country: ua
Re: Paralleling DC "wall worts".
« Reply #11 on: August 26, 2024, 03:36:42 pm »
Technically it can be possible, but it's dangerous, because it depends on how the PSUs are implemented and how they react to having an external voltage applied.  There is some risk that it damages a PSU, and it may put high voltage on your equipment.  I would not recommend doing it, especially with cheap Chinese chargers.
 

Offline Zero999

  • Super Contributor
  • ***
  • Posts: 19895
  • Country: gb
  • 0999
Re: Paralleling DC "wall worts".
« Reply #12 on: August 26, 2024, 08:25:25 pm »
It depends on the voltage tolerance and how it responds to overloads. If the voltages are fairly close and it just goes into constant current mode on overload, then it'll be fine. Otherwise, if it starts pulsing on and off during overload, then it will become unstable.
 

Online magic

  • Super Contributor
  • ***
  • Posts: 7166
  • Country: pl
Re: Paralleling DC "wall worts".
« Reply #13 on: August 26, 2024, 09:23:37 pm »
Even if it works, it's still unhealthy for one 'wart to run near 100% load all the time, if that's what would result.

OTOH, it has just occurred to me that load sharing may actually work with typical flyback wall warts. Their main overload protection mechanism is primary current limiting, which limits output power. The reason they shut down on overload is that limited power plus high load current means the secondary voltage necessarily falls out of regulation (conservation of energy). This has a side effect of reducing the auxiliary voltage which powers the switching controller, and the chip shuts down. This will not happen if other paralleled wall warts activate to keep the secondary voltage in regulation.

Maybe I should try it tomorrow...
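That mechanism can be caricatured in a few lines. The power limit and UVLO threshold below are made-up illustrative numbers, not measurements of any particular wart:

```python
# Toy model of the flyback overload behaviour described above: primary
# current limiting caps output *power*, so under overload the rail sags
# as P_lim / I_load, and the controller only quits once the rail (and
# hence its auxiliary supply) collapses far enough.
# P_LIM and V_UVLO are illustrative assumptions.

V_SET = 12.0      # regulated output voltage
P_LIM = 40.0      # output power cap set by primary current limiting, W
V_UVLO = 9.0      # assumed voltage below which the controller gives up

def rail_voltage(i_load):
    v = min(V_SET, P_LIM / i_load)
    return v if v >= V_UVLO else 0.0   # controller shuts down (hiccup)

for i in (2.0, 3.5, 4.0, 5.0):
    print(f"{i:.1f} A -> {rail_voltage(i):.2f} V")
```

On this model, as load current rises past P_LIM/V_SET the rail first sags out of regulation, and only shuts down once it collapses past the UVLO point; that sag window is where a paralleled wart could wake up and pick up load.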
 
The following users thanked this post: MK14

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8750
  • Country: fi
Re: Paralleling DC "wall worts".
« Reply #14 on: August 27, 2024, 05:31:19 am »
Quote from: Zero999 on August 26, 2024, 08:25:25 pm
"constant current mode on overload"

Unlimited-time constant current mode on switch-mode wall warts is nearly nonexistent. Hiccup limiting is most typical. Some might combine constant current mode with a time limit (latching fault or hiccup) so that they can precharge large capacitive loads without throwing a tantrum.

This is simply for protection. Normal constant-voltage wall warts are meant to power equipment, not to charge batteries or drive LEDs or other current-controlled stuff. Therefore working against the current limit normally indicates a fault in the equipment or wiring, which (unless it's a dead short at near 0V) means excess power dissipation and a risk of fire.

And fuses within equipment require large enough short-circuit currents to be available, which a current-limited supply by definition can't give unless it is grossly oversized. Therefore CC mode is dangerous and not used in such supplies.

These wall warts have quite loose voltage tolerances, and some might implement stuff like negative-resistance cable voltage-drop compensation, which makes paralleling even more impossible than it already is.
 

Offline J-R

  • Super Contributor
  • ***
  • Posts: 1203
  • Country: us
Re: Paralleling DC "wall worts".
« Reply #15 on: August 27, 2024, 08:51:33 am »
Redundant power is pretty easy to do.  But while joined, shared power is common in a blade-chassis configuration, it's not a trivial setup.  It involves plenty of engineering/testing/work on the enclosure, plus monitoring and management of the various inputs and loads to avoid flames.

So trying to reproduce that type of configuration at the DIY level is going to be very high risk.  Much better to keep each node isolated, and even on separate UPS units.


Agreed, 12TB of flash is a bit of money, especially if you want RAID.  My current setup does require me to manually move some data around between flash and HDD.  It's not that big of a deal.  Similar to flash caching methods, I only move things off of SSD when I absolutely have to.

I don't use spin-down timers, I power my NAS devices up and down manually when necessary.  I always find the automatic method to be problematic.  Either they won't spin down when I think they should, or they keep spinning up and down incessantly due to some workstation reaching out.  I have WOL enabled in case I need something when I'm away.


Also, don't neglect the importance of the 3-2-1 backup rule!
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: Paralleling DC "wall worts".
« Reply #16 on: August 27, 2024, 10:31:16 am »
There was an element of redundancy in parallel supplies, where 2 (or even 1) could keep all 3 little PCs running.

It sounds like my mileage may vary and there is many a blue-smoke trap.  With the 3 little PCs combining to about £600, I'd rather not test the theories.  I have personal experience of "wayward" DC currents and just how far they will go to find the lowest-resistance path back to their electron source ... which tends to be the one without a current-sense shunt in it.

On the application side.  Honestly, only 1 of these drives is regularly called for: the "non-personal" media (movies, music, TV shows, etc. etc.).  It's just too bulky for SSD, and while stuff I download sits on SSD drives for a while, when those get cleaned out it gets dumped onto the HDDs.  Pulling back a series or two when I'm watching is cumbersome.  None of this is backed up; it's a single drive.  It has suffered bit rot and some loss over the years, but it has been routinely transferred to newer drives every few years.  It is all technically replaceable.

The personal media drive is more of a long-term store.  When I need to use much of the media in there, working with videos, it needs to be transferred to a local SSD anyway.  It can be left offline for weeks.  The "guts" of this are backed up, infrequently and manually.

Backups are in the form of a RAIDZ trio of 2TB drives providing 4TB of backup space with 1-drive loss tolerance.  These, and the box they are connected to, are offline and only WOL'd when I want to sync backups to them, once a week for the routine ones.  Online system drives are dual M.2 in the case of the larger server, and a single SSD or M.2 in the case of the low-power ones.

All the virtualised storage and VMs/containers are backed up: 3 daily, 2 weekly, 1 monthly for 90% of virtual disks.  All of these, plus config snapshots of all virtualisation servers, are synced to those offline backup disks weekly, and held locally on each node until then.  The plan was to automate it to run once a day, but ... life/time, you know the drill.

The virtualised stuff is so easy to deal with: backing up, migrating, etc.  It's the physical bulk storage which is the burden to solve.  Hopefully in a few more years 10+TB of SSD storage will be affordable.

I'm going to take the plunge and put an HDD on a Zimaboard without any PSU modifications and see how it performs.

The idea being... if I need to access that drive, I can make it a simple WOL command, wait a minute, and work away.  I can (time permitting) even set up a "lights out" schedule for it, meaning if the disks are spun down for longer than 30 minutes, shut the box down completely.  Even without this, I would be forgetting to turn off a 6W PC with a 5W drive, rather than my current situation of running the 100W server 24 hours because I didn't see its blue lights on.
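The "lights out" rule is simple enough to sketch as a pure decision function. The 30-minute threshold is the figure from the post; the names and timestamps are purely illustrative:

```python
# Sketch of the "lights out" rule: if every disk has been spun down
# (idle) for longer than a threshold, power the box off completely.
# The 30-minute figure comes from the post; the rest is illustrative.

from datetime import datetime, timedelta

def should_shut_down(last_activity, now, idle_limit=timedelta(minutes=30)):
    """True once every disk has been idle past the limit."""
    return all(now - t > idle_limit for t in last_activity)

now = datetime(2024, 8, 27, 12, 0)
idle = [now - timedelta(minutes=45)]                           # idle 45 min
busy = [now - timedelta(minutes=45), now - timedelta(minutes=5)]
print(should_shut_down(idle, now))   # True
print(should_shut_down(busy, now))   # False
```

A cron job could evaluate this against per-disk idle timestamps (e.g. gathered from the drives' power states) and issue the shutdown.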
« Last Edit: August 27, 2024, 10:33:03 am by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8750
  • Country: fi
Re: Paralleling DC "wall worts".
« Reply #17 on: August 27, 2024, 11:09:46 am »
Quote from: paulca on August 27, 2024, 10:31:16 am
"There was an element of redundancy in parallel supplies where 2 (or even 1) could keep all 3 little PCs running."

No, it's the opposite of redundancy:

Does it work with one?  Then you don't need more current after all; just use one.
Does it not work with one because more current is needed?  Then it fails anyway when one of the supplies fails: no redundancy.
Plus, if any of the supplies fails as a short circuit (or close to that), it brings the rest down too.

To add redundancy, you can diode-OR the supplies together. But then every supply must be able to provide the full current alone.
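A toy model of that diode-OR arrangement, assuming ideal diodes with a fixed 0.4V Schottky-ish drop (illustrative only):

```python
# Sketch of diode-OR redundancy: each supply feeds the rail through its
# own diode, the supply whose (OCV minus diode drop) is highest conducts,
# and a failed/low supply is isolated instead of dragging the rail down.
# Ideal diodes with a fixed 0.4 V drop; purely illustrative.

V_DROP = 0.4

def diode_or(supply_voltages):
    """Rail voltage and index of the conducting supply."""
    candidates = [v - V_DROP for v in supply_voltages]
    rail = max(candidates)
    return rail, candidates.index(rail)

rail, src = diode_or([12.1, 12.0, 0.0])   # third supply has failed
print(f"rail {rail:.2f} V from supply {src}")
```

The failed supply at 0V simply never conducts, so the rail holds up only if the surviving supply can source the full load alone, which is exactly the caveat above.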
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: Paralleling DC "wall worts".
« Reply #18 on: August 30, 2024, 09:30:26 am »
So it runs one HDD and one SSD fine.  I just need to plug it into a monitor to enable WOL, and I think it will be good.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline Doctorandus_P

  • Super Contributor
  • ***
  • Posts: 3820
  • Country: nl
Re: Paralleling DC "wall worts".
« Reply #19 on: August 30, 2024, 02:50:43 pm »
No no, not worts, but warts. Those ugly things that stick out of an otherwise smooth surface:

https://duckduckgo.com/?hps=1&q=wart&iax=images&ia=images


 

