Author Topic: DOS vs. Linux  (Read 28590 times)


Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9941
  • Country: us
Re: DOS vs. Linux
« Reply #75 on: December 03, 2020, 07:44:00 pm »
If you mess something up and it appears to be unrecoverable, just reinstall the system.  Yes, on 'mature' systems, there will be a lot of lost applications and likely some important files but those were all backed up somewhere else.

It's not nearly as hard to reinstall a Linux system as Windows.  Among other things, I don't need a secret-squirrel code to activate the installation.

The Raspberry Pi is especially nice in this regard because the imager just lays down a new copy on the SD card and you're good to go.  The 'Accessories -> SD Card Copier' utility will make an image of the existing system card.  All you need is a USB <=> SD Card gadget

https://www.amazon.com/gp/product/B006T9B6R2

It is worth considering 256 GB SD cards...

You may need a male-to-female USB A extension cable to get the gadget away from other devices plugged into the computer.

How many stories have we heard about people wiping out their 5-1/4" system floppies?

There are many web pages with lists of bash commands.  Some are probably correct, others seem to have problems.  On one, 'mv' is described as useful for moving directories but makes no mention of moving files or just renaming them.
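For the record, 'mv' handles all three cases with the same syntax; a quick sketch (the file names here are made up):

```shell
mv notes.txt notes.bak      # rename a file
mkdir -p archive
mv notes.bak archive/       # move a file into an existing directory
mv archive archive-old      # rename (or move) a whole directory, no -r needed
```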

A fun command I didn't know was there:  'compgen'

Code: [Select]
$ compgen -c
will list all 1,912 directly executable commands on my Pi, and AFAICT it doesn't include things installed over in /opt.  I'm not certain which directories are searched, but probably those on the PATH.  My RISC cross-compiler installed in /opt is not on the PATH and not on the list. There doesn't seem to be a 'man' page... 

Code: [Select]
$ compgen -c | sort
will make the same list in sorted order

Code: [Select]
$ compgen -c | sort | more
And this will allow paging so you can actually see something

Code: [Select]
$ compgen -c | wc -l
will just display the number of commands

Code: [Select]
$ compgen -a
will display all active aliases


It's about the piping thing!  The big feature of Unix was pipes and making everything look like a file.
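Since compgen writes plain lines to stdout like anything else, it composes with the usual filters; a couple of examples (the patterns are just illustrative):

```shell
compgen -c | grep '^ls' | sort -u    # distinct command names starting with "ls"
compgen -c | sort -u | wc -l         # count of distinct names (builtins can repeat)
```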

As I wrote this, I installed the Free Pascal compiler.  compgen now shows 2127 executables.

https://www.geeksforgeeks.org/compgen-command-in-linux-with-examples/

And Google is your best friend!

« Last Edit: December 03, 2020, 08:02:35 pm by rstofer »
 

Online Monkeh

  • Super Contributor
  • ***
  • Posts: 8090
  • Country: gb
Re: DOS vs. Linux
« Reply #76 on: December 03, 2020, 08:03:53 pm »
A fun command I didn't know was there:  'compgen'

This is a bash builtin. Like everything else in bash, you'll find the documentation in the bash manpage.
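A quick way to confirm that, and to get the short help text without opening the full bash manpage:

```shell
type compgen    # reports that compgen is a shell builtin
help compgen    # bash's built-in help for it
```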
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4559
  • Country: nz
Re: DOS vs. Linux
« Reply #77 on: December 03, 2020, 09:49:58 pm »

I expect sudo dd if=/dev/null of=/ bs=1M would work about as well.

I often have to correct myself too, but that doesn't do anything: reading /dev/null returns EOF immediately. You meant
dd if=/dev/zero of=/
add
status=progress
to see exactly how much damage you manage to cause.

Another fun thing to do is
kill 1
which fails (since you really meant %1), and you follow up with
sudo !!


edit: it actually looks like
dd if=/dev/zero of=/
doesn't do anything either. I haven't tried it as root, but '/' is a directory, not a regular file, and cannot be written to this way.

Yeah, you'll actually want /dev/sda or whatever.

And, yep, /dev/zero. Or /dev/random
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9941
  • Country: us
Re: DOS vs. Linux
« Reply #78 on: December 03, 2020, 11:31:57 pm »
Here's a pretty decent FREE Kindle book on bash scripting but it starts with the very basics.

I'm starting at the beginning to fill in the gaps.  I'm not overly interested in scripting but it's something I should be familiar with or at least have reference material.

https://www.amazon.com/gp/product/B081D8JFCM
 
The following users thanked this post: DiTBho

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4321
  • Country: us
Re: DOS vs. Linux
« Reply #79 on: December 05, 2020, 11:48:25 am »
Quote
Recently I used DD to image a drive
Quote
In DOS, if you want to 'copy' something, you use 'copy', but it seems in Linux a plethora of commands exist to do the same thing.
Well, your claim of simplicity for DOS just isn't true.  "copy" works for simple files on DOS.  It wouldn't work on a directory, and it wouldn't help with your drive image copy.  You'd have to find "xcopy" and "diskcopy."   Linux's "cp" and DOS's "copy" are pretty much equivalent (I think cp has more options; you can copy directory hierarchies with cp instead of needing a separate program like xcopy).

Yeah, Linux command-line program names are ... short and obscure.  But you don't really have to learn very many to be moderately proficient (you didn't have to learn many DOS commands, either, so it's still about even).  A modern Linux system will have a whole bunch of other stuff beyond what DOS ever provided, and DOS probably had a lot of commands that you never learned, either.  (Certainly the "cmd prompt" of a modern Windows system can run MANY .exe programs installed in the c:\windows hierarchy that I have no idea what they do.)


To find commands, become more familiar with the "man" (manual) command.

"man -k keyword" will list all the man pages about copying things.   Unfortunately, these days this is "polluted" with documentation for library functions for various languages and libraries that you might have installed (ie the memcpy() C function, and also "FcCacheCopySet" and other obscure stuff.

Actual commands are in "Section 1" of the manual.  Unfortunately, I don't see a way to get "man -k" to look only in section 1 :-(
You can get reasonable results with: man -k copy | grep "(1)"

billw@VB-ubuntu:~$ man -k copy | grep "(1)"
cp (1)               - copy files and directories
cpio (1)             - copy files to and from archives
dd (1)               - convert and copy a file
debconf-copydb (1)   - copy a debconf database
gvfs-copy (1)        - (unknown subject)
install (1)          - copy files and set attributes
mcopy (1)            - copy MSDOS files to/from Unix
objcopy (1)          - copy and translate object files
rcp (1)              - secure copy (remote file copy program)
rsync (1)            - a fast, versatile, remote (and local) file-copying tool
scp (1)              - secure copy (remote file copy program)
ssh-copy-id (1)      - use locally available keys to authorise logins on a r...
x86_64-linux-gnu-objcopy (1) - copy and translate object files


If you're lucky, an individual man page ("man scp", say) will have a "see also" section at the end referring you to related commands.



 

Offline SparkyFX

  • Frequent Contributor
  • **
  • Posts: 676
  • Country: de
Re: DOS vs. Linux
« Reply #80 on: December 05, 2020, 02:03:16 pm »
Often I find myself in front of systems on which installing additional packages is a problem, so making the most of a basic system is an advantage.

People who just want to try out such commands, or who need some of these functions on Windows, might want to check out Cygwin.
And if you are fond of Norton Commander, there is also Midnight Commander on Linux.
Support your local planet.
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1376
  • Country: pl
Re: DOS vs. Linux
« Reply #81 on: December 05, 2020, 02:37:23 pm »
Uhh… dd is not like any disk-whatever software of any system. It can work on disks only because the operating system exposes them as files and the user passes the names of those files to dd. So can literally any other piece of software, if it has no specific requirements about file characteristics. The feature is in the Unix-inspired operating systems, not in dd.

dd is associated with disks for cultural rather than technical reasons. When ancient dinosaurs still inhabited Earth, many other tools were not binary safe, so users had to resort to any program that didn't mangle binary data. Old habits die hard, and this is the only reason dd is still used for that purpose.(1) In 2020 dd not only rarely offers an advantage over e.g. cat, it can be harmful. The manual casually fails to clearly state what dd actually does. If you think you know, take a simple test: what will this command do?
Code: [Select]
dd if=10mebibyte-file of=destination bs=1M count=10
Is your answer: copies 10 MiB (10·1M) from “10mebibyte-file” to “destination”? If yes, you have failed the test. :P

dd performs a read from if into a buffer of bs bytes and then performs a write, of the same size as the read, into of. It repeats this count times. The fine print: the buffer is bs bytes, but the number of bytes transferred in a block may be smaller, even zero. dd will not re-read to fill the block to its full size before writing it. Which means that it may as well write 10 times nothing. And yes: that does happen, and it has led to data corruption/loss.

To somewhat remedy the issue, dd from GNU Coreutils is nice enough to at least scream at you if that occurs, printing “dd: warning: partial read”. It also has a non-standard iflag value, fullblock, that forces the program to attempt re-reads until a full block is collected. But both of those are non-standard features specific to that very implementation. Even with GNU’s implementation there is usually no reason to take the risk unless you really need something found only in dd.
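The partial-read behaviour is easy to demonstrate with a pipe, where reads come back in small chunks; a sketch (the file names are made up):

```shell
# Without iflag=fullblock, each short read becomes a short output block,
# so dd can transfer far less than bs*count bytes.
head -c 1M /dev/zero | dd of=short.bin bs=256K count=4 2>/dev/null
# With iflag=fullblock, dd re-reads until each 256K block is full.
head -c 1M /dev/zero | dd of=full.bin bs=256K count=4 iflag=fullblock 2>/dev/null
ls -l short.bin full.bin    # full.bin is exactly 1 MiB; short.bin usually is not
```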

Therefore: stop perpetuating promotion of dd for copying data and associating it with disks.
____
(1) There are some jobs in which various features of dd may be useful, but nearly all uses you find in suggestions on the internet are not among those tasks.
People imagine AI as T1000. What we got so far is glorified T9.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6993
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #82 on: December 05, 2020, 02:40:31 pm »
Actual commands are in "Section 1" of the manual.  Unfortunately, I don't see a way to get "man -k" only look in section 1 :-(
Use
    man -s 1 -k term
It is especially useful when looking for a POSIX C library function, when one is unsure if it is in section 3 or section 2 (in this order of preference):
    man -s 3,2 -k term
Similarly, for administrative commands you probably want
    man -s 1,8 -k term
and for games and such
    man -s 6 -k term

Now, nobody has time to remember those, so I suggest putting in your profile (.profile in your home directory):
Code: [Select]
alias manc='man -s 1,8'
alias mang='man -s 6'
alias manf='man -s 3,2'
so you can use manc command or manc -k topic etc. to only look in the relevant sections.

Listing library function manual pages based on match in the function name is slightly more aggravating, as --names-only includes the short description.  For those, I suggest a trivial Bash function (that you can put in your .profile, if you use bash):
Code: [Select]
function whatfunc() {
    for term in "$@"; do
        man -s 3,2 -k "$term" | awk '$1 ~ '"/$term/"
    done | sort -u
}
I often want just plain C and POSIX functions, and I have both mawk and gawk installed, with mawk being much faster in this case, so I use
Code: [Select]
function whatc() {
    for term in "$@"; do
        man -s 3,2 -k "$term" | mawk 'BEGIN { ok["(2)"]++; ok["(3)"]++; ok["(3posix)"]++ } ($2 in ok) && ($1 ~ '"/$term/)"
    done | sort -u
}
In both cases all arguments are (separate) regular expressions, so e.g. whatc ^read lists all POSIX C library functions that start with read.
« Last Edit: December 05, 2020, 02:44:10 pm by Nominal Animal »
 
The following users thanked this post: DiTBho

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: DOS vs. Linux
« Reply #83 on: December 05, 2020, 03:04:34 pm »
It also has a non-standard flag for iflag to force the program to attempt re-reads until a full block is collected: fullblock

Code: [Select]
SYNOPSIS
       #include <stdio.h>

       size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
       size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);

fread returns the actual number of bytes read, so if the returned value is different from "nmemb", the developer just has to implement a while() loop to read the rest.

Hasn't this been implemented? I will verify, but I doubt a "serious" developer would make such a silly mistake.

Anyway, I have been using dd to copy partitions (even across the network, dd coupled with nc) for 11 years, and haven't yet found a single byte lost/corrupted.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: DOS vs. Linux
« Reply #84 on: December 05, 2020, 03:09:18 pm »
Alternative to dd on Linux: GParted!

GParted is an amazingly flexible tool that serves as a graphical partition editor built for the GNOME desktop environment, but it can do much more than just edit partitions. One nifty trick I discovered is that it can copy partitions from one drive to another.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Karel

  • Super Contributor
  • ***
  • Posts: 2267
  • Country: 00
Re: DOS vs. Linux
« Reply #85 on: December 05, 2020, 03:18:56 pm »
fread returns the actual number of bytes read,  ...

Nope.

On  success,  fread()  and  fwrite() return the number of items read or written.
This number equals the number of bytes transferred only when size is 1.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9941
  • Country: us
Re: DOS vs. Linux
« Reply #86 on: December 05, 2020, 04:45:09 pm »
If there is an alternative to 'dd' for my application, I'm certainly willing to look into it.

I have a file with 3248 sectors of 512 bytes each (1,662,976 bytes).  It is the image of the IBM 1130 system disk.  It needs to be laid down on a Compact Flash in exactly the format it has.  Start at LBA 0 and write the entire image.  Don't think, don't interleave, just copy the image!

I don't need an underlying file system, don't want any kind of formatting or partitioning and I certainly don't want an MBR.  Just a plain binary copy from the file to the CF.

sudo dd if=./disk.img of=/dev/sdc   <-- or whatever dmesg says is the CF

Neither Linux nor dd need to know anything about the file, all I want to do is lay it down.

If there is a better utility than 'dd', demonstrably better, which I can bury in a Makefile, I'm all for learning about it.

I might consider a GUI application but since I am already at the command line doing cross-assembly and image building, leaving the terminal session wouldn't be considered a positive.

Incidentally, dd can also be used to copy the image from the CF to a file for backup purposes.  Kind of handy!
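The round trip can be rehearsed with ordinary files before pointing it at the real device (substitute the device node dmesg reports for card.img):

```shell
dd if=disk.img of=card.img bs=512                 # write the image out
dd if=card.img of=backup.img bs=512 count=3248    # read the 3248 sectors back
cmp disk.img backup.img && echo "round trip OK"
```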

« Last Edit: December 05, 2020, 04:46:44 pm by rstofer »
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: DOS vs. Linux
« Reply #87 on: December 05, 2020, 05:29:05 pm »
Neither Linux nor dd need to know anything about the file, all I want to do is lay it down.

As I cannot always have X11, a good alternative I am evaluating is dcfldd (see the project website).

It's described as an enhanced version of dd developed by the U.S. Department of Defense Computer Forensics Lab. It has some useful features for forensic investigators such as:
  • On-the-fly hashing of the transmitted data
  • Progress bar of how much data has already been sent
  • Wiping of disks with known patterns
  • Verification that the image is identical to the original drive, bit-for-bit(1)
  • Simultaneous output to more than one file/disk is possible
  • The output can be split into multiple files
  • Logs and data can be piped into external applications


(1) with dd, I usually use md5sum
e.g. you want to clone file.raw onto /dev/sda1 and verify the copy is identical to the image

# md5sum file.raw
fb65ba489968b8754e23f08863306218 (it prints something like this)

# md5sum /dev/sda1
fb65ba489968b8754e23f08863306218 (this may take a long time)

Then I simply compare the two strings.
If they are equal, everything is OK.
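An alternative to eyeballing two hash strings: cmp can compare the device directly against the image, limited to the image's length (the partition is usually longer than the image, so hashing the whole device wouldn't match anyway). Paths as in the example above:

```shell
size=$(stat -c %s file.raw)                    # length of the image in bytes
cmp -n "$size" file.raw /dev/sda1 && echo "copy verified"
```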


« Last Edit: December 05, 2020, 05:38:39 pm by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1376
  • Country: pl
Re: DOS vs. Linux
« Reply #88 on: December 05, 2020, 05:49:47 pm »
If there is an alternative to 'dd' for my application, I'm certainly willing to look into it. (…)
Pretty much any shell:
Code: [Select]
cat /disk.img >/dev/sdc
If it needs to be executed with sudo:
Code: [Select]
sudo cp /disk.img /dev/sdc
or
Code: [Select]
sudo tee /dev/sdc </disk.img >/dev/null
I usually use md5sum (…)
Drop caches before hashing the written data. Otherwise you’re likely to hash the input file’s cached copy both times, instead of what actually reached the device. Note that dropping caches is global: it affects all cached data, and other processes will have to pull it back from storage when they need it, which of course makes the system slower:
Code: [Select]
sudo sync
sudo sysctl vm.drop_caches=3
sysctl vm.drop_caches in the docs. Alternatively you may try your luck with dd: this is one of the cases in which it may be useful, via the iflag=direct option. But it’s not guaranteed to always work.
« Last Edit: December 05, 2020, 06:28:59 pm by golden_labels »
People imagine AI as T1000. What we got so far is glorified T9.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9941
  • Country: us
Re: DOS vs. Linux
« Reply #89 on: December 05, 2020, 06:28:36 pm »
If there is an alternative to 'dd' for my application, I'm certainly willing to look into it. (…)
Pretty much any shell:
Code: [Select]
cat /disk.img >/dev/sdc
If it needs to be executed with sudo:
Code: [Select]
sudo cp /disk.img /dev/sdc
or
Code: [Select]
sudo tee /dev/sdc </disk.img >/dev/null

I never thought about cat.  Next time I build an image (which might be never), I'll give it a try.
One thing I get from dd is the number of sectors written.  That seems like a nice cross-check.  But cat, cp and tee are easier to use.  They work because devices are treated as files.

And to think that Unix is nearly 50 years old.  It was way ahead of the other OSes of the era.  It's kind of a shame that Linux isn't more popular than it is.

dc3dd is another candidate

https://sourceforge.net/projects/dc3dd/

Using cp seems to be the most intuitive because the operation really is just a copy.
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 7357
  • Country: va
Re: DOS vs. Linux
« Reply #90 on: December 05, 2020, 07:17:27 pm »
Quote
It was way ahead of the other OSes of the era.

You seem quite satisfied with it all. Perhaps you could answer a perennial question I have which no-one seems to have managed to resolve yet:

Is there a Linux application that can create an image in a similar way to what any of a dozen utils do on Windows/DOS? The important parts:

* Sector copying, typically of only used sectors (although all sectors would be an option, rarely used).

* Mounting of an image as a virtual drive.

* (Biggie, this) Coherent backup of a live system writing to the filesystem.

Superficially this looks like cloning, but it's a bit more sophisticated. It's also a blindingly fast way to get from bare metal to a restored system: file-by-file restore tends to be a lot slower simply because the filesystem has to be manipulated during the process.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6993
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #91 on: December 05, 2020, 08:26:52 pm »
Sector copying, typically of only used sectors (although all sectors would be an option, rarely used).
I do not know of a filesystem-agnostic way of copying only used sectors.  I do tend to compress the images (using xz) for storage (for images that contain filesystems I cannot reliably mount, or contain raw data).

Hole-punching would be a simple option (similar to the sparse option for dd): blocks that contain all zeroes do not need to be stored.  (That is, if you have a 1 GB file that has only a few blocks with nonzero data, the on-disk size it requires can be just a few kilobytes on ext, xfs, etc.)
The problem is that the advanced filesystems (ext, xfs, and so on) do not expose a coherent used/free block mapping that one could consult while the filesystem is mounted in order to interpose zeroes for the unused blocks.
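The apparent-size versus allocated-size split is easy to see with an ordinary sparse file; a sketch (file name made up):

```shell
truncate -s 1G sparse.img                  # 1 GiB apparent size, nothing allocated
printf 'x' | dd of=sparse.img bs=1 seek=4096 conv=notrunc 2>/dev/null
ls -lh sparse.img                          # reports ~1.0G
du -h sparse.img                           # only a few KiB actually on disk
```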

On filesystems that can be reliably mounted, tar etc. are a better option, as they can retain even the xattrs of the files, and the archive can be trivially compressed and copied to a different-size volume.

Mounting of a image as a virtual drive.
Loopback mounts, very commonly used.  You can even specify an offset into an (uncompressed) image.  Some image formats can be mounted read-write, too; not just read-only.

In fact, if you are using a Linux machine, you probably did exactly that when booting the machine.  Most distributions use an initial ramdisk containing kernel modules and a tiny init system (in cpio format) to bring up support for necessary hardware devices before the actual root filesystem is mounted (and 'pivoted' to).  The 'snap' package format also uses loopback mounts.

Coherent backup of a live system writing to the filesystem.
Use LVM (Logical Volume Management); it supports snapshots.  It is basically a layer between hardware devices and logical filesystems. A snapshot can be copied at leisure: there is no need to freeze the filesystem, the snapshot is atomic, and you can continue using the filesystem without affecting the snapshot, provided the underlying storage has sufficient space to store the subsequently modified blocks.
« Last Edit: December 05, 2020, 08:29:31 pm by Nominal Animal »
 
The following users thanked this post: PlainName

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6993
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #92 on: December 05, 2020, 08:50:26 pm »
When the first Linux minilaptops came along (Asus Eee PC and Acer Aspire One, in 2008 or thereabouts), I used a 64 MB tmpfs mount for my Firefox profile.  When I used GDM, I had each instance of Firefox (running as myself) untar the profile to a tmpfs ("ramdisk") mount before Firefox proper was started, and tar the profile back when the last instance of Firefox exited.  GDM had certain bugs that made it impossible to run session scripts reliably.  Then I switched to LightDM, and used session scripts to populate the profile tmpfs mount when I logged in to the GUI, and recompress and tear it down when I logged out, suspended, or hibernated the machine.
This sped up Firefox immensely, and normal rabbit-hole web browsing would typically let the spinny-disk HDD spin down for hours on end.

Something very similar can be done on Linux SBCs that boot from microSD but have plenty (1 GB or more) of RAM.

I've used LVM on servers to make daily backups of black-box-user-modifiable data, by making a temporary snapshot (with a bit of magic, waiting for up to a limit for no user connections to the server to make the snapshot), copying the snapshot over a local dedicated link to another machine, then removing the snapshot.  This ensures that the snapshot is accurate at that point of time, and the only magic there just tries to make sure there are no "slow" modifications in progress at the snapshot time.  (I usually do a double check: check1, snapshot, check2; if check1 and check2 are consistent, then use snapshot; otherwise remove snapshot and retry, unless we cannot wait any longer for a snapshot.)

Virtual server farms (virtual hosting services) use LVM for backups and snapshotting, although they usually have some management software to make that real easy.  I haven't maintained a virtual server farm (only clusters and individual servers), so I don't know the exact details.
 

Online magic

  • Super Contributor
  • ***
  • Posts: 7262
  • Country: pl
Re: DOS vs. Linux
« Reply #93 on: December 05, 2020, 08:54:41 pm »
A nice "interactive" alternative to cat/dd is pv. On Linux it can also monitor progress of file descriptors opened by other processes (-d).
Everybody install it now if you haven't already :-+

Is there a linux application that can create an image in a similar way to any of a dozen utils do on Windows/DOS? The important parts:

* Sector copying, typically of only used sectors (although all sectors would be an option, rarely used).
* Mounting of a image as a virtual drive.
* (Biggie, this) Coherent backup of a live system writing to the filesystem.
Low level backup of a live filesystem requires close cooperation with the filesystem driver in the kernel. I don't think anyone supports such things on Linux. There are a lot of hard edge cases, like: what happens if files are added, deleted, or moved into an area of the disk you have already backed up?

Copy on write snapshots solve this. I believe these are available if you have your FS on an LVM (never tried, and nobody uses LVM on desktop systems) and they are available on BTRFS so perhaps there would be a way to do it on BTRFS.

You can freeze a filesystem (queued writes are flushed, new writes are delayed until unfrozen) and then use any standard utilities to backup the block device. With cat/dd/pv you get a crude image of all sectors, but on XFS you can use xfs_copy which creates a sparse file with an image of the filesystem - all unused sectors are zeroed and turned into "holes" to save space on the backup device. The image can be loopback mounted or xfs_copied back to the disk.

A stupid trick that I have used to efficiently back up an NTFS filesystem from Linux: fill all the unused space with a file containing nothing but zeros, delete it, then create an image using dd with the special option which turns zero blocks into a sparse file.
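GNU dd's flag for this is conv=sparse, and the effect is easy to check on ordinary files (names made up):

```shell
head -c 8M /dev/zero > zeros.img           # 8 MiB of real, allocated zeros
printf 'end' >> zeros.img                  # non-zero tail so the length survives
dd if=zeros.img of=copy.img bs=1M conv=sparse 2>/dev/null
du -h zeros.img copy.img                   # same length, but the copy is nearly all holes
```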

Long story short, situation isn't great, but frankly, I'm not sure if it's that great on Windows either. Seriously, what happens if you move, edit, rename, copy, delete large groups of files/folders during the operation?

edit
BTW, is it really that slow to populate an empty filesystem with files from a tar archive, compared to restoring a low-level image? On Linux?
I rarely do such things, but I'm under the impression it would run close to the full sequential throughput of the disk. I never really feel like filesystem overhead is a significant limit when writing bulk data to an unfragmented FS under Linux.
« Last Edit: December 05, 2020, 09:14:07 pm by magic »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9941
  • Country: us
Re: DOS vs. Linux
« Reply #94 on: December 05, 2020, 09:05:42 pm »
You can get a lot of mileage out of a loopback device, which presents a file as a virtual disk; you can then create a filesystem on it and mount it.  From C, you can write code to read/write blocks, so I suppose just about anything can be done.  Given that the loopback device can have its sectors addressed, non-filesystem applications can also work.

Will it look like a Windows app?  Probably not.  Ordinary users, those who stick to the GUI desktop, probably have no use for such a utility and the folks playing at the command line have a lot of tools at their fingertips.

https://www.thegeekdiary.com/how-to-create-virtual-block-device-loop-device-filesystem-in-linux/

Even dd allows for an offset (skip for the input, seek for the output) to address arbitrary sectors.

I suppose a script file could be created to do just about anything to a file (including a block device).  It may not be pretty but it would probably work quite well.

https://www.computerhope.com/unix/dd.htm

My needs are simple: I just want to copy an image.

ETA:  I tried the loopback device (link above) on my RPi and it works quite well.  I have no idea what I'm going to do with that experience but something may come up.
« Last Edit: December 05, 2020, 09:37:05 pm by rstofer »
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 7357
  • Country: va
Re: DOS vs. Linux
« Reply #95 on: December 05, 2020, 09:30:07 pm »
<some requirements>
<some suggestions>

Thanks. But is there something that already does all that? I am after something I don't need to join Udemy to learn how to use: just say "do the backup right now", or schedule it, and it's done. Just as importantly, a simple single step for doing a restore. (Actually, that might be the more important bit!)
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 7357
  • Country: va
Re: DOS vs. Linux
« Reply #96 on: December 05, 2020, 09:43:45 pm »
Quote
Copy on write snapshots solve this.

Are these the same as the Windows volume shadow service?

Quote
Seriously, what happens if you move, edit, rename, copy, delete large groups of files/folders during the operation?

They get buffered from the drive until the backup is done. Anything above the shadow service doesn't know any different and will see the changed files, but the backup snapshotted the drive at the start, and nothing in that snapshot changes until it's done.

Quote
BTW, is it really that slow to populate an empty filesystem with files from a tar archive, compared to restoring a low level image? On Linux?

Don't know, since I don't currently use Linux seriously (because of the backup situation), and those who do don't seem to think images are worthwhile. However, I would think there is a difference, since you can't really do faster than blatting sequential sectors onto a disk, whereas with a filesystem you need to keep jumping back and forth, updating it as you add files. Presumably it's just a matter of scale and perception, and if you've never tried both it could be hard to appreciate the difference.

On Windows it's quite a bit different. So much so that I don't bother with F&F backups at all - everything is an image, and if I need a couple of files or folders I'll mount the image and copy them like that.

Unfortunately, I can't quote figures since it's a loooong time since I did a F&F restore. Backups probably won't show much difference because of caching, compression and all the overheads of writing to files, etc. It's the restore that is the speed demon (or snail).
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 7357
  • Country: va
Re: DOS vs. Linux
« Reply #97 on: December 05, 2020, 09:50:19 pm »
Quote
You can get a lot of mileage out of a loopback device which creates a virtual disk as a file and can then create a filesystem and mount it.

Thanks, but is there anything that actually does this? I am not after a roll-your-own solution (at least, not until I am very much more of a Linux user, which is looking to be a chicken and egg situation).

Quote
Ordinary users, those who stick to the GUI desktop, probably have no use for such a utility

On the contrary, it's precisely those that would benefit. They just don't know it because there isn't anything available for them to use. Kind of like having a box of screws but any quest for help using them just comes up with a massive variety of hammers.

I'm sure you're also aware that many users, not just GUI users, don't actually have any backup at all. That doesn't mean there is no need, just that they haven't got one for any of many reasons, including not knowing they could or should, as well as not knowing how.

Quote
I tried the loopback device (link above) on my RPi and it works quite well.

Thanks for the info  :-+
 

Online magic

  • Super Contributor
  • ***
  • Posts: 7262
  • Country: pl
Re: DOS vs. Linux
« Reply #98 on: December 05, 2020, 10:53:18 pm »
The equivalent of LVM in Windows appears to be "dynamic disks". It means you don't partition normally, but create a disk area where chunks of space can be arbitrarily and dynamically allocated and assigned to logical volumes. Linux LVM supports taking COW snapshots of logical volumes: if the logical volume which carries the filesystem is 90% of the total LVM area, then up to 10% can be COW-ed. It doesn't care what filesystem or data are on the volume. Problem is, your FS is smaller than the available space, so it's kinda meh to use it solely for snapshotting.

I'm not familiar with Windows VSS, but it appears to work at a higher level. I suspect it's integrated with the NTFS driver and dynamically allocates free space within the NTFS volume to create temporary copies of files and NTFS metadata as they are modified. At any rate, it surely is better than what I expected when you mentioned 3rd-party utilities. 3rd-party utilities which are a GUI wrapper over a tightly integrated system feature can of course provide a whole different level of functionality.

The closest Linux thing is ZFS/BTRFS snapshots. With one command you create a new "root directory" within the volume which points to all the same old files and subdirectories. When anything is modified, content is duplicated so that the other "root directory" doesn't see the change. Recently XFS also gained some level of snapshot capability. Snapshots exist entirely within one filesystem and persist until they are deleted.

A snapshot can be mounted and backed up to external storage with any archiver. Not sure if there is software to make a sector level copy of a snapshot (while ignoring other snapshots). Seems possible in theory, as the snapshot can be mounted RO or not mounted at all.

On filesystems without snapshots, freezing for the duration of taking backup is the only way to get a consistent image. Meh.
« Last Edit: December 05, 2020, 11:00:12 pm by magic »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9941
  • Country: us
Re: DOS vs. Linux
« Reply #99 on: December 05, 2020, 11:26:50 pm »
VSS is quite a package but I doubt that many single-user systems are running it.  I'm not sure how they can limit the shadow copy creation to 10 seconds but they must have a scheme.

I don't think that Linux has anything like this.  For large servers, it seems like a requirement.

https://docs.microsoft.com/en-us/windows-server/storage/file-server/volume-shadow-copy-service
 

