I just got my home server up and running and was wondering what you all recommend for backups. I figure it’s probably worth keeping backups on cloud servers that are external as well. Are there any good services y’all use for that?

  • Qu4ndo@discuss.tchncs.de · 21 points · 1 year ago

    BorgBase, with Borgmatic (Borg) as the software. As far as I know, the whole BorgBase service was built by a homelab guy (with our needs in mind).

    Also: 3-2-1 rule (three copies of your data, on two different types of media, with one copy offsite)!
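
    In case it helps anyone, a minimal borgmatic config is pretty short. Something roughly like this (the BorgBase repo URL is a placeholder, and older borgmatic versions nest these options under location:/storage:/retention: sections instead):

    # /etc/borgmatic/config.yaml — minimal sketch, not a drop-in config
    source_directories:
        - /home
        - /etc

    repositories:
        - path: ssh://xxxxxxxx@xxxxxxxx.repo.borgbase.com/./repo
          label: borgbase

    encryption_passphrase: "use-something-long-and-random"

    keep_daily: 7
    keep_weekly: 4
    keep_monthly: 6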

    • witten@lemmy.world · 9 up / 2 down · 1 year ago

      Ehhh I would say then you have probabilistic backups. There’s some percent chance they’re okay, and some percent chance they’re useless. (And maybe some percent chance they’re in between those extremes.) With the odds probably not in your favor. 😄

    • pacjo@lemmy.world · 3 points · 1 year ago

      Not so much about testing, but one time when I really needed to get to my backups, I had lost the password to the repository (I’m using restic). Luckily a copy of it was stored in Bitwarden, but until I remembered that, it was perhaps one of my worst moments.

      Needless to say, please test backups and store secrets in more than one place.

  • kennyboy55@feddit.nl · 12 points · 1 year ago

    I have an Unraid server which hosts a Docker image of Duplicacy (the web interface requires a paid license). It backs up to Backblaze B2; I have roughly 175 GB backed up, for which I pay $0.87 a month.

    • GlitzyArmrest@lemmy.world · 4 points · 1 year ago

      This is almost my exact backup workflow, with another location in between. Duplicacy is great, highly recommend.

    • Rakn@discuss.tchncs.de · 1 point · 1 year ago

      Paid for the web interface as well. I really like that it’s super simple and just does its job. That would be the one I’d also recommend.

    • lal309@lemmy.world · 1 point · 1 year ago

      Do you have other clients backing up to your Unraid? I’m looking for a complete solution for backing up end-user workstations (Windows, Mac and Linux) to my Unraid server, then backing up my Unraid server to something like Wasabi, Amazon, Backblaze, etc. Preferably a single solution.

      • kennyboy55@feddit.nl · 2 points · 1 year ago

        Yes, I have another server automatically rsyncing important config files to an NFS share, and my PC has a Samba share that I manually back up files to.
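
        The rsync side is just a nightly cron job, roughly like this (the paths are placeholders, not my actual layout):

        # crontab entry: mirror the config directory to the NFS-mounted backup share at 01:00
        0 1 * * * rsync -avz --delete /srv/config/ /mnt/nfs-backup/config/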

  • johntash@eviltoast.org · 8 points · 1 year ago

    rsync.net is great if you need something simple and cheap. Backblaze B2 is also decent, but does have the typical download and API usage cost.

    • Crazeeeyez@lemmy.world · 6 points · 1 year ago

      I had never heard of rsync.net until now. I like the idea but it seems more expensive than B2. $15/TB vs $5/TB. Am I doing the math wrong or reading it wrong?

      • johntash@eviltoast.org · 3 points · 1 year ago

        I don’t see it on their website right now, but they offer a discount if you’re using something like restic/borg and only need scp/sftp access. Their support is also super friendly. I’ve had an account forever and got moved to the 100+ TB pricing even though I have < 50TB stored. YMMV but it doesn’t hurt to ask if they have any additional discounts.

        Also keep in mind that B2 charges for bandwidth too. It’s $5/TB for storage, but $10/TB to download that same data.
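
        To put numbers on that: at those rates, storing 5 TB in B2 runs about 5 × $5 = $25/month, and a single full restore of that 5 TB would add roughly 5 × $10 = $50 in egress.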

        • Crazeeeyez@lemmy.world · 2 points · 1 year ago

          Sure but backup is mostly data in (free on B2). Data out is rare, if ever.

          If I wasn’t backing up 12 TB+ I would actually go with rsync.net for the features, though.

          Borgbase looks interesting, too.

  • spez_@lemmy.world · 8 points · edited · 1 year ago

    I use Restic + Resticprofile to back up everything and store it on my local HDD.

    Then, I use Rclone to sync the local repository to Backblaze B2.

    Here’s my general setup:

    /.config/restic/
    ├── logs
    │   ├── statuses
    │   │   ├── restic-status-20230202T020202.json
    │   │   └── restic-status-20230101T010101.json
    │   ├── restic-check-20230202T020202.log
    │   └── restic-backup-20230101T010101.log
    ├── config
    │   ├── profiles.yaml
    │   ├── excludes.txt
    │   ├── rclone.conf
    │   └── password.txt
    └── bin
    │   ├── restic_0.15.2_linux_arm64
    │   ├── rclone_1.63.1_linux_arm64
    │   └── resticprofile_0.22.0_linux_arm64
    
    And here’s the profiles.yaml itself:

    version: "1"
    
    # Schedules (https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events)
    {{ $SCHEDULE_RESTIC_BACKUP := "*-*-* 22:00:00" }}       # Daily at 10PM
    {{ $SCHEDULE_RESTIC_CHECK := "Sat *-*-* 04:00:00" }}    # Weekly at 4AM on Saturday
    {{ $SCHEDULE_SYNC_BACKUP := "Sun *-*-* 21:30:00" }}     # Weekly at 9.30PM on Sunday
    {{ $SCHEDULE_POSTGRES_BACKUP := "Fri *-*-* 20:00:00" }} # Weekly at 8PM on Friday
    
    # Directories
    {{ $LOCATION_RESTIC_BINARY := "/home/deck/Desktop/.config/restic/bin/restic_0.15.2_linux_arm64" }}
    {{ $LOCATION_RESTIC_REPO := "/home/deck/Desktop/restic-repo" }}
    {{ $LOCATION_RESTIC_LOG := "/home/deck/Desktop/.config/restic/logs" }}
    {{ $LOCATION_RESTIC_STATUS := "/home/deck/Desktop/.config/restic/logs/statuses" }}
    {{ $LOCATION_RESTIC_BLOCKED_FILE := "/home/deck/Desktop/.config/restic/BLOCKED" }}
    {{ $LOCATION_RCLONE_BINARY := "/home/deck/Desktop/.config/restic/bin/rclone_1.63.1_linux_arm64" }}
    {{ $LOCATION_RCLONE_REPO := "bucket:restic-backup-12345" }}
    {{ $LOCATION_RCLONE_CONFIG := "/home/deck/Desktop/.config/restic/config/rclone.conf" }}
    {{ $LOCATION_RESTICPROFILE_LOCK := "/tmp/resticprofile-default.lock" }}
    {{ $LOCATION_POSTGRES_DUMP := "/home/deck/Desktop/dumps" }}
    {{ $LOCATION_PRIMARY_BACKUP_SOURCE := "/home/deck/Desktop/" }}
    
    # Configs
    {{ $CONFIG_CURRENT_TIME := .Now.Format "20060102T150405" }}
    {{ $CONFIG_RESTIC_PASSWORD := "/home/deck/Desktop/.config/restic/config/password.txt" }}
    {{ $CONFIG_RESTIC_EXCLUDE := "/home/deck/Desktop/.config/restic/config/excludes.txt" }}
    
    global:
      default-command: snapshots                      # Run 'snapshots' when no command is specified
      initialize: false                               # Do not initialize a repository if none exists
      priority: low                                   # Use priority class on Windows and "nice" on Unixes
      min-memory: 100                                 # Minimum required RAM for Resticprofile to start
      restic-lock-retry-after: 5m                     # Retry restic lock acquisition every 5 minutes
      restic-stale-lock-age: 10h                      # Unlock stale lock if age exceeds 10 hours
      restic-binary: '{{ $LOCATION_RESTIC_BINARY }}'  # Location of the Restic binary
    
    default:
      lock: '{{ $LOCATION_RESTICPROFILE_LOCK }}'      # Local lockfile to prevent concurrent profile runs
      force-inactive-lock: true                       # Detect and remove stale locks
      initialize: true                                # Initialize repository if it doesn't exist
      repository: '{{ $LOCATION_RESTIC_REPO }}'       # Path to Restic repository
      password-file: '{{ $CONFIG_RESTIC_PASSWORD }}'  # File containing repository password
      status-file: '{{ $LOCATION_RESTIC_STATUS }}/{{ $CONFIG_CURRENT_TIME }}-restic-status.json'  # Output status file
      compression: 'max'                              # Maximum compression level
      run-after-fail:                                 # Block syncing if there was a failure. TODO: Add an email
        - 'echo "The command ${PROFILE_COMMAND} has failed in ${PROFILE_NAME}. Please check the logs." > {{ $LOCATION_RESTIC_BLOCKED_FILE }}'
    
      backup:
        run-before:                                   # Bring down Docker before backup
          - 'systemctl stop docker.socket'
          - 'systemctl stop docker'
        run-finally:
          - 'grep --invert-match -E "^unchanged|\(0 B added, 0 B stored\)|\(0 B added\)" {{ tempFile "backup.log" }} > {{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-backup.log'  # Copy log file, stripping out any unchanged files
          - 'systemctl start docker'                  # Bring Docker back online after backup
        one-file-system: false                        # Don't restrict the backup to a single file system
        no-error-on-warning: true                     # Don't consider warnings as backup failures
        source:                                       # Directories to back up
          - '{{ $LOCATION_PRIMARY_BACKUP_SOURCE }}'
        exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}'  # File containing exclude patterns
        exclude-caches: true                          # Exclude cache files
        schedule: '{{ $SCHEDULE_RESTIC_BACKUP }}'     # Backup schedule
        schedule-permission: system                   # Schedule permission
        schedule-lock-wait: 10m                       # Wait time for the lock during schedule
        schedule-log: '{{ tempFile "backup.log" }}'   # Log file to /tmp. This contains all information, including unchanged files which we do not care about
        verbose: 2                                    # Log details about processed files
    
      check:
        schedule: '{{ $SCHEDULE_RESTIC_CHECK }}'      # Verification schedule
        schedule-permission: system                   # Schedule permission
        schedule-lock-wait: 10m                       # Wait time for the lock during schedule
        schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-check.log'  # Log file
        read-data: true                               # Verify data during check
    
      prune:
        dry-run: true                                 # Only prune if safe to do so, change manually
        repack-uncompressed: true                     # Repack all uncompressed data
    
      forget:
        dry-run: true                                 # Only forget if safe to do so, change manually
    
      rewrite:
        dry-run: true                                 # Only rewrite if safe to do so, change manually
        forget: true                                  # Remove original snapshots after creating new ones
        exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}'  # File containing exclude patterns
    
      mount:
        allow-other: true                             # Allow other users to access the mount point
    
      rebuild-index:
        read-all-packs: true                          # Read all pack files to generate new index from scratch
    
    # The following shell profiles are simply to run other shell scripts at a scheduled time
    # We do not actually run the primary Restic commands listed, as we exit the process early
    
    shell-postgres:                                   # Profile to run shell scripts only. We exit the current process before Restic can run.
      backup:
        schedule: '{{ $SCHEDULE_POSTGRES_BACKUP }}'   # Postgres backup schedule
        schedule-permission: system                   # Schedule permission
        schedule-lock-mode: ignore                    # Ignore locks, if any
        schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-postgres-backup.log'  # Log file
        dry-run: true                                 # Don't write data
        run-before:                                   # Dump postgres databases
          - 'chmod 777 /var/run/docker.sock'
          - 'docker exec -t immich-postgres pg_dumpall -c -U postgres | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Immich database: {{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
          - 'docker exec -t joplin-postgres pg_dumpall -c -U joplin | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Joplin database: {{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
          - 'kill $$'
    
    shell-sync:
      backup:
        schedule: '{{ $SCHEDULE_SYNC_BACKUP }}'       # Sync backup schedule
        schedule-permission: system                   # Schedule permission
        schedule-lock-mode: ignore                    # Ignore locks, if any
        schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-rsync-backup.log'  # Log file
        dry-run: true                                 # Don't write data
        run-before:                                   # Sync the Restic repo, after checking if the repository is in good health
          - 'if [ -f "{{ $LOCATION_RESTIC_BLOCKED_FILE }}" ]; then echo "There has been a problem with the Restic repository, please check the logs. If everything is okay, delete the BLOCKED file." && kill $$; fi'
          - '{{ $LOCATION_RCLONE_BINARY }} -v sync {{ $LOCATION_RESTIC_REPO }} {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }} --b2-hard-delete'
          - '{{ $LOCATION_RCLONE_BINARY }} cleanup {{ $LOCATION_RESTIC_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }}'
          - 'kill $$'
    

    Resticprofile doesn’t let me run other shell commands on a schedule, and because I wanted everything in a single configuration, I just created two new profiles which call the backup command. I then made the shell commands run before Restic, and then finally killed the instance before it got to actually run, which effectively does what I needed.
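
    For anyone reusing this: once the config is in place, the day-to-day commands are short. Roughly (double-check the resticprofile docs for your version, since flags change):

    # register scheduled jobs for every profile defined in the config
    resticprofile schedule --all

    # run a single profile by hand, e.g. the default backup or the sync-only profile
    resticprofile backup
    resticprofile shell-sync.backup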

    • pacjo@lemmy.world · 2 points · 1 year ago

      It’s the first time I’ve heard about resticprofile, and it looks nice. So far I’ve been using crestic for configuration files. Do you know how they compare?

      • spez_@lemmy.world · 1 point · 1 year ago

        It seems like they have the same objective: making Restic easier to configure. I’d never heard of crestic until now. I’d say stick with whatever you’re comfortable with.

  • wibo@lemmy.world · 7 points · 1 year ago

    I use restic to back up my Raspberry Pis to my Synology NAS, and back up my NAS to Backblaze.

    • loganb@lemmy.world · 3 points · 1 year ago

      I second restic. I’ve been using it for a year now and have been generally very happy. I’ve actually had to use it on a couple of occasions to restore directory contents and even recover a complete workstation drive. I had relatively easy success in both scenarios.

      • Jajcus@kbin.social · 3 points · 1 year ago

        I knew Restic before Kopia and made a set of systemd units to run Restic backups on my home server and office workstation (both online 24/7).

        Kopia seems much nicer for a regular user, so I use it on my own and my family’s laptops. I used to use Duplicati there, but that project seems dead.

    • jcg@halubilo.social · 1 point · 1 year ago

      +1 for Backblaze. I use Docker for everything, with volumes mounted directly in each service’s folder alongside its docker-compose file. So I just tar my services directory with everything in it and pipe it to rclone, which connects to Backblaze and has an “rcat” feature so you can pipe data directly to the destination.
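
      In practice it’s a one-liner along these lines (the remote and bucket names are placeholders):

      # stream a compressed tarball straight into B2 without writing a local copy
      tar -czf - /opt/services | rclone rcat b2remote:my-bucket/services-$(date +%F).tar.gz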

    • monty@lemmy.one · 1 point · 1 year ago

      Restic and then rclone to backblaze? Or is there a way to restic directly to backblaze?

      • mellitiger@iusearchlinux.fyi · 1 point · edited · 1 year ago

        I do prefer having a local copy of my backups (and therefore I use rclone), but AFAIK restic does support B2 directly…
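
        If memory serves, it’s something like this (the bucket name and credentials are placeholders):

        # restic talks to B2 natively via its b2: backend
        export B2_ACCOUNT_ID="<your-b2-key-id>"
        export B2_ACCOUNT_KEY="<your-b2-application-key>"
        restic -r b2:my-bucket:/restic-repo init
        restic -r b2:my-bucket:/restic-repo backup /home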

    • Arrayrepairman@lemmy.world · 3 points · 1 year ago

      That is great for hardware failures, but what about disasters? I would hate to lose my house to a fire and all the data (including things not replaceable, like family photos) I have on my server at the same time because my primary and backup were both destroyed.

      • GustavoM@lemmy.world · 1 point · 1 year ago

        Eh… you’ve got a point there. Then again, there are always pen drives and other extremely small devices you can copy your most important/crucial files onto and carry along with your house/car keys or something like that.

    • raiun@lemmy.world · 2 points · 1 year ago

      While I agree with you, hard drives do have a shelf life. How many years seems to be up for debate, but it does exist. If you don’t have multiple drives of different ages, you may be in a world of hurt one day.

      • Chadus_Maximus@lemm.ee · 2 points · edited · 1 year ago

        Why? If you check the drive once a month, and it fails once per 10 years on average, the time until both the backup drive and the main drive fail simultaneously is on average 2340 years. Of course they are much more likely to fail if they’re old, but the odds are still very small.

      • randombullet@lemmy.world · 1 point · 1 year ago

        I have a hot storage NAS that backs up to a warm storage NAS.

        I back up every week and scrub every month.

        I have 2 x ZFS1 pools that contain 3 x 20 TB disks each.

        With ECC RAM, scrubbing, and independent pools, it’ll take a house fire to kill my local storage.

        I also have a continuous backup to Backblaze and a yearly encrypted backup that I ship to a friend across the world.

  • Revan343@lemmy.ca · 5 points · 1 year ago

    rsync.net, and learn to use Borg; they’re stupid cheap if you’re technically proficient enough to handle the Borg setup yourself. Like, they charge by the gigabyte, but it’s 1.5¢/GB at the most expensive, and cheaper in bulk.
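
    If it helps, the Borg side is only a couple of commands. Something like this (the account/hostname is a placeholder for whatever rsync.net assigns you):

    # create an encrypted repo on the remote, then push a snapshot to it
    borg init --encryption=repokey-blake2 ssh://user123@user123.rsync.net/./home-server
    borg create --stats --compression zstd ssh://user123@user123.rsync.net/./home-server::{hostname}-{now} /srv /etc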

  • ErwinLottemann@feddit.de · 5 points · 1 year ago

    Borg with an external hard drive and BorgBase as a remote. I use the 2-2-1 rule (🙈), as I struggle to find a good way to do another backup, and RAID does not count 😬

  • ghariksforge@lemmy.world · 5 up / 1 down · 1 year ago

    External HDD on my Wi-Fi network. It runs Samba, so I can just drag and drop folders and they transfer over Wi-Fi.

  • Pechente@feddit.de · 4 points · 1 year ago

    Backups and archived files go to my home server, which then backs up to Backblaze B2.

    • hoodlem@hoodlem.me · 2 points · 1 year ago

      My setup exactly, with the addition of using M-Discs to back up my most important stuff.