My journey into the land of Linux

BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian

I bothered the Discord server enough that now I'm gonna inconvenience the forums.

Origins:

My reason for trying Linux is that I wanted to compartmentalize Plex Media Server and Usenet software and easily back it up.

Originally everything was on a Windows 10 installation, but I was dumb and installed Insider previews. The Insider build didn't work well with NBA 2K16/17 (which I play almost every day) for some reason, so I needed to leave the Insider program and reinstall. The thing is, that would create a lot of downtime and Carla would complain about not having her shows.

I decided I was gonna go the VM route, where I could just back up the whole image. Problem is, I didn't have a spare Windows license lying around, so I decided to go with Linux. Ubuntu.

The first time, setting everything up took me like 2 days. I nuked the VM and restarted so many times. After extensive googling and guide-following, I managed to set up the VM with all the software and even a reverse proxy.

Almost everything was on Docker. In essence, I had multiple mini VMs running inside a VM, running on Windows. Talk about overhead. It did keep things compartmentalized, though. Now I don't know why I didn't leave it at that.

Usenet download programs like SABnzbd and NZBget had poor performance when repairing and unpacking, so I decided to put them back on the host. Plex was bottlenecked by the CPU config in the VM, so I put that on the host as well.

In between all this, I tried messing around with pfSense on an old nettop I previously used as an HTPC. Did lots of reading for that, but in the end it didn't want to play ball with the nettop and the USB NIC I tried using. Nuked pfSense and installed Ubuntu on it, just in case.

I proceeded to offload everything else from the VM to the nettop and put Pi-hole on it while I was at it (very easy to install and use).

I had 2 problems that stood out. First, Pi-hole's DNS server didn't work after a reboot; I had to restart it manually. I'm thinking it's because I had left the USB NIC connected during setup and it didn't know which interface to use when booting. Second, my BitTorrent Docker container had permission problems until the container was restarted. It was the only container having problems, and I don't know why.

Today, I done messed up while trying to install a reverse proxy. I was installing nginx on the nettop, but I didn't realize Pi-hole had already installed a web server (lighttpd). That gave me problems I couldn't figure out, so I decided to format the nettop and start over. Before doing that I found this article on how to use Pi-hole with nginx, but by then I had already started removing things I shouldn't have, so I needed to format anyway. I only knew enough about Linux to get me in trouble.

@drasnor convinced me to try Arch Linux and do this the hard way (compared to Ubuntu/Lubuntu).

I feel like I learned everything kinda half-assed. I still don't know the fundamentals. I know very little of the command line and Linux's filesystem. I don't know the "correct" way to do most tasks. If I mess up, I don't know how to undo it.

So now I'm gonna partition the SSD in the nettop so I can easily back up any installation I make, and I'm gonna install Arch Linux.

This will be where I vent while on my journey.


Comments

  • BlueTattoo Boatbuilder Houston, TX Icrontian

    @BlackHawk, welcome to the world of free and cranky software. After I retired and Windows XP lost support, I started using (rather than just occasionally playing with) Linux. I started with Zorin OS because it looked like Windows 7 and worked on my netbook. Later I started looking for a distro to run on my desktop, like you, in a VM. Arch proved to be too hard for my limited skills as I spent too much time looking stuff up. I tried a bunch, but settled on Ubuntu GNOME. I like the UI and it has wide support for when I have a specific task or problem to solve.

    The one thing that really worked for me in the early days was to copy the VM before making major changes. Then, if it blew up, I’d just delete the mistake and be back where I started without rebuilding.

    I eventually went dual boot on my laptop for better performance and because I don’t really mess with that configuration much. I still look at different distros, and have been thinking about trying Antergos. It’s Arch, but supposedly with less pain.

    Good luck.

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian

    @BlueTattoo said:
    I eventually went dual boot on my laptop for better performance and because I don’t really mess with that configuration much. I still look at different distros, and have been thinking about trying Antergos. It’s Arch, but supposedly with less pain.

    I might have to look at Antergos 'cause right now trying to set up Arch just threw me in the deep end. If their wikis were an audiobook, they'd be voiced by a stern German mother. Schnell!

  • ardichoke Icrontian

    Arch is great... if you want to know how everything works. Once you've set up an Arch desktop successfully, you'll be pretty well versed in how Linux works, or at least most of the basics. Also, the Arch Wiki is an exceptionally good source of information. Antergos is good if you want an Arch system without all the pesky setup and the learning that goes with it. Plus, a fair amount of the Arch Wiki will still be relevant.

    As for Docker in a VM, that's not at all a bad choice. In fact, it's how a lot of places run their Docker images these days, because you get the isolation of a full VM with the benefits of Docker. Also, Docker isn't really a VM... it's a container. They're similar but not the same thing. You're not adding another level of virtualization; containers run on the native OS, which is why they can be spun up so quickly and have less overhead.
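
    A minimal sketch of that difference, purely illustrative (the "nginx" image and "demo" name are placeholders, not anything from this thread):

    # a container is just an isolated process on the host kernel, so there's no guest OS to boot
    docker run -d --name demo nginx     # up in about a second
    docker exec demo uname -r           # prints the host's kernel version, not a guest's
    docker rm -f demo                   # torn down just as quickly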

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian
    edited May 2017

    So it took me 1:30hr without an actual guide just to install GRUB parted so I can try and partition. :sawed:

  • ardichoke Icrontian
    edited May 2017

    Yeah... GRUB has nothing to do with partitioning; it's the bootloader, the piece of code that your BIOS loads to bootstrap the actual system. If you ever want to install Arch, you should follow the beginners' guide (https://wiki.archlinux.org/index.php/Installation_guide), but yeah, it's not for the faint of heart. You should set aside a few hours to read the documentation and work through it. It's something you do to gain a deep understanding of how things work and build a system highly tuned to your needs, not so you can have a quick, up-and-running Linux system.
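
    For reference, a rough sketch of how the two steps differ, assuming a BIOS/MBR machine; /dev/sda and the partition layout here are placeholders, so adapt them to the guide:

    # partitioning is done with parted (or fdisk), not GRUB
    parted --script /dev/sda mklabel msdos
    parted --script /dev/sda mkpart primary ext4 1MiB 100%
    mkfs.ext4 /dev/sda1

    # the bootloader only comes later, once the base system is installed and you've chrooted in
    grub-install --target=i386-pc /dev/sda
    grub-mkconfig -o /boot/grub/grub.cfg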

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian

    I'll be honest. I think Arch is a bit too much. I don't know enough about Linux in general to set it up. I got stuck at the partitioning part 'cause I didn't know how to do it the way I wanted.

  • Karma Likes yoga Icrontian

    I like Arch a lot, but it's not for everyone, and I don't even use it any longer; it's just not worth it for me. I use both Ubuntu and Fedora depending on what I'm trying to do, and both have a lot of support.

    Anyways, for your command-line learning, here is a book.

    And How I feel working with Linux

  • BlueTattoo Boatbuilder Houston, TX Icrontian

    @BlackHawk, if you need to get something working now and learn Linux later, you might try one of the easily installed distros. I use Ubuntu GNOME, but a lot of new users do well with Ubuntu MATE. The MATE desktop has several different configurations available, including options that look kinda like a Mac or Windows. It's been a while since I played with it, so there are probably some features I don't know about.

    Any of the Ubuntu flavors will install quickly and have plenty of community support. And they all run Docker.

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian
    edited May 2017

    I managed to set up Docker and the containers I needed in Antergos. Pi-hole's installer doesn't work with Arch, and the version in the AUR isn't up to date. I'll need to figure out how to run it in Docker, or buy a Raspberry Pi.

    During the setup and config, I think I spent the most time trying to mount a share. I mounted it as a systemd unit instead of as a mount entry and it was giving me errors. Looking at my old config from Ubuntu/Lubuntu made me realize the credentials file was incorrect. Fixed that and I'm good to go. On to reading about Pi-hole and Docker.

    Edit: This seems like my best bet for Pi-hole on Docker.
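
    For anyone following along, the shape of it is roughly this; the image name, ports, and paths below are assumptions on my part, so check the docker-pi-hole README for the real flags and required environment variables:

    docker run -d --name pihole \
      --restart=unless-stopped \
      -p 53:53/tcp -p 53:53/udp -p 80:80 \
      -v /srv/pihole/etc-pihole:/etc/pihole \
      pihole/pihole   # image name is a guess; use whatever the project publishes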

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian

    Stuck trying to implement this cron job.

    I've never actually dealt with cron, and maybe it's different with Arch and/or Cronie. Google results are full of people who are already knowledgeable about cron, so none of the basic questions I'd ask get asked or answered.

  • primesuspect Beepin n' Boopin Detroit, MI Icrontian

    I CRON TIC

  • drasnor Starship Operator Hawthorne, CA Icrontian

    @BlackHawk said:
    Stuck trying to implement this cron job.

    I've never actually dealt with cron, and maybe it's different with Arch and/or Cronie. Google results are full of people who are already knowledgeable about cron, so none of the basic questions I'd ask get asked or answered.

    I'm not sure exactly what trouble you're having but I have found the Gentoo docs have an alright treatment of cron. This may help you get started: https://wiki.gentoo.org/wiki/Cron#Scheduling_cron-jobs
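
    The short version, in case it helps: a crontab line is five time fields followed by the command. A minimal sketch, with a made-up script path:

    # minute  hour  day-of-month  month  day-of-week  command
    59 1 * * 0  /usr/local/bin/example.sh    # every Sunday at 01:59 (0 and 7 both mean Sunday)
    # note: files in /etc/cron.d and /etc/crontab take an extra user field before the command;
    # a per-user crontab (edited with crontab -e) does not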

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian
    edited May 2017

    @drasnor said:

    @BlackHawk said:
    Stuck trying to implement this cron job.

    I've never actually dealt with cron, and maybe it's different with Arch and/or Cronie. Google results are full of people who are already knowledgeable about cron, so none of the basic questions I'd ask get asked or answered.

    I'm not sure exactly what trouble you're having but I have found the Gentoo docs have an alright treatment of cron. This may help you get started: https://wiki.gentoo.org/wiki/Cron#Scheduling_cron-jobs

    Yeah, I'm gonna need a more beginner friendly version of that page, if possible. I still don't know enough to intuitively get some of the things they're saying.

    This entry in ArchWiki explained a bit about cronie.

    I guess what I'm stuck on is this: in the cron job mentioned above for Pi-hole, other than the first two lines of code (DOCKER_NAME and PATH), what else is placeholder text? Are those two variables, so I don't need to fill anything out in the rest of the file? I know what it wants for DOCKER_NAME, but what PATH? The volume I gave the Docker container when I created it?

    Finally, how do I run that damn cron job? Is it a file that I put into /etc/cron.weekly? Do I take the snippets of uncommented code, edit them, and put them in my crontab?

  • AlexDeGruven Wut? Meechigan Icrontian

    I remember being where you're at quite well. Don't worry, keep pushing, and stuff will start to click without you even realizing it sometimes.

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian
    edited May 2017

    Went to YouTube, found this vid:

    3/4 of the way through it clicked. I think.

    I'm not sure if I did it correctly, but I ran sudo crontab -e -u root to edit root's crontab (it didn't have one yet) and filled it with the content from the cron job I was having trouble implementing.

    I think sudo crontab -l -u root shows what I wanted:

    [blackhawkpr@pepper etc]$ sudo crontab -l -u root
    [sudo] password for blackhawkpr:
    # Pi-hole: A black hole for Internet advertisements
    # (c) 2015, 2016 by Jacob Salmela
    # Network-wide ad blocking via your Raspberry Pi
    # http://pi-hole.net
    # Updates ad sources every week
    #
    # Pi-hole is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 2 of the License, or
    # (at your option) any later version.
    #
    # This file is under source-control of the Pi-hole installation and update
    # scripts, any changes made to this file will be overwritten when the software
    # is updated or re-installed. Please make any changes to the appropriate crontab
    # or other cron file snippets.
    
    # Your container name goes here:
    DOCKER_NAME=pihole
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    
    # Pi-hole: Update the ad sources once a week on Sunday at 01:59
    #          Download any updates from the adlists
    59 1    * * 7   root    PATH="$PATH:/usr/local/bin/" docker exec $DOCKER_NAME pihole updateGravity > /dev/null
    
    # Update docker-pi-hole by pulling the latest docker image and re-creating your container.
    # pihole software update commands are unsupported in docker!
    30 2    * * 7   root    PATH="$PATH:/usr/local/bin/" docker exec $DOCKER_NAME pihole updatePihole > /dev/null
    
    # Pi-hole: Flush the log daily at 00:00 so it doesn't get out of control
    #          Stats will be viewable in the Web interface thanks to the cron job above
    00 00   * * *   root    PATH="$PATH:/usr/local/bin/" docker exec $DOCKER_NAME pihole flush > /dev/null
    

    I don't exactly know how to test it without waiting for it to actually execute.
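
    One way to sanity-check it without waiting (my own guess at an approach, not anything official) is to run the command part of a line by hand and make sure the cron daemon is up:

    sudo docker exec pihole pihole updateGravity   # same command the weekly job runs
    systemctl status cronie                        # confirm the cron daemon is running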

    Edit:

    Kinda cheated and looked at the scheduled cron jobs in Webmin, but I'm not sure if it looks right.

  • Thrax 🐌 Austin, TX Icrontian

    The year of Linux on the desktop is finally upon us.

  • drasnor Starship Operator Hawthorne, CA Icrontian

    @BlackHawk said:
    I'm not sure if I did it correctly, but I ran sudo crontab -e -u root to edit root's crontab (it didn't have one yet) and filled it with the content from the cron job I was having trouble implementing.

    Kinda cheated and looked at the scheduled cron jobs in Webmin, but I'm not sure if it looks right.

    Looks correct based on the crontab you posted. Daily log flush, weekly update check, and a couple of environment variables set for the jobs.

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian
    edited May 2017

    Trying to iron out problems before I learn how to make a system image. One of my Docker containers (Transmission) ran into permission issues with a mounted share.

    I used "automatic mounting as a systemd unit" from the Arch wiki's Samba page to mount it. It mounted and seemed to work, but users other than root could only read, not write. I realized this after nothing would download. I checked the docker command I used to create the container and everything seemed fine.

    It wasn't a permissions issue on the host side, since root could read and write. I had to check the mount unit I made for the share to see what was wrong or missing.

    Although the wiki article doesn't tell you (unless it's implied), you need to specify a user (uid) in the mount options. After putting in my user, everything seems to be working.

    Here are the contents of mnt-downloads.mount, which is used to mount the share:

    [Unit]
    Description=downloads
    Requires=systemd-networkd.service
    After=network-online.target
    Wants=network-online.target
    
    [Mount]
    What=//192.168.1.3/downloads/
    Where=/mnt/downloads/
    Options=credentials=/xxxxxxx/xxxxxx/.smbcredentials,uid=xxxxxxxxx,iocharset=utf8,rw,x-systemd.automount
    Type=cifs
    TimeoutSec=30
    
    [Install]
    WantedBy=multi-user.target
    
  • drasnor Starship Operator Hawthorne, CA Icrontian

    Using your own user for SMB mounts is nicely expedient though you can also specify a group ID (gid) such as the sambashare group shown in the Arch docs: https://wiki.archlinux.org/index.php/samba#Creating_usershare_path . This can be used to manage permissions for groups of users in a multi-user environment.
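
    In the mount unit above that would look something like this (the group name and credentials path here are just examples):

    Options=credentials=/path/to/.smbcredentials,uid=youruser,gid=sambashare,iocharset=utf8,rw,x-systemd.automount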

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian

    @drasnor said:
    Using your own user for SMB mounts is nicely expedient though you can also specify a group ID (gid) such as the sambashare group shown in the Arch docs: https://wiki.archlinux.org/index.php/samba#Creating_usershare_path . This can be used to manage permissions for groups of users in a multi-user environment.

    Do you know of any resources for users and groups in general? I kinda get what they're used for, but I don't know the best practices.

    Should I make a group for each general task? Like one group for Usenet stuff, including Docker and mounted shares? And if I were to use the system for something else, make a group for that task?

  • drasnor Starship Operator Hawthorne, CA Icrontian

    You create users and groups for people and software that you don't trust (or that, like most system daemon-type software, tells you not to trust it) with permissions to the whole system. Running stuff as root is expedient, but if there's any kind of latent defect waiting to be exploited, it can run amok all over your system.

    Setting up and troubleshooting this sort of thing manually is a huge pain, which is why most server/system daemon packages create a user and/or group at install time with the relevant permissions. You might check whether these were created when you installed the software, or whether the provider offers a tutorial.

    Arch docs cover the very basics of user and group management: https://wiki.archlinux.org/index.php/users_and_groups

    Note the default system groups at the bottom; these are useful starting points for generic daemons that need to interact with your system. Regarding your group-per-task question, it's easier to manage that as one user per system daemon rather than one group per task. Those users get access to only the specific files they need to operate, and are assigned to the nobody group and the nologin shell to prevent access to the rest of the system.
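
    A minimal sketch of what that looks like in practice; the "mydaemon" name and path are made up for illustration:

    # system account, nobody group, no login shell
    sudo useradd -r -g nobody -s /usr/bin/nologin -d /var/lib/mydaemon mydaemon
    # grant it only the files it needs
    sudo chown -R mydaemon:nobody /var/lib/mydaemon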

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian

    @drasnor does the usershare path only apply if the machine is hosting a share? Right now it's the client.

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian

    I want to make a backup, but I don't know the best way to do it. I'd like to back up to my desktop and use some type of compression while I'm at it. Any ideas?

  • Kwitko Sheriff of Banning (Retired) By the thing near the stuff Icrontian

    @primesuspect said:
    I CRON TIC

    I believe I am allowed to ban you for life for this comment.

  • AlexDeGruven Wut? Meechigan Icrontian

    Says (Retired)

  • drasnor Starship Operator Hawthorne, CA Icrontian

    @BlackHawk said:
    @drasnor does the usershare path only apply if the machine is hosting a share? Right now it's the client.

    Sorry, I didn't read closely enough. You need to have an empty directory on your filesystem to use as a mount point and that directory should be owned by the samba user and have read/write permissions for users with the 'users' group (e.g. sudo chown samba:users /mnt/mymountpoint). From there, you have a few options for how to automatically mount the network share:

    https://wiki.archlinux.org/index.php/samba#Automatic_mounting

    I ordinarily use an fstab entry but it looks like systemd mounting (as you appear to be doing above) is the new hotness. I'll have to try that sometime.
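
    For comparison, the fstab version of the same mount is a single line; the options mirror the unit file above, and the credentials path is a placeholder:

    # /etc/fstab
    //192.168.1.3/downloads  /mnt/downloads  cifs  credentials=/path/to/.smbcredentials,uid=youruser,iocharset=utf8,rw,x-systemd.automount  0 0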

  • Garg Purveyor of Lincoln Nightmares Icrontian

    @Thrax said:
    The year of Linux on the desktop is finally upon us.

    I know is good joke, but allow me to be baited anyway.

    For me it really is the year of the Linux desktop - I installed Maui Linux at home a couple weeks ago, and it's been amazing. Comes with Steam and uBlock out of the box, so it felt ready to use right away. Plus, Civ6 plays as well as I would expect it to in Windows*

    * I haven't actually installed it in Windows, but the FPS is similar to Civ5 in Windows, and 6 is a much prettier game. Thanks Radeon Linux driver devs :D

    The stuff up above sucks in Windows, too. crontabs aren't any worse than Task Scheduler. Samba shares suck on every platform.

    How do you know someone is a Linux zealot? Don't worry, they'll tell you.

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian
    edited May 2017

    I may just go the rsync route to back up only certain folders, specifically the Docker configs. I installed OpenSSH on my Windows 10 desktop, but I can't seem to access the drive with my backup folder; I can only access C: and E:. More to troubleshoot, I guess. I'm a dumbass and actually forgot how to use Windows's command prompt.

    Edit:

    Not sure how rsync works from Linux to Windows or vice versa. How do you mix a Linux file path and a Windows file path in a single command?

    Edit 2:

    The reason I want to rsync over SSH instead of mounting the backup drive (not knowing a better method) is so I don't leave the backup drive mounted. What if something compromises the Linux installation, overwrites my files, and holds them for ransom?

    I could use the mnt-backups.mount stuff and have a cron job mount and unmount it around the backup run. Either way, I'd still like to know how to mix the file paths in a single command.

  • drasnor Starship Operator Hawthorne, CA Icrontian

    Most of them just use the Linux path notation (e.g. a Windows path would be C:/stuff/more_stuff), since the Windows \ is an escape character for nearly everything.

    You can tell rsync to only sync one way; unfortunately, that doesn't protect you from the possibility that Windows gets compromised. You can set the rsync interval slower to improve your odds of identifying the compromised system and turning off the rsync job, but you can still get unlucky if you get hacked right before the job fires. The only way I'm aware of to completely protect yourself from ransomware is cold backups.
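
    On the path-mixing question, a hedged sketch of a one-way push over SSH, assuming an rsync binary is reachable on the Windows side through OpenSSH; the source folder, user, and host below are placeholders:

    rsync -avz /srv/docker-configs/ user@windows-desktop:'C:/backups/docker-configs/'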

    I shelled out money for an offsite incremental backup solution so I don't need to think about these sorts of cases. See also the wisdom of Mickens: https://www.usenix.org/system/files/1401_08-12_mickens.pdf

  • BlackHawk Bible music connoisseur There's no place like 127.0.0.1 Icrontian
    edited May 2017

    OK, so what is the best way to make backups? Clonezilla? I used to use Acronis on Windows, but it's an old license.

    Edit:

    Crashplan has been recommended here, but using it on a headless server is not officially supported and seems overly complicated.
