• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 20th, 2023

  • I don’t know where I read it but IIRC religion is being used as a simple answer to very difficult and possibly uncomfortable questions: why are we here and what is our purpose?

    It is fairly easy to believe that something, a god, created us, rather than that the existence of humanity was just a fluke, a stroke of luck that allowed us to evolve to where we are now. Creation is simply easier to grasp, even though the other explanation is the proven one: that we evolved from simple beings into more complex organisms instead of just “being created”. Evolution raises so many difficult questions that it is easier to understand and believe that someone simply wanted us to exist.

    When someone believes in a religion, there is also always some form of “it won’t be over” scenario: when you die, it isn’t truly “the end”, you just won’t vanish. The alternative can be terrifying for many, because the follow-up question would be: what sense does it make to live at all when our existence is so insignificant in comparison to everything else?

    So, in short, it is an easy tool for making sense of things, one that almost everyone can understand.

    Unfortunately, things like this can and will be abused.




  • Just like the application running in your docker container, the “image” that you base your container on is a separate thing with its own versions.

    A newly built and published docker image doesn’t necessarily mean that the application inside it has a new version (though it definitely can); it means that something about the image has changed, maybe the dependencies were updated or the build process for the image was optimized.
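    As a quick (made-up) illustration, you can compare when an image was built with the version the application inside it reports; the image name and the --version flag here are just placeholders:

    ```
    # when was this image last built? (docker inspect is real, "myapp" is a stand-in)
    docker inspect --format '{{.Created}}' myapp:latest

    # which version of the application is inside? (assumes the app has a --version flag)
    docker run --rm myapp:latest myapp --version
    ```

    The first date can change with every rebuild while the second stays exactly the same.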




  • Unraid “supports” docker compose. You can install and use it, but you won’t be able to take advantage of how Unraid handles docker containers.

    All Unraid really does is make docker more accessible for the normal user. In the end, a container template just constructs a docker run command (roughly like the sketch below).

    So you can either use Portainer to manage stacks through a web UI, or install compose and SSH into the Unraid server every time.
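    To give an idea of what that means in practice, here is a rough sketch of the kind of command a template ends up running; the container name, ports, and paths are just an example, not an actual Unraid template:

    ```
    docker run -d --name=jellyfin \
      --net=bridge \
      -p 8096:8096 \
      -v /mnt/user/appdata/jellyfin:/config \
      -v /mnt/user/media:/media \
      jellyfin/jellyfin:latest
    ```

    Everything the template UI asks for (ports, paths, variables) just ends up as flags on that single command.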



  • I recently had the pleasure of creating an ffmpeg command to transcode a video into 10-bit HEVC with Quick Sync.

    I previously had that running completely fine on my Nvidia GPU. You would think it would just be a matter of replacing the parameter for which device or hardware acceleration to use.

    Yeah, turns out there are like 4 ways to set the quality value of the transcoded output. CRF didn’t work with Quick Sync for some reason, so you need to use global quality or something (see the sketch below). I spent days trying to figure this out, DAYS.

    It is a very powerful tool but every time I have to use it, it is too complicated and I have to spend hours or days to get it working.
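    For reference, this is roughly the kind of command I ended up with; I’m writing it from memory, so treat the exact flags and values as a sketch rather than something to copy blindly:

    ```
    # Quick Sync decode + encode to 10-bit HEVC,
    # with -global_quality standing in for CRF on the qsv encoders
    ffmpeg -hwaccel qsv -hwaccel_output_format qsv -i input.mkv \
      -vf 'vpp_qsv=format=p010le' \
      -c:v hevc_qsv -profile:v main10 -global_quality 22 \
      -c:a copy output.mkv
    ```

    And even then, whether you need vpp_qsv, a different pixel format, or an extra device option seems to depend on the input and the system, which is exactly the kind of thing that ate those days.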


  • According to Unraid, errors are: “Errors counts the number of unrecoverable errors reported by the device I/O drivers. Missing data due to unrecoverable array read errors is filled in on-the-fly using parity reconstruct (and we attempt to write this data back to the sector(s) which failed). Any unrecoverable write error results in disabling the disk.”

    Restarting the array will automatically reset the error count, but that doesn’t mean the errors are actually gone or that the underlying issue is fixed.

    Unfortunately, there isn’t much you can do other than try things out and see if it happens again. I recently had this issue where one of the ports on the SAS expander card I use for Unraid produced errors. When I removed the 2 drives from that port and plugged them into a different one, the errors went away.

    This can also happen (at least in my experience) when you have insufficient cooling. An LSI card with just a heatsink might want/need a fan if your temperatures are too high or the airflow isn’t good enough.




  • A year or so ago I actually tried to get into Jellyfin, and it wasn’t a particularly pleasant experience.

    A bit of background: I am mainly a Java and JavaScript developer and have used Plex for over a decade now. I even developed a plugin for Plex with Python. Naturally, Jellyfin came across my radar, so I checked it out, but it didn’t have a metadata provider for the metadata source that I needed for some of my libraries. There were alternatives, but they would have required completely restructuring my libraries, which I wasn’t interested in.

    So, I set out to just do it myself. I knew some C#, but was far from up to date; I didn’t really care, though, because I wanted to see how the project went, and if I could get into it, I could learn more about C# along the way.

    However, while I could get the Plugin compiled and loaded into a Jellyfin instance and even get some metadata downloaded, I quickly hit brick walls. From what I could tell, there weren’t even method comments for, you know, methods you need to implement so that you can write a metadata provider.

    Not being able to resolve this through trial and error or by looking at other currently active providers (which all seem to do things differently, so no consistency), I asked for help on the Jellyfin subreddit and was told to use the Matrix chat instead. That was already annoying, because a walled garden like Matrix isn’t how you amass knowledge that someone can fall back on and find when they have the same questions. Regardless, I asked there as well and either got no help or responses that didn’t really help me.

    So, I shelved the project.

    What I want to say here is that FOSS projects like Jellyfin should prioritize their documentation. The easier it is for people to understand how things work and “get into” the project, the more people will be willing to actively contribute. I know that what I described above could just be my inexperience or my lack of understanding of C# and everything around it, but I imagine many developers are in the same situation as me and would like to contribute but can’t get over those hurdles. It is even worse for new developers who might want to stretch their legs in the open-source community but are still learning.

    Reading this alongside “we need developers” and “you can contribute to our documentation” looks a bit contradictory to me, because shouldn’t the “experienced” contributors be the ones writing the documentation?


  • I use a Pi-hole, which is a small computer that checks every domain request and blocks it when it is on one of my blacklists. This works great for browsing the web because you just don’t see most ads anymore. I also use ad blockers for things like YouTube, because the Pi-hole can’t distinguish between ads and legitimate requests when they come from the same domain.

    I also download all videos from YouTube to watch. And I also don’t have cable.

    Basically, I see so few ads anymore that the few I do encounter become really annoying. If there are 1-2 ads in front of a YouTube video or in the middle, I just don’t watch that video anymore.

    But I really noticed it when I was spending the day with my father and we were watching a TV show on some free provider: every 10 minutes there were 1.5 minutes of ads. That is far better than normal TV in my country (Germany), but damn, it got really annoying after just a single episode, and I’m glad I don’t have to see those at home. It just interrupts the flow.


  • I think this is too complicated.

    Shut down all containers and VMs. Then:

    1. Set all shares that are cache-only or cache-preferred to “use cache=Yes” and invoke the mover.
    2. Verify that the cache pool is empty, then take the array offline or shut the server down (if you can’t hot swap) and replace the drives.
    3. Start the server, assign the new drives to the cache pool, and start the array.
    4. Set your shares to “prefer” and invoke the mover (this is so that data from shares that were set to “use cache=only” is also moved back to the cache).
    5. Once that has finished, set the shares back to whatever they were before.
    6. Start containers and VMs. Done.

    You don’t need to restart in Safe mode, and you don’t need to replace each drive individually. You just do it once: move everything off, replace the drives, and move everything back on.

    Personally, I have never done a replacement and an upgrade of the cache pool at the same time, so I wouldn’t want to experiment with it without having tested it myself first, even if someone online says that it works.