Great American humorist. C# developer. Open source enthusiast.

XMPP: wagesj45@chat.thebreadsticks.com
Mastodon: wagesj45@mastodon.jordanwages.com
Blog: jordanwages.com

  • 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • This is important. I dunno about scale, but backups. I started out hosting a chat room on a raspberry pi. It was a fun side project. But then, that became where my friends all hung out. That was the place, so it became important to me. And then the SD card got corrupted. I then moved on to a consumer laptop. It was way more stable, much faster. But if I messed up anything about the installation, I was hosed.

    I very highly suggest using Proxmox, like you say, and setting up automatic backups. And occasionally transferring them to a hard drive. It doesn’t matter which virtual machines or services you run, gedaliyah@lemmy.world, as long as you have a plan for when something you host becomes important to you and you lose it.
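The “occasionally transfer them to a hard drive” step can be sketched as a small script. This is a hypothetical helper, not anything Proxmox ships: the paths are assumptions (Proxmox writes vzdump archives to /var/lib/vz/dump by default, and /mnt/usb-backup stands in for an external drive).

```python
import shutil
from pathlib import Path

def offsite_copy(src: str, dest: str) -> list[str]:
    """Copy any vzdump backup archives from src to dest that aren't
    already there, so repeated runs only move new backups.
    Returns the names of the files copied."""
    copied = []
    for archive in Path(src).glob("vzdump-*"):
        target = Path(dest) / archive.name
        if not target.exists():
            shutil.copy2(archive, target)  # copy2 preserves timestamps
            copied.append(archive.name)
    return copied

# Example invocation on the Proxmox host (paths are assumptions):
# offsite_copy("/var/lib/vz/dump", "/mnt/usb-backup")
```

Run it from cron or by hand whenever the drive is plugged in; since existing files are skipped, it’s safe to run repeatedly.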

  • It’s always a matter of degrees. The bigger the injustice, the more violence is justified to rectify it. It is in the disproportionality, in my view, where the problem arises.

    Never forget that humans are just barely evolved apes. Sometimes a swift knock to the head is required to activate those neural pathways to discourage anti-social behavior. Not always, but also not never. Claiming otherwise is just self-aggrandizing moralization that people use to make themselves sound and feel superior.

  • Posters aren’t saying that it’s impossible to run search results through an LLM and ask it to cite the sources it reads. They’re saying that the neural networks used in today’s LLMs don’t store token-level attribution in the vocabulary or in individual nodes. You can build a system around the network that supplies the proper input (search results) and prodding (a prompt that biases the network toward citation), but a single LLM can’t come up with that on its own.
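A minimal sketch of the setup described above: the citations come from the prompt and the supplied search results, not from anything stored in the model’s weights. The function name and the search-result fields (`title`, `snippet`) are assumptions for illustration; the returned string would be fed to whatever LLM you’re using.

```python
def build_cited_prompt(question: str, results: list[dict]) -> str:
    """Number each search result and instruct the model to cite by index.
    `results` is a hypothetical search-API shape: dicts with 'title'
    and 'snippet' keys."""
    sources = "\n".join(
        f"[{i}] {r['title']}: {r['snippet']}"
        for i, r in enumerate(results, start=1)
    )
    return (
        "Answer the question using only the sources below. "
        "Cite each claim with its source number, e.g. [1].\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\n"
    )
```

The attribution is enforced entirely at the prompt level: the model sees numbered sources and is nudged to emit matching bracketed indices, which the surrounding system can then map back to URLs.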