

Same, I’ve been looking for something like that for quite some time
Doing the Lord’s work in the Devil’s basement
I am looking for a solution for a ~1TB collection, and the Glacier Deep Archive storage tier is barely above $1/month for the lot. You may want to look into it! If I remember correctly, retrieval (if you one day need to get your data back) was around $20 to get the data within a few hours, or $2 to get it in a couple of days.
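For reference, here’s a minimal sketch of what pushing an archive to that tier looks like with boto3 - the bucket and file names are made-up placeholders, and it assumes your AWS credentials are already configured:

```python
import boto3

# Minimal sketch: upload an archive to S3 with the Deep Archive storage class.
# Bucket and file names below are hypothetical placeholders.
s3 = boto3.client("s3")
s3.upload_file(
    Filename="collection.tar.zst",               # hypothetical local archive
    Bucket="my-cold-storage-bucket",             # hypothetical bucket
    Key="backups/collection.tar.zst",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)

# Getting data back later means requesting a restore first: "Bulk" is the
# slow/cheap retrieval tier, "Standard" the faster one.
s3.restore_object(
    Bucket="my-cold-storage-bucket",
    Key="backups/collection.tar.zst",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
)
```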
That’s why it’s important to heavily curate your corporate social media feed. If you see a community where this kind of comment is heavily upvoted, just hide it and move on; there aren’t that many of them, and most of the niche-interest communities are still relatively clean places.
I’m not much of a Reddit defender, as I pretty much left the place at the same time as everybody else. However, there has always been a very clear trend in this kind of subreddit. Places like “noahgettheboat” or “iamapieceofshit” or “thatsinsane” systematically attract the worst kind of misanthropic low-lifes. People will see the most abject violence and laugh “haha, he fucked around and found out”, and of course this makes the most fascistic types feel at home.
I don’t think they are representative of the overall slant of the community. Most places are progressive by default.
They have no ability to actually reason
I’m curious about this kind of statement. “Reasoning” is not a clearly defined scientific term, in that it has myriad different meanings depending on context.
For example, there has been research showing that LLMs cannot use “formal reasoning”, the branch of mathematics dedicated to proving theorems. However, the majority of humans can’t use formal reasoning either. This would make humans “unable to actually reason” and therefore not Generally Intelligent.
At the other end of the spectrum, if you take a more casual definition of reasoning, for example Aristotle’s discursive reasoning, then that’s an ability LLMs definitely have. They can produce sequential movements of thought, where one proposition leads logically to another, such as answering the classic: “if humans are mortal, and Socrates is a human, is Socrates mortal?”. They demonstrate this ability beyond their training data, meaning their weights encode a “world model” which they use to solve new problems absent from that data.
Whether or not this is categorically the same as human reasoning is immaterial in this discussion. The distinct quality of human thought is a metaphysical concept which cannot be proved or disproved using the scientific method.
Lol that kind of bullshit prompt injection hasn’t worked since 2023
Interestingly, the pendulum is now swinging the other way. If you look at Next.js for example, server-generated multi-page applications are back on the menu!
I’d place it right around when Angular started gaining traction. That’s when it became common to serve just one page and have all the navigation happen in JavaScript.
Good point, I was thinking more about your regular old independent artist trying to make it with their art. Obviously someone who’s an online celebrity depends on generating outrage for clicks, so they’re bound to display more divisive, over-the-top opinions.
The only reason people are throwing bitch fits over AI/LLMs is because it’s the first time the “art” industry is experiencing its own futility.
I would even go further and argue that the art industry doesn’t really care about AI. The people white-knighting on the topic are evidently not artists and probably don’t know anybody legitimately making a living from their art.
The intellectual property angle makes this most obvious. Typically, independent artists don’t care about IP because they don’t have the means to enforce it. They make zero money from their IP and their business is absolutely not geared towards that - they are artists selling art, not patent trolls selling lawsuits. Copying their “style” or “general vibes” is not harming them, just like recording a piano cover of a musician’s song doesn’t make them lose any ticket sales, or sell fewer vinyls (which are the bulk of their revenue).
AI is not coming for the job of your independent illustrator pouring their heart and soul into their projects. It is coming for the jobs of corporate artists illustrating corporate blogs, and of those who work in content farms. Basically swapping shitty human-made slop for shitty computer-made slop. Same for music - if you know any musician who’s losing business because of Suno, then it’s on them, because Suno is really mediocre.
I have yet to meet any artist with this kind of deep anti-AI sentiment. They are either vaguely anxious about the idea but don’t touch the thing because they’re busy practicing their craft, or they use the hallucination engines as a tool for inspiration. At any rate, there’s no indication that their business has seen much of a slowdown linked to AI.
If I understand these things correctly, the context window only affects how much text the model can “keep in mind” at any one time. It should not affect task performance outside of this factor.
Yeah, I did some looking up in the meantime, and indeed you’re gonna have a context-size issue. That’s why it’s only summarizing the last few thousand characters of the text: that’s the size of its attention window.
There are some models fine-tuned for an 8K-token context window, some even for 16K like this Mistral brew. If you have a GPU with 8GB of VRAM you should be able to run it using one of the quantized versions (Q4 or Q5 should be fine). Summarization should still be reasonably good.
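To make the local option concrete, here’s a rough sketch using the llama-cpp-python bindings - the model file name and exact settings are placeholders, not a specific recommendation:

```python
from llama_cpp import Llama

# Rough sketch: run a Q4-quantized GGUF model with a 16K context window.
llm = Llama(
    model_path="models/mistral-16k.Q4_K_M.gguf",  # placeholder file name
    n_ctx=16384,       # 16K context window
    n_gpu_layers=-1,   # offload all layers to the GPU
)

with open("long_text.txt") as f:
    text = f.read()

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": f"Summarize the following text:\n\n{text}"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```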
If 16K isn’t enough for you, then that’s probably not something you can do locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them; it should only cost you a few dollars. As far as I know this approach should still be compatible with Open WebUI.
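As a sketch of that cloud route, the huggingface_hub client can point either at a hosted model or at the URL of a dedicated Inference Endpoint you’ve spun up - the model ID and token below are just examples:

```python
from huggingface_hub import InferenceClient

# Rough sketch: send the prompt to a hosted model (or to the URL of a rented
# Inference Endpoint) instead of running it locally.
client = InferenceClient(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example model ID
    token="hf_...",                               # your Hugging Face token
)

summary = client.text_generation(
    "Summarize the following text:\n\n" + open("long_text.txt").read(),
    max_new_tokens=512,
)
print(summary)
```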
There are not that many use cases where fine-tuning a local model will yield significantly better task performance.
My advice would be to choose a model with a large context window and just throw the whole text you want summarized into the prompt (which is basically what a RAG pipeline would do anyway).
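A minimal sketch of that approach, assuming a local OpenAI-compatible server (Ollama, the llama.cpp server, etc.); the endpoint and model name are placeholders:

```python
from openai import OpenAI

# Point the OpenAI client at a local OpenAI-compatible server instead of the
# hosted API, and put the entire document straight into the prompt.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

with open("long_text.txt") as f:
    text = f.read()

resp = client.chat.completions.create(
    model="mistral",  # whichever large-context model you pulled
    messages=[{"role": "user", "content": f"Summarize this text:\n\n{text}"}],
)
print(resp.choices[0].message.content)
```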
If you like to write, I find that storyboarding with Stable Diffusion is definitely an improvement. The quality of the images is what it is, but they can help you map out scenes and locations, and spot visual details and cues to include in your writing.
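A rough sketch of what that looks like with the diffusers library - the checkpoint and prompt are just examples, and you need a GPU with enough VRAM:

```python
import torch
from diffusers import StableDiffusionPipeline

# Rough sketch: generate one storyboard frame from a scene description.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "wide shot of a rain-soaked harbor town at dusk, lanterns, fog"
image = pipe(prompt).images[0]
image.save("storyboard_scene_01.png")
```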
Them dusters always complaining about something smh
Not sure what you mean by that. Do you mean ORMs? Which one and when did you try it?