• 1 Post
  • 538 Comments
Joined 9 months ago
Cake day: January 29th, 2025

  • I run Navidrome off a free small form factor PC recycled from work. My whole family accesses it via whatever app they like that supports the Subsonic API (there are dozens), and for security it’s only accessible via Tailscale, so they need Tailscale installed and connected.

    Initial cost: $0. Plus the cost of the apps, which is like $5 per user. Tailscale is free for up to 100 devices. Time to set up: 1 day. Ongoing cost: the very little electricity an energy-efficient SFF PC uses - $2/month would be a big overestimate. Plus whatever music we buy on Bandcamp, physical media etc that we own forever.

    So it’s not way more expensive in my experience, and at the end of the day I give artists I enjoy much more money than Spotify streams ever would, and I’m not supporting a piece of shit CEO pouring a billion dollars into military spending.
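
    For reference, a minimal sketch of this kind of setup, assuming Docker and Navidrome’s standard container image - the paths and scan schedule are illustrative, not the commenter’s exact config. Tailscale runs on the host itself:

```yaml
# Illustrative sketch only: a typical Navidrome container definition.
services:
  navidrome:
    image: deluan/navidrome:latest
    restart: unless-stopped
    ports:
      - "4533:4533"             # Navidrome's default web/Subsonic port
    environment:
      ND_SCANSCHEDULE: "1h"     # rescan the library hourly
      ND_LOGLEVEL: "info"
    volumes:
      - "./data:/data"          # Navidrome's database and cache
      - "/srv/music:/music:ro"  # music library, mounted read-only
```

    With the host joined to the tailnet (`tailscale up`), family devices reach the server at its Tailscale address on port 4533, and any Subsonic-compatible client app just gets pointed there.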









  • Stable Diffusion? The same Stable Diffusion sued by Getty Images, which claims they used 12 million of its images without permission? Ah yes, very non-secretive, very moral. And what of industry titans DALL-E and Midjourney? Both have had multiple examples of artists’ original art being spat out by their models simply by finessing the prompts - proving they used particular artists’ copyrighted art without those artists’ permission or knowledge.

    Stable Diffusion was also, from its inception, in the hands of tech bros: funded and built with the help of a $3 billion AI company (Runway AI), and itself owned by Stability AI, a for-profit company presently valued at $1 billion, which now has James Cameron on its board. The students who worked on a prior model (Latent Diffusion) were hired for the Stable Diffusion project, that is all.

    I don’t care to drag the discussion into your opinion of whether artists have any ownership of their art the second after they post it on the internet - for me it’s good enough that artists themselves assign licences to their work (CC, CC BY-SA, ©, etc) - and if a billion-dollar company is taking their work without permission (as in the © example) to profit off it, that’s stealing according to the artist’s own stated intent.

    If they’re taking CC BY-SA work and failing to attribute it, then they are also breaking the licence and abusing content for their profit. A VLM could easily attach attribution identifying the source data used in an output - weird that none of them want to.
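
    To make the attribution point concrete, here is a toy sketch - every name in it is hypothetical, and no real image-generation API works this way today. It mimics what an honest generator could emit alongside each output to satisfy CC BY-SA-style attribution terms:

```python
import json

# Hypothetical sketch: a "generator" that records which source works
# (and their licences) stand behind an output. All names are illustrative.
def generate_with_attribution(prompt, sources):
    """Pretend to generate an image, returning it together with a
    manifest listing the source works used to produce it."""
    image = f"<image for {prompt!r}>"  # stand-in for real model output
    manifest = {
        "prompt": prompt,
        "sources": [
            {"work": s["work"], "author": s["author"], "licence": s["licence"]}
            for s in sources
        ],
    }
    return image, manifest

image, manifest = generate_with_attribution(
    "a lighthouse at dusk",
    [{"work": "lighthouse.jpg", "author": "Alice", "licence": "CC BY-SA 4.0"}],
)
print(json.dumps(manifest, indent=2))
```

    Nothing about this is technically hard - the manifest is just bookkeeping the pipeline would have to carry through training and inference, which is exactly the part the vendors decline to do.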

    In other words, I’ll continue to treat AI art as the amoral slop it is. You are of course welcome to have a different opinion, I don’t really care if mine is ‘good enough’ for you.


  • Sooo many steps along the road to disaster. Just wrt the Supreme Court: Democrats allowed it when they didn’t shut down the government over the Republicans refusing to let Obama seat a Supreme Court justice for almost a year, which was legally his right and responsibility. RBG gave the conservatives another Supreme Court pick by selfishly refusing to retire (into her late 80s and very ill health) because she specifically wanted to see a female president nominate her replacement. The voters set it in stone by lapping up the dumbest presidential candidate of all time, voting for Trump’s first presidency. Biden later pussed out on intervening by not even attempting to expand the court, which has precedent.

    Etc. Etc. Etc.

    The supreme court is set to be highly friendly to corporations, highly conservative and religious for the next 30 years or so, at minimum.





  • Collage art retains the original components of the art, adding layers the viewer can explore and seek the source of, if desired.

    VLMs, on the other hand, intentionally obscure the original works by sending them through filters and computer-vision transformations that make the originals difficult to trace back. This is no accident; it’s designed obfuscation.

    The difference is intent - VLMs literally steal copies of art to generate their work for cynical tech bros. Classical collages take existing art and show it in a new light, with no intent to pass off the original source materials as their own creations.



  • All of that’s great and everything, but at the end of the day all of the commercial VLM art generators are trained on stolen art. That includes most of the VLMs that ComfyUI uses as a backend. They have their own cloud service now, which ties in with all the usual suspects.

    So even if it has some potentially genuine artistic uses, I have zero interest in using a commercial entity in any way to ‘generate’ art built from elements taken from artwork they stole from real artists. It’s amoral.

    If it’s all running locally on open-source VLMs trained only on public data, then maybe - but that’s what… a tiny, tiny fraction of AI art? In the meantime I’m happy to dismiss it altogether as AI slop.