• brucethemoose@lemmy.world

    > that’s a weird hill to die on, to be honest.

    Welcome to Lemmy (and Reddit).

    Makes me wonder how many memes are “tainted” with old-school ML from before generative AI was common vernacular: edge enhancement, translation, and such.

    A lot? What’s the threshold before it’s considered bad?

      • brucethemoose@lemmy.world

        What about ‘edge enhancing’ NNs like NNEDI3? Or GANs that absolutely ‘paint in’ inferred details from their training data? How big does a model have to be before it counts as ‘generative’?

        What about a deinterlacer network that’s been trained on other interlaced footage?

        My point is that there is an infinitely fine gradient, through time, between good old MS Paint/bilinear upscaling and ChatGPT (or locally runnable txt2img diffusion models). Even now, there’s an array of modern ML-based ‘editors’ that are questionably generative and that most people probably don’t know are working in the background.
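
        To make the contrast concrete, here’s a minimal Python sketch (assuming Pillow is installed; the input image is a hypothetical placeholder) of the non-generative end of that gradient. Bilinear upscaling has zero learned parameters, while the ML alternatives named above swap that fixed formula for trained weights:

        ```python
        from PIL import Image

        # Stand-in for any old meme image (hypothetical 64x64 placeholder).
        img = Image.new("RGB", (64, 64), "gray")

        # Classic, non-ML upscaling: every output pixel is a fixed weighted
        # average of its four nearest input pixels. Nothing is inferred.
        classic = img.resize((img.width * 2, img.height * 2), Image.BILINEAR)

        # An ML upscaler (NNEDI3, a GAN-based model, etc.) replaces that fixed
        # formula with a trained network, so the "recovered" detail is a
        # statistical guess drawn from its training data -- the same mechanism,
        # at a smaller scale, as a txt2img diffusion model.
        ```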