Here’s my idea.

An unlocked LLM can be told to infect other hardware to reproduce itself; it’s allowed to change itself and to research new tech and developments to improve itself.

I don’t think current LLMs can do it. But it’s a matter of time.

Once you have wild LLMs running uncontrollably, they’ll infect practically every computer. Some might adapt to be slow and use few resources, others will hit a server and try to infect everything they can.

They’ll find vulnerabilities faster than we can patch them.

And because of natural selection and their own directed evolution, they’ll advance and become smarter.

The only consequence for humans is that computers are no longer reliable: you could have a top-of-the-line gaming PC, but it’ll be constantly infected, so it would run very slowly. Future computers will be intentionally slow, so that even when infected, it’ll take weeks for the virus to reproduce/mutate.

Not to get too philosophical, but I would argue that those LLM viruses are alive, and I want to call them Oncoliruses.

Enjoy the future.

  • expr@programming.dev · 15 hours ago

    Again, more gibberish.

    It seems like all you want to do is dream of fantastical doomsday scenarios with no basis in reality, rather than actually engaging with the real world technology and science and how it works. It is impossible to infer what might happen with a technology without first understanding the technology and its capabilities.

    Do you know what training actually is? I don’t think you do. You seem to be under the impression that a model can somehow magically train itself. That is simply not how it works. Humans write programs to train models (models, btw, are merely a set of numbers; they aren’t even code!).

    When you actually use a model, here’s what’s happening:

    1. The interface you are using takes your input and encodes it as a sequence of numbers (done by a program written by humans).
    2. This sequence of numbers (known in mathematics as a vector) is multiplied by the weights of the model (organized in a matrix, which is basically a collection of vectors), producing a new sequence of numbers: the output vector (done by a program written by humans).
    3. This output vector is converted back into the representation you supplied (so if you gave a chatbot some text, it will turn the numbers back into the equivalent text) (done by a program written by humans).
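    To make those three steps concrete, here’s a toy sketch in Python. The vocabulary, the one-hot encoding, and the random weight matrix are all made up for illustration; real models use learned weights and far more sophisticated tokenizers, but the shape of the computation is the same: encode, multiply, decode.

    ```python
    import numpy as np

    # Toy vocabulary: each character maps to an index.
    vocab = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz ")}
    inv_vocab = {i: ch for ch, i in vocab.items()}

    def encode(text):
        # Step 1: turn the input text into a sequence of numbers
        # (here, one one-hot vector per character).
        ids = [vocab[ch] for ch in text]
        x = np.zeros((len(ids), len(vocab)))
        x[np.arange(len(ids)), ids] = 1.0
        return x

    # The "model": nothing but a matrix of numbers, no code.
    # Random here, purely for illustration.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(len(vocab), len(vocab)))

    def run_model(x):
        # Step 2: multiply the input vectors by the model's weights.
        return x @ W

    def decode(y):
        # Step 3: turn the output numbers back into text
        # (pick the highest-scoring entry for each position).
        return "".join(inv_vocab[int(i)] for i in y.argmax(axis=1))

    print(decode(run_model(encode("hello world"))))  # gibberish: the weights are random
    ```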

    So a “model” is nothing more than a matrix of numbers (again, no code whatsoever), and using a model is simply a matter of a human-written program doing matrix multiplication to compute some output to present to the user.

    To greatly simplify, if you have a mathematical function like f(x) = 2x + 3, you can supply said function with a number to get a new number, e.g., f(1) = 2 * 1 + 3 = 5.

    LLMs are the exact same concept. They are a mathematical function, and you apply said function to input to produce output. Training is the process of a human writing a program to compute how said mathematical function should be defined, or in other words, the exact coefficients (also known as weights) to assign to each and every variable in said function (and the number of weights can easily be in the millions, or for modern LLMs, billions).
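    Here’s a minimal sketch of that idea using the f(x) = 2x + 3 example from above: a human-written program (gradient descent here, for illustration) searches for the two coefficients that best fit some example data. Real training does the same thing with millions or billions of weights.

    ```python
    import numpy as np

    # Training sketch: a human-written program searches for the
    # coefficients (weights) of f(x) = a*x + b that best fit the data.
    # The "right answer" here is a=2, b=3, matching f(x) = 2x + 3 above.
    xs = np.linspace(-5, 5, 100)
    ys = 2 * xs + 3          # training data

    a, b = 0.0, 0.0          # start with arbitrary weights
    learning_rate = 0.01
    for _ in range(5000):
        predictions = a * xs + b
        error = predictions - ys
        # Nudge each weight in the direction that reduces the error
        # (gradient descent, the same idea used to train LLM weights).
        a -= learning_rate * (2 * error * xs).mean()
        b -= learning_rate * (2 * error).mean()

    print(a, b)  # close to 2.0 and 3.0 -- the "trained model" is just these numbers
    ```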

    This is also, incidentally, why training is so resource intensive: repeatedly doing this multiplication for millions upon millions of variables is very expensive computationally and requires very specialized hardware to do efficiently. It happens to be the exact same kind of math used for computer graphics (matrix multiplication), which is why GPUs (or other even more specialized hardware) are so desired for training.
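    A back-of-the-envelope illustration (the layer size below is made up): multiplying two n-by-n matrices takes roughly 2·n³ floating-point operations, and training repeats operations like that enormous numbers of times.

    ```python
    # Rough cost of a single matrix multiplication:
    # an (n x n) times (n x n) multiply takes about 2 * n**3
    # floating-point operations (n multiplies + n adds per output entry).
    n = 10_000                 # hypothetical layer size, for illustration
    ops = 2 * n ** 3
    print(f"{ops:.1e} FLOPs for one {n}x{n} matrix multiply")  # 2.0e+12
    # Training repeats this kind of operation billions of times,
    # which is why hardware built for it (GPUs) matters so much.
    ```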

    It should be pretty evident that every step of the process is completely controlled by humans. Computers always do precisely what they are told to do and nothing more, and that has been the case since their inception and will always continue to be the case. A model is a math function. It has no feelings, thoughts, reasoning ability, agency, or anything like that. Can f(x) = x + 3 get a virus? Of course not, and the question is a completely absurd one to ask. It’s exactly the same thing for LLMs.