These models absolutely encode knowledge in their weights. Suggesting otherwise really just shows a lack of understanding of how these systems work.
Except they don’t, definitionally. Some facts get tangled up in them and can be consistently regurgitated, but the models fundamentally do not learn or model those facts. They no more have “knowledge” than image-generating models do, even if the image generators can correctly produce specific anime characters with semi-accurate details.
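Concretely, that kind of regurgitation is easy to demonstrate with a masked-word probe. Here is a rough sketch, assuming the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint (both just illustrative choices, nothing specific to this thread):

```python
# Rough sketch: a pretrained model reliably "regurgitates" certain facts,
# because the relevant statistical associations are baked into its weights.
# Assumes the Hugging Face `transformers` library and the `bert-base-uncased`
# checkpoint; both are illustrative choices only.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# "paris" is typically the top-ranked completion here, recovered purely from
# weight patterns picked up during pretraining.
for prediction in fill("The capital of France is [MASK]."):
    print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
```

The disagreement isn’t over whether completions like this come out of the weights; it’s over whether that counts as knowledge.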
“Facts get tangled up in them”. lol Thanks for conceding my point.
I am begging you to raise your standard of what cognition or knowledge is above your phone’s text prediction lmao
Don’t be fatuous. See my other comment here: https://hexbear.net/comment/5726976