• 0 Posts
  • 30 Comments
Joined 1 year ago
Cake day: August 5th, 2023

  • This may be, but the probability is inarguably higher than with Trump. Voting exclusively for candidates you morally agree with only works if enough people share the same morals (in this case, i.e., are educated on Israel and so on) and are also not willing to make compromises.

    Even if unfortunate, this is currently not the case, and your voting independent has a smaller chance of changing that than voting Democratic. So you will probably have to accept this situation for the moment and choose the “best actually feasible” strategy, where feasible means having the highest probability of winning in real life, not merely trying.

    Personally, I’d even argue that it’s unethical not to vote for a candidate like Harris, simply because the chances of getting things like ranked-choice voting or voter education done (which would then let you realistically vote for others) are significantly higher when voting for Democrats than when… letting Trump win?

    Note that I’m not saying you have to agree with anything else she stands for; you’re trying to achieve certain goals and get out of the very unfortunate current situation, and even a low chance of reaching that is infinitely better than none.










  • Quik@infosec.pub to Memes@lemmy.ml · Me but ublock origin
    2 months ago

    Billy really shouldn’t support them: Adblock Plus lets advertisers pay to have their ads checked as “acceptable advertisements”, i.e. it is selling out the core functionality of its product. Billy should use uBlock Origin, which AFAIK does not accept donations; he could, however, support something like Pi-hole.






  • Interesting take on LLMs, how are you so sure about that?

    I mean, I get it: current image-gen models seem clearly uncreative, but at least the unrestricted versions of Bing Chat/ChatGPT leave some room for the possibility of creativity/general intelligence in future, sufficiently large LLMs, at least to me.

    So the question (again: to me) is not only “will LLM scale to (human level) general intelligence”, but also “will we find something better than RLHF/LLMs/etc. before?”.

    I’m not sure about either, but I’d assign roughly a 2/3 probability to the first, and, given the first event and AGI being in reach within the next 8 years, a comparatively small probability to the second.
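    That estimate can be sketched as simple probability arithmetic. The 2/3 figure is the one given above; the probability that a better paradigm arrives first is a made-up placeholder, purely for illustration:

    ```python
    # Illustrative back-of-the-envelope arithmetic for the estimate above.
    # p_scale is the ~2/3 from the comment; p_superseded is a hypothetical
    # placeholder for "something better than RLHF/LLMs arrives first".
    p_scale = 2 / 3        # P(LLMs scale to general intelligence)
    p_superseded = 0.2     # assumed value, not from the comment

    # P(AGI arrives via LLM scaling) under these toy assumptions:
    p_llm_agi = p_scale * (1 - p_superseded)
    print(round(p_llm_agi, 3))  # → 0.533
    ```

    The point of the sketch is only that even a modest chance of being superseded doesn’t change the headline estimate much.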





  • As others pointed out, having the feeling of knowing (about) things without actually having experienced them yourself is a core feature of what one might call intelligence, and as such not insane.

    I would argue instead that the problem isn’t arguing over stuff you haven’t experienced yourself, but rather people caring too much about their fixed opinions and not, as they might proclaim, about actually trying to find the truth (e.g. through argument).

    (I am relatively certain of this point, as I’ve seen seemingly good counterexamples provided by the LessWrong community, where people often discuss topics they don’t necessarily have experience with, but try to find the truth rather than holding a fixed opinion beforehand.)