• 5 Posts
  • 82 Comments
Joined 7 months ago
Cake day: March 5th, 2024



  • Governments shouldn’t decide whether or not specific content is ok

    Yes they should.

    Idk why people act as if online content is detached from real life. Governments decide what type of content/things are ok irl all the time; laws literally decide what you are allowed to do and show in real life, everywhere, in every aspect of life. Why do you think online content is untouchable?

    In most countries, going out and showing your penis in public will land you in jail. Why does the government get to decide that this is inappropriate “content” to be out in public? It is just one example out of… thousands.

    What do you think would happen if you set up a huge screen on a public square irl and started playing videos of real murders that recently happened to people from your own country? Do you think people would see your huge screen showing actual murders and not call the cops on you? Do you think this behaviour would not destroy your life, maybe land you in jail or get you a huge fine, and get you lawsuits from the victims’ families (who were real people in your videos) that you would 100% lose?

    If you think governments shouldn’t decide what type of content is ok to be shared publicly on social media, I invite you to download a collection of gore videos, set up a huge screen out on the streets, and see how long you manage to keep showing it in public before it lands you in trouble.

    You wouldn’t do it, and I bet you know damn well that getting in trouble for this would be correct. Why is public social media different? Online = ethereal world where rules don’t matter?

    Come on dude, online content is not detached from real life.

    Remember we are talking about content shared publicly for anyone, even unintentionally, to see. Not private messages and private groups that people join willingly.


  • GreatDong3000@lemm.ee to Funny@sh.itjust.works · It's so over · 4 months ago

    It is a partial analogy: it takes into account the outputs that can be related to some specific training data and disregards the outputs that cannot be directly related to any specific training data.

    For example, make up a new meme template and a new joke on the spot; it couldn’t have seen them before if you make sure your joke and template are new. If the AI can explain them, then compression is a horrendous analogy.

    Lossy compression explains outputs being similar but not identical when trying to recover the original data; it doesn’t explain brand new content that makes sense on its own. Imagine a lossy audio codec producing a brand new song midway through playback, or a lossy image codec producing a brand new coherent image overlaid onto some pixels of the original image. That is not what happens: lossy audio compression results in noise, lossy image compression results in noise, not in coherent unheard songs and unseen images.
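
    To make that concrete, here is a minimal sketch (the signal, the fake codec, and every parameter are my own illustrative assumptions, not anything from the thread): crudely quantize a tone the way a very lossy codec would and look at what decoding gives back. The result is a distorted copy of the input, never a new coherent signal.

    ```python
    import numpy as np

    # A plain 440 Hz tone standing in for "the original data".
    t = np.linspace(0, 1, 8000)
    original = np.sin(2 * np.pi * 440 * t)

    # Fake, very aggressive lossy codec: keep only 8 amplitude levels.
    levels = 8
    decoded = np.round(original * (levels / 2)) / (levels / 2)

    # What the codec lost is bounded distortion of the original samples.
    error = decoded - original
    print("max deviation from original:", np.max(np.abs(error)))
    print("decoded still tracks original:", np.corrcoef(decoded, original)[0, 1])
    # The decoded signal is a degraded version of what went in; at no point does
    # the codec invent an unheard melody or an unseen image.
    ```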



  • GreatDong3000@lemm.ee to Funny@sh.itjust.works · It's so over · edited · 4 months ago

    Oh ok, so you want to claim this is compressing the entirety of the internet into a model that isn’t even 1 terabyte of data, and still be unimpressed, as if that weren’t something.

    But it isn’t compression. It is a mathematical fact that neural networks are universal function approximators; this is undisputed. The functions being approximated are continuous, so to approximate them the model must be able to fill in the gaps between discrete data points by itself, which necessarily means spitting out data outside of the input distribution, data it has not seen.
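
    A minimal sketch of that “fills in the gaps” point (the library, the toy function, and the network size are my own assumptions for illustration): fit a small network on a handful of discrete samples and then query it between them. The queried outputs were never in the training data, yet the approximator produces sensible values there.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Ten discrete training points sampled from a continuous function.
    x_train = np.linspace(0, 2 * np.pi, 10).reshape(-1, 1)
    y_train = np.sin(x_train).ravel()

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    net.fit(x_train, y_train)

    # Query the midpoints between training samples: inputs the model never saw.
    x_gap = (x_train[:-1] + x_train[1:]) / 2
    print("model at unseen points:     ", net.predict(x_gap).round(2))
    print("true values at those points:", np.sin(x_gap).ravel().round(2))
    ```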


  • GreatDong3000@lemm.ee to Funny@sh.itjust.works · It's so over · 4 months ago

    Man, the models can’t store their training data verbatim; the data is turned into a model that is hundreds or thousands of times smaller than the original source data. If it were capable of simply recovering everything it was trained on, this would be some magical compression algorithm, and that by itself would be extremely impressive.
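
    Back-of-envelope only, and the corpus and model sizes below are rough ballpark figures I am assuming purely for illustration, not numbers from the thread:

    ```python
    # Assumed, illustrative figures.
    dataset_tokens = 15e12      # ~15 trillion training tokens
    bytes_per_token = 4         # ~4 bytes of raw text per token
    params = 70e9               # a 70B-parameter model
    bytes_per_param = 2         # fp16/bf16 weights

    dataset_bytes = dataset_tokens * bytes_per_token   # ~60 TB of text
    model_bytes = params * bytes_per_param             # ~140 GB of weights

    print(f"training text ~ {dataset_bytes / 1e12:.0f} TB")
    print(f"model weights ~ {model_bytes / 1e9:.0f} GB")
    print(f"model is ~{dataset_bytes / model_bytes:.0f}x smaller than the data")
    # Recovering the corpus verbatim from an artifact hundreds of times smaller
    # would beat any general-purpose lossless compressor by an absurd margin,
    # which is the point above.
    ```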



  • Idk, where there is potential for data mining and money there is a will and a way.

    I am worried about stuff that is widespread, like systemd, KDE, GNOME, Flatpak, and a bunch of stuff that is maintained by companies like Red Hat and Canonical, etc. I also worry that attempts like the XZ backdoor will become more common.

    We can always hop to other distros, but if the high-level polished stuff that we’ve taken a long time to achieve gets compromised, these safer distros may end up being a worse experience and set us back years or decades.

    I think I am fine with home-use Linux growing a little bit; maybe if we get to just under 10% or so, that could be good in terms of software availability and just more people working on open source projects. Too much popularity, idk, I am not onboard with that rn.


  • I think I don’t even want Linux to become too popular. It will attract the wrong kind of attention. First, being more targeted by attackers, it may become less safe. Most importantly, I don’t even know how, but I know that if Linux becomes a huge market for home users, corporations will look at it and go “uh, big market sitting there, let’s monetize it,” and there is absolutely no way Linux won’t become shittier in more ways than one when thousands of big corporations out there are trying to get their hands on Linux users and our data in multiple different ways. Again, I don’t know how it will happen, but I don’t like having this kind of attention on Linux.