

When your arguments are incorrect on fundamental points of fact, pointing that out is the only engagement possible.
I'm not interested in fantasies about things that might happen. You made claims about the systems that already exist, and those claims do not hold up.

I'm asking you hypotheticals to test the bounds of your logic, because that's what people do when they want to engage.
If things change in a material sense, then we can address that.
Asked and answered. See the bolded word. You keep moving goalposts and changing the argument.

If TikTok directed their algorithm to deliver harmful content to children, would they be protected under Section 230?
The answer is "obviously no". And that's irrelevant to anything happening today. Because you have zero evidence supporting any notion of such "directing" occurring with any of these sites.
I've already stated this. Why are you returning to an already-dismissed straw man?
- - - Updated - - -
Again, this bit in bold is a delusion. It is not what is happening. No major site is doing this (I add the adjective only to head off the possibility of some fringe nutcase site with a dozen members being brought up). You're making it up, based on nothing.
Why would we entertain this claim? We shouldn't, because it's not happening. It's like demanding we take action against Haitians because they might be eating people's dogs and cats.
This is unfair to ninespine. He might not know how machine learning algorithms work. In any case, algorithms do have weights, and even the feature selection is done by people, which necessarily introduces biases. How you define those, and how much do they impact the output? Is that editorializing? When YouTube deboosts a certain type of content, is that editorializing to an extent, or is that part of YouTube's standards, like how you wouldn't allow adult content on a website geared toward children? Google has expensive lawyers that probably figure this shit out and are better equipped to answer these questions. On my end, shit like TikTok does worry me. ByteDance is legally obligated to support the CPC's goals.
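To make that weights-and-features point concrete, here is a minimal sketch in Python; the feature names and weight values are invented for illustration and aren't taken from any real platform:

```python
# A toy ranking scorer. Every "bias" in the output traces back to two
# human decisions made before any math runs.

# Human decision #1: which signals get measured at all.
FEATURES = ["watch_time_sec", "likes", "shares", "report_rate"]

# Human decision #2: how much each signal counts. These numbers are
# hypothetical; real systems learn them from data that humans also chose.
WEIGHTS = {
    "watch_time_sec": 0.5,
    "likes": 1.0,
    "shares": 2.0,
    "report_rate": -10.0,  # someone decided reports should hurt this much
}

def score(video: dict) -> float:
    """Rank a video by a weighted sum of the human-chosen features."""
    return sum(WEIGHTS[f] * video.get(f, 0.0) for f in FEATURES)

videos = [
    {"id": "a", "watch_time_sec": 40, "likes": 3, "shares": 1, "report_rate": 0.0},
    {"id": "b", "watch_time_sec": 90, "likes": 1, "shares": 0, "report_rate": 0.2},
]
for v in sorted(videos, key=score, reverse=True):
    print(v["id"], round(score(v), 2))
```

The sorting itself is "purely mathematical", but every number it sorts by was chosen by a person.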

There is no "Mathematics vs emotional" element to the law. None of this has anything to do with emotion. You just seem to believe "I had a big powerful machine do it" makes some kind of distinction. It doesn't.
There is no "purely mathematical based determination". If we are talking about LLMs, a human being builds a model, then a human being chooses training data for the model, then a human being primes the model with reinforcement, then a human being takes the output of the model and primes it further to perform the tasks it needs. It is a machine built my humans, and those humans are responsible for its behavior.
"stop puting you idiotic liberal words into my mouth"
-ynnady
Judge blocks Trump from revoking legal status for 530,000+ migrants who flew into US via Biden program *ding*
Trump saw people who came into the country legally, and tried to deport them illegally.
Trump does not care if they're legal residents or not. Therefore, all of your "well actshually" claims as a Trump supporter are now summarily handwaved. Trump deported any reasonable claims you had, leaving you with someone breaking the law, and you defending him.

You are putting a lot of effort into dismissing "Let's test your logic" as some kind of wild, ridiculous and dishonest exercise, when in reality it is a cornerstone of basic discourse.
If the answer is "obviously no", why is it so obvious? Your argument has consistently been that Section 230 says that platforms are not publishers unless they play a role in the creation of the content. TikTok is not creating this dangerous content in my example. What part of Section 230 do you believe makes TikTok liable for this?
"stop puting you idiotic liberal words into my mouth"
-ynnady
"Winning? Is that what you think it’s about? I’m not trying to win. I’m not doing this because I want to beat someone, or because I hate someone, or because I want to blame someone. It’s not because it’s fun. God knows it’s not because it’s easy. It’s not even because it works because it hardly ever does.. I DO WHAT I DO BECAUSE IT’S RIGHT! Because it’s decent! And above all, it’s kind! It’s just that.. Just kind."

No, it is very explicitly what was stated here:
"Explaining the differences between a human and a computer's ability to process massive amounts of information and make a purely mathematical based determination over an emotional one is beyond the scope of this thread."
This argument is very literally "The exact same action is not the same when a computer does it, because a computer is very powerful and doesn't have emotions."
"stop puting you idiotic liberal words into my mouth"
-ynnady
Because it involves a person intentionally using an algorithm to drive malicious content towards users. Something that is not happening at any major site, making this a straw man regarding the real-world situation we were discussing.
Don't maliciously edit my positions and misrepresent that edit as truth.

Your argument has consistently been that Section 230 says that platforms are not publishers unless they play a role in the creation of the content. TikTok is not creating this dangerous content in my example. What part of Section 230 do you believe makes TikTok liable for this?
I said they don't qualify as "publishing" unless they participate in the creation or development of that content. The intentional promotion of that content would qualify as "development".
And once you add that in, that's the exact language Section 230 uses in defining an "information content provider": under Section 230(f)(3), that means anyone "responsible, in whole or in part, for the creation or development" of the information.
Such "creation or development" is required for an interactive computer service (a "platform") to instead be treated as an information content provider (a "publisher").
And Section 230(c)(1) makes it clear that line cannot be crossed without the platform specifically getting involved and becoming a publisher themselves. That requires engaging in said "creation or development".
So congrats; my argument has consistently been in line with what Section 230 actually says. You got me. That's sure a criticism. You're still misrepresenting the law and how it applies.

It's because a computer isn't a person. That's a requirement under the law, as already stated repeatedly. You need to establish intent by the humans who designed it, that they built it to deliver that content specifically to mislead users, and you've yet to provide any evidence of such action.
I'd agree that it's a violation that puts the platform at risk, if it happened that way. But it literally has not. That potentiality is a fiction, at this point, and I don't see the utility of that fiction when discussing the actual reality of algorithms as they're currently used.
And Section 230 is the law that would allow said action to be legally attacked and prosecuted. Meaning your distaste for Section 230 is off-base entirely.

What did I just say about not maliciously misrepresenting my argument?
I said no such thing. You know that. Stop gaslighting me and the thread. An algorithm cannot "curate". The humans behind the algorithm are curating the content. Using said algorithm as a tool. They have to identify and promote the videos/channels themselves, because there's no way the algorithm can reliably identify such things itself.
And this description of "curation" could not be applied to moderation or recommendation algorithms, either. Banning or demonetizing content is in no way helping to develop that content (obviously), and recommendation algorithms don't operate on any comprehension of the media in question beyond the broadest subject classification; they work from which other users have engaged with it and which other videos you've watched.
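For a sense of how that works mechanically, here is a minimal collaborative-filtering sketch in Python, with hypothetical users and video IDs; note that nothing in it ever inspects what a video contains, only who else watched it:

```python
# "Next video" by watch-history overlap: recommend what the most similar
# other user watched. The content of the videos is never examined.

WATCH_HISTORY = {
    "user_1": {"v1", "v2", "v3"},
    "user_2": {"v1", "v2", "v4"},
    "user_3": {"v7", "v8"},
}

def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two watch histories."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user: str) -> set:
    """Suggest videos seen by the most similar other user."""
    mine = WATCH_HISTORY[user]
    others = {u: h for u, h in WATCH_HISTORY.items() if u != user}
    nearest = max(others, key=lambda u: similarity(mine, others[u]))
    return others[nearest] - mine

print(recommend("user_1"))  # {'v4'}: picked by overlap, not by content
```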

See above edits. Moderation and "next video" recommendation algorithms aren't "curating" anything in a sense that would be covered as "development" of the content. The former work against the content, and the latter are tied to what other users with similar watch histories have also engaged with, not the content itself.
There's no human intent driving any specific messaging to any specific audience here. That's where your example completely breaks apart and becomes irrelevant; you have to keep adding that human intent back in, because it doesn't exist in real-world algorithm use (at major sites, again, a caveat I only make to avoid some random tiny site being brought up out of nowhere).

https://www.independent.co.uk/news/w...-b2733665.html This is some scary shit: these people are extremely white-looking and talk about how they previously held high hopes for Trump. In other words, people who were just hoping to keep their heads down may be in for a very rude awakening.

YouTube's algorithm is making "decisions", based on the design of the algorithm, on what content to present to you, correct? Why is that publishing if the example is uncomfortable for you, but just platforming when it seems benign? Is there a "malicious intent" clause in Section 230 I am unaware of?
"stop puting you idiotic liberal words into my mouth"
-ynnady
First they came for the transgenders, and I did not speak out, for I was not transgender.
Then they came for the Hispanics, and I did not speak out, for I was not Hispanic.
Then they came for the women, and I did not speak out, for I was not a woman.
Then they came for me, and there was no one left to speak out for me.
To paraphrase Niemöller.
- - - Updated - - -
I'm not changing my position based on the malice of the content, but on whether there is human direction driving the algorithm to deliver a particular message to users.
The bit you keep trying to insert into the argument, despite it being a fiction you made up that has nothing to do with reality.
I've stated this multiple times, to correct your misinterpretations of me. If you keep it up, it will be clear your misrepresentation is malicious and deliberate.